2018-05-01
Korczynski, M., Tajalizadehkhoob, S., Noroozian, A., Wullink, M., Hesselman, C., van Eeten, M..  2017.  Reputation Metrics Design to Improve Intermediary Incentives for Security of TLDs. 2017 IEEE European Symposium on Security and Privacy (EuroS&P). :579–594.

Over the years, cybercriminals have misused the Domain Name System (DNS) - a critical component of the Internet - to gain profit. Despite this persisting trend, little empirical information about the security of Top-Level Domains (TLDs) and of the overall 'health' of the DNS ecosystem exists. In this paper, we present security metrics for this ecosystem and measure the operational values of such metrics using three representative phishing and malware datasets. We benchmark entire TLDs against the rest of the market. We explicitly distinguish these metrics from the idea of measuring security performance, because the measured values are driven by multiple factors, not just by the performance of the particular market player. We consider two types of security metrics: occurrence of abuse and persistence of abuse. In conjunction, they provide a good understanding of the overall health of a TLD. We demonstrate that attackers abuse a variety of free services with good reputation, affecting not only the reputation of those services but of entire TLDs. We find that, when normalized by size, old TLDs like .com host more bad content than new generic TLDs. We propose a statistical regression model to analyze how the different properties of TLD intermediaries relate to abuse counts. We find that, next to TLD size, abuse is positively associated with domain pricing (i.e., registries that provide free domain registrations witness more abuse). Last but not least, we observe a negative relation between the DNSSEC deployment rate and the count of phishing domains.
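
As a rough illustration of the occurrence-of-abuse metric described above, the sketch below normalizes raw abuse counts by registry size, the comparison the abstract draws between old TLDs like .com and new gTLDs. All figures and the per-10k convention are hypothetical, not the paper's data.

```python
# Illustrative sketch (not the authors' code): an occurrence-of-abuse
# metric for TLDs, normalizing raw abuse counts by registry size as
# the abstract describes. All values below are invented examples.
abuse_counts = {"com": 52000, "net": 9100, "xyz": 4300}   # abused domains observed
tld_sizes = {"com": 145_000_000, "net": 13_000_000, "xyz": 2_700_000}  # registered domains

def abuse_rate_per_10k(tld):
    """Abuse occurrences per 10,000 registered domains in a TLD."""
    return 10_000 * abuse_counts[tld] / tld_sizes[tld]

for tld in abuse_counts:
    print(f".{tld}: {abuse_rate_per_10k(tld):.2f} abused domains per 10k")
```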

Cowart, R., Coe, D., Kulick, J., Milenković, A..  2017.  An Implementation and Experimental Evaluation of Hardware Accelerated Ciphers in All-Programmable SoCs. Proceedings of the SouthEast Conference. :34–41.
The protection of confidential information has become very important with the increase in data sharing and storage on public domains. Data confidentiality is accomplished through the use of ciphers that encrypt and decrypt the data to impede unauthorized access. Emerging heterogeneous platforms provide an ideal environment for using hardware acceleration to improve application performance. In this paper, we explore the performance benefits of an AES hardware accelerator versus the software implementation for multiple cipher modes on the Zynq 7000 All-Programmable System-on-a-Chip (SoC). The accelerator is implemented on the FPGA fabric of the SoC and utilizes DMA for interfacing to the CPU. File encryption and decryption of varying file sizes are used as the workload, with execution time and throughput as the metrics for comparing the performance of the hardware and software implementations. The performance evaluations show that the accelerated AES operations achieve a speedup of 7 times relative to the software implementation and throughput upwards of 350 MB/s for the counter cipher mode, and modest improvements for other cipher modes.
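
The software side of such a comparison is easy to reproduce. The sketch below measures software AES-CTR throughput over varying buffer sizes, the kind of baseline the speedup figures above are measured against; it assumes the PyCryptodome package and is not the authors' Zynq/FPGA implementation.

```python
# A minimal software-baseline sketch of the kind of measurement the paper
# reports (AES-CTR throughput vs. workload size). Assumes PyCryptodome
# (pip install pycryptodome); this is not the authors' hardware design.
import os
import time
from Crypto.Cipher import AES

key = os.urandom(16)  # AES-128 key

for size_mb in (1, 16, 64):
    data = os.urandom(size_mb * 1024 * 1024)
    cipher = AES.new(key, AES.MODE_CTR)  # fresh random nonce per message
    start = time.perf_counter()
    cipher.encrypt(data)
    elapsed = time.perf_counter() - start
    print(f"{size_mb:3d} MB: {size_mb / elapsed:7.1f} MB/s (software AES-CTR)")
```
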
Li, Meng, Lai, Liangzhen, Chandra, Vikas, Pan, David Z..  2017.  Cross-Level Monte Carlo Framework for System Vulnerability Evaluation Against Fault Attack. Proceedings of the 54th Annual Design Automation Conference 2017. :17:1–17:6.
Fault attacks have become a serious threat to system security and must be evaluated at the design stage. Existing methods usually ignore the intrinsic uncertainty in the attack process and suffer from low scalability. In this paper, we develop a general framework to evaluate system vulnerability against fault attack. A holistic model for fault injection is incorporated to capture the probabilistic nature of the attack process. Based on the probabilistic model, a security metric named the System Security Factor (SSF) is defined to measure system vulnerability. In the framework, a Monte Carlo method is leveraged to enable a feasible evaluation of SSF for different systems, security policies, and attack techniques. We enhance the framework with a novel system pre-characterization procedure, based on which an importance sampling strategy is proposed. Experimental results on a commercial processor demonstrate that, compared to random sampling, a 2500X speedup is achieved with the proposed sampling strategy. Meanwhile, 3% of registers are identified as contributing more than 95% of the SSF. By hardening these registers, a 6.5X security improvement can be achieved with less than 2% area overhead.
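
A toy version of the Monte Carlo idea is shown below: sample random fault injections and estimate the fraction that defeats the security policy. The fault model and per-register sensitivities are invented for illustration; the paper's actual framework, including its importance sampling strategy, is considerably richer.

```python
# Toy Monte Carlo sketch of a probabilistic vulnerability metric in the
# spirit of the paper's System Security Factor (SSF): sample random fault
# injections and estimate the fraction that defeats the security policy.
# The fault model and register sensitivities here are invented.
import random

random.seed(0)
# Hypothetical per-register probability that a fault there breaks security.
registers = {"r_key0": 0.30, "r_key1": 0.25, "r_ctrl": 0.05, "r_misc": 0.001}

def one_trial():
    reg = random.choice(list(registers))      # fault hits a random register
    return random.random() < registers[reg]   # did the fault succeed?

N = 100_000
ssf_estimate = sum(one_trial() for _ in range(N)) / N
print(f"Estimated attack success probability over {N} trials: {ssf_estimate:.4f}")
```
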
Farraj, Abdallah, Hammad, Eman, Kundur, Deepa.  2017.  Performance Metrics for Storage-Based Transient Stability Control. Proceedings of the 2nd Workshop on Cyber-Physical Security and Resilience in Smart Grids. :9–14.

In this work we investigate existing and new metrics for evaluating transient stability of power systems to quantify the impact of distributed control schemes. Specifically, an energy storage system (ESS)-based control scheme that builds on feedback linearization theory is implemented in the power system to enhance its transient stability. We study the value of incorporating such ESS-based distributed control on specific transient stability metrics that include critical clearing time, critical control activation time, system stability time, rotor angle stability index, rotor speed stability index, rate of change of frequency, and control power. The stability metrics are evaluated using the IEEE 68-bus test power system. Numerical results demonstrate the value of the distributed control scheme in enhancing the transient stability metrics of power systems.
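
Two of the listed metrics are easy to compute from a rotor trace. The sketch below evaluates the rate of change of frequency (ROCOF) and a transient stability index based on maximum rotor-angle separation on synthetic data; the index form used here is a common textbook definition, and the paper's exact metric definitions may differ.

```python
# Sketch of two transient stability metrics on a synthetic trace: ROCOF
# (rate of change of frequency) and a stability index based on the maximum
# rotor-angle separation. The index below is a common textbook form; the
# paper's exact definitions may differ. All signal values are synthetic.
import numpy as np

t = np.linspace(0.0, 5.0, 5001)                               # seconds
freq = 60.0 + 0.4 * np.exp(-t) * np.sin(2 * np.pi * 0.8 * t)  # Hz, synthetic
delta_max = 110.0          # max rotor-angle separation in degrees, synthetic

rocof = np.gradient(freq, t)                                  # Hz/s
tsi = 100.0 * (360.0 - delta_max) / (360.0 + delta_max)

print(f"Peak |ROCOF|: {np.max(np.abs(rocof)):.3f} Hz/s")
print(f"Transient stability index: {tsi:.1f} (positive suggests stable)")
```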

Li, Huan, Guo, Chen, Wang, Donglin.  2017.  Hybrid Sorting Method for Successive Cancellation List Decoding of Polar Codes. Proceedings of the 2017 7th International Conference on Communication and Network Security. :23–26.
This paper proposes a hybrid metric sorting (HMS) method for successive cancellation list decoders of polar codes; metric sorting plays a critical role in the decoding process. We review state-of-the-art metric sorting methods and combine their advantages in the proposed method. Due to its optimized architecture, the proposed HMS method effectively reduces the number of comparing stages with little increase in comparisons. Evaluation results show that about 25 percent of comparing stages can be removed by HMS compared with state-of-the-art methods, giving the proposed method a latency advantage in hardware implementations.
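
For context, the operation that metric sorting accelerates is the pruning step of successive cancellation list (SCL) decoding: each surviving path forks on the next bit, and the 2L candidates are cut back to the L paths with the smallest metrics. The sketch below shows that step in plain Python; the random penalties stand in for LLR-derived metric increments and this is not the paper's HMS architecture.

```python
# The pruning step that metric sorting implements in SCL decoding: each of
# the L surviving paths forks on the next bit (0/1), and the 2L candidates
# are reduced to the L best (smallest path metric). A minimal sketch only;
# penalties are random stand-ins for the LLR-derived increments.
import random

random.seed(1)

def prune_paths(paths, list_size):
    """paths: list of (path_metric, bit_decisions); keep the best list_size."""
    candidates = []
    for metric, bits in paths:
        for bit in (0, 1):
            penalty = random.random()            # stand-in metric increment
            candidates.append((metric + penalty, bits + [bit]))
    candidates.sort(key=lambda c: c[0])          # smaller metric = better path
    return candidates[:list_size]

survivors = [(0.0, [])]
for _ in range(4):                               # decode four bits
    survivors = prune_paths(survivors, list_size=4)
for metric, bits in survivors:
    print(f"metric={metric:.2f} bits={bits}")
```
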
Mahdi, Fatna El, Habbani, Ahmed, Mouchfiq, Nada, Essaid, Bilal.  2017.  Study of Security in MANETs and Evaluation of Network Performance Using ETX Metric. Proceedings of the 2017 International Conference on Smart Digital Environment. :220–228.

Today, we witness the emergence of smart environments, where devices are able to connect independently without human- intervention. Mobile ad hoc networks are an example of smart environments that are widely deployed in public spaces. They offer great services and features compared with wired systems. However, these networks are more sensitive to malicious attacks because of the lack of infrastructure and the self-organizing nature of devices. Thus, communication between nodes is much more exposed to various security risks, than other networks. In this paper, we will present a synthetic study on security concept for MANETs, and then we will introduce a contribution based on evaluating link quality, using ETX metric, to enhance network availability.
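
The ETX (Expected Transmission Count) metric the paper builds on has a standard definition: ETX = 1 / (df × dr), where df and dr are the measured forward and reverse delivery ratios of probe packets on a link. A minimal sketch, with example values:

```python
# The standard ETX link metric: expected number of transmissions (including
# retries) needed to deliver a packet over a link, given forward (df) and
# reverse (dr) probe delivery ratios. The values below are examples.
def etx(df, dr):
    """ETX = 1 / (df * dr); lower is a better link."""
    return 1.0 / (df * dr)

print(etx(0.9, 0.8))   # healthy link -> ~1.39 expected transmissions
print(etx(0.5, 0.4))   # lossy link   -> 5.0 expected transmissions
```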

Halunen, Kimmo, Karinsalo, Anni.  2017.  Measuring the Value of Privacy and the Efficacy of PETs. Proceedings of the 11th European Conference on Software Architecture: Companion Proceedings. :132–135.
Privacy is a very active subject of research and also of debate in political circles. In order to make good decisions about privacy, we need measurement systems for privacy. Most traditional measures, such as k-anonymity, lack expressiveness in many cases. We present a privacy measuring framework which can be used to measure the value of privacy to an individual and also to evaluate the efficacy of privacy enhancing technologies. Our method is centered on a subject whose privacy can be measured through the amount and value of information learned about the subject by some observers. This gives rise to interesting probabilistic models for the value of privacy and measures for privacy enhancing technologies.
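
One natural way to make "information learned about the subject by an observer" concrete is as the drop in Shannon entropy of the observer's belief. The sketch below computes that quantity for a hypothetical attribute; it is an illustrative instantiation in the spirit of the framework, not necessarily the authors' exact model.

```python
# Illustrative instantiation of "information learned about the subject":
# the reduction in Shannon entropy of an observer's belief distribution
# over an attribute. Not necessarily the authors' exact model.
import math

def entropy(dist):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

prior = [0.25, 0.25, 0.25, 0.25]       # observer's belief over 4 postcodes
posterior = [0.85, 0.05, 0.05, 0.05]   # belief after observing the subject

bits_learned = entropy(prior) - entropy(posterior)
print(f"Information learned by the observer: {bits_learned:.2f} bits")
```
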
Eberz, Simon, Rasmussen, Kasper B., Lenders, Vincent, Martinovic, Ivan.  2017.  Evaluating Behavioral Biometrics for Continuous Authentication: Challenges and Metrics. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :386–399.
In recent years, behavioral biometrics have become a popular approach to support continuous authentication systems. Most generally, a continuous authentication system can make two types of errors: false rejects and false accepts. Based on this, the most commonly reported metrics to evaluate systems are the False Reject Rate (FRR) and False Accept Rate (FAR). However, most papers only report the mean of these measures with little attention paid to their distribution. This is problematic as systematic errors allow attackers to perpetually escape detection while random errors are less severe. Using 16 biometric datasets we show that these systematic errors are very common in the wild. We show that some biometrics (such as eye movements) are particularly prone to systematic errors, while others (such as touchscreen inputs) show more even error distributions. Our results also show that the inclusion of some distinctive features lowers average error rates but significantly increases the prevalence of systematic errors. As such, blind optimization of the mean EER (through feature engineering or selection) can sometimes lead to lower security. Following this result we propose the Gini Coefficient (GC) as an additional metric to accurately capture different error distributions. We demonstrate the usefulness of this measure both to compare different systems and to guide researchers during feature selection. In addition to the selection of features and classifiers, some non-functional machine learning methodologies also affect error rates. The most notable examples of this are the selection of training data and the attacker model used to develop the negative class. 13 out of the 25 papers we analyzed either include imposter data in the negative class or randomly sample training data from the entire dataset, with a further 6 not giving any information on the methodology used. Using real-world data we show that both of these decisions lead to significant underestimation of error rates by 63% and 81%, respectively. This is an alarming result, as it suggests that researchers are either unaware of the magnitude of these effects or might even be purposefully attempting to over-optimize their EER without actually improving the system.
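
The Gini coefficient proposed above is computed over per-user error rates: a value near 0 means errors are spread evenly (random), while a value near 1 means a few users absorb most errors (systematic weakness). A short sketch, with made-up error rates:

```python
# Sketch of the Gini coefficient (GC) over per-user error rates, the
# companion metric the authors propose. GC near 0: errors spread evenly;
# GC near 1: errors concentrated on a few users. Error rates are made up.
import numpy as np

def gini(x):
    x = np.asarray(x, dtype=float)
    diffs = np.abs(x[:, None] - x[None, :]).sum()   # all pairwise |xi - xj|
    return diffs / (2 * len(x) ** 2 * x.mean())

even = [0.05, 0.06, 0.05, 0.04, 0.05]       # random-looking error pattern
skewed = [0.00, 0.00, 0.24, 0.00, 0.01]     # similar mean, systematic errors
print(f"even:   GC = {gini(even):.2f}")     # ~0.06
print(f"skewed: GC = {gini(skewed):.2f}")   # ~0.78
```
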
2018-04-30
Veloudis, Simeon, Paraskakis, Iraklis, Petsos, Christos.  2017.  Ontological Framework for Ensuring Correctness of Security Policies in Cloud Environments. Proceedings of the 8th Balkan Conference in Informatics. :23:1–23:8.

By embracing the cloud computing paradigm enterprises are able to boost their agility and productivity whilst realising significant cost savings. However, many enterprises are reluctant to adopt cloud services for supporting their critical operations due to security and privacy concerns. One way to alleviate these concerns is to devise policies that infuse suitable security controls in cloud services. This work proposes a class of ontologically-expressed rules, namely the so-called axiomatic rules, that aim at ensuring the correctness of these policies by harnessing the various knowledge artefacts that they embody. It also articulates an adequate framework for the expression of policies, one which provides ontological templates for modelling the knowledge artefacts encoded in the policies and which form the basis for the proposed axiomatic rules.

Nasir, Akhyari, Arshah, Ruzaini Abdullah, Ab Hamid, Mohd Rashid.  2017.  Information Security Policy Compliance Behavior Based on Comprehensive Dimensions of Information Security Culture: A Conceptual Framework. Proceedings of the 2017 International Conference on Information System and Data Mining. :56–60.

Employees' adherence to the Information Security Policy (ISP) established in an organization is crucial to reducing information security risks. Some scholars have suggested that employees' compliance with the ISP could be influenced by the Information Security Culture (ISC) cultivated in the organization. Several studies on the impact of ISC on ISP compliance have proposed different dimensions and factors associated with ISC, with substantial differences among their findings. This paper discusses an enhanced conceptual framework of ISP compliance behavior that addresses ISC as a multidimensional concept consisting of seven comprehensive dimensions. These newly proposed ISC dimensions were developed using the key factors of ISC in the literature and were aligned with the widely accepted concepts of organizational culture and ISC. The framework is also integrated with the Theory of Planned Behavior, the most significant behavioral theory in this domain of study, to provide a deeper understanding and richer findings on compliance behavior. This framework is expected to give more accurate findings on the relationships between ISC and ISP compliance behavior.

Balozian, Puzant, Leidner, Dorothy.  2017.  Review of IS Security Policy Compliance: Toward the Building Blocks of an IS Security Theory. SIGMIS Database. 48:11–43.

An understanding of insider threats in information systems (IS) is important to help address one of the dangers lurking within organizations. This article provides a review of the literature on insider compliance (and failure of compliance) with information systems' policies in order to understand the status of IS research regarding negligent and malicious insiders. We begin by defining the terms, developing a new taxonomy of insiders, and then providing a comprehensive review of articles on IS policy compliance for the past 26 years. Grounding the analysis in the literature, we inductively identify four themes to foster Information Security policy compliance among employees. The themes are: 1) IS management philosophy, 2) procedural countermeasures, 3) technical countermeasures, and 4) environmental countermeasures. We propose that future research can draw upon these themes and use them as the building blocks of an indigenous IS security theory.

Kafali, Ö., Jones, J., Petruso, M., Williams, L., Singh, M. P..  2017.  How Good Is a Security Policy against Real Breaches? A HIPAA Case Study. 2017 IEEE/ACM 39th International Conference on Software Engineering (ICSE). :530–540.

Policy design is an important part of software development. As security breaches increase in variety, designing a security policy that addresses all potential breaches becomes a nontrivial task. A complete security policy would specify rules to prevent breaches. Systematically determining which, if any, policy clause has been violated by a reported breach is a means for identifying gaps in a policy. Our research goal is to help analysts measure the gaps between security policies and reported breaches by developing a systematic process based on semantic reasoning. We propose SEMAVER, a framework for determining coverage of breaches by policies via comparison of individual policy clauses and breach descriptions. We represent a security policy as a set of norms. Norms (commitments, authorizations, and prohibitions) describe expected behaviors of users, and formalize who is accountable to whom and for what. A breach corresponds to a norm violation. We develop a semantic similarity metric for pairwise comparison between the norm that represents a policy clause and the norm that has been violated by a reported breach. We use the US Health Insurance Portability and Accountability Act (HIPAA) as a case study. Our investigation of a subset of the breaches reported by the US Department of Health and Human Services (HHS) reveals the gaps between HIPAA and reported breaches, leading to a coverage of 65%. Additionally, our classification of the 1,577 HHS breaches shows that 44% of the breaches are accidental misuses and 56% are malicious misuses. We find that HIPAA's gaps regarding accidental misuses are significantly larger than its gaps regarding malicious misuses.
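
The coverage computation SEMAVER performs can be illustrated compactly: compare each breach (as a violated norm) against every policy norm and call it covered if the best similarity clears a threshold. In the sketch below, Jaccard overlap over norm components stands in for the paper's semantic similarity metric, and the norms, components, and threshold are invented.

```python
# Sketch of breach-coverage checking in the style of SEMAVER: a breach is
# covered if some policy norm is sufficiently similar to the violated norm.
# Jaccard overlap over norm components stands in for the paper's semantic
# similarity metric; norms and the 0.6 threshold are invented examples.
def jaccard(a, b):
    return len(a & b) / len(a | b)

policy_norms = [
    {"prohibition", "employee", "disclose", "phi", "unauthorized"},
    {"commitment", "provider", "encrypt", "phi", "portable-device"},
]
breach = {"prohibition", "employee", "disclose", "phi", "email"}

best = max(jaccard(breach, norm) for norm in policy_norms)
print(f"best match {best:.2f} -> {'covered' if best >= 0.6 else 'gap'}")
```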

Ismail, W. B. W., Widyarto, S., Ahmad, R. A. T. R., Ghani, K. A..  2017.  A generic framework for information security policy development. 2017 4th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI). :1–6.

Information security policies are not easy to create unless organizations explicitly recognize the various steps required in the development process of an information security policy, especially in institutions of higher education that make extensive use of IT. An improper development process, or security policy content copied from another organization, may also fail to do an effective job, for instance in addressing non-compliance with applicable rules and regulations, even if the replicated policy is properly developed, referenced, cited in laws or regulations, and interpreted correctly. A generic framework is proposed to improve and establish the development process of security policies in institutions of higher education. The content analysis and cross-case analysis methods were used in this study in order to gain a thorough understanding of the information security policy development process in institutions of higher education.

Li, L., Wu, S., Huang, L., Wang, W..  2017.  Research on modeling for network security policy confliction based on network topology. 2017 14th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP). :36–41.

The consistency checking of network security policy is an important issue in the network security field, but current studies lack overall security-strategy modeling and whole-network checking. In order to check the consistency of policies in distributed network systems, a security policy model based on network topology is proposed, which checks conflicts among security policies for all communication paths in the network. First, the model uniformly describes network devices, domains and links, abstracts the network topology as an undirected graph, and formats ACL (Access Control List) rules into quintuples. Then, based on the undirected graph, the model searches all possible paths between all domains in the topology and checks quintuple consistency using a classifying algorithm. Experiments on a campus network demonstrate that this model can effectively detect policy conflicts globally in a distributed network and ensure the consistency of network security policies.
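
The two steps the abstract describes, path enumeration over an undirected topology and per-path rule checking, can be sketched directly. Below, a conflict is flagged when two devices on one path hold the same ACL quintuple with opposite actions; the topology, rules, and this simplified conflict definition are illustrative, not the paper's full classifying algorithm.

```python
# Sketch of the model's two steps: enumerate simple paths between domains
# in an undirected topology, then flag a conflict when two devices on one
# path hold the same ACL quintuple with opposite actions. The topology,
# rules, and simplified conflict rule are invented for illustration.
topology = {"A": ["fw1"], "fw1": ["A", "fw2"], "fw2": ["fw1", "B"], "B": ["fw2"]}
acl = {  # device -> {(src, dst, proto, sport, dport): action}
    "fw1": {("A", "B", "tcp", "*", "80"): "permit"},
    "fw2": {("A", "B", "tcp", "*", "80"): "deny"},
}

def paths(g, src, dst, seen=()):
    """Yield all simple paths from src to dst in undirected graph g."""
    if src == dst:
        yield seen + (dst,)
        return
    for nxt in g[src]:
        if nxt not in seen:
            yield from paths(g, nxt, dst, seen + (src,))

for path in paths(topology, "A", "B"):
    rules = {}
    for dev in path:
        for quintuple, action in acl.get(dev, {}).items():
            if rules.get(quintuple, action) != action:
                print(f"conflict on {path}: {quintuple}")
            rules[quintuple] = action
```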

2018-04-11
Ma, C., Guo, Y., Su, J..  2017.  A Multiple Paths Scheme with Labels for Key Distribution on Quantum Key Distribution Network. 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). :2513–2517.

This paper establishes a probability model for a multiple-paths quantum key distribution scheme in which public nodes are shared among the set of paths used to transmit the key between the source node and the destination node. To make the scheme applicable to arbitrary network topologies, and in combination with key routing in the QKD network, the proposed multiple-paths key distribution algorithm covers two major aspects: an approach that determines the number and length of the selected paths, and a stochastic path strategy with labels that decreases the number of public nodes and avoids the weaknesses of the old scheme, which may produce loops and often reach nodes farther from the destination than the current node. Finally, the paper demonstrates the rationality of the probability model and of the strategies used in the algorithm.
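
The core multi-path idea can be illustrated simply: distribute key shares over node-disjoint paths so that no single intermediate (public) node observes the whole key. The sketch below extracts disjoint paths greedily with repeated BFS; this stands in for the paper's labeled stochastic path strategy, and the topology is an invented example.

```python
# Illustrative multi-path sketch: find node-disjoint paths from source to
# destination so no single intermediate node lies on every path. Greedy
# BFS extraction stands in for the paper's labeled stochastic strategy.
from collections import deque

def bfs_path(g, src, dst, banned):
    """Shortest path from src to dst avoiding banned nodes, or None."""
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in g[u]:
            if v not in prev and v not in banned:
                prev[v] = u
                queue.append(v)
    return None

graph = {"S": ["a", "b"], "a": ["S", "D"], "b": ["S", "c"],
         "c": ["b", "D"], "D": ["a", "c"]}
banned, path = set(), bfs_path(graph, "S", "D", set())
while path:
    print(" -> ".join(path))
    banned |= set(path[1:-1])        # ban intermediates already used
    path = bfs_path(graph, "S", "D", banned)
```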

Tripathy, B. K., Sudhir, A., Bera, P., Rahman, M. A..  2017.  Formal Modelling and Verification of Requirements of Adaptive Routing Protocol for Mobile Ad-Hoc Network. 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC). 1:548–556.

A group of mobile nodes with limited capabilities, dispersed in different clusters, forms the backbone of Mobile Ad-Hoc Networks (MANETs). In such situations, the requirements (mobility, performance, security, trust and timing constraints) vary with changes in context, time, and geographic location of deployment. This leads to various performance and security challenges that necessitate a trade-off between them when applying routing protocols in a specific context. The focus of our research is developing an adaptive and secure routing protocol for Mobile Ad-Hoc Networks that dynamically configures the routing functions using varying contextual features, with secure and real-time processing of traffic. In this paper, we propose a formal framework for modelling and verifying the requirement constraints used in designing adaptive routing protocols for MANETs. We formally represent the network topology, behaviour, and functionalities of the network in the SMT-LIB language. In addition, our framework verifies various functional, security, and Quality-of-Service (QoS) constraints. The verification engine is built using the Yices SMT solver. The efficacy of the proposed requirement models is demonstrated with experimental results.
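
To give a flavor of this constraint-checking style, the sketch below encodes a toy end-to-end delay bound as an SMT satisfiability query, with Z3's Python bindings standing in for the authors' Yices/SMT-LIB setup. The route variables and QoS bounds are hypothetical.

```python
# A small taste of SMT-based constraint checking, with Z3's Python bindings
# (pip install z3-solver) standing in for the authors' Yices/SMT-LIB setup.
# The per-hop delay variables and QoS bounds below are hypothetical.
from z3 import Ints, Solver, And, sat

d1, d2, d3 = Ints("d1 d2 d3")           # per-hop delays on a candidate route
s = Solver()
s.add(And(d1 >= 2, d2 >= 2, d3 >= 2))   # minimum per-hop processing delay
s.add(d1 + d2 + d3 <= 20)               # end-to-end QoS bound
s.add(d2 <= 5)                          # tighter constraint on one hop

if s.check() == sat:
    print("route satisfiable:", s.model())
else:
    print("no assignment meets the QoS/security constraints")
```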

Medjek, F., Tandjaoui, D., Romdhani, I., Djedjig, N..  2017.  Performance Evaluation of RPL Protocol under Mobile Sybil Attacks. 2017 IEEE Trustcom/BigDataSE/ICESS. :1049–1055.

In Sybil attacks, a physical adversary takes on multiple fabricated or stolen identities to maliciously manipulate the network. These attacks are very harmful to Internet of Things (IoT) applications. In this paper we implemented and evaluated the performance of the RPL (Routing Protocol for Low-Power and Lossy Networks) routing protocol under mobile Sybil attacks, namely SybM, with respect to control overhead, packet delivery and energy consumption. In SybM attacks, Sybil nodes take advantage of their mobility, and of RPL's weakness in handling identity and mobility, to flood the network with fake control messages from different locations. To counter this type of attack, we propose a trust-based intrusion detection system built on RPL.
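
The general bookkeeping behind a trust-based IDS of this kind can be sketched as follows: each node scores its neighbors from observed behavior (for example, control-message consistency) and distrusts those falling below a threshold. The exponential moving-average update and threshold below are illustrative assumptions, not the paper's actual scheme.

```python
# Sketch of trust bookkeeping for a trust-based IDS: score neighbors from
# observed behavior and flag those below a threshold. The smoothing rule
# and cutoff are illustrative assumptions, not the paper's actual scheme.
ALPHA, THRESHOLD = 0.3, 0.4   # smoothing factor and distrust cutoff

def update_trust(trust, neighbor, behaved_well):
    """Exponential moving average of observed behavior; True if still trusted."""
    obs = 1.0 if behaved_well else 0.0
    trust[neighbor] = (1 - ALPHA) * trust.get(neighbor, 0.5) + ALPHA * obs
    return trust[neighbor] >= THRESHOLD

trust = {}
observations = [("n1", True), ("n2", False), ("n2", False), ("n2", False)]
for node, ok in observations:
    trusted = update_trust(trust, node, ok)
    print(f"{node}: trust={trust[node]:.2f}, {'ok' if trusted else 'SUSPECT'}")
```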