Li, Yan, Lu, Yifei, Li, Shuren.
2021.
EZAC: Encrypted Zero-Day Applications Classification Using CNN and K-Means. 2021 IEEE 24th International Conference on Computer Supported Cooperative Work in Design (CSCWD). :378–383.
With the rapid development of traffic encryption technology and the continuous emergence of new network services, the classification of encrypted zero-day applications has become a major challenge in network supervision. More seriously, many attackers use zero-day applications to hide their attack behaviors and make attacks undetectable. However, there are very few existing studies on zero-day applications. Existing works usually select and label zero-day applications from unlabeled datasets, which is not true zero-day application classification. To address the classification of zero-day applications, this paper proposes an Encrypted Zero-day Applications Classification (EZAC) method that combines a Convolutional Neural Network (CNN) and K-Means, which can effectively classify zero-day applications. We first use the CNN to classify flows, and for flows that may be zero-day applications, we use K-Means to divide them into several categories, which are then manually labeled. Experimental results show that EZAC achieves 97.4% accuracy on a public dataset (CIC-Darknet2020), outperforming state-of-the-art methods.
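As an illustrative sketch only (not the authors' implementation), the two-stage idea described in this abstract can be arranged as below: a CNN classifies flows over known applications, and flows the CNN is not confident about are routed to K-Means clustering for manual labeling. The feature dimensions, the confidence threshold, and the input arrays are assumptions made for the example.

# Illustrative two-stage pipeline: a CNN classifies known application flows,
# and flows the CNN is unsure about are clustered with K-Means for manual labeling.
# Feature dimensions, the 0.9 confidence threshold, and input arrays are hypothetical.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

def build_cnn(n_features, n_known_classes):
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(32, 3, activation="relu",
                               input_shape=(n_features, 1)),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_known_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def classify_with_zero_day_fallback(model, X, confidence=0.9, n_clusters=5):
    """X: (n_flows, n_features). Returns labels, low-confidence mask, cluster ids."""
    probs = model.predict(X[..., np.newaxis], verbose=0)
    labels = probs.argmax(axis=1)
    uncertain = probs.max(axis=1) < confidence      # candidate zero-day flows
    clusters = None
    if uncertain.any():
        clusters = KMeans(n_clusters=n_clusters, n_init=10,
                          random_state=0).fit_predict(X[uncertain])
    return labels, uncertain, clusters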
Nguyen, H. M., Derakhshani, R.
2020.
Eyebrow Recognition for Identifying Deepfake Videos. 2020 International Conference of the Biometrics Special Interest Group (BIOSIG). :1–5.
Deepfake imagery that contains altered faces has become a threat to online content. Current anti-deepfake approaches usually detect image anomalies, such as visible artifacts or inconsistencies. However, as deepfakes advance, these visual artifacts are becoming harder to detect. In this paper, we show that biometric eyebrow matching can be used as a tool to detect manipulated faces. Our method could provide a 0.88 AUC and 20.7% EER for deepfake detection when applied to the highest quality deepfake dataset, Celeb-DF.
Heartfield, R., Loukas, G., Gan, D..
2017.
An eye for deception: A case study in utilizing the human-as-a-security-sensor paradigm to detect zero-day semantic social engineering attacks. 2017 IEEE 15th International Conference on Software Engineering Research, Management and Applications (SERA). :371–378.
In a number of information security scenarios, human beings can be better than technical security measures at detecting threats. This is particularly the case when a threat is based on deception of the user rather than exploitation of a specific technical flaw, as is the case with spear-phishing, application spoofing, multimedia masquerading and other semantic social engineering attacks. Here, we put the concept of the human-as-a-security-sensor to the test with a first case study on a small number of participants subjected to different attacks in a controlled laboratory environment and provided with a mechanism to report these attacks if they spot them. A key challenge is to estimate the reliability of each report, which we address with a machine learning approach. For comparison, we evaluate the ability of known technical security countermeasures to detect the same threats. This initial proof-of-concept study shows that the concept is viable.
Mufassa, Fauzil Halim, Anwar, Khoirul.
2019.
Extrinsic Information Transfer (EXIT) Analysis for Short Polar Codes. 2019 Symposium on Future Telecommunication Technologies (SOFTT). 1:1–6.
Polar codes polarize the quality of channels into either completely noisy or noiseless channels. This paper presents an extrinsic information transfer (EXIT) analysis for iterative decoding of Polar codes to reveal the mechanism of this channel transformation. The purpose of understanding the transformation process is to comprehend the placement of information bits and frozen bits and to assess the security standard of Polar codes. Mutual information is derived based on the concept of the EXIT chart for check nodes and variable nodes of low-density parity-check (LDPC) codes and applied to Polar codes. This paper explores the quality of the polarized channels at finite block length. The finite block length is of interest because in the fifth generation of telecommunications (5G) the block length is limited. This paper reveals how the EXIT curves of Polar codes change and explores the polarization characteristics; thus, high values of mutual information for the frozen bits are needed for them to be detectable. Otherwise, the error-correction capability of Polar codes would drastically decrease. These results are expected to serve as a reference for the development of Polar codes for 5G technologies and beyond.
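For orientation, the variable-node and check-node EXIT functions that such an analysis builds on are commonly written in the standard LDPC-style form below, using ten Brink's J-function; these are the generic textbook expressions, not the specific curves derived in the paper.

I_{E,\mathrm{VND}} = J\!\left(\sqrt{(d_v-1)\left[J^{-1}(I_{A,\mathrm{VND}})\right]^2 + \sigma_{\mathrm{ch}}^2}\right),
\qquad
I_{E,\mathrm{CND}} \approx 1 - J\!\left(\sqrt{d_c-1}\; J^{-1}\!\left(1 - I_{A,\mathrm{CND}}\right)\right)

where d_v and d_c are the variable- and check-node degrees and \sigma_{\mathrm{ch}}^2 is the channel parameter.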
Uddin, Mostafa, Nadeem, Tamer, Nukavarapu, Santosh.
2019.
Extreme SDN Framework for IoT and Mobile Applications Flexible Privacy at the Edge. 2019 IEEE International Conference on Pervasive Computing and Communications (PerCom). :1–11.
With the current significant penetration of mobile devices (i.e., smartphones and tablets) and the tremendous increase in the number of corresponding mobile applications, they have become an indispensable part of our lives. Nowadays, there is significant growth in the number of sensitive applications such as personal health applications, personal financial applications, home monitoring applications, etc. In addition, with the significant growth of Internet-of-Things (IoT) devices, smartphones and their applications are widely considered the Internet gateways for these devices. Mobile devices mostly use wireless LANs (WLANs) (i.e., WiFi networks) as the prominent network interface to the Internet. However, due to the broadcast nature of WiFi links, wireless traffic is exposed to any eavesdropping adversary within the WLAN. Despite WiFi encryption, studies show that application usage information can be inferred from encrypted wireless traffic. The leakage of this sensitive information is a very serious issue that significantly impacts users' privacy and security. To address this privacy concern, we design and develop a lightweight programmable privacy framework, called PrivacyGuard. PrivacyGuard, inspired by the vision of pushing the Software Defined Network (SDN)-like paradigm all the way to the wireless network edge, is designed to support the adoption of privacy-preserving policies that protect the wireless communication of sensitive applications. In this paper, we demonstrate and evaluate a prototype of the PrivacyGuard framework on Android devices, showing the flexibility and efficiency of the framework.
Matthew Philippe, Universite Catholique de Louvain, Ray Essick, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign, Raphael M. Jungers, University of Illinois at Urbana-Champaign.
2016.
Extremal Storage Functions and Minimal Realizations of Discrete-time Linear Switching Systems. 55th Conference on Decision and Control (CDC 2016).
We study the Lp induced gain of discrete-time linear switching systems with graph-constrained switching sequences. We first prove that, for stable systems in a minimal realization, for every p ≥ 1, the Lp-gain is exactly characterized through switching storage functions. These functions are shown to be the pth power of a norm. In order to consider general systems, we provide an algorithm for computing minimal realizations. These realizations are rectangular systems, with a state dimension that varies according to the mode of the system. We apply our tools to the study of the L2-gain. We provide algorithms for its approximation, and we provide a converse result for the existence of quadratic switching storage functions. We finally illustrate the results with a physically motivated example.
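For reference, the quantity studied here is the standard Lp-induced gain under zero initial state, taken over all admissible graph-constrained switching sequences sigma; the notation below is generic rather than the paper's own.

\gamma_p \;=\; \sup_{\sigma}\; \sup_{0 \neq u \in \ell_p} \frac{\lVert y \rVert_{\ell_p}}{\lVert u \rVert_{\ell_p}},
\qquad
\lVert u \rVert_{\ell_p} = \Bigl(\sum_{t \ge 0} \lVert u_t \rVert^p\Bigr)^{1/p}.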
Yang, Hongna, Zhang, Yiwei.
2022.
On an extremal problem of regular graphs related to fractional repetition codes. 2022 IEEE International Symposium on Information Theory (ISIT). :1566–1571.
Fractional repetition (FR) codes are a special family of regenerating codes with the repair-by-transfer property. The constructions of FR codes are naturally related to combinatorial designs, graphs, and hypergraphs. Given the file size of an FR code, it is desirable to determine the minimum number of storage nodes needed. The problem is related to an extremal graph theory problem, which asks for the minimum number of vertices of an α-regular graph such that any subgraph with k vertices has at most δ edges. In this paper, we present a class of regular graphs for this problem to give the bounds for the minimum number of storage nodes for the FR codes.
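The extremal quantity described in this abstract can be stated compactly as follows; this is a direct formalization of the sentence above, with notation chosen here for illustration.

n_{\min}(\alpha, k, \delta) \;=\; \min\bigl\{\, n \;:\; \text{there exists an } \alpha\text{-regular graph on } n \text{ vertices in which every } k\text{-vertex subgraph has at most } \delta \text{ edges} \,\bigr\}.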
Kermani, Fatemeh Hojati, Ghanbari, Shirin.
2019.
Extractive Persian Summarizer for News Websites. 2019 5th International Conference on Web Research (ICWR). :85–89.
Automatic extractive text summarization is the process of condensing textual information while preserving the important concepts. After performing pre-processing on input Persian news articles, the proposed method generates a feature vector of salient sentences from a combination of statistical, semantic, and heuristic methods, which are scored and concatenated accordingly. The scoring of the salient features is based on the article's title, proper nouns, pronouns, sentence length, keywords, topic words, sentence position, English words, and quotations. Experimental results using measures including recall, F-measure, and ROUGE-N are presented, compared to other Persian summarizers, and shown to provide higher performance.
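A rough sketch of the kind of weighted sentence scoring such extractive summarizers use is shown below; the feature extractors, weights, and input dictionaries are placeholders for illustration, not the authors' exact formulation.

# Illustrative weighted scoring of candidate sentences for extractive summarization.
# Feature names, weights, and the article/sentence dict layout are hypothetical.
def score_sentence(sentence, article, weights=None):
    weights = weights or {"title_overlap": 2.0, "position": 1.0,
                          "length": 0.5, "keywords": 1.5}
    title_words = set(article["title"].split())
    words = sentence["tokens"]
    features = {
        "title_overlap": len(title_words & set(words)) / max(len(title_words), 1),
        "position": 1.0 / (1 + sentence["index"]),      # earlier sentences rank higher
        "length": min(len(words), 25) / 25.0,           # penalize very short sentences
        "keywords": sum(w in article["keywords"] for w in words) / max(len(words), 1),
    }
    return sum(weights[f] * v for f, v in features.items())

def summarize(article, n_sentences=3):
    ranked = sorted(article["sentences"],
                    key=lambda s: score_sentence(s, article), reverse=True)
    picked = sorted(ranked[:n_sentences], key=lambda s: s["index"])  # restore order
    return " ".join(s["text"] for s in picked)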
Furumoto, Keisuke, Umizaki, Mitsuhiro, Fujita, Akira, Nagata, Takahiko, Takahashi, Takeshi, Inoue, Daisuke.
2021.
Extracting Threat Intelligence Related IoT Botnet From Latest Dark Web Data Collection. 2021 IEEE International Conferences on Internet of Things (iThings) and IEEE Green Computing & Communications (GreenCom) and IEEE Cyber, Physical & Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics). :138–145.
As it is easy to ensure the confidentiality of users on the Dark Web, malware and exploit kits are sold on its markets, and attack methods are discussed in its forums. Some services provide IoT botnets to perform distributed denial-of-service attacks (DDoS as a Service: DaaS), and it is speculated that these services are purchased on the Dark Web. By crawling such information and storing it in a database, threat intelligence can be obtained that cannot otherwise be obtained from information on the Surface Web. However, crawling sites on the Dark Web presents technical challenges. For this paper, we implemented a crawler that can solve these challenges. We also collected information on markets and forums on the Dark Web by operating the implemented crawler. Results confirmed that the dataset collected by crawling contains threat intelligence that is useful for analyzing cyber attacks, particularly those related to IoT botnets and DaaS. Moreover, by uncovering the relationship with security reports, we demonstrated that the use of data collected from the Dark Web can provide more extensive threat intelligence than using information collected only from the Surface Web.
Robles-Cordero, A. M., Zayas, W. J., Peker, Y. K.
2018.
Extracting the Security Features Implemented in a Bluetooth LE Connection. 2018 IEEE International Conference on Big Data (Big Data). :2559–2563.
Since its introduction in 2010, Bluetooth Low Energy (LE) has seen an abrupt adoption by top companies in the world. From smartphones, PCs, tablets, and smartwatches to fitness bands, Bluetooth Low Energy is being implemented on more and more technological devices. Even though the Bluetooth Special Interest Group includes and strongly recommends implementations of security features in its standards for Bluetooth LE devices, recent studies show that many Bluetooth devices do not follow the recommendations. Even worse, consumers are rarely informed about which security features are implemented by the products they use. The ultimate goal of this study is to provide a mechanism that informs users of the security features implemented in a Bluetooth LE connection they have initiated. To this end, we developed an app for Android phones that extracts the security features of a Bluetooth LE connection using the btsnoop log stored on the phone. We have verified the correctness of our app using the Frontline BPA Low Energy Analyzer.
Chung, Wingyan, Liu, Jinwei, Tang, Xinlin, Lai, Vincent S. K..
2018.
Extracting Textual Features of Financial Social Media to Detect Cognitive Hacking. 2018 IEEE International Conference on Intelligence and Security Informatics (ISI). :244–246.
Social media increasingly reflect and influence the behavior of humans and financial markets. Cognitive hacking leverages the influence of social media to spread deceptive information with an intent to gain abnormal profits illegally or to cause losses. Measuring the information content in financial social media can be useful for identifying these attacks. In this paper, we developed an approach to identifying social media features that correlate with abnormal returns of the stocks of companies vulnerable to being targets of cognitive hacking. To test the approach, we collected price data and 865,289 social media messages on four technology companies from July 2017 to June 2018, and extracted features that contributed to abnormal stock movements. Preliminary results show that terms that are simple, motivate actions, incite emotion, and use exaggeration rank high among the features of messages associated with abnormal price movements. We also provide selected messages to illustrate the use of these features in potential cognitive hacking attacks.
Staicu, C.-A., Torp, M. T., Schäfer, M., Møller, A., Pradel, M.
2020.
Extracting Taint Specifications for JavaScript Libraries. 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE). :198–209.
Modern JavaScript applications extensively depend on third-party libraries. Especially for the Node.js platform, vulnerabilities can have severe consequences to the security of applications, resulting in, e.g., cross-site scripting and command injection attacks. Existing static analysis tools that have been developed to automatically detect such issues are either too coarse-grained, looking only at package dependency structure while ignoring dataflow, or rely on manually written taint specifications for the most popular libraries to ensure analysis scalability. In this work, we propose a technique for automatically extracting taint specifications for JavaScript libraries, based on a dynamic analysis that leverages the existing test suites of the libraries and their available clients in the npm repository. Due to the dynamic nature of JavaScript, mapping observations from dynamic analysis to taint specifications that fit into a static analysis is non-trivial. Our main insight is that this challenge can be addressed by a combination of an access path mechanism that identifies entry and exit points, and the use of membranes around the libraries of interest. We show that our approach is effective at inferring useful taint specifications at scale. Our prototype tool automatically extracts 146 additional taint sinks and 7 840 propagation summaries spanning 1 393 npm modules. By integrating the extracted specifications into a commercial, state-of-the-art static analysis, 136 new alerts are produced, many of which correspond to likely security vulnerabilities. Moreover, many important specifications that were originally manually written are among the ones that our tool can now extract automatically.
Chawla, Nikhil, Singh, Arvind, Rahman, Nael Mizanur, Kar, Monodeep, Mukhopadhyay, Saibal.
2019.
Extracting Side-Channel Leakage from Round Unrolled Implementations of Lightweight Ciphers. 2019 IEEE International Symposium on Hardware Oriented Security and Trust (HOST). :31–40.
Energy efficiency and security are critical requirements for computing at edge nodes. Unrolled architectures for lightweight cryptographic algorithms have been shown to be energy-efficient, providing higher performance while meeting resource constraints. Hardware implementations of unrolled datapaths have also been shown to be resistant to side-channel analysis (SCA) attacks due to a reduction in signal-to-noise ratio (SNR) and an increased complexity of the leakage model. This paper demonstrates optimal leakage models and an improved CFA attack that make it feasible to extract first-order side-channel leakage from combinational logic in the initial rounds of unrolled datapaths. Several leakage models targeting initial rounds are explored, and a 1-bit Hamming weight (HW) based leakage model is shown to be an optimal choice. Additionally, multi-band narrow bandpass filtering techniques in conjunction with correlation frequency analysis (CFA) are demonstrated to improve SNR by up to 4×, attributed to the removal of the misalignment effect in combinational logic and to signal isolation. The improved CFA attack is performed on side-channel signatures acquired for 7-round unrolled SIMON datapaths implemented on a Sakura-G (Xilinx Spartan-6, 45 nm) FPGA platform, and a 24× reduction in minimum-traces-to-disclose (MTD) for revealing 80% of the key bits is demonstrated with respect to conventional time-domain correlation power analysis (CPA). Finally, the proposed method is successfully applied to a fully-unrolled datapath for PRINCE and a parallel round-based datapath for the Advanced Encryption Standard (AES) algorithm to demonstrate its general applicability.
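A minimal sketch of the correlation step underlying such CPA/CFA attacks with a Hamming-weight style leakage model is given below; the AES S-box target, trace layout, and key-byte loop are illustrative assumptions, not the paper's SIMON-specific attack or its frequency-domain processing.

# Illustrative correlation power analysis (CPA) core: correlate a hypothesized
# Hamming-weight leakage model against measured traces for every key-byte guess.
# The AES S-box target and array shapes are placeholders, not the paper's setup.
import numpy as np

def hamming_weight(x):
    return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def cpa_rank_key_byte(traces, plaintexts, sbox):
    """traces: (n_traces, n_samples); plaintexts: (n_traces,) one byte per trace."""
    scores = np.zeros(256)
    t_centered = traces - traces.mean(axis=0)
    for guess in range(256):
        model = hamming_weight(sbox[plaintexts ^ guess]).astype(float)
        m_centered = model - model.mean()
        # Pearson correlation of the leakage model against every time sample
        num = m_centered @ t_centered
        den = np.sqrt((m_centered ** 2).sum() * (t_centered ** 2).sum(axis=0))
        corr = np.abs(num / (den + 1e-12))
        scores[guess] = corr.max()
    return scores.argsort()[::-1]        # key guesses, best first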
Jaina, J., Suma, G. S., Dija, S., Thomas, K. L.
2015.
Extracting network connections from Windows 7 64-bit physical memory. 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC). :1–4.
Nowadays, memory forensics is gaining acceptance in cyber forensics investigation because malware authors and attackers choose RAM, or physical memory, for storing critical information instead of the hard disk. The volatile physical memory contains forensically relevant artifacts such as user credentials, chats, messages, and running processes with their details, like loaded DLLs, files, commands, and network connections. Memory forensics involves acquiring the memory dump from the suspect's machine and analyzing the acquired dump to find crucial evidence with the help of Windows' pre-defined kernel data structures. While retrieving different artifacts from these data structures, finding the network connections in a Windows 7 system's memory dump is a very challenging task, because the data structures that store network connections in earlier versions of Windows are not present in Windows 7. In this paper, a methodology is described for efficiently retrieving details of network-related activities from a Windows 7 x64 memory dump. This includes remote and local IP addresses and associated port information corresponding to each of the running processes, which can provide crucial information in cybercrime investigations.
Radu Vanciu, Marwan Abi-Antoun.
2013.
Extracting Dataflow Objects and other Flow Objects. Foundations of Object-Oriented Languages (FOOL) 2013.
Finding architectural flaws in object-oriented code requires a runtime architecture that shows multiple components of the same type that are used in different contexts. Previous work showed that a runtime architecture can be approximated by an abstract object graph that a static analysis extracts from code with Ownership Domain annotations. To find architectural flaws, it is not enough to reason about the presence or absence of communication. Additional work is needed to reason about the content of the communication. The contribution of this paper is a static analysis that extracts a hierarchical object graph with dataflow edges that refer to objects. The extraction analysis combines the aliasing precision provided by Ownership Domains with a domain-sensitive value flow analysis. We evaluate the extraction analysis on an open-source Android application and discuss examples of dataflow edges that refer to objects that are in actual domains or to flow objects that are in domains corresponding to unique annotations.
Zhang, Ce, Shin, Jaeho, Ré, Christopher, Cafarella, Michael, Niu, Feng.
2016.
Extracting Databases from Dark Data with DeepDive. Proceedings of the 2016 International Conference on Management of Data. :847–859.
DeepDive is a system for extracting relational databases from dark data: the mass of text, tables, and images that are widely collected and stored but which cannot be exploited by standard relational tools. If the information in dark data — scientific papers, Web classified ads, customer service notes, and so on — were instead in a relational database, it would give analysts access to a massive and highly-valuable new set of "big data" to exploit. DeepDive is distinctive when compared to previous information extraction systems in its ability to obtain very high precision and recall at reasonable engineering cost; in a number of applications, we have used DeepDive to create databases with accuracy that meets that of human annotators. To date we have successfully deployed DeepDive to create data-centric applications for insurance, materials science, genomics, paleontology, law enforcement, and others. The data unlocked by DeepDive represents a massive opportunity for industry, government, and scientific researchers. DeepDive is enabled by an unusual design that combines large-scale probabilistic inference with a novel developer interaction cycle. This design is enabled by several core innovations around probabilistic training and inference.
Deliu, I., Leichter, C., Franke, K.
2017.
Extracting Cyber Threat Intelligence from Hacker Forums: Support Vector Machines versus Convolutional Neural Networks. 2017 IEEE International Conference on Big Data (Big Data). :3648–3656.
Hacker forums and other social platforms may contain vital information about cyber security threats. However, using manual analysis to extract relevant threat information from these sources is a time-consuming and error-prone process that requires a significant allocation of resources. In this paper, we explore the potential of machine learning methods to rapidly sift through hacker forums for relevant threat intelligence. Using text data from a real hacker forum, we compared the text classification performance of Convolutional Neural Network methods against more traditional machine learning approaches. We found that traditional machine learning methods, such as Support Vector Machines, can yield high levels of performance that are on par with Convolutional Neural Network algorithms.
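A small sketch of the traditional baseline compared in this paper is shown below: TF-IDF features fed to a linear SVM. The example posts and labels are placeholders; the paper uses a real hacker-forum corpus.

# Illustrative TF-IDF + linear SVM pipeline for flagging threat-relevant forum posts.
# The toy posts and labels are hypothetical stand-ins for a real forum dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

posts = ["selling fresh exploit kit for cms", "anyone recommend a good vpn?",
         "botnet source code leaked, link inside", "favorite programming language?"]
labels = [1, 0, 1, 0]   # 1 = threat-relevant, 0 = irrelevant (hypothetical)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(posts, labels)
print(clf.predict(["new ransomware builder for sale"]))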
Shurui Zhou, Jafar Al-Kofahi, Tien Nguyen, Christian Kästner, Sarah Nadi.
2015.
Extracting configuration knowledge from build files with symbolic analysis. RELENG '15 Proceedings of the Third International Workshop on Release Engineering.
Build systems contain a lot of configuration knowledge about a software system, such as under which conditions specific files are compiled. Extracting such configuration knowledge is important for many tools analyzing highly-configurable systems, but very challenging due to the complex nature of build systems. We design an approach, based on SYMake, that symbolically evaluates Makefiles and extracts configuration knowledge in terms of file presence conditions and conditional parameters. We implement an initial prototype and demonstrate feasibility on small examples.
Cámara, Javier, Wohlrab, Rebekka, Garlan, David, Schmerl, Bradley.
2022.
ExTrA: Explaining architectural design tradeoff spaces via dimensionality reduction. Journal of Systems and Software. 198
In software design, guaranteeing the correctness of run-time system behavior while achieving an acceptable balance among multiple quality attributes remains a challenging problem. Moreover, providing guarantees about the satisfaction of those requirements when systems are subject to uncertain environments is even more challenging. While recent developments in architectural analysis techniques can assist architects in exploring the satisfaction of quantitative guarantees across the design space, existing approaches are still limited because they do not explicitly link design decisions to satisfaction of quality requirements. Furthermore, the amount of information they yield can be overwhelming to a human designer, making it difficult to see the forest for the trees. In this paper we present ExTrA (Explaining Tradeoffs of software Architecture design spaces), an approach to analyzing architectural design spaces that addresses these limitations and provides a basis for explaining design tradeoffs. Our approach employs machine learning techniques used in data-analysis pipelines, namely Principal Component Analysis (PCA), a dimensionality reduction technique, and Decision Tree Learning (DTL), to enable architects to understand how design decisions contribute to the satisfaction of extra-functional properties across the design space. Our results show the feasibility of the approach in two case studies and evidence that combining complementary techniques like PCA and DTL is a viable approach to facilitate comprehension of tradeoffs in poorly-understood design spaces.
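A compact illustration of the kind of analysis described above pairs PCA over sampled design-space configurations with a decision tree relating design decisions to a quality-attribute score. The data here is synthetic and the column meanings are assumptions for the example, not ExTrA's actual inputs.

# Illustrative pairing of PCA and decision-tree learning over a sampled design space.
# Columns of X are design decisions; y is a quality-attribute score. Data is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(200, 5)).astype(float)           # 5 decisions, 4 options each
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.5, 200)   # synthetic quality score

pca = PCA(n_components=2).fit(X)
print("variance explained:", pca.explained_variance_ratio_)
print("decision loadings on PC1:", pca.components_[0])        # which decisions dominate

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=[f"decision_{i}" for i in range(5)]))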
Yexing Li, Xinye Cai, Zhun Fan, Qingfu Zhang.
2014.
An external archive guided multiobjective evolutionary approach based on decomposition for continuous optimization. Evolutionary Computation (CEC), 2014 IEEE Congress on. :1124–1130.
In this paper, we propose a decomposition-based multiobjective evolutionary algorithm that extracts information from an external archive to guide the evolutionary search for continuous optimization problems. The proposed algorithm uses a mechanism that identifies promising regions (subproblems) by learning information from the external archive to guide the evolutionary search process. In order to demonstrate the performance of the algorithm, we conduct experiments to compare it with other decomposition-based approaches. The results validate that our proposed algorithm is very competitive.
Loganathan, K., Saranya, D.
2021.
An Extensive Web Security Through Cloud Based Double Layer Password Encryption (DLPE) Algorithm for Secured Management Systems. 2021 International Conference on System, Computation, Automation and Networking (ICSCAN). :1–6.
Nowadays, cloud-based technology has expanded in line with human needs around the world. Many technologies have been developed that serve people through cloud-based security and efficient resource allocation. Cloud-based technology is an essential factor for resources such as hardware and software and for effective resource utilization. Security-enabled applications play a vital role in cloud-based web security through secured passwords. The violation of data through unauthorized user access concerns many web developers and application owners. Web security enables a cloud-based password management system that handles data storage and web password access through a cloud framework. Web security, end-to-end passwords, and all browser-based passwords belong to the analysis of web security. The aim is to enhance system security so that sensitive data are maintained with security and privacy. In this paper, password management via cloud-based web security is achieved with an efficient Double Layer Password Encryption (DLPE) algorithm that enables a secured password management system. Text-based passwords continue to be the most popular method of online user identification. They safeguard internet accounts with important assets against harmful attempts on passwords. The security of passwords depends on creating strong passwords and keeping them from being stolen by intruders. The proposed DLPE algorithm treats the double-layer encryption system as an effective security measure. When the data user accesses the user login, an OTP is generated via mail/SMS, and the original message is encrypted using public key generation. The text data is then doubly encrypted through the cloud framework. The private key is used to decipher the ciphertext. If the OTP matches, the text is decrypted to recover the data. With double encryption, data flaws, malicious attacks, and application hacking are reduced, and the strong-password-enabled double-layer encryption achieves secured data access without malicious attackers. Password management ensures data integrity and confidentiality. The ability to manage a distributed-systems policy such as the Double Layer Password Encryption technique enables password verification for data that must be highly secured.
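As a hedged illustration only (not the authors' DLPE construction), two independent symmetric layers can be composed as below using the Python cryptography library; the OTP check and the public-key exchange described in the abstract are out of scope for this sketch.

# Illustrative "double layer" encryption: wrap the plaintext in two independent
# symmetric layers. This is a generic sketch, not the paper's DLPE algorithm,
# and it omits the OTP step and key distribution described in the abstract.
from cryptography.fernet import Fernet

inner_key, outer_key = Fernet.generate_key(), Fernet.generate_key()
inner, outer = Fernet(inner_key), Fernet(outer_key)

def double_encrypt(plaintext: bytes) -> bytes:
    return outer.encrypt(inner.encrypt(plaintext))

def double_decrypt(token: bytes) -> bytes:
    return inner.decrypt(outer.decrypt(token))

token = double_encrypt(b"account password: hunter2")
assert double_decrypt(token) == b"account password: hunter2"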
Legunsen, Owolabi, Hariri, Farah, Shi, August, Lu, Yafeng, Zhang, Lingming, Marinov, Darko.
2016.
An Extensive Study of Static Regression Test Selection in Modern Software Evolution. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. :583–594.
Regression test selection (RTS) aims to reduce regression testing time by only re-running the tests affected by code changes. Prior research on RTS can be broadly split into dynamic and static techniques. A recently developed dynamic RTS technique called Ekstazi is gaining some adoption in practice, and its evaluation shows that selecting tests at a coarser, class-level granularity provides better results than selecting tests at a finer, method-level granularity. As dynamic RTS is gaining adoption, it is timely to also evaluate static RTS techniques, some of which were proposed over three decades ago but not extensively evaluated on modern software projects. This paper presents the first extensive study that evaluates the performance benefits of static RTS techniques and their safety; a technique is safe if it selects to run all tests that may be affected by code changes. We implemented two static RTS techniques, one class-level and one method-level, and compare several variants of these techniques. We also compare these static RTS techniques against Ekstazi, a state-of-the-art, class-level, dynamic RTS technique. The experimental results on 985 revisions of 22 open-source projects show that the class-level static RTS technique is comparable to Ekstazi, with similar performance benefits, but at the risk of being unsafe sometimes. In contrast, the method-level static RTS technique performs rather poorly.
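A toy sketch of the class-level static RTS idea is shown below: build a class-level dependency graph, then select every test class from which some changed class is reachable. The dependency map and change set are hypothetical inputs; real tools recover them from source or bytecode.

# Toy class-level static regression test selection: a test is selected if any
# changed class is reachable from it in the class dependency graph.
# deps maps a class to the classes it statically references (hypothetical input).
from collections import deque

def affected_tests(deps, test_classes, changed):
    selected = set()
    for test in test_classes:
        seen, queue = {test}, deque([test])
        while queue:
            cls = queue.popleft()
            if cls in changed:
                selected.add(test)
                break
            for dep in deps.get(cls, ()):
                if dep not in seen:
                    seen.add(dep)
                    queue.append(dep)
    return selected

deps = {"FooTest": ["Foo"], "BarTest": ["Bar"], "Foo": ["Util"], "Bar": []}
print(affected_tests(deps, {"FooTest", "BarTest"}, changed={"Util"}))  # {'FooTest'}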
Wesemeyer, Stephan, Boureanu, Ioana, Smith, Zach, Treharne, Helen.
2020.
Extensive Security Verification of the LoRaWAN Key-Establishment: Insecurities & Patches. 2020 IEEE European Symposium on Security and Privacy (EuroS&P). :425–444.
LoRaWAN (Low-power Wide-Area Networks) is the main specification for application-level IoT (Internet of Things). The current version, published in October 2017, is LoRaWAN 1.1, with its 1.0 precursor still being the main specification supported by commercial devices such as PyCom LoRa transceivers. Prior (semi)-formal investigations into the security of the LoRaWAN protocols are scarce, especially for LoRaWAN 1.1. Moreover, amongst these few, the current encodings [4], [9] of LoRaWAN into verification tools unfortunately rely on much-simplified versions of the LoRaWAN protocols, undermining the relevance of the results in practice. In this paper, we fill in some of these gaps. Whilst we briefly discuss the most recent cryptographic-orientated works [5] that looked at LoRaWAN 1.1, our true focus is on producing formal analyses of the security and correctness of LoRaWAN, mechanised inside automated tools. To this end, we use the state-of-the-art prover, Tamarin. Importantly, our Tamarin models are a faithful and precise rendering of the LoRaWAN specifications. For example, we model the bespoke nonce-generation mechanisms newly introduced in LoRaWAN 1.1, as well as the “classical” but short-domain nonces in LoRaWAN 1.0 and make recommendations regarding these. Whilst we include small parts on device-commissioning and application-level traffic, we primarily scrutinise the Join Procedure of LoRaWAN, and focus on version 1.1 of the specification, but also include an analysis of LoRaWAN 1.0. To this end, we consider three increasingly strong threat models, resting on a Dolev-Yao attacker acting modulo different requirements made on various channels (e.g., secure/insecure) and the level of trust placed on entities (e.g., honest/corruptible network servers). Importantly, one of these threat models is exactly in line with the LoRaWAN specification, yet it unfortunately still leads to attacks. In response to the exhibited attacks, we propose a minimal patch of the LoRaWAN 1.1 Join Procedure, which is as backwards-compatible as possible with the current version. We analyse and prove this patch secure in the strongest threat model mentioned above. This work has been responsibly disclosed to the LoRa Alliance, and we are liaising with the Security Working Group of the LoRa Alliance, in order to improve the clarity of the LoRaWAN 1.1 specifications in light of our findings, but also by using formal analysis as part of a feedback-loop of future and current specification writing.