Biblio

Found 475 results

Filters: First Letter Of Last Name is J
J
J, Goutham Kumar, S, Gowri, Rajendran, Surendran, Vimali, J.S., Jabez, J., Srininvasulu, Senduru.  2021.  Identification of Cyber Threats and Parsing of Data. 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI). :556–564.
One of the significant difficulties in cybersecurity is providing an automated and effective cyber threat detection method. This paper presents an AI technique for cyber threat detection based on artificial neural networks. The proposed technique converts a large number of collected security events into individual event profiles and uses a deep learning-based detection method for enhanced cyber threat identification. This research work develops an AI-SIEM system based on a combination of event profiling for data preprocessing and different artificial neural network methods, including FCNN, CNN, and LSTM. The system focuses on discriminating between true positive and false positive alerts, thus helping security analysts respond to cyber threats rapidly. All experiments in this study were performed by the authors using two benchmark datasets (NSL-KDD and CICIDS2017) and two datasets collected in the real world. To compare performance with existing methods, experiments were also carried out using five conventional machine learning methods (SVM, k-NN, RF, NB, and DT). The experimental results of this study confirm that the proposed methods can be used as learning-based models for network intrusion detection and show that, even when used in the real world, their performance outperforms that of the conventional machine learning methods.
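
A minimal sketch of the conventional-baseline comparison the abstract describes, assuming preprocessed NSL-KDD-style event-profile vectors; the deep models (FCNN/CNN/LSTM) and the event-profiling step are not reproduced here, and the placeholder data is an assumption:

```python
# Hedged sketch: the five conventional ML baselines (SVM, k-NN, RF, NB, DT)
# named in the abstract, run on stand-in data. X, y are placeholders for
# preprocessed NSL-KDD-style event-profile features and labels.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=2000, n_features=40, random_state=0)  # placeholder data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

baselines = {
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(random_state=0),
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(random_state=0),
}
for name, clf in baselines.items():
    clf.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, clf.predict(X_te), digits=3))
```
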
J. Brynielsson, R. Sharma.  2015.  "Detectability of low-rate HTTP server DoS attacks using spectral analysis". 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). :954-961.

Denial-of-Service (DoS) attacks pose a threat to any service provider on the internet. While traditional DoS flooding attacks require the attacker to control at least as many resources as the service provider in order to be effective, so-called low-rate DoS attacks can exploit weaknesses in careless design to effectively deny a service using minimal amounts of network traffic. This paper investigates one such weakness found in version 2.2 of the popular Apache HTTP Server software. The weakness concerns how the server handles the persistent connection feature in HTTP 1.1. An attack simulator exploiting this weakness has been developed and shown to be effective. The attack was then studied with spectral analysis for the purpose of examining how well the attack could be detected. Similar to other papers on spectral analysis of low-rate DoS attacks, the results show that disproportionate amounts of energy in the lower frequencies can be detected when the attack is present. However, by randomizing the attack pattern, an attacker can efficiently reduce this disproportion to a degree where it might be impossible to correctly identify an attack in a real-world scenario.
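
A minimal sketch of the detection idea the paper studies: estimate the power spectral density of a traffic time series and check how much energy sits in the low frequencies. The sampling rate, window, and 10% cutoff below are illustrative assumptions, not the paper's exact parameters:

```python
# Hedged sketch: low-rate DoS leaves disproportionate energy in the low end
# of the traffic spectrum. periodogram() removes the mean by default, so the
# ratio compares fluctuation energy only.
import numpy as np
from scipy import signal

def low_freq_energy_ratio(pkt_counts, fs=10.0, cutoff_frac=0.1):
    """pkt_counts: packets-per-interval time series sampled at fs Hz."""
    freqs, psd = signal.periodogram(pkt_counts, fs=fs)
    cutoff = cutoff_frac * (fs / 2)          # lowest 10% of the spectrum
    return psd[freqs <= cutoff].sum() / psd.sum()

rng = np.random.default_rng(0)
normal = rng.poisson(50, 600).astype(float)
t = np.arange(600) / 10.0                    # 60 s at 10 Hz
attack = normal + 40 * (signal.square(2 * np.pi * 0.2 * t) > 0)  # slow on/off bursts
print("normal:", low_freq_energy_ratio(normal))   # roughly flat spectrum, ~0.1
print("attack:", low_freq_energy_ratio(attack))   # much larger ratio
```
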

J. Chen, C. S. Gates, Z. Jorgensen, W. Yang.  2015.  Effective risk communication for end users: A multi-granularity approach. Women in CyberSecurity (WiCyS) Conference.

We proposed a multi-granularity approach for presenting risk information about mobile apps to end users. Within this approach, the highest level is a summary risk index, which allows quick and easy comparison among multiple apps that provide similar functionality. We have developed several types of risk index, such as text saying “High Risk” or a number of filled circles (Gates, Chen, Li, & Proctor, 2014). Through both online and in-lab studies, we found that when presented with an interface showing the summary risk index, participants made more secure app-selection decisions. Subsequent research showed that the framing of the summary risk information affects users’ app-selection decisions, and that positive framing in terms of safety has an advantage over negative framing in terms of risk (Chen, Gates, Li, & Proctor, 2014).
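
A minimal sketch of how a filled-circle summary index could be rendered, in the spirit of Gates et al. (2014); the normalized 0-1 risk score and the five-circle scale are illustrative assumptions:

```python
# Hedged sketch: map a normalized risk score to a "filled circles" display
# for quick cross-app comparison. Scale and scoring are assumptions.
def summary_risk_index(risk_score, scale=5):
    """Map a risk score in [0, 1] to a filled-circle string."""
    filled = round(risk_score * scale)
    return "●" * filled + "○" * (scale - filled)

print(summary_risk_index(0.72))  # ●●●●○ -> easy to compare across apps
```
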

In addition to the summary risk index, some users may also want more detailed risk information about the apps. We have been developing an intermediate-level risk display that presents only the major risk categories. As a first step, we conducted user studies to have expert users identify the major risk categories (personal privacy, monetary loss, and device stability) and to validate the categories on typical users (Jorgensen, Chen, Gates, Li, Proctor, & Yu, 2015). In a subsequent study, we are developing a graphical display that incorporates these risk categories into the current app interface and testing its effectiveness.

This multi-granularity approach can be applied to risk communication in other contexts. For example, in the context of communicating the potential risk associated with phishing attacks, an effective warning should be designed to include both higher-level and lower-level risk information: a higher-level index indicating how likely it is that an email message or website is a phishing attempt should be presented to users, informing them about the potential risk in an easy-to-comprehend manner; a more detailed explanation should also be available for users who want to know more about the warning and the index. We have completed a pilot study in this area and are initiating a full study to investigate the effectiveness of such an interface in preventing users from being phished successfully.

J. Choi, C. Choi, H. M. Lynn, P. Kim.  2015.  "Ontology Based APT Attack Behavior Analysis in Cloud Computing". 2015 10th International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA). :375-379.

Recently, the economic damage and leakage of personal and confidential information caused by APT attacks have become a serious social problem, and a great deal of research has been done to solve it. APT attacks threaten their targets by combining traditional hacking techniques with sophisticated attack techniques, such as exploiting zero-day vulnerabilities, in order to evade state-of-the-art detection and security techniques and increase the success rate of the attack. In this paper, we design an ontology of the APT attack behaviors that malicious code exhibits while operating on a target system, and we define inference rules over this ontology so that intelligent APT attack behavior can be inferred; on this basis, we propose a method by which such malicious attack behavior can be detected.
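
A minimal sketch of rule-based inference over observed behaviors, in the spirit of the ontology-plus-inference-rules approach the abstract describes; the behavior names and rules are illustrative assumptions, not the authors' ontology:

```python
# Hedged sketch: forward-chaining inference over observed malware behaviors.
# Facts and rules are toy stand-ins for an APT behavior ontology.
OBSERVED = {"spear_phishing_attachment", "c2_beaconing", "lateral_movement"}

RULES = [
    # (required behaviors, inferred conclusion)
    ({"spear_phishing_attachment", "c2_beaconing"}, "apt_initial_compromise"),
    ({"apt_initial_compromise", "lateral_movement"}, "apt_attack_in_progress"),
]

def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                      # chain rules to a fixed point
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

print("apt_attack_in_progress" in infer(OBSERVED, RULES))  # True
```
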

J. J. Li, P. Abbate, B. Vega.  2015.  "Detecting Security Threats Using Mobile Devices". 2015 IEEE International Conference on Software Quality, Reliability and Security - Companion. :40-45.

In our previous work [1], we presented a study of using performance escalation to automatically detect Distributed Denial of Service (DDoS) attacks. We propose to enhance this security threat detection work by using mobile phones as detectors that identify outliers of normal traffic patterns as threats. The mobile solution makes detection portable to any service. This paper also shows that the same detection method works for advanced persistent threats.
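
A minimal sketch of the core idea, flagging outliers of a normal traffic pattern; the rolling-mean z-score and the threshold of 3 are illustrative assumptions, not the paper's method:

```python
# Hedged sketch: flag samples that spike far above the recent baseline.
import numpy as np

def traffic_outliers(reqs_per_sec, window=60, z_thresh=3.0):
    x = np.asarray(reqs_per_sec, dtype=float)
    flags = []
    for i in range(window, len(x)):
        base = x[i - window:i]
        mu, sigma = base.mean(), base.std() + 1e-9
        flags.append((x[i] - mu) / sigma > z_thresh)   # abnormal spike?
    return flags

rng = np.random.default_rng(1)
series = np.concatenate([rng.poisson(20, 300), rng.poisson(200, 30)])  # flood at the end
print(sum(traffic_outliers(series)))  # most of the final 30 samples flagged
```
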

J. Kim, I. Moon, K. Lee, S. C. Suh, I. Kim.  2015.  "Scalable Security Event Aggregation for Situation Analysis". 2015 IEEE First International Conference on Big Data Computing Service and Applications. :14-23.

Cyber-attacks have evolved to become more sophisticated, employing combinations of attack methodologies with greater impact. For instance, Advanced Persistent Threats (APTs) employ a set of stealthy hacking processes running over a long period of time, making them much harder to detect. With this trend, the importance of big-data security analytics has attracted greater attention, since identifying such recent attacks requires large-scale data processing and analysis. In this paper, we present SEAS-MR (Security Event Aggregation System over MapReduce), which facilitates scalable security event aggregation for comprehensive situation analysis. The system provides three core functions: (i) periodic aggregation, (ii) on-demand aggregation, and (iii) query support for effective analysis. We describe our design and implementation of the system over MapReduce and high-level query languages, and report experimental results collected on a Hadoop cluster under extensive settings for performance evaluation and design impacts.
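
A minimal sketch of the map/reduce shape of periodic security-event aggregation; plain Python stands in for Hadoop MapReduce, and the event fields and 5-minute window are assumptions, not SEAS-MR's schema:

```python
# Hedged sketch: count security events per (time window, source, signature),
# the kind of key-grouped aggregation a MapReduce job would perform.
from collections import defaultdict

def map_event(event):
    window = event["ts"] - event["ts"] % 300          # 5-minute bucket
    yield (window, event["src_ip"], event["sig"]), 1

def reduce_counts(pairs):
    counts = defaultdict(int)
    for key, n in pairs:
        counts[key] += n                              # shuffle + sum per key
    return counts

events = [
    {"ts": 1000, "src_ip": "10.0.0.5", "sig": "port_scan"},
    {"ts": 1100, "src_ip": "10.0.0.5", "sig": "port_scan"},
    {"ts": 1400, "src_ip": "10.0.0.7", "sig": "sql_injection"},
]
pairs = [kv for e in events for kv in map_event(e)]
print(dict(reduce_counts(pairs)))
```
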

J. Knight, J. Xiang, Kevin Sullivan.  2015.  Real-World Types and Their Applications. The International Conference on Computer Safety, Reliability, and Security.
J. Pan, R. Jain, S. Paul.  2015.  "Enhanced Evaluation of the Interdomain Routing System for Balanced Routing Scalability and New Internet Architecture Deployments". IEEE Systems Journal. 9:892-903.

The Internet is facing many challenges that cannot be solved easily through ad hoc patches. To address these challenges, many research programs and projects have been initiated and many solutions are being proposed. However, before we have a new architecture that Internet service providers (ISPs) are motivated to deploy and evolve, we need to address two issues: 1) know the current status better by appropriately evaluating the existing Internet; and 2) find out how various incentives and strategies will affect the deployment of the new architecture. For the first issue, we define a series of quantitative metrics that can potentially unify results from several measurement projects using different approaches and can be an intrinsic part of future Internet architecture (FIA) for monitoring and evaluation. Using these metrics, we systematically evaluate the current interdomain routing system and reveal many “autonomous-system-level” observations and key lessons for new Internet architectures. In particular, the evaluation results reveal the imbalance underlying the interdomain routing system and how the deployment of FIAs can benefit from these findings. With these findings, for the second issue, appropriate deployment strategies for future architecture changes can be formed with balanced incentives for both customers and ISPs. The results can be used to shape the short- and long-term goals for new architectures that are simple evolutions of the current Internet (so-called dirty-slate architectures) and, to some extent, for clean-slate architectures.

J. Ponniah, Y. C. Hu, P. R. Kumar.  2015.  "A clean slate design for secure wireless ad-hoc networks - Part 2: Open unsynchronized networks". 2015 13th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt). :183-190.

We build upon the clean-slate, holistic approach to the design of secure protocols for wireless ad-hoc networks proposed in part one. We consider the case when the nodes are not synchronized, but instead have local clocks that are relatively affine. In addition, the network is open in that nodes can enter at arbitrary times. To account for this new behavior, we make substantial revisions to the protocol in part one. We define a game between protocols for open, unsynchronized nodes and the strategies of adversarial nodes. We show that the same guarantees as in part one also apply in this game: the protocol not only achieves the max-min utility, but the min-max utility as well. That is, there is a saddle point in the game, and furthermore, the adversarial nodes are effectively limited to either jamming or conforming with the protocol.
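
The max-min/min-max guarantee is exactly the statement that the game has a saddle point; a sketch of that condition in standard game-theoretic notation (the notation is assumed here, not taken from the paper):

```latex
% Saddle-point condition implied by the abstract: \pi ranges over protocols
% of the legitimate (open, unsynchronized) nodes, \sigma over adversarial
% strategies, and U denotes network utility.
\max_{\pi} \min_{\sigma} U(\pi,\sigma) \;=\; \min_{\sigma} \max_{\pi} U(\pi,\sigma)
```
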

J. Qadir, O. Hasan.  2015.  "Applying Formal Methods to Networking: Theory, Techniques, and Applications". IEEE Communications Surveys Tutorials. 17:256-291.

Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, particularly for the control and management planes, thus requiring a new protocol built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and to an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design - in particular, the software-defined networking (SDN) paradigm - offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and present a survey of its applications to networking.

J. Shen, S. Ji, J. Shen, Z. Fu, J. Wang.  2015.  "Auditing Protocols for Cloud Storage: A Survey". 2015 First International Conference on Computational Intelligence Theory, Systems and Applications (CCITSA). :222-227.

So far, cloud storage has been accepted by an increasing number of people and is no longer a fresh notion. It brings cloud users many conveniences, such as relief from local storage and location-independent access. Nevertheless, the correctness and completeness as well as the privacy of outsourced data are what worry cloud users. As a result, most people are unwilling to store data in the cloud, in case sensitive information concerning something important is disclosed. Only when people feel worry-free can they accept cloud storage more easily. Certainly, many experts have taken this problem into consideration and tried to solve it. In this paper, we survey the solutions to the problems concerning auditing in cloud computing and give a comparison of them. The methods and performance as well as the pros and cons of the state-of-the-art auditing protocols are discussed.

J. Vukalović, D. Delija.  2015.  "Advanced Persistent Threats - detection and defense". 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). :1324-1330.

The term “Advanced Persistent Threat” refers to a well-organized, malicious group of people who launch stealthy attacks against computer systems of specific targets, such as governments, companies, or the military. The attacks themselves are long-lasting, difficult to expose, and often use very advanced hacking techniques. Since they are advanced in nature, prolonged, and persistent, the organizations behind them have to possess a high level of knowledge, advanced tools, and competent personnel to execute them. The attacks are usually performed in several phases - reconnaissance, preparation, execution, gaining access, information gathering, and connection maintenance. In each of the phases, attacks can be detected with different probabilities. There are several ways to increase the level of security of an organization in order to counter these incidents. First and foremost, it is necessary to educate users and system administrators on different attack vectors and provide them with the knowledge and protection needed so that the attacks are unsuccessful. Second, strict security policies must be implemented, including access control and restrictions (to information or the network), protecting information by encrypting it, and installing the latest security upgrades. Finally, it is possible to use software IDS tools to detect such anomalies (e.g., Snort, OSSEC, Sguil).

J. Zhang.  2015.  "Semantic-Based Searchable Encryption in Cloud: Issues and Challenges". 2015 First International Conference on Computational Intelligence Theory, Systems and Applications (CCITSA). :163-165.

Searchable encryption is a newly developing information security technique that enables users to search over encrypted data through keywords without having to decrypt it first. In the last decade, many researchers have engaged in the field of searchable encryption and have proposed a series of efficient search schemes over encrypted cloud data. It is time to survey this field and draw up a comprehensive framework by analyzing the individual contributions. This paper focuses on searchable encryption schemes in the cloud. We first summarize the general model and threat model of searchable encryption schemes, and then present the privacy-preserving issues in these schemes. In addition, we compare the efficiency and security of semantic search and preferred search in detail. Finally, some open issues and research challenges for the future are proposed.
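
A minimal sketch of the general model the survey summarizes: the trapdoor is a keyed hash of the keyword, and the server matches trapdoors against an encrypted index without seeing plaintext. Key handling and the toy documents are illustrative assumptions:

```python
# Hedged sketch: bare-bones searchable symmetric encryption index.
# Real schemes add ciphertext storage, result padding, and access-pattern
# protections not shown here.
import hashlib
import hmac

def trapdoor(key, keyword):
    return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

def build_index(key, docs):
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(trapdoor(key, w), set()).add(doc_id)
    return index                       # uploaded to the cloud alongside ciphertexts

def search(index, key, keyword):
    return index.get(trapdoor(key, keyword), set())

k = b"user-secret-key"
idx = build_index(k, {"doc1": {"invoice", "tax"}, "doc2": {"tax", "travel"}})
print(search(idx, k, "tax"))           # {'doc1', 'doc2'}; server never sees "tax"
```
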

J.Y.V., Manoj Kumar, Swain, Ayas Kanta, Kumar, Sudeendra, Sahoo, Sauvagya Ranjan, Mahapatra, Kamalakanta.  2018.  Run Time Mitigation of Performance Degradation Hardware Trojan Attacks in Network on Chip. 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). :738—743.
Globalization of semiconductor design and manufacturing has led to several hardware security issues. The problem of Hardware Trojans (HT) is one such security issue widely discussed in industry and academia. An adversarial design engineer can insert an HT to leak confidential data, cause a denial-of-service attack, or pursue any other intention specific to the design. HTs in cryptographic modules and processors are widely discussed. HTs in Multi-Processor Systems on Chip (MPSoC) are also catastrophic, as most military applications use MPSoCs. Networks on Chip (NoC) are the standard communication infrastructure in modern-day MPSoCs. In this paper, we present a novel hardware Trojan which is capable of inducing performance degradation and denial-of-service attacks in a NoC. The presence of the hardware Trojan in a NoC can compromise the crucial details of packets communicated through the NoC. The proposed Trojan is triggered by a particular complex bit pattern in input messages and tries to mislead packets away from their destined addresses. A mitigation method based on a bit-shuffling mechanism inside the router, with a key directly extracted from the input message, is proposed to limit the adverse effects of the Trojan. The performance of a 4×4 NoC is evaluated under uniform traffic with the proposed Trojan and mitigation method. Simulation results show that the proposed mitigation scheme is useful in limiting the malicious effect of the hardware Trojan.
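
A minimal sketch of the bit-shuffling idea: permute the bits of each flit with a permutation seeded by a key drawn from the message itself, so a pattern-triggered Trojan never sees its trigger. The key-extraction rule and 32-bit flit width are illustrative assumptions, not the paper's hardware design:

```python
# Hedged sketch: key-dependent bit permutation of a flit, invertible at the
# receiving side with the same key.
import random

def shuffle_bits(flit, key, width=32, invert=False):
    bits = [(flit >> i) & 1 for i in range(width)]
    perm = list(range(width))
    random.Random(key).shuffle(perm)            # key-dependent permutation
    out = [0] * width
    for i, p in enumerate(perm):
        if invert:
            out[p] = bits[i]                    # undo the permutation
        else:
            out[i] = bits[p]                    # apply the permutation
    return sum(b << i for i, b in enumerate(out))

flit, key = 0xDEADBEEF, 0b1011                  # key assumed drawn from the header
scrambled = shuffle_bits(flit, key)
restored = shuffle_bits(scrambled, key, invert=True)
assert restored == flit                         # round-trips exactly
```
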
Jaafar, Fehmi, Avellaneda, Florent, Alikacem, El-Hackemi.  2020.  Demystifying the Cyber Attribution: An Exploratory Study. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :35–40.
Current cyber attribution approaches propose using a variety of datasets and analytical techniques to distill information that is useful for identifying cyber attackers. However, practitioners and researchers in cyber attribution face several technical and regulatory challenges. In this paper, we describe the main challenges of cyber attribution and present the state of the art of the approaches used to face these challenges. Then, we present an exploratory study on performing cyber attack attribution based on pattern recognition from real data. In our study, we use attack pattern discovery and identification based on real data collection and analysis.
Jaatun, M. G., Moe, M. E. Gaup, Nordbø, P. E..  2018.  Cyber Security Considerations for Self-healing Smart Grid Networks. 2018 International Conference on Cyber Security and Protection of Digital Services (Cyber Security). :1–7.
Fault Location, Isolation and System Restoration (FLISR) mechanisms allow for rapid restoration of power to customers that are not directly implicated by distribution network failures. However, depending on where the logic for the FLISR system is located, deployment may have security implications for the distribution network. This paper discusses alternative FLISR placements in terms of cyber security considerations, concluding that there is a case for both local and centralized FLISR solutions.
Jabeen, Gul, Ping, Luo.  2019.  A Unified Measurable Software Trustworthy Model Based on Vulnerability Loss Speed Index. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :18—25.

Trust is becoming increasingly important in the software domain. Because it is a complex, composite concept, people face great challenges in measuring it, especially with today's dynamic and constantly changing internet technology. In addition, measuring software trustworthiness correctly and effectively plays a significant role in gaining users' trust when choosing among different software products. In the context of security, trust has previously been measured based on vulnerability occurrence times, to predict either the total number of vulnerabilities or their future occurrence times. In this study, we propose a new unified index, called the "loss speed index," that integrates the most important variables of software security, such as vulnerability occurrence time, number, and severity loss, to evaluate the overall trustworthiness of software. Based on this new definition, a new model called the software trustworthy security growth model (STSGM) is proposed. This paper also aims to fill the gap by addressing the severity of vulnerabilities: we propose a vulnerability severity prediction model, and its results are further evaluated by STSGM to estimate the future loss speed index. Our work has several features: (1) it can be used to predict vulnerability severity/type in the future; (2) unlike traditional evaluation methods such as expert scoring, our model uses historical data to predict the future loss speed of software; (3) the loss metric value is used to evaluate the risk associated with different software, which has a direct impact on software trustworthiness. Experiments were performed on real software vulnerability datasets, and the results were analyzed to check the correctness and effectiveness of the proposed model.
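
One plausible reading of a "loss speed" style metric - cumulative severity loss accrued per unit time over the disclosure history. The paper's exact STSGM formulation is not given in the abstract; this formula is an assumption for illustration only:

```python
# Hedged sketch: severity loss per day over a software release's
# vulnerability history. Formula and toy data are assumptions.
def loss_speed_index(vulns):
    """vulns: list of (days_since_release, cvss_severity) tuples."""
    if not vulns:
        return 0.0
    total_loss = sum(sev for _, sev in vulns)
    horizon = max(day for day, _ in vulns)
    return total_loss / horizon          # severity loss per day

history = [(30, 7.5), (90, 9.8), (150, 5.3)]   # illustrative disclosures
print(loss_speed_index(history))                # higher -> faster loss, less trust
```
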

Jabrayilzade, Elgun, Evtikhiev, Mikhail, Tüzün, Eray, Kovalenko, Vladimir.  2022.  Bus Factor in Practice. 2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). :97—106.

The bus factor is a metric that identifies how resilient a project is to sudden engineer turnover. It states the minimal number of engineers that would have to be hit by a bus for a project to be stalled. Even though the metric is often discussed in the community, few studies consider its general relevance. Moreover, the existing tools for bus factor estimation focus solely on data from version control systems, even though other channels for knowledge generation and distribution exist. With a survey of 269 engineers, we find that the bus factor is perceived as an important problem in collective development, and we determine the highest-impact channels of knowledge generation and distribution in software development teams. We also propose a multimodal bus factor estimation algorithm that uses data on code reviews and meetings together with the VCS data. We test the algorithm on 13 projects developed at JetBrains and compare its results against the ground truth collected in a survey of the engineers working on these projects, as well as against the results of the state-of-the-art tool by Avelino et al. Our algorithm is slightly better than that of Avelino et al. at predicting both the bus factor and the key developers. Finally, we use the interviews and the surveys to derive a set of best practices to address the bus factor issue and proposals for a possible bus factor assessment tool.
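
A minimal sketch of a VCS-only bus factor estimate in the spirit of the Avelino et al. baseline the paper compares against: greedily remove the developer who covers the most files until "too much" of the project is abandoned. The ownership model and the 50%-abandoned stall criterion are illustrative assumptions:

```python
# Hedged sketch: a file is "alive" while at least one of its authors remains;
# the bus factor is how many removals it takes to stall the project.
def bus_factor(file_authors):
    """file_authors: {filename: {developer: num_changes}}."""
    def alive_files(files, gone):
        return {f for f, devs in files.items()
                if any(d not in gone for d in devs)}

    gone, factor = set(), 0
    while True:
        alive = alive_files(file_authors, gone)
        if len(alive) <= len(file_authors) / 2:   # project "stalled"
            return factor
        coverage = {}                             # files still covered per dev
        for f in alive:
            for d in file_authors[f]:
                if d not in gone:
                    coverage[d] = coverage.get(d, 0) + 1
        gone.add(max(coverage, key=coverage.get)) # remove the broadest knower
        factor += 1

repo = {"a.py": {"ann": 9}, "b.py": {"ann": 4, "bob": 2}, "c.py": {"bob": 7}}
print(bus_factor(repo))   # 2: no single developer's loss stalls this toy repo
```
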

Jackson, K. A., Bennett, B. T..  2018.  Locating SQL Injection Vulnerabilities in Java Byte Code Using Natural Language Techniques. SoutheastCon 2018. :1-5.

With so much of our daily lives relying on digital devices like personal computers and cell phones, there is a growing demand for code that not only functions properly but is secure and keeps user data safe. However, ensuring this is not an easy task, and many developers do not have the required skills or resources to ensure their code is secure. Many code analysis tools have been written to find vulnerabilities in newly developed code, but this technology tends to produce many false positives and is still not able to identify all of the problems. Other methods of finding software vulnerabilities automatically are required. This proof-of-concept study applied natural language processing to Java byte code to locate SQL injection vulnerabilities in a Java program. Preliminary findings show that, due to the high number of terms in the dataset, single decision trees will not produce a suitable model for locating SQL injection vulnerabilities, while random forest structures proved more promising. Still, further work is needed to determine the best classification tool.
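
A minimal sketch of the pipeline shape: treat disassembled byte code as text, vectorize it, and classify with a random forest (the structure the authors found more promising than single decision trees). The toy byte-code snippets and labels are assumptions, not the study's dataset:

```python
# Hedged sketch: NLP-style classification of byte-code token streams.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

methods = [
    # string-concatenated query fed to Statement -> injection-prone
    "aload_1 invokevirtual java/lang/StringBuilder append invokeinterface java/sql/Statement executeQuery",
    # parameterized PreparedStatement -> safe pattern
    "aload_1 invokeinterface java/sql/PreparedStatement setString invokeinterface executeQuery",
]
labels = [1, 0]   # 1 = vulnerable, 0 = not

clf = make_pipeline(TfidfVectorizer(token_pattern=r"[\w/]+"),
                    RandomForestClassifier(random_state=0))
clf.fit(methods, labels)

probe = "new java/lang/StringBuilder append aload_2 invokeinterface java/sql/Statement executeQuery"
print(clf.predict([probe]))   # flags the concatenation-into-Statement pattern
```
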

Jacob Fustos, Farzad Farshchi, Heechul Yun.  2019.  SpectreGuard: An Efficient Data-Centric Defense Mechanism against Spectre Attacks. Proceedings of the 56th Annual Design Automation Conference 2019.

Speculative execution is an essential performance-enhancing technique in modern processors, but it has been shown to be insecure. In this paper, we propose SpectreGuard, a novel defense mechanism against Spectre attacks. In our approach, sensitive memory blocks (e.g., secret keys) are marked using a simple OS/library API and are then selectively protected by hardware from Spectre attacks via a low-cost micro-architecture extension. This technique allows microprocessors to maintain high performance while restoring control to software developers to make security and performance trade-offs.

Jacob, C., Rekha, V. R..  2017.  Secured and Reliable File Sharing System with De-Duplication Using Erasure Correction Code. 2017 International Conference on Networks Advances in Computational Technologies (NetACT). :221–228.
Effective storage and management of file systems is essential nowadays to avoid wasting the storage space provided by cloud providers. Data de-duplication has been used widely: it stores only a single copy of each file and thus avoids duplication of files on the cloud storage servers. It helps to reduce the amount of storage space and saves bandwidth of the cloud service, and thus yields high cost savings for cloud service subscribers. Today, the data we need to store is in encrypted format to ensure security. However, encryption of data by data owners with their own keys makes de-duplication impossible for the cloud service subscriber, as encryption with a key converts data into an unidentifiable format called ciphertext; encrypting even the same data with different keys may result in different ciphertexts. De-duplication and encryption therefore need to work hand in hand to ensure secure, authorized, and optimized storage. In this paper, we propose a scheme for file-level de-duplication of encrypted files such as text, images, and even video files stored in the cloud, based on the user's privilege set and the file privilege set. The proposed de-duplication system distributes the files across different servers and uses an erasure correcting code technique to reconstruct the files even if parts of them are lost through an attack on any server. Thus the proposed system can ensure both the security and the reliability of encrypted files.
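
A minimal sketch of the two building blocks the abstract combines: file-level de-duplication by content hash, and erasure-coded distribution across servers. A single XOR parity shard (tolerating one lost server) stands in for the paper's erasure correcting code; the shard count is an assumption:

```python
# Hedged sketch: dedup key + XOR-parity sharding. Real deployments would use
# a stronger code (e.g., Reed-Solomon) tolerating multiple losses.
import hashlib
from functools import reduce

def dedup_key(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()     # same file -> same key, stored once

def make_shards(data: bytes, n=4):
    data += b"\x00" * (-len(data) % n)          # pad to a multiple of n
    size = len(data) // n
    shards = [data[i * size:(i + 1) * size] for i in range(n)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shards)
    return shards + [parity]                    # spread over n+1 servers

def recover(shards, lost_idx):
    rest = [s for i, s in enumerate(shards) if i != lost_idx]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), rest)

shards = make_shards(b"encrypted file contents")
assert recover(shards, 2) == shards[2]          # rebuild a shard lost to an attack
```
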
Jacob, Liya Mary, Sreelakshmi, P, Deepthi, P.P.  2021.  Physical Layer Security in Power Domain NOMA through Key Extraction. 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT). :1–7.
Non-orthogonal multiple access (NOMA) is emerging as a popular radio access technique for serving multiple users under the same resource block to improve spectral efficiency in 5G and 6G communication. But the resource sharing in NOMA raises concerns about data security. Since power-domain NOMA exploits the difference in channel properties for bandwidth-efficient communication, it is feasible to ensure data confidentiality in NOMA communication through physical layer security techniques. In this work, we propose to ensure resistance against internal eavesdropping in NOMA communication through a secret key derived from channel randomness. A unique secret key is derived from the channel of each NOMA user and is used to randomize the data of the respective user before superposition coding (SC) to prevent internal eavesdropping. The simulation results show that the proposed system provides very good security against internal eavesdropping in NOMA.
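
A minimal sketch of the scheme's outline: quantize reciprocal channel measurements into a shared bit string and use it to randomize user data before superposition coding. Mean-threshold quantization and the toy fading samples are illustrative assumptions (no reconciliation or privacy amplification shown):

```python
# Hedged sketch: channel-based key extraction + XOR randomization.
import numpy as np

def extract_key_bits(channel_gains):
    thresh = np.mean(channel_gains)              # quantize around the mean
    return np.array([1 if g > thresh else 0 for g in channel_gains],
                    dtype=np.uint8)

def randomize(data_bits, key_bits):
    key = np.resize(key_bits, data_bits.shape)   # repeat key to data length
    return data_bits ^ key                       # XOR keystream

rng = np.random.default_rng(2)
gains = rng.rayleigh(1.0, 64)                    # fading-channel estimates
key = extract_key_bits(gains)
data = rng.integers(0, 2, 128, dtype=np.uint8)
cipher = randomize(data, key)
assert np.array_equal(randomize(cipher, key), data)  # receiver inverts with same key
```
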
Jacobs, Nicholas, Hossain-McKenzie, Shamina, Vugrin, Eric.  2018.  Measurement and Analysis of Cyber Resilience for Control Systems: An Illustrative Example. 2018 Resilience Week (RWS). :38—46.

Control systems for critical infrastructure are becoming increasingly interconnected, while cyber threats against critical infrastructure are becoming more sophisticated and difficult to defend against. Historically, cyber security has emphasized building defenses to prevent loss of confidentiality, integrity, and availability in digital information and systems, but in recent years cyber attacks have demonstrated that no system is impenetrable and that control system operation may be detrimentally impacted. Cyber resilience has emerged as a complementary priority that seeks to ensure that digital systems can maintain essential performance levels even while capabilities are degraded by a cyber attack. This paper examines how cyber security and cyber resilience may be measured and quantified in a control system environment. Load Frequency Control is used as an illustrative example to demonstrate how cyber attacks may be represented within mathematical models of control systems, how these events may be quantitatively measured in terms of cyber security or cyber resilience, and the differences and similarities between the two mindsets. These results demonstrate how various metrics are applied, the extent of their usability, and why it is important to analyze cyber-physical systems in a comprehensive manner that accounts for all the various parts of the system.
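
One common way resilience of this kind is quantified is the performance retained over an attack window (area under a normalized performance curve). The paper's specific Load Frequency Control metrics are not reproduced here; this formulation and the toy trajectory are assumptions for illustration:

```python
# Hedged sketch: resilience as mean normalized performance over time;
# 1.0 means no degradation during the attack.
import numpy as np

def resilience_score(performance, t):
    """performance: normalized system performance in [0, 1] over times t."""
    return np.trapz(performance, t) / (t[-1] - t[0])

t = np.linspace(0, 100, 1001)
perf = np.ones_like(t)
perf[(t > 30) & (t < 60)] = 0.6        # degraded during a cyber attack
perf[(t >= 60) & (t < 80)] = 0.8       # partial recovery
print(resilience_score(perf, t))
```
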

Jacobsen, Hans-Arno, Sadoghi, Mohammad, Tabatabaei, Mohammad Hossein, Vitenberg, Roman, Zhang, Kaiwen.  2018.  Blockchain Landscape and AI Renaissance: The Bright Path Forward. Proceedings of the 19th International Middleware Conference Tutorials. :2:1–2:1.
Known for powering cryptocurrencies such as Bitcoin and Ethereum, blockchain is seen as a disruptive technology capable of revolutionizing a wide variety of domains, ranging from finance to governance, by offering superior security, reliability, and transparency founded upon a decentralized and democratic computational model. In this tutorial, we first present the original Bitcoin design, along with Ethereum and Hyperledger, and reflect on their design choices through an academic lens. We further provide an overview of potential applications and associated research challenges, as well as a survey of ongoing research directions related to Byzantine fault-tolerant consensus protocols. We highlight the new opportunities blockchain creates for building the next generation of secure middleware platforms and explore the possible interplay between AI and blockchains, or more specifically, how blockchain technology can enable the notion of "decentralized intelligence." We conclude with a walkthrough demonstrating the process of developing a decentralized application using a popular smart contract language (Solidity) over the Ethereum platform.
Jacomme, Charlie, Kremer, Steve.  2018.  An Extensive Formal Analysis of Multi-factor Authentication Protocols. 2018 IEEE 31st Computer Security Foundations Symposium (CSF). :1–15.
Passwords are still the most widespread means for authenticating users, even though they have been shown to create huge security problems. This motivated the use of additional authentication mechanisms in so-called multi-factor authentication protocols. In this paper we define a detailed threat model for this kind of protocol: while in classical protocol analysis attackers control the communication network, we take into account that many communications are performed over TLS channels, that computers may be infected by different kinds of malware, that attackers could perform phishing, and that humans may omit some actions. We formalize this model in the applied pi calculus and perform an extensive analysis and comparison of several widely used protocols - variants of Google 2-step and FIDO's U2F. The analysis is completely automated, generating systematically all combinations of threat scenarios for each of the protocols and using the ProVerif tool for automated protocol analysis. Our analysis highlights weaknesses and strengths of the different protocols, and allows us to suggest several small modifications of the existing protocols which are easy to implement, yet improve their security in several threat scenarios.
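
A minimal sketch of the systematic threat-scenario enumeration the paper automates around ProVerif: every combination of attacker capabilities is generated, and each would be checked against each protocol model. The capability names are paraphrased from the abstract, and the analysis call is a placeholder, not ProVerif's actual interface:

```python
# Hedged sketch: enumerate all combinations of threat-model capabilities.
from itertools import product

CAPABILITIES = ["network_control", "tls_compromise", "malware_on_pc",
                "malware_on_phone", "phishing", "human_omits_action"]

def threat_scenarios():
    for flags in product([False, True], repeat=len(CAPABILITIES)):
        yield {c: f for c, f in zip(CAPABILITIES, flags)}

scenarios = list(threat_scenarios())
print(len(scenarios))                 # 2^6 = 64 scenarios per protocol
# for s in scenarios: run_analysis(protocol_model, s)   # placeholder hook
```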