
Protocol Verification

2015


Verifying the correctness of security protocols is a primary goal of cybersecurity. Research in this area seeks new and better algorithms, as well as better methods for verifying security protocols across myriad applications and environments. Verification also has implications for compositionality and composability, and for policy-based collaboration.
The works cited here were presented in 2015.



Cheng-Rung Tsai; Ming-Chun Hsiao; Wen-Chung Shen; Wu, A.-Y.A.; Chen-Mou Cheng, “A 1.96 mm2 Low-Latency Multi-Mode Crypto-Coprocessor for PKC-Based IoT Security Protocols,” in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on, vol., no., pp. 834–837, 24–27 May 2015. doi:10.1109/ISCAS.2015.7168763
Abstract: In this paper, we present the implementation of a multi-mode crypto-coprocessor, which can support three different public-key cryptography (PKC) engines (NTRU, TTS, Pairing) used in post-quantum and identity-based cryptosystems. The PKC-based security protocols are more energy-efficient because they usually require less communication overhead than their symmetric-key-based counterparts. In this work, we propose the first-of-its-kind tri-mode PKC coprocessor for secured data transmission in Internet-of-Things (IoT) systems. For low energy consumption, the crypto-coprocessor incorporates three design features: 1) a specialized instruction set for the multi-mode cryptosystems, 2) a highly parallel arithmetic unit for cryptographic kernel operations, and 3) a smart scheduling unit with an intelligent control mechanism. By utilizing the parallel arithmetic unit, the proposed crypto-coprocessor can achieve a speedup of about 50%. Meanwhile, the smart scheduling unit can save up to 18% of the total latency. The crypto-coprocessor was implemented with an AHB interface in TSMC 90nm CMOS technology, and the die size is only 1.96 mm2. Furthermore, our chip is integrated with an ARM-based system-on-chip (SoC) platform for functional verification.
Keywords: CMOS integrated circuits; Internet of Things; coprocessors; cryptographic protocols; CMOS technology; Internet-of-Things systems; IoT security protocols; IoT systems; PKC based security protocols; PKC coprocessor; PKC engines; SoC platform; cryptographic kernel operations; functional verification; highly parallel arithmetic unit; identity based cryptosystems; intelligent control mechanism; multimode cryptocoprocessor; parallel arithmetic unit; post quantum cryptosystems; public key cryptography; secured data transmission; smart scheduling unit; symmetric key based counterparts; system-on-chip; Computer architecture; Elliptic curve cryptography; Engines; Polynomials; System-on-chip; IoT; Public-key cryptography; SoC; crypto-coprocessor (ID#: 15-6579)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168763&isnumber=7168553

 

Ihsan, A.; Saghar, K.; Fatima, T., “Analysis of LEACH Protocol(s) Using Formal Verification,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 254–262, 13–17 Jan. 2015. doi:10.1109/IBCAST.2015.7058513
Abstract: WSN nodes operate in an unattended environment and thus have irreplaceable batteries. An important concern is therefore the network lifetime: we need to utilize the nodes' energy for as long as possible, or the nodes run out of power. Various protocols have been established for this purpose; the subject of this paper is the LEACH protocol. LEACH is a self-organizing, adaptive clustering protocol that randomly distributes the energy load among nodes. By using cluster heads and data aggregation, excessive energy consumption is avoided. In this paper we analyzed LEACH and its extensions, LEACH-C and LEACH-F, using formal modeling techniques. Formal modeling is often used by researchers to verify a variety of routing protocols. Formal verification can precisely confirm the authenticity of results and cover worst-case scenarios, which is not possible using computer simulations or hardware implementations. We apply formal verification to compare how efficient LEACH is relative to its extensions in various WSN topologies. The paper is not about design improvements to LEACH; rather, it formally verifies the protocol's correctness, efficiency, and performance. This work is novel in that, to our knowledge, LEACH and its extensions have not previously been analyzed using formal verification techniques.
Keywords: formal verification; routing protocols; telecommunication power management; wireless sensor networks; LEACH protocol analysis; LEACH-C; LEACH-F; WSN nodes; WSN topologies; adaptive clustering protocol; cluster heads; data aggregation; energy load; formal modeling techniques; formal verification techniques; irreplaceable batteries; network lifetime; routing protocols; unattended environment; Formal verification; IP networks; Routing protocols; Wireless sensor networks; Formal Modeling; Protocol Verification Hierarchal Networks; Routing Protocol; Wireless Sensor Networks (WSN) (ID#: 15-6580)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058513&isnumber=7058466
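For readers unfamiliar with LEACH's election mechanism, the randomized cluster-head rotation the abstract refers to is driven by a simple threshold test. The sketch below renders the standard LEACH threshold in Python; the node count, cluster-head fraction, and round number are illustrative, and it models only the election step, not the formal analysis in the paper.

```python
import random

def leach_threshold(p: float, r: int, eligible: bool) -> float:
    """Standard LEACH cluster-head election threshold T(n).

    p        -- desired fraction of cluster heads per round
    r        -- current round number
    eligible -- True if the node has not served as cluster head in the
                last 1/p rounds (the set G in the LEACH literature)
    """
    if not eligible:
        return 0.0
    period = round(1 / p)              # rotation period in rounds
    return p / (1 - p * (r % period))

def elect_cluster_heads(num_nodes, p, r, served):
    """Each eligible node draws a uniform random number and becomes a
    cluster head for this round if the draw falls below T(n)."""
    t_by_node = {n: leach_threshold(p, r, n not in served) for n in range(num_nodes)}
    return [n for n, t in t_by_node.items() if random.random() < t]

# Toy run: 100 nodes, 5% desired cluster heads, round 3; nodes 7 and 42
# served recently and are therefore ineligible.
print(elect_cluster_heads(100, 0.05, 3, served={7, 42}))
```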

 

Mahesh, Golla; Sakthivel SM, “Functional Verification of the Axi2OCP Bridge Using System Verilog and Effective Bus Utilization Calculation for AMBA AXI 3.0 Protocol,” in Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, vol., no., pp. 1–5, 19–20 March 2015. doi:10.1109/ICIIECS.2015.7193091
Abstract: Verification is the process of confirming the correct functioning of a design. It is not possible to guarantee a design without proper verification, because misunderstanding and misinterpretation of the specifications, or incorrect interaction between cores and IPs, leads to unexpected behavior of the system. Functional verification plays a key role in validating a design and its functionality. The AXI2OCP bridge connects two different protocols, the Advanced eXtensible Interface and the Open Core Protocol, converting AXI 3.0 format signals to OCP format signals, AXI addresses to OCP addresses, and AXI data to OCP data. Effective bus utilization yields faster data rates and increased performance. Measuring the bus utilization parameter on test cases generated for the AXI 3.0 protocol, and functionally verifying the AXI2OCP bridge using the SystemVerilog language, is the main idea of this paper.
Keywords: Bridges; Computer aided software engineering; Hardware; AXI 3.0 Protocol; AXI2OCP Bridge; Busy count and Bus utilization; Functional verification; Valid count (ID#: 15-6581)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193091&isnumber=7192777
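The abstract does not define its bus-utilization metric, but the "valid count" and "busy count" keywords suggest a cycle-accounting ratio. A minimal sketch under that assumption:

```python
def bus_utilization(valid_count: int, busy_count: int, total_cycles: int) -> float:
    """One plausible reading of the paper's metric: the fraction of bus
    cycles carrying valid transfers, with busy (wait-state) cycles counted
    as overhead.  The exact definition is not given in the abstract, so
    this formula is an assumption for illustration."""
    assert valid_count + busy_count <= total_cycles
    return valid_count / total_cycles

# Example: 700 valid-data cycles and 150 busy cycles in a 1000-cycle window.
print(f"utilization = {bus_utilization(700, 150, 1000):.1%}")  # 70.0%
```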

 

Filali, R.; Bouhdadi, M., “Formal Verification of the Lowe Modified BAN Concrete Andrew Secure RPC Protocol,” in RFID and Adaptive Wireless Sensor Networks (RAWSN), 2015 Third International Workshop on, vol., no., pp.18–22, 13–15 May 2015. doi:10.1109/RAWSN.2015.7173272
Abstract: The Andrew Secure RPC (ASRPC) protocol allows two entities that already share a secret key to establish a new cryptographic key. It must guarantee the authenticity of the new shared session key in every session. Since the original protocol, several revised versions have been proposed in order to make it more secure. In this paper we present a formal development to prove the correctness of the newly modified version; we use the Event-B formal method and the Rodin tool to model the protocol and to verify that the desired security properties hold. We show that the protocol is indeed more secure than the previous versions.
Keywords: cryptographic protocols; formal verification; telecommunication security; ASRPC; Andrew secure RPC protocol; Burrows-Abadi-Needham protocol; Event-B formal method; Lowe modified BAN; Rodin tool; cryptographic key; formal verification; security properties; sharing session key; Authentication; Concrete; Cryptography; Niobium; Protocols; Servers; Andrew Secure RPC; Event-B; Formal Modelling; Refinement; Rodin (ID#: 15-6582)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7173272&isnumber=7173261

 

Chaithanya, B.S.; Gopakumar, G.; Krishnan, D.K.; Rao, S.K.; Oommen, B.C., “Assertion Based Reconfigurable Testbenches for Efficient Verification and Better Reusability,” in Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, vol., no., pp. 1511–1514, 26–27 Feb. 2015. doi:10.1109/ECS.2015.7124840
Abstract: Functional verification is paramount in the design cycle of hardware IP; ample time and cost are spent verifying the functional correctness of each and every hardware IP. Modern verification methodologies emphasize reusability to reduce verification turnaround time. The reusability of each verification IP depends largely on its implementation strategy and architecture. This paper discusses a methodology for building reconfigurable testbenches for verifying design IPs that have no stringent implementation protocol. The proposed technique focuses on reconfiguring the golden behavioral model in the testbench to suit the various functional realizations of the design IP. A configuration parameter, along with assertions, ensures effective reconfigurability and reusability of the verification IP. The entire verification environment is modularized into reusable blocks so that functional requirements can be modified with ease. Since the output prediction and checker model is designed independent of a global synchronizing signal with respect to the design under verification (DUV), minimal modification of the reusable blocks is needed to verify different user implementations of the DUV.
Keywords: DRAM chips; memory architecture; assertion based reconfigurable testbenches; design IP; design under verification; functional verification; verification IP; Computer architecture; Conferences; Hardware; IP networks; Measurement; Protocols; SDRAM; System Verilog; UVM; assertions; reconfigurable; testbench (ID#: 15-6583)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124840&isnumber=7124722

 

Yitian Gu; Shou-pon Lin; Maxemchuk, N.F., “A Fail Safe Broadcast Protocol for Collaborative Intelligent Vehicles,” in World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2015 IEEE 16th International Symposium on, vol., no., pp. 1–6, 14–17 June 2015. doi:10.1109/WoWMoM.2015.7158215
Abstract: This paper presents a broadcast protocol that makes cooperative driving applications safer. Collaborative driving is a rapidly evolving trend in intelligent transportation system. Current communication services provided by vehicular ad-hoc network (VANET) cannot guarantee fail-safe operation. We present a fail safe broadcast protocol (FSBP) that resides between the cooperative driving applications and VANET to make the cooperative driving applications work in a safer way. The protocol uses synchronized clocks with the help of GPS to schedule the broadcast transmissions of the participants. Electing not to transmit at a scheduled time is a unique message that cannot be lost because of a noisy or lost communication channel. This message is used to abort a collaborative operation and revert to an autonomous driving mode, similar to the current generation of intelligent vehicles, in which a vehicle protects itself. We describe a particular, simple protocol that uses a token passing mechanism. We specify the protocol as a finite state machine and use probabilistic verification to verify the protocol. This is the first formal verification of a multi-party broadcast protocol.
Keywords: Global Positioning System; access protocols; cooperative communication; finite state machines; intelligent transportation systems; telecommunication scheduling; vehicular ad hoc networks; GPS; VANET; autonomous driving mode; broadcast transmission scheduling; collaborative intelligent vehicles; collaborative operation; fail safe broadcast protocol; finite state machine; intelligent transportation system; lost communication channel; noisy communication channel; probabilistic verification; safer cooperative driving applications; synchronized clocks; token passing mechanism; vehicular ad hoc network; Clocks; Collaboration; Protocols; Receivers; Synchronization; Vehicles (ID#: 15-6584)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158215&isnumber=7158105
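The protocol's key idea is that, with GPS-synchronized slots, silence in a scheduled slot is itself a message that cannot be lost. A toy Python rendering of that abort rule follows; vehicle names and the slot layout are hypothetical, and the real FSBP additionally includes token passing and probabilistic verification, which are not modeled here.

```python
# Hypothetical slot schedule: GPS-synchronized rounds, one slot per vehicle.
VEHICLES = ["A", "B", "C"]

def run_round(round_no: int, transmitted: dict) -> str:
    """transmitted maps vehicle -> message, or None if it stayed silent.
    Because slots are pre-scheduled, every receiver observes a silent slot
    identically -- the abort signal cannot be corrupted by a noisy channel."""
    for slot, v in enumerate(VEHICLES):
        if transmitted.get(v) is None:
            # Silence in a scheduled slot aborts the joint maneuver.
            return f"round {round_no}: {v} silent in slot {slot} -> ABORT to autonomous mode"
    return f"round {round_no}: all slots filled -> maneuver continues"

print(run_round(1, {"A": "pos", "B": "pos", "C": "pos"}))
print(run_round(2, {"A": "pos", "B": None, "C": "pos"}))
```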

 

Chen, E.Y.; Shuo Chen; Qadeer, S.; Rui Wang, “Securing Multiparty Online Services via Certification of Symbolic Transactions,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 833–849, 17–21 May 2015. doi:10.1109/SP.2015.56
Abstract: The prevalence of security flaws in multiparty online services (e.g., single sign-on, third-party payment) calls for rigorous engineering supported by formal program verification. However, the adoption of program verification faces several hurdles in the real world: how to formally specify logic properties given that protocol specifications are often informal and vague; how to precisely model the attacker and the runtime platform; and how to deal with the unbounded set of all potential transactions. We introduce Certification of Symbolic Transaction (CST), an approach that significantly lowers these hurdles. CST tries to verify a protocol-independent safety property jointly defined over all parties, thus avoiding the burden of individually specifying every party's property for every protocol. CST invokes static verification at runtime, i.e., it symbolically verifies every transaction on the fly, and thus (1) avoids the burden of modeling the attacker and the runtime platform, and (2) reduces the proof obligation from considering all possible transactions to considering only the one at hand. We have applied CST to five commercially deployed applications, and show that, with only tens (or 100+) of lines of code changes per party, the original implementations are enhanced to achieve the objective of CST. Our security analysis shows that 12 out of 14 logic flaws reported in the literature would be prevented by CST. We also stress-tested CST by building a gambling system integrating four different services, for which there is no existing protocol to follow. Because transactions are symbolic and cacheable, CST has near-zero amortized runtime overhead. We make the source code of these implementations public; they are ready to be deployed for real-world use.
Keywords: cache storage; formal verification; security of data; symbol manipulation; CST; attacker modeling; cacheable transactions; certification of symbolic transaction; code changes; commercially deployed applications; formal program verification; gambling system; logic flaws; logic properties; multiparty online service security; near-zero amortized runtime overhead; proof obligation; protocol specifications; protocol-independent safety property; runtime platform; security analysis; security flaws; static verification; Certification; Data structures; Facebook; Protocols; Runtime; Security; Servers; CST; multiparty protocol; online payment; single-sign-on; symbolic transaction; verification (ID#: 15-6585)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163063&isnumber=7163005

 

Saleh, M.; El-Meniawy, N.; Sourour, E., “Authentication in Flat Wireless Sensor Networks with Mobile Nodes,” in Networking, Sensing and Control (ICNSC), 2015 IEEE 12th International Conference on,  vol., no., pp. 208–212, 9–11 April 2015. doi:10.1109/ICNSC.2015.7116036
Abstract: Secure communication in Wireless Sensor Networks (WSNs) requires the verification of the identities of network nodes. This is to prevent a malicious intruder from injecting false data into the network. We propose an entity authentication protocol for WSNs, and show how its execution can be integrated as part of routing protocols. The integration between routing and authentication leads to two benefits. First, authentication is guided by routing; only nodes on a data path to the base station authenticate each other. Unnecessary protocol executions are therefore eliminated. Second, malicious nodes are not able to use the routing protocol to insert themselves into data paths. Our protocol assumes a flat WSN, i.e., no clustering or cluster heads. We also deal with node mobility issues by proposing a re-authentication protocol that an initially authenticated node uses when its position changes. Finally, we show how to implement the protocol using the TinyOS operating system.
Keywords: cryptographic protocols; routing protocols; telecommunication security; wireless sensor networks; TinyOS operating system; flat wireless sensor networks; malicious nodes; mobile nodes; network nodes; re-authentication protocol; routing protocol; secure communication; Authentication; Base stations; Cryptography; Protocols; Routing; Wireless sensor networks (ID#: 15-6586)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116036&isnumber=7115994

 

Ray, Biplob; Chowdhury, Morshed; Abawajy, Jemal; Jesmin, Monika, “Secure Object Tracking Protocol for Networked RFID Systems,” in Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2015 16th IEEE/ACIS International Conference on, vol., no., pp. 1–7, 1–3 June 2015. doi:10.1109/SNPD.2015.7176190
Abstract: Networked systems have adopted Radio Frequency Identification (RFID) technology to automate their business processes. Networked RFID Systems (NRS) have some unique characteristics which raise new privacy and security concerns for organizations and their NRS systems. Businesses continue to realize new business needs using NRS. One of the most recent realizations of NRS on large-scale distributed systems (such as the Internet of Things (IoT) and supply chains) is ensuring visibility and traceability of an object throughout the chain. However, this requires assurance of security and privacy to ensure lawful business operation. In this paper, we propose a secure tracker protocol that ensures not only visibility and traceability of the object but also on-site genuineness of the object and its travel path. The proposed protocol uses a Physically Unclonable Function (PUF), the Diffie-Hellman algorithm, and simple cryptographic primitives to protect the privacy of the partners and to provide protection against injection of fake objects, non-repudiation, and unclonability. The tag performs only simple mathematical computations (such as combination, PUF, and division), which makes the proposed protocol suitable for passive tags. To verify our security claims, we performed experiments on a Security Protocol Description Language (SPDL) model of the proposed protocol using the automated claim verification tool Scyther. Our experiments not only verified our claims but also helped us eliminate possible attacks identified by Scyther.
Keywords: Mathematical model; Privacy; Protocols; Radiofrequency identification; Security; Supply chains; IoT; NRS; PUF; RFID; injection of fake objects; non-repudiation; privacy; protocol; tracker; unclonable (ID#: 15-6587)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7176190&isnumber=7176160
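Among the primitives the tracker protocol combines, the Diffie-Hellman exchange is the easiest to illustrate. A toy sketch follows; the modulus and generator are deliberately tiny for readability and are not secure, and the PUF and tag-side arithmetic of the actual protocol are not modeled.

```python
import secrets

# Toy Diffie-Hellman parameters (a real deployment would use a standard
# group, e.g. the 2048-bit MODP group of RFC 3526; these are NOT secure).
P = 0xFFFFFFFB  # largest 32-bit prime, illustration only
G = 5

def dh_keypair():
    """Pick a private exponent and compute the public value g^x mod p."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

reader_priv, reader_pub = dh_keypair()
tag_priv, tag_pub = dh_keypair()

# Both sides derive the same shared secret from the other's public value.
assert pow(tag_pub, reader_priv, P) == pow(reader_pub, tag_priv, P)
shared = pow(tag_pub, reader_priv, P)
print(f"shared secret: {shared:#x}")
```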

 

Rehman, U.U.; Abbasi, A.G., “Secure Layered Architecture for Session Initiation Protocol Based on SIPSSO: Formally Proved by Scyther,” in Information Technology - New Generations (ITNG), 2015 12th International Conference on, vol., no., pp.185–190, 13–15 April 2015. doi:10.1109/ITNG.2015.35
Abstract: Voice over Internet Protocol (VoIP) is one of the most popular technologies nowadays, facilitating users with features such as instant messages, phone calls, video calls, and voicemail. Basic VoIP protocols were designed to be efficient rather than secure. After numerous attacks on these protocols, several solutions were proposed to prevent these threats. In this paper, we focus on the security of the Session Initiation Protocol (SIP), which is used to initiate, modify, and terminate VoIP sessions. The paper presents the design and implementation of a secure layered architecture for SIP, which adds a new layer, the Security layer, to the standard SIP layer model. The Security layer provides authentication, authorization, adaptability, and secure key exchange, based on our newly designed protocol, named Session Initiation Protocol using Single Sign-On (SIPSSO). To implement the secure layered architecture based on SIPSSO, we have developed an Android Secure Call application and extended the open source Asterisk accordingly. After the design and implementation phases, we verified the SIPSSO protocol formally using an automated security verification tool, Scyther. Our analysis results reveal that by adding the Security layer, we ensure protection against different SIP attacks such as eavesdropping, Man-In-The-Middle (MITM) attacks, message tampering, replay attacks, session teardown, and Spam over Internet Telephony (SPIT).
Keywords: Internet telephony; authorisation; electronic messaging; formal verification; public domain software; signalling protocols; voice mail; Android secure call application; SIPSSO; Scyther; Session Initiation Protocol using Single Sign-On; VoIP; Voice over Internet Protocol; adaptable feature; authentication; authorization; automated security verification tool; instant messages; open source Asterisk; phone calls; secure key exchange; secure layered architecture; standard SIP layer model; video calls; voicemails; Authentication; Cryptography; Lead; Multimedia communication; Protocols; Standards; Asterisk; Cryptographic Token; SIP; SIPSSO; Secure Layered Architecture; SecureCall; VoIP (ID#: 15-6588)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113470&isnumber=7113432

 

Chen, Jing; Yuan, Quan; Xue, Guoliang; Du, Ruiying, “Game-Theory-Based Batch Identification of Invalid Signatures in Wireless Mobile Networks,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 262–270, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218390
Abstract: Digital signatures have been widely employed in wireless mobile networks to ensure the authenticity of messages and the identity of nodes. A paramount concern in signature verification is reducing the verification delay to ensure network QoS. To address this issue, researchers have proposed batch cryptography technology. However, most existing works focus on designing batch verification algorithms without sufficiently considering the impact of invalid signatures. The performance of batch verification can dramatically drop if there are verification failures caused by invalid signatures. In this paper, we propose a Game-theory-based Batch Identification Model (GBIM) for wireless mobile networks, enabling nodes to find invalid signatures with optimal delay under heterogeneous and dynamic attack scenarios. Specifically, we design an incomplete information game model between a verifier and its attackers, and prove the existence of a Nash Equilibrium, to select the dominant algorithm for identifying invalid signatures. Moreover, we propose an auto-match protocol to optimize the identification algorithm selection when the attack strategies can be estimated from historical information. Comprehensive simulation results demonstrate that GBIM can identify invalid signatures more efficiently than existing algorithms.
Keywords: Batch identification; game theory; wireless mobile networks (ID#: 15-6589)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218390&isnumber=7218353
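One classic candidate identification algorithm a GBIM-style verifier can select is generic divide-and-conquer, which recursively bisects a failing batch. The sketch below stubs out the batch verifier and shows only that search strategy, not the paper's game model or Nash-equilibrium-based selection.

```python
def batch_verify(sigs) -> bool:
    """Stub for an aggregate check: True iff every signature in the batch
    is valid.  A real system would use batch cryptography here."""
    return all(ok for _, ok in sigs)

def find_invalid(sigs):
    """Generic divide-and-conquer identification: recursively split any
    failing batch until the invalid signatures are isolated."""
    if batch_verify(sigs):
        return []
    if len(sigs) == 1:
        return [sigs[0][0]]
    mid = len(sigs) // 2
    return find_invalid(sigs[:mid]) + find_invalid(sigs[mid:])

# Toy batch: signature ids 0..7, with ids 2 and 5 invalid.
batch = [(i, i not in (2, 5)) for i in range(8)]
print(find_invalid(batch))  # -> [2, 5]
```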

 

Malekpour, M.R., “A Self-Stabilizing Hybrid Fault-Tolerant Synchronization Protocol,” in Aerospace Conference, 2015 IEEE, vol., no., pp. 1–11, 7–14 March 2015. doi:10.1109/AERO.2015.7119170
Abstract: This paper presents a strategy for solving the Byzantine general problem for self-stabilizing a fully connected network from an arbitrary state and in the presence of any number of faults with various severities, including any number of arbitrary (Byzantine) faulty nodes. The strategy consists of two parts: first, converting Byzantine faults into symmetric faults, and second, using a proven symmetric-fault-tolerant algorithm to solve the general case of the problem. A protocol (algorithm) that tolerates symmetric faults, provided there are more good nodes than faulty ones, is also presented. The solution applies to realizable systems, while allowing for differences in the network elements, provided that the number of arbitrary faults is not more than a third of the network size. The only constraint on the behavior of a node is that its interactions with other nodes are restricted to defined links and interfaces. The solution does not rely on assumptions about the initial state of the system, and no central clock or centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. A mechanical verification of the proposed protocol is also presented: a bounded model of the protocol is verified using the Symbolic Model Verifier (SMV). The model-checking effort focuses on verifying the correctness of the bounded model of the protocol as well as confirming claims of determinism and linear convergence with respect to the self-stabilization period.
Keywords: fault tolerance; protocols; synchronisation; Byzantine fault; Byzantine general problem; SMV; arbitrary faulty node; linear convergence; mechanical verification; network element; self-stabilization period; self-stabilizing hybrid fault-tolerant synchronization protocol; symbolic model verifier; symmetric-fault tolerant algorithm; Biographies; NASA; Protocols; Synchronization (ID#: 15-6590)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7119170&isnumber=7118873
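To give a flavor of the symmetric-fault tolerance the second stage relies on, here is the classic Welch-Lynch-style fault-tolerant midpoint convergence function. This is a textbook illustration of tolerating up to f faulty readings, not Malekpour's protocol itself.

```python
def fault_tolerant_midpoint(clock_readings, f: int) -> float:
    """Classic symmetric-fault-tolerant convergence function: discard the
    f largest and f smallest readings, then take the midpoint of the
    surviving extremes.  With more good nodes than faulty ones, faulty
    values cannot drag the result outside the range of good clocks."""
    assert len(clock_readings) > 2 * f, "need more good nodes than faulty ones"
    trimmed = sorted(clock_readings)[f:len(clock_readings) - f]
    return (trimmed[0] + trimmed[-1]) / 2

# Five nodes, one arbitrary (Byzantine) value: the outlier is discarded.
print(fault_tolerant_midpoint([10.0, 10.2, 9.9, 10.1, 999.0], f=1))  # 10.1
```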

 

Wu, Zhizheng; Khodabakhsh, Ali; Demiroglu, Cenk; Yamagishi, Junichi; Saito, Daisuke; Toda, Tomoki; King, Simon, “SAS: A Speaker Verification Spoofing Database Containing Diverse Attacks,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, vol., no., pp. 4440–4444, 19–24 April 2015. doi:10.1109/ICASSP.2015.7178810
Abstract: This paper presents the first version of a speaker verification spoofing and anti-spoofing database, named SAS corpus. The corpus includes nine spoofing techniques, two of which are speech synthesis, and seven are voice conversion. We design two protocols, one for standard speaker verification evaluation, and the other for producing spoofing materials. Hence, they allow the speech synthesis community to produce spoofing materials incrementally without knowledge of speaker verification spoofing and anti-spoofing. To provide a set of preliminary results, we conducted speaker verification experiments using two state-of-the-art systems. Without any anti-spoofing techniques, the two systems are extremely vulnerable to the spoofing attacks implemented in our SAS corpus.
Keywords: Databases; Speech; Speech synthesis; Standards; Synthetic aperture sonar; Training; Database; security; speaker verification; speech synthesis; spoofing attack; voice conversion (ID#: 15-6591)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178810&isnumber=7177909

 

Hao Cai; Wolf, T., “Source Authentication and Path Validation with Orthogonal Network Capabilities,” in Computer Communications Workshops (INFOCOM WKSHPS), 2015 IEEE Conference on, vol., no., pp. 111–112, April 26 2015–May 1 2015. doi:10.1109/INFCOMW.2015.7179368
Abstract: In-network source authentication and path validation are fundamental primitives for constructing security mechanisms such as DDoS mitigation, path compliance, packet attribution, or protection against flow redirection. Unfortunately, most existing approaches are based on cryptographic techniques, and the high computational cost of cryptographic operations makes them fall short in the data plane of the network, where potentially every packet needs to be checked at gigabit-per-second link rates in the future Internet. In this paper, we propose a new protocol that uses a set of orthogonal sequences as credentials, enabling low-overhead verification in routers. Our evaluation of a prototype demonstrates the fast verification speed and low storage consumption of our protocol, while providing reasonable security properties.
Keywords: Internet; authorisation; computer network security; cryptographic protocols; Gigabit per second link rates; cryptographic operations; in-network source authentication; orthogonal network capabilities; path validation; Authentication; Conferences; Cryptography; Optimized production technology; Routing protocols (ID#: 15-6592)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7179368&isnumber=7179273
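The abstract does not spell out how the orthogonal-sequence credentials are constructed, but Walsh-Hadamard rows are the canonical example of such a set, and they show why verification reduces to a cheap inner product rather than a cryptographic operation. A sketch under that assumption:

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction of an n x n Hadamard matrix (n a power of
    two); its rows form a set of mutually orthogonal +/-1 sequences."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)
credential = H[3]  # sequence assigned to one source/path

# A router verifies by correlation instead of cryptography: the inner
# product is n for the matching sequence and 0 for any other row.
print(int(credential @ H[3]))  # 8 -> accept
print(int(credential @ H[5]))  # 0 -> reject
```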

 

Ammayappan, K., “TSM Centric Privacy Preserving NFC Mobile Payment Framework with Formal Verification,” in Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, vol., no., pp.1490–1496, 26–27 Feb. 2015. doi:10.1109/ECS.2015.7124834
Abstract: Near field communication is on the verge of broad adoption worldwide, as NFC controllers and Secure Elements (SEs) are now commonplace components in many models of smart phones and point-of-sale devices currently available in the market. The combination of NFC and smart devices is making everyday life easier for millennials. Data aggregation is a common phenomenon in any e-commerce method, and from the aggregated data, sensitive consumer information can be predicted using predictive data mining approaches. Consumers are often unaware of this breach of their information privacy. The business models of the e-commerce industry players are designed so that they can use their customers' information to enhance profitability by offering customer-friendly services. Ultimately, consumers' sensitive information sits on an unsafe retailer's server over which the consumer has no control, and hackers can always potentially find a way into a system. This paper proposes a new TSM-centric privacy preserving framework and a protocol for NFC-based proximity payments which prevents consumer data from ever touching a merchant's server, where the majority of data breaches occur. The correctness of the proposed privacy preserving NFC payment protocol is ensured via formal modeling and verification using ProVerif.
Keywords: data privacy; electronic commerce; formal verification; mobile computing; near-field communication; smart phones; NFC based proximity payments; NFC controllers; NFC mobile payment framework; Proverif; TSM centric privacy preserving framework; consumer data; data aggregation; e-commerce method; electronic commerce; formal modeling; formal verification; near field communications; point-of-sale devices; predictive data mining; secure elements; smart phones; Authentication; Business; Cryptography; Mobile communication; Mobile handsets; Privacy; Protocols (ID#: 15-6593)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124834&isnumber=7124722

 

Alotaibi, A.; Mahmmod, A., “Enhancing OAuth Services Security by an Authentication Service with Face Recognition,” in Systems, Applications and Technology Conference (LISAT), 2015 IEEE Long Island,  vol., no., pp. 1–6, 1–1 May 2015. doi:10.1109/LISAT.2015.7160208
Abstract: Controlling secure access to web Application Programming Interfaces (APIs) and web services has become more vital with the advancement and use of web technologies. The security of web service APIs faces critical issues in managing authenticated and authorized identities of users. Open Authorization (OAuth) is a secure protocol that allows the resource owner to grant a third-party application permission to access the resource owner's protected resource on their behalf, without releasing their credentials. Most web APIs still use traditional authentication, which is vulnerable to many attacks such as man-in-the-middle attacks. To reduce such vulnerability, we enhance the security of OAuth through the implementation of a biometric service. We introduce a face verification system based on Local Binary Patterns as an authentication service handled by the authorization server. The entire authentication process consists of three services: an image registration service, a verification service, and an access token service. The developed system is most useful in securing services where human identification is required.
Keywords: Web services; application program interfaces; authorisation; biometrics (access control); face recognition; image registration; OAuth service security; Web application programming interfaces; Web services API; Web technologies; access token service; authentication service; authorization server; biometric service; face verification system; human identification; image registration service; local binary patterns; open authorization; resource owner protected resource; third-party application; verification service; Authentication; Authorization; Databases; Protocols; Servers; Access Token; Face Recognition; OAuth; Open Authorization; Web API; Web Services (ID#: 15-6594)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160208&isnumber=7160171
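The face-verification service is built on Local Binary Patterns, which are simple enough to compute by hand. A minimal sketch of the basic 3x3 LBP operator follows; a real system would histogram these codes over face regions and compare histograms, which is omitted here.

```python
import numpy as np

def lbp_code(patch: np.ndarray) -> int:
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours against
    the centre pixel and read them out as an 8-bit code."""
    center = patch[1, 1]
    # Clockwise neighbour order starting at the top-left corner.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << i for i, p in enumerate(neighbours) if p >= center)

patch = np.array([[52, 60, 61],
                  [49, 55, 70],
                  [40, 38, 90]])
print(lbp_code(patch))  # 30: neighbours >= 55 set their bit
```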

 

Ustaoglu, B.; Ors, B., “Design and Implementation of a Custom Verification Environment for Fault Injection and Analysis on an Embedded Microprocessor,” in Technological Advances in Electrical, Electronics and Computer Engineering (TAEECE), 2015 Third International Conference on, vol., no., pp. 256–261, April 29 2015–May 1 2015. doi:10.1109/TAEECE.2015.7113636
Abstract: Embedded microprocessors are widely used in most safety-critical digital system applications. A fault in a single bit in a microprocessor may cause soft errors, and its effect on the program outcome depends on whether the fault changes the application's state. In order to analyse the behaviour of applications under faulty conditions, we have designed a custom verification system. The verification system has two parts: a Field Programmable Gate Array (FPGA) and a personal computer (PC). We have modified the Natalius open source microprocessor in order to inject stuck-at faults into it, and we have implemented a fault injection method and extended it to increase randomness. On the FPGA, we have implemented the modified Natalius microprocessor, the fault injection method, and the communication protocol. The “Most Significant Bit First Multiplication Algorithm” has then been implemented on the microprocessor as an application. On the PC side, we have prepared an environment which sends inputs to and gets outputs from the Natalius microprocessor. Finally, we have analysed our application by injecting faults at specific and random locations in the register file to classify the effects of the injected faults.
Keywords: embedded systems; fault location; field programmable gate arrays; microprocessor chips; FPGA; Natalius open source microprocessor; PC; application behaviour analysis; communication protocol; custom verification environment design; custom verification environment implementation; embedded microprocessor; fault analysis; faulty conditions; field programmable gate array; most significant bit first multiplication algorithm; personal computer; random location; register file; safety-critical digital system applications; soft errors; specific location; stuck-at-fault injection; Algorithm design and analysis; Circuit faults; Fault location; Hardware; Microprocessors; Random access memory; Registers; Analysis; Design; Fault Injection; Microprocessor (ID#: 15-6595)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113636&isnumber=7113589
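Stuck-at faults like those injected into the Natalius register file can be emulated with two bitmask operations. A small sketch follows; the 8-bit register width in the example is an assumption for illustration.

```python
import random

def inject_stuck_at(value: int, bit: int, stuck_at_one: bool, width: int) -> int:
    """Force one bit of a register value to a constant, emulating a
    stuck-at-1 (OR with the mask) or stuck-at-0 (AND with its complement)
    hardware fault."""
    mask = 1 << bit
    faulty = (value | mask) if stuck_at_one else (value & ~mask)
    return faulty & ((1 << width) - 1)

# Deterministic fault at bit 3, then a randomized location as in the
# paper's random-location experiments (width of 8 bits is assumed).
print(hex(inject_stuck_at(0xF0, 3, stuck_at_one=True, width=8)))   # 0xf8
bit = random.randrange(8)
print(bit, hex(inject_stuck_at(0xA5, bit, stuck_at_one=False, width=8)))
```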

 

Kyoungha Kim; Yanggon Kim, “Comparative Analysis on the Signature Algorithms to Validate AS Paths in BGPsec,” in Computer and Information Science (ICIS), 2015 IEEE/ACIS 14th International Conference on, vol., no., pp. 53–58, June 28 2015–July 1 2015. doi:10.1109/ICIS.2015.7166569
Abstract: Because of its lack of security and its world-wide coverage, the Border Gateway Protocol (BGP) is categorized as one of the most vulnerable network protocols. As the Internet grows, BGP, which lays the groundwork for all network protocols by connecting them together, is being updated by protocol designers to improve its security. The most noticeable topic in securing BGP is validating paths, and the most plausible solution for protecting BGP paths is BGPsec. However, validating paths in BGPsec puts much more pressure on BGP's routing performance than validating the origin of a BGP message. To maximize path-validation performance, BGPsec currently uses the Elliptic Curve Digital Signature Algorithm (ECDSA), well known as one of the best-performing asymmetric cryptographic algorithms. But is ECDSA really better than the signature algorithms (i.e., DSA or RSA) originally used in BGP? In this paper, we found that RSA is better than ECDSA in BGPsec due to its outstanding verification speed. Among the signature algorithms (i.e., DSA, RSA, and ECDSA) utilized for RPKI and BGPsec, we argue that RSA performs best for validating paths in BGP Update messages.
Keywords: cryptographic protocols; digital signatures; internetworking; network servers; public key cryptography; AS paths; BGP update messages; BGPsec; DSA; ECDSA; RSA; asymmetric cryptographic algorithms; border gateway protocol; elliptic curve digital signature algorithm; network protocols; path-validating performance; protocol designers; verification speed; Delays; IP networks; Internet; Routing; Routing protocols; Security (ID#: 15-6596)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166569&isnumber=7166553
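The paper's central finding, that RSA verifies far faster than ECDSA even though it signs more slowly, is easy to reproduce with the pyca/cryptography library. A rough micro-benchmark sketch follows; key sizes, message, and iteration count are illustrative, and it measures library performance rather than BGPsec itself.

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, ec, padding

MSG = b"BGPsec path segment"  # stand-in for an AS-path attestation
N = 200

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
rsa_sig = rsa_key.sign(MSG, padding.PKCS1v15(), hashes.SHA256())

ec_key = ec.generate_private_key(ec.SECP256R1())
ec_sig = ec_key.sign(MSG, ec.ECDSA(hashes.SHA256()))

start = time.perf_counter()
for _ in range(N):
    rsa_key.public_key().verify(rsa_sig, MSG, padding.PKCS1v15(), hashes.SHA256())
rsa_t = time.perf_counter() - start

start = time.perf_counter()
for _ in range(N):
    ec_key.public_key().verify(ec_sig, MSG, ec.ECDSA(hashes.SHA256()))
ec_t = time.perf_counter() - start

print(f"RSA-2048 verify:   {rsa_t / N * 1e6:.0f} us/op")
print(f"ECDSA-P256 verify: {ec_t / N * 1e6:.0f} us/op")
```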

 

Kar, J.; Alghazzawi, D.M., “On Construction of Signcryption Scheme for Smart Card Security,” in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, vol., no., pp.109–113, 27–29 May 2015. doi:10.1109/ISI.2015.7165948
Abstract: This article proposes a novel construction of a provably secure signcryption scheme that is well suited to implementation on smart cards. It is secure in the random oracle model, and its security relies on the Decisional Bilinear Diffie-Hellman Problem. The proposed scheme is secure against adaptive chosen ciphertext attacks (indistinguishability) and adaptive chosen message attacks (unforgeability), and it has the security properties of anonymity and forward security. It is also inspired by zero-knowledge proofs and is publicly verifiable. The scheme is applied to mutual authentication of the smart card user and reader via Application Protocol Data Units; this is achieved by verifying the signature of the proposed scheme. Sensitive information is stored as ciphertext in the read-only memory of the smart card. These functions are performed in one logical step at a low computational cost.
Keywords: authorisation; public key cryptography; smart cards; adaptive chosen ciphertext attack; adaptive chosen message attack; anonymity property; application protocol data units; decisional bilinear Diffie-Hellman problem; forward security property; indistinguishability attack; mutual authentication; provable security; random oracle model; read only memory; signcryption scheme; smart card security; unforgeability attack; user authentication; zero-knowledge proof; Computational efficiency; Computational modeling; Elliptic curve cryptography; Receivers; Smart cards; Provable security; Random oracle; Unforgeability (ID#: 15-6597)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165948&isnumber=7165923

 

Alshinina, R.; Elleithy, K.; Aljanobi, F., “A Highly Efficient and Secure Shared Key for Direct Communications Based on Quantum Channel,” in Wireless Telecommunications Symposium (WTS), 2015, vol., no., pp. 1–6, 15–17 April 2015. doi:10.1109/WTS.2015.7117250
Abstract: The research reported in the literature on message transformation by a third party does not provide the necessary efficiency and security against different attacks. Data transmitted through a computer network must be confidential and authenticated in advance. In this paper, we develop and improve the security of braided single-stage quantum cryptography. The improvement is based on a novel authentication algorithm that uses signature verification, without using the three-stage protocol, to share the secret key between the sender and receiver. This approach counters attacks such as replay and man-in-the-middle by increasing security as well as overall efficiency, reducing the overhead of the three-stage protocol, and increasing the speed of communication between the two parties.
Keywords: computer network security; data communication; digital signatures; handwriting recognition; private key cryptography; quantum cryptography; authentication algorithm; braided single stage quantum cryptography security; computer network security; data transmission; direct communication; overhead reduction; quantum channel; secret key sharing; signature verification; Photonics; Protocols; Public key; Receivers; Transforms; Braided Single Stage Protocol (BSSP); Signature Verification; Three Stages Protocol (TSP); authentication; quantum cryptography (QC); quantum key distribution protocol (QKD) (ID#: 15-6598)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7117250&isnumber=7117237

 

Boyuan He; Rastogi, V.; Yinzhi Cao; Yan Chen; Venkatakrishnan, V.N.; Runqing Yang; Zhenrui Zhang, “Vetting SSL Usage in Applications with SSLINT,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 519–534, 17–21 May 2015. doi:10.1109/SP.2015.38
Abstract: Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols have become the security backbone of the Web and Internet today. Many systems including mobile and desktop applications are protected by SSL/TLS protocols against network attacks. However, many vulnerabilities caused by incorrect use of SSL/TLS APIs have been uncovered in recent years. Such vulnerabilities, many of which are caused due to poor API design and inexperience of application developers, often lead to confidential data leakage or man-in-the-middle attacks. In this paper, to guarantee code quality and logic correctness of SSL/TLS applications, we design and implement SSLINT, a scalable, automated, static analysis system for detecting incorrect use of SSL/TLS APIs. SSLINT is capable of performing automatic logic verification with high efficiency and good accuracy. To demonstrate it, we apply SSLINT to one of the most popular Linux distributions -- Ubuntu. We find 27 previously unknown SSL/TLS vulnerabilities in Ubuntu applications, most of which are also distributed with other Linux distributions.
Keywords: Linux; application program interfaces; formal verification; program diagnostics; protocols; security of data; API design; Linux distributions; SSL usage vetting; SSL-TLS protocols; SSLINT; Ubuntu; application program interfaces; automatic logic verification; code quality; logic correctness; network attacks; secure sockets layer; static analysis system; transport layer security; Accuracy; Libraries; Protocols; Security; Servers; Software; Testing (ID#: 15-6599)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163045&isnumber=7163005
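The vulnerability class SSLINT targets is the omission of certificate and hostname validation when calling SSL/TLS APIs. For contrast, a minimal example of the correct usage pattern in Python's standard ssl module (hostname and port are placeholders):

```python
import socket
import ssl

# ssl.create_default_context() enables certificate validation and hostname
# checking -- exactly the steps whose omission SSLINT-style tools catch.
context = ssl.create_default_context()
# The two settings below are already the defaults; they are spelled out
# because silently disabling them is the classic vulnerability pattern.
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = True

with socket.create_connection(("example.org", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.org") as tls:
        print(tls.version(), tls.getpeercert()["subject"])
```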

 

Costello, C.; Fournet, C.; Howell, J.; Kohlweiss, M.; Kreuter, B.; Naehrig, M.; Parno, B.; Zahur, S., “Geppetto: Versatile Verifiable Computation,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 253–270, 17–21 May 2015. doi:10.1109/SP.2015.23
Abstract: Cloud computing sparked interest in Verifiable Computation protocols, which allow a weak client to securely outsource computations to remote parties. Recent work has dramatically reduced the client's cost to verify the correctness of their results, but the overhead to produce proofs remains largely impractical. Geppetto introduces complementary techniques for reducing prover overhead and increasing prover flexibility. With MultiQAPs, Geppetto reduces the cost of sharing state between computations (e.g., for MapReduce) or within a single computation by up to two orders of magnitude. Via a careful choice of cryptographic primitives, Geppetto's instantiation of bounded proof bootstrapping improves on prior bootstrapped systems by up to five orders of magnitude, albeit at some cost in universality. Geppetto also efficiently verifies the correct execution of proprietary (i.e., secret) algorithms. Finally, Geppetto's use of energy-saving circuits brings the prover's costs more in line with the program's actual (rather than worst-case) execution time. Geppetto is implemented in a full-fledged, scalable compiler and runtime that consume LLVM code generated from a variety of source C programs and cryptographic libraries.
Keywords: cloud computing; computer bootstrapping; cryptographic protocols; program compilers; program verification; Geppetto; LLVM code generation; QAPs; bootstrapped systems; bounded proof bootstrapping; cloud computing; compiler; correctness verification; cryptographic libraries; cryptographic primitives; energy-saving circuits; outsource computation security; prover flexibility; prover overhead reduction; source C programs; verifiable computation protocols; Cryptography; Generators; Libraries; Logic gates; Protocols; Random access memory; Schedules (ID#: 15-6600)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163030&isnumber=7163005

 

Chin, T.; Mountrouidou, X.; Xiangyang Li; Kaiqi Xiong, “Selective Packet Inspection to Detect DoS Flooding Using Software Defined Networking (SDN),” in Distributed Computing Systems Workshops (ICDCSW), 2015 IEEE 35th International Conference on, vol., no., pp. 95–99, June 29 2015–July 2 2015. doi:10.1109/ICDCSW.2015.27
Abstract: Software-defined networking (SDN) and Open Flow have been driving new security applications and services. However, even if some of these studies provide interesting visions of what can be achieved, they stop short of presenting realistic application scenarios and experimental results. In this paper, we discuss a novel attack detection approach that coordinates monitors distributed over a network and controllers centralized on an SDN Open Virtual Switch (OVS), selectively inspecting network packets on demand. With different scale of network views and information availability, these two elements collaboratively detect signature constituents of an attack. Therefore, this approach is able to quickly issue an alert against potential threats followed by careful verification for high accuracy, while balancing the workload on the OVS. We have applied this method for detection and mitigation of TCP SYN flood attacks on Global Environment for Network Innovations (GENI). This realistic experimentation has provided us with insightful findings helpful toward a systematic methodology of SDN-supported attack detection and containment.
Keywords: computer network security; software defined networking; DoS flooding; GENI; OVS; Open Flow; SDN open virtual switch; TCP SYN flood attacks; global environment for network innovations; novel attack detection approach; selective packet inspection; software defined networking; Collaboration; Correlation; Correlators;  IP networks; Monitoring; Protocols; Security; DoS; Intrusion Detection; SDN; Selective Packet Inspection (ID#: 15-6601)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165090&isnumber=7165001
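The essence of SYN-flood detection is counting half-open handshakes per source. The sketch below shows that counting logic on an abstracted packet trace; the flag encoding, threshold, and addresses are hypothetical, and the paper's monitor/controller coordination over OpenFlow is not modeled.

```python
from collections import Counter

def syn_flood_suspects(packets, threshold=100):
    """Count half-open handshakes per source: a bare SYN raises the
    counter, a completing ACK lowers it.  Sources left above the threshold
    would be flagged for deeper, selective per-packet inspection."""
    half_open = Counter()
    for src, flags in packets:
        if flags == "SYN":
            half_open[src] += 1
        elif flags == "ACK" and half_open[src] > 0:
            half_open[src] -= 1
    return [src for src, n in half_open.items() if n > threshold]

# Toy trace: 10.0.0.9 sends 150 bare SYNs; a normal host completes its handshake.
trace = [("10.0.0.9", "SYN")] * 150 + [("10.0.0.5", "SYN"), ("10.0.0.5", "ACK")]
print(syn_flood_suspects(trace))  # ['10.0.0.9']
```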

 

Ben Henda, N.; Norrman, K.; Pfeffer, K., “Formal Verification of the Security for Dual Connectivity in LTE,” in Formal Methods in Software Engineering (FormaliSE), 2015 IEEE/ACM 3rd FME Workshop on, vol., no., pp. 13–19, 18 May 2015. doi:10.1109/FormaliSE.2015.10
Abstract: We describe our experiences from using formal verification tools during the standardization process of Dual Connectivity, a new LTE feature developed by 3GPP. To the best of our knowledge, this is the first report of its kind in the telecom industry. We present a model for key establishment of this feature and provide a detailed account of its formal analysis using three popular academic tools to automatically prove the security properties of secrecy, agreement, and key freshness. The main purpose of using the tools during standardization is to evaluate their suitability for modeling a rapidly changing system as it is developed, while at the same time raising the assurance level before the system is deployed.
Keywords: Long Term Evolution; formal verification; telecommunication computing; telecommunication security; 3GPP; LTE; Long Term Evolution; agreement property; dual connectivity security; formal analysis; formal verification; key freshness property; secrecy property; security property; standardization; telecom industry; Encryption; Long Term Evolution; Peer-to-peer computing; Protocols; LTE; formal verification; model checking; security protocols; telecom (ID#: 15-6602)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166552&isnumber=7166542
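The key-establishment model the authors analyze centers on deriving a fresh secondary-eNB key from the master key and a counter. A simplified sketch of that derivation using an HMAC-SHA-256 KDF follows; the FC constant and field encoding are assumptions patterned on the 3GPP TS 33.401 KDF, not a verbatim rendering.

```python
import hashlib
import hmac
import struct

def derive_s_kenb(kenb: bytes, scg_counter: int) -> bytes:
    """Simplified dual-connectivity key derivation: the secondary eNB key
    is an HMAC-SHA-256 KDF of the master eNB key and a monotonically
    increasing SCG counter -- the counter is what provides the 'key
    freshness' property the authors verify."""
    fc = b"\x1c"                          # assumed FC value, illustration only
    p0 = struct.pack(">H", scg_counter)   # SCG Counter, 16-bit big-endian
    l0 = struct.pack(">H", len(p0))       # length field, per the KDF pattern
    return hmac.new(kenb, fc + p0 + l0, hashlib.sha256).digest()

kenb = bytes(32)  # toy 256-bit master key
print(derive_s_kenb(kenb, 0).hex()[:16])
print(derive_s_kenb(kenb, 1).hex()[:16])  # counter bump -> fresh key
```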

 

Singh, K.J.; De, T., “DDOS Attack Detection and Mitigation Technique Based on Http Count and Verification Using CAPTCHA,” in Computational Intelligence and Networks (CINE), 2015 International Conference on, vol., no., pp.196–197, 12–13 Jan. 2015. doi:10.1109/CINE.2015.47
Abstract: With the rapid development of the Internet, the number of people online has increased tremendously. But alongside the growing positive use of the Internet we also find negative use: misuse and abuse of the Internet are growing at an alarming rate. There are many cases of viruses and worms infecting systems with software vulnerabilities, and these systems can even become clients for bot herders. Such infected systems aid in launching DDoS attacks against a target server. In this paper we introduce the concept of IP blacklisting, which blocks all blacklisted IP addresses; an HTTP count filter, which enables us to distinguish normal from suspected IP addresses; and the CAPTCHA technique, to check whether the suspected IP addresses are controlled by a human or a botnet.
Keywords: Internet; client-server systems; computer network security; computer viruses; transport protocols; CAPTCHA; DDOS attack detection; DDOS attack mitigation technique; HTTP count filter; HTTP verification; IP address; IP blacklisting; Internet; botnet; software vulnerability; target server; virus; worms; CAPTCHAs; Computer crime; IP networks; Internet; Radiation detectors; Servers; bot; botnets; captcha; filter; http; mitigation (ID#: 15-6603)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7053830&isnumber=7053782
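The detection pipeline in this paper is essentially per-source request counting plus a challenge step. A compact sketch under assumed thresholds and addresses (the CAPTCHA challenge itself is only indicated by a comment):

```python
from collections import Counter

REQUEST_LIMIT = 500          # per-window threshold, illustrative
BLACKLIST = {"203.0.113.7"}  # known-bad addresses, blocked outright

def classify(requests):
    """Sketch of the paper's pipeline: drop blacklisted IPs, count HTTP
    requests per source within a window, and mark heavy hitters as
    'suspected' -- those would then be challenged with a CAPTCHA to
    separate humans from bots."""
    counts = Counter(ip for ip in requests if ip not in BLACKLIST)
    return {ip for ip, n in counts.items() if n > REQUEST_LIMIT}

window = ["198.51.100.2"] * 600 + ["192.0.2.10"] * 20 + ["203.0.113.7"] * 50
print(classify(window))  # {'198.51.100.2'} -> send CAPTCHA challenge
```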

 

Kieseberg, P.; Fruhwirt, P.; Schrittwieser, S.; Weippl, E., “Security Tests for Mobile Applications — Why Using TLS/SSL is Not Enough,” in Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, vol., no., pp. 1–2, 13–17 April 2015. doi:10.1109/ICSTW.2015.7107416
Abstract: Security testing is a fundamental aspect of many common practices in the field of software testing. Still, the standard security protocols used are typically not questioned or further analyzed in the testing scenarios. In this work we show that, due to this practice, essential potential threats go undetected throughout the testing phase and the quality assurance process. We focus mainly on two fundamental problems in the area of security: the definition of the correct attacker model, and trusting the client when applying cryptographic algorithms.
Keywords: cryptographic protocols; mobile computing; program testing; quality assurance; software quality; TLS-SSL; correct attacker model; cryptographic algorithms; mobile applications; quality assurance process; security testing; software testing; standard security protocols; Encryption; Mobile communication; Protocols; Servers; Software; Testing; Security; TLS/SSL; Testing (ID#: 15-6604)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107416&isnumber=7107396

 

Carbone, R.; Compagna, L.; Panichella, A.; Ponta, S.E., “Security Threat Identification and Testing,” in Software Testing, Verification and Validation (ICST), 2015 IEEE 8th International Conference on, vol., no., pp. 1–8, 13–17 April 2015. doi:10.1109/ICST.2015.7102630
Abstract: Business applications are more and more collaborative (cross-domains, cross-devices, service composition). Security shall focus on the overall application scenario including the interplay between its entities/devices/services, not only on the isolated systems within it. In this paper we propose the Security Threat Identification And TEsting (STIATE) toolkit to support development teams toward security assessment of their under-development applications focusing on subtle security logic flaws that may go undetected by using current industrial technology. At design-time, STIATE supports the development teams toward threat modeling and analysis by identifying automatically potential threats (via model checking and mutation techniques) on top of sequence diagrams enriched with security annotations (including WHAT-IF conditions). At run-time, STIATE supports the development teams toward testing by exploiting the identified threats to automatically generate and execute test cases on the up and running application. We demonstrate the usage of the STIATE toolkit on an application scenario employing the SAML Single Sign-On multi-party protocol, a well-known industrial security standard largely studied in previous literature.
Keywords: computer crime; program testing; program verification; SAML; STIATE toolkit; WHAT-IF conditions; business applications; design-time; development teams; industrial security standard; industrial technology; model checking; mutation techniques; security annotations; security assessment; security logic flaws; security threat identification and testing; sequence diagrams; single sign-on multiparty protocol; test cases; threat analysis; threat modeling; under-development applications; Authentication; Business; Engines; Protocols; Testing; Unified modeling language (ID#: 15-6605)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7102630&isnumber=7102573

 

Afzal, Z.; Lindskog, S., “Automated Testing of IDS Rules,” in Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, vol., no., pp. 1–2, 13–17 April 2015. doi:10.1109/ICSTW.2015.7107461
Abstract: As technology becomes ubiquitous, new vulnerabilities are being discovered at a rapid rate. Security experts continuously find ways to detect attempts to exploit those vulnerabilities. The outcome is an extremely large and complex rule set used by Intrusion Detection Systems (IDSs) to detect and prevent the vulnerabilities. The rule sets have become so large that it seems infeasible to verify their precision or identify overlapping rules. This work proposes a methodology consisting of a set of tools that will make rule management easier.
Keywords: program testing; security of data; IDS rules; automated testing; intrusion detection systems; Conferences; Generators; Intrusion detection; Payloads; Protocols; Servers; Testing (ID#: 15-6606)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107461&isnumber=7107396 
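As a taste of the rule-management problem the authors tackle, the sketch below checks whether two heavily simplified IDS rules can fire on the same packet. Real Snort/Suricata rules carry many more match fields, so this is an assumption-laden illustration, not the paper's tooling.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """Deliberately minimal stand-in for an IDS rule: protocol, destination
    port range, and a payload substring."""
    proto: str
    ports: range
    content: bytes

def overlaps(a: Rule, b: Rule) -> bool:
    """Two rules can fire on the same packet if their protocols match,
    their port ranges intersect, and one content pattern contains the
    other -- the kind of redundancy rule-management tooling hunts for."""
    same_proto = a.proto == b.proto
    ports_meet = a.ports.start < b.ports.stop and b.ports.start < a.ports.stop
    contained = a.content in b.content or b.content in a.content
    return same_proto and ports_meet and contained

r1 = Rule("tcp", range(80, 81), b"/etc/passwd")
r2 = Rule("tcp", range(1, 1024), b"passwd")
print(overlaps(r1, r2))  # True -> candidates for consolidation
```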
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.