Science of Security (SoS) Newsletter (2015 - Issue 10)

 

Each issue of the SoS Newsletter highlights achievements in current research, as conducted by various global members of the Science of Security (SoS) community. All presented materials are openly available and may link to the original work or web page for the respective program. The SoS Newsletter aims to showcase the wide range of exciting work under way in the security community and to serve as a portal connecting colleagues, research projects, and opportunities.

Please feel free to click on any section of the Newsletter below to go to its corresponding subsection:

Publications of Interest

The Publications of Interest section provides abstracts and links for suggested academic and industry literature discussing specific topics and research problems in the field of SoS. Please check back regularly for new information, or sign up for the CPSVO-SoS Mailing List.

Table of Contents

Science of Security (SoS) Newsletter (2015 - Issue 10)

(ID#: 15-7667)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


International Security Related Conferences

 

 

International Security Related Conferences

 

The following pages highlight Science of Security related research presented at a number of international conferences.

(ID#: 15-7669)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

 

International Conferences: MobiCom 2015, Paris

 

 

International Conferences:

Mobile Computing and Networking 2015 

Paris


The 21st Annual International Conference on Mobile Computing and Networking (MobiCom ’15) was held September 7–11, 2015 in Paris, France. MobiCom is a forum for research in mobile systems and wireless networks. This year’s technical program featured papers on energy, sensing, security, wireless access, applications, localization, the Internet of Things, mobile cloud, measurement, and analysis. The papers cited here focus on the Science of Security.



Teng Wei, Shu Wang, Anfu Zhou, Xinyu Zhang; “Acoustic Eavesdropping through Wireless Vibrometry,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 130–141. doi:10.1145/2789168.2790119
Abstract: Loudspeakers are widely used in conferencing and infotainment systems. Private information leakage from loudspeaker sound is often assumed to be preventable using sound-proof isolators like walls. In this paper, we explore a new acoustic eavesdropping attack that can subvert such protectors using radio devices. Our basic idea lies in an acoustic-radio transformation (ART) algorithm, which recovers loudspeaker sound by inspecting the subtle disturbance it causes to the radio signals generated by an adversary or by its co-located WiFi transmitter. ART builds on a modeling framework that distills key factors to determine the recovered audio quality. It incorporates diversity mechanisms and noise suppression algorithms that can boost the eavesdropping quality. We implement the ART eavesdropper on a software-radio platform and conduct experiments to verify its feasibility and threat level. When targeted at vanilla PC or smartphone loudspeakers, the attacker can successfully recover high-quality audio even when blocked by sound-proof walls. On the other hand, we propose several pragmatic countermeasures that can effectively reduce the attacker’s audio recovery quality by orders of magnitude.
Keywords: acoustic eavesdropping, acoustic-radio transformation, wifi devices (ID#: 15-6893)
URL: http://doi.acm.org/10.1145/2789168.2790119
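
The core observation is that audible vibrations weakly amplitude-modulate nearby radio signals. The toy sketch below is our illustration of that principle, not the paper's ART algorithm; the carrier frequency, tone, modulation depth, and filter cutoff are all assumptions chosen for clarity:

```python
# Toy illustration: recover a low-frequency "audio" tone that weakly
# amplitude-modulates a radio carrier, via envelope detection.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 48_000                      # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)    # half a second of samples
audio = np.sin(2 * np.pi * 440 * t)          # hidden 440 Hz tone
carrier = np.cos(2 * np.pi * 10_000 * t)     # stand-in RF carrier
rx = (1 + 0.05 * audio) * carrier            # subtle vibration-induced AM
rx += 0.01 * np.random.randn(len(t))         # measurement noise

envelope = np.abs(hilbert(rx))               # magnitude of the analytic signal
b, a = butter(4, 2000 / (fs / 2), btype="low")
recovered = filtfilt(b, a, envelope - envelope.mean())  # keep the audio band

# The dominant frequency of `recovered` should sit near 440 Hz.
spectrum = np.abs(np.fft.rfft(recovered))
print(np.fft.rfftfreq(len(recovered), 1 / fs)[spectrum.argmax()])
```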

 

Jian Liu, Yan Wang, Gorkem Kar, Yingying Chen, Jie Yang, Marco Gruteser; “Snooping Keystrokes with mm-level Audio Ranging on a Single Phone,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 142–154. doi:10.1145/2789168.2790122
Abstract: This paper explores the limits of audio ranging on mobile devices in the context of a keystroke snooping scenario. Acoustic keystroke snooping is challenging because it requires distinguishing and labeling sounds generated by tens of keys in very close proximity. Existing work on acoustic keystroke recognition relies on training with labeled data, linguistic context, or multiple phones placed around a keyboard — requirements that limit usefulness in an adversarial context. In this work, we show that mobile audio hardware advances can be exploited to discriminate mm-level position differences and that this makes it feasible to locate the origin of keystrokes from only a single phone behind the keyboard. The technique clusters keystrokes using time-difference of arrival measurements as well as acoustic features to identify multiple strokes of the same key. It then computes the origin of these sounds precisely enough to identify and label each key. By locating keystrokes, this technique avoids the need for labeled training data or linguistic context. Experiments with three types of keyboards and off-the-shelf smartphones demonstrate scenarios where our system can recover 94% of keystrokes; to our knowledge, this is the first single-device technique that enables acoustic snooping of passwords.
Keywords: audio ranging, keystroke snooping, single phone, time difference of arrival (TDoA) (ID#: 15-6894)
URL: http://doi.acm.org/10.1145/2789168.2790122
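
The enabling primitive is fine-grained time-difference-of-arrival (TDoA) estimation between a phone's two microphones. The following synthetic sketch (sample rate, delay, and signals are assumptions for illustration) shows the standard cross-correlation estimate the paper's clustering builds on:

```python
# Minimal TDoA sketch: estimate the arrival-time difference of a keystroke
# "click" at a phone's two microphones by cross-correlation. The paper
# combines such TDoA values with acoustic features to cluster and label keys.
import numpy as np

fs = 192_000                          # high sample rate -> mm-level resolution
click = np.random.randn(200)          # toy transient keystroke sound
delay_samples = 37                    # true inter-mic delay for this key

mic1 = np.concatenate([np.zeros(500), click, np.zeros(500)])
mic2 = np.concatenate([np.zeros(500 + delay_samples), click,
                       np.zeros(500 - delay_samples)])

xcorr = np.correlate(mic2, mic1, mode="full")       # full cross-correlation
lag = xcorr.argmax() - (len(mic1) - 1)              # lag of the peak
tdoa = lag / fs                                     # seconds
print(lag, tdoa * 343 * 1000, "mm path difference") # speed of sound ~343 m/s
```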

 

He Wang, Ted Tsung-Te Lai, Romit Roy Choudhury; “MoLe: Motion Leaks Through Smartwatch Sensors,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 155–166. doi:10.1145/2789168.2790121
Abstract: Imagine a user typing on a laptop keyboard while wearing a smart watch. This paper asks whether motion sensors from the watch can leak information about what the user is typing. While it's not surprising that some information will be leaked, the question is how much. We find that when motion signal processing is combined with patterns in the English language, the leakage is substantial. Reported results show that when a user types a word W, it is possible to shortlist a median of 24 words such that W is in this shortlist. When the word is longer than 6 characters, the median shortlist drops to 10. Of course, such leaks happen without requiring any training from the user, and also under the (obvious) condition that the watch is only on the left hand. We believe this is surprising and merits awareness, especially in light of various continuous sensing apps that are emerging in the app market. Moreover, we discover additional “leaks” that can further reduce the shortlist — we leave these exploitations to future work.
Keywords: Bayesian inference, accelerometer, gesture, gyroscope, malware, motion leaks, security, side-channel attacks, smartwatch (ID#: 15-6895)
URL: http://doi.acm.org/10.1145/2789168.2790121
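
The shortlisting idea can be conveyed with a much simpler stand-in for the paper's Bayesian pipeline. Suppose, as an assumption for illustration, that the watch reveals only which left-hand keys were pressed and in what order; a dictionary filter then yields the shortlist (the key assignment and word list below are ours):

```python
# Illustrative sketch (not the paper's inference pipeline): filter a
# dictionary for words consistent with the left-hand key subsequence that a
# left-wrist motion sensor could plausibly expose.
LEFT = set("qwertasdfgzxcvb")          # keys typically typed by the left hand

def left_signature(word):
    """Subsequence of a word visible to a left-wrist motion sensor."""
    return "".join(ch for ch in word if ch in LEFT)

dictionary = ["secure", "science", "system", "signal", "sensor", "screen"]
observed = left_signature("science")   # what the watch 'sees' for "science"

shortlist = [w for w in dictionary if left_signature(w) == observed]
print(observed, shortlist)             # candidate words containing the target
```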

 

Anastasia Shuba, Anh Le, Minas Gjoka, Janus Varmarken, Simon Langhoff, Athina Markopoulou; “AntMonitor: A System for Mobile Traffic Monitoring and Real-Time Prevention of Privacy Leaks,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 170–172. doi:10.1145/2789168.2789170
Abstract: Mobile devices play an essential role in the Internet today, and there is an increasing interest in using them as a vantage point for network measurement from the edge. At the same time, these devices store personal, sensitive information, and there is a growing number of applications that leak it. We propose AntMonitor—the first system of its kind that supports (i) collection of large-scale, semantic-rich network traffic in a way that respects users’ privacy preferences and (ii) detection and prevention of leakage of private information in real time. The first property makes AntMonitor a powerful tool for network researchers who want to collect and analyze large-scale yet fine-grained mobile measurements. The second property can work as an incentive for using AntMonitor and contributing data for analysis. As a proof-of-concept, we have developed a prototype of AntMonitor, deployed it to monitor 9 users for 2 months, and collected and analyzed 20 GB of mobile data from 151 applications. Preliminary results show that fine-grained data collected from AntMonitor could enable application classification with higher accuracy than state-of-the-art approaches. In addition, we demonstrated that AntMonitor could help prevent several apps from leaking private information over unencrypted traffic, including phone numbers, emails, and device identifiers.
Keywords: android security, mobile network monitoring, privacy leakage detection (ID#: 15-6896)
URL: http://doi.acm.org/10.1145/2789168.2789170
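
As a rough illustration of the real-time leak-prevention idea (the patterns and matching logic below are our assumptions, not AntMonitor's implementation), outbound plaintext payloads can be scanned for personal identifiers before they leave the device:

```python
# Minimal leak-detection sketch: scan an outbound payload for personal
# identifiers such as emails, phone numbers, and device IDs.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "imei":  re.compile(r"\b\d{15}\b"),          # device identifier
}

def find_leaks(payload: bytes):
    """Return (kind, match) pairs for identifiers found in a packet payload."""
    text = payload.decode("utf-8", errors="ignore")
    return [(kind, m.group()) for kind, rx in PATTERNS.items()
            for m in rx.finditer(text)]

packet = b"GET /track?user=alice@example.com&imei=356938035643809 HTTP/1.1"
print(find_leaks(packet))
# [('email', 'alice@example.com'), ('imei', '356938035643809')]
```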

 

Wei Wang, Alex X. Liu, Muhammad Shahzad, Kang Ling, Sanglu Lu; “Understanding and Modeling of WiFi Signal Based Human Activity Recognition,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 65–76. doi:10.1145/2789168.2790093
Abstract: Some pioneer WiFi signal based human activity recognition systems have been proposed. Their key limitation lies in the lack of a model that can quantitatively correlate CSI dynamics and human activities. In this paper, we propose CARM, a CSI based human Activity Recognition and Monitoring system. CARM has two theoretical underpinnings: a CSI-speed model, which quantifies the correlation between CSI value dynamics and human movement speeds, and a CSI-activity model, which quantifies the correlation between the movement speeds of different human body parts and a specific human activity. By these two models, we quantitatively build the correlation between CSI value dynamics and a specific human activity. CARM uses this correlation as the profiling mechanism and recognizes a given activity by matching it to the best-fit profile. We implemented CARM using commercial WiFi devices and evaluated it in several different environments. Our results show that CARM achieves an average accuracy of greater than 96%.
Keywords: activity recognition, channel state information (CSI), wifi (ID#: 15-6897)
URL: http://doi.acm.org/10.1145/2789168.2790093
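
The CSI-speed model rests on a simple physical relation: a reflector moving at speed v changes the reflected path length at a rate of about 2v, so the CSI amplitude fluctuates at roughly 2v/λ. The synthetic sketch below (all constants are illustrative assumptions) recovers a speed profile from a simulated CSI trace using a short-time spectrum:

```python
# Hedged sketch of the CSI-speed intuition: dominant spectral frequency of
# the CSI amplitude tracks the movement speed via f = 2*v/wavelength.
import numpy as np
from scipy.signal import spectrogram

fs = 1000                         # CSI sampling rate (samples/s)
wavelength = 0.06                 # ~5 GHz WiFi carrier wavelength (m)
t = np.arange(0, 4, 1 / fs)
speed = 0.5 + 0.4 * (t > 2)       # body speed: 0.5 m/s, then 0.9 m/s

# Integrate the instantaneous fluctuation frequency 2*v/wavelength.
phase = np.cumsum(2 * speed / wavelength) / fs
csi_amp = 1 + 0.3 * np.cos(2 * np.pi * phase) + 0.05 * np.random.randn(len(t))

f, seg_t, Sxx = spectrogram(csi_amp - csi_amp.mean(), fs, nperseg=256)
dominant = f[Sxx.argmax(axis=0)]              # dominant frequency per segment
est_speed = dominant * wavelength / 2         # invert the CSI-speed relation
print(np.round(est_speed, 2))                 # ~0.5 m/s, then ~0.9 m/s
```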

 

Kamran Ali, Alex X. Liu, Wei Wang, Muhammad Shahzad; “Keystroke Recognition Using WiFi Signals,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 90–102. doi:10.1145/2789168.2790109
Abstract: Keystroke privacy is critical for ensuring the security of computer systems and the privacy of human users, as what is being typed could be passwords or privacy-sensitive information. In this paper, we show for the first time that WiFi signals can also be exploited to recognize keystrokes. The intuition is that while typing a certain key, the hands and fingers of a user move in a unique formation and direction and thus generate a unique pattern in the time series of Channel State Information (CSI) values, which we call the CSI-waveform for that key. In this paper, we propose a WiFi signal based keystroke recognition system called WiKey. WiKey consists of two Commercial Off-The-Shelf (COTS) WiFi devices, a sender (such as a router) and a receiver (such as a laptop). The sender continuously emits signals and the receiver continuously receives signals. When a human subject types on a keyboard, WiKey recognizes the typed keys based on how the CSI values change at the WiFi signal receiver end. We implemented the WiKey system using a TP-Link TL-WR1043ND WiFi router and a Lenovo X200 laptop. WiKey achieves more than a 97.5% detection rate for detecting keystrokes and 96.4% recognition accuracy for classifying single keys. In real-world experiments, WiKey can recognize keystrokes in a continuously typed sentence with an accuracy of 93.5%.
Keywords: channel state information, cots wifi devices, gesture recognition, keystroke recovery, wireless security (ID#: 15-6898)
URL: http://doi.acm.org/10.1145/2789168.2790109
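
One plausible way to match CSI keystroke waveforms against per-key templates is nearest-neighbor classification under dynamic time warping, which tolerates differences in typing speed. This is an assumed classifier for illustration; the paper's own pipeline differs in detail:

```python
# Sketch of waveform matching: label an unknown CSI keystroke waveform by
# its nearest training template under dynamic time warping (DTW).
import numpy as np

def dtw(a, b):
    """Classic O(len(a)*len(b)) DTW distance between 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy per-key CSI templates and a noisy, slightly stretched test waveform.
t = np.linspace(0, 1, 80)
templates = {"a": np.sin(2 * np.pi * 3 * t), "b": np.sin(2 * np.pi * 5 * t)}
test = np.sin(2 * np.pi * 3 * np.linspace(0, 1, 95)) + 0.1 * np.random.randn(95)

label = min(templates, key=lambda k: dtw(test, templates[k]))
print(label)   # 'a' despite the different length and added noise
```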

 

Yanzi Zhu, Yibo Zhu, Ben Y. Zhao, Haitao Zheng; “Reusing 60GHz Radios for Mobile Radar Imaging,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 103–116. doi:10.1145/2789168.2790112
Abstract: The future of mobile computing involves autonomous drones, robots and vehicles. To accurately sense their surroundings in a variety of scenarios, these mobile computers require a robust environmental mapping system. One attractive approach is to reuse millimeter-wave communication hardware in these devices, e.g., 60GHz networking chipsets, and capture signals reflected by the target surface. The devices can also move while collecting reflection signals, creating a large synthetic aperture radar (SAR) for high-precision RF imaging. Our experimental measurements, however, show that this approach provides poor precision in practice, as imaging results are highly sensitive to device positioning errors that translate into phase errors. We address this challenge by proposing a new 60GHz imaging algorithm, RSS Series Analysis, which images an object using only RSS measurements recorded along the device’s trajectory. In addition to object location, our algorithm can discover a rich set of object surface properties at high precision, including surface orientation, curvature, boundaries, and surface material. We tested our system on a variety of common household objects (between 5 cm and 30 cm in width). Results show that it achieves high accuracy (cm level) in a variety of dimensions and is highly robust against noise in device position and trajectory tracking. We believe that this is the first practical mobile imaging system (re)using 60GHz networking devices, and it provides a basic primitive towards the construction of detailed environmental mapping systems.
Keywords: 60GHz, RF imaging, environmental mapping, mobile radar (ID#: 15-6899)
URL: http://doi.acm.org/10.1145/2789168.2790112

 

Davide Pesavento, Giulio Grassi, Giovanni Pau, Paramvir Bahl, Serge Fdida; “Car-Fi: Opportunistic V2I by Exploiting Dual-Access Wi-Fi Networks,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 173–175. doi:10.1145/2789168.2789171
Abstract: The need for Internet access from moving vehicles has been steadily increasing in the past few years. Solutions that rely on cellular connectivity are becoming impractical to deploy due to technical and economic reasons. Car-Fi proposes an approach that leverages existing home Wi-Fi access points configured in dual-access mode, in order to offload all data traffic from the congested and expensive cellular infrastructure to whatever Wi-Fi network is available. Thanks to an improved scanning algorithm and numerous optimizations to the connection setup, Car-Fi makes downloading large amounts of data from a moving car feasible.
Keywords: 802.11, V2I, fast roaming, scanning, vehicular networks (ID#: 15-6900)
URL: http://doi.acm.org/10.1145/2789168.2789171

 

Gaetan Harter, Roger Pissard-Gibollet, Frederic Saint-Marcel, Guillaume Schreiner, Julien Vandaele; “FIT IoT-LAB: A Large Scale Open Experimental IoT Testbed,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 176–178. doi:10.1145/2789168.2789172
Abstract: FIT IoT-LAB’s goal is to provide a very large scale open experimental testbed for the Internet of Things, by deploying more than 2700 experimentation nodes over 6 sites in France. Our demonstration purpose is to illustrate what the IoT-LAB platform offers through small applications involving radio communications and mobile nodes. Thanks to these examples, we will show how to run an experiment in the testbed and some of the tools it provides to help in developing, tuning and monitoring such large-scale applications.
Keywords: internet of things, testbed, wireless sensor network (ID#: 15-6901)
URL: http://doi.acm.org/10.1145/2789168.2789172

 

Loïc Baron, Fadwa Boubekeur, Radomir Klacza, Mohammed Yasin Rahman, Ciro Scognamiglio, Nina Kurose, Timur Friedman, Serge Fdida;  “OneLab: Major Computer Networking Testbeds for IoT and Wireless Experimentation,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 199–200. doi:10.1145/2789168.2789180
Abstract: Gathering the measurements required to produce accurate results for mobile communications and wireless networking protocols, technologies and applications relies on the use of expensive experimental computer networking facilities. Until very recently, large-scale testbed facilities have existed in separate silos, each with its own authentication mechanisms and experiment support tools. What was lacking was a viable federation model that could both provide a single entry point to heterogeneous and distributed resources and federate resources that are under the control of multiple authorities. The OneLab experimental facility, which came online in 2014, realizes this model, making a set of world-class testbeds freely available to researchers through a unique credential for each user and a common set of tools. We allow users to deploy innovative experiments across our federated platforms that include the embedded object testbeds of FIT IoT-Lab, the cognitive radio testbed of FIT CorteXlab, the wireless testbeds of NITOS-Lab, and the internet overlay testbed PlanetLab Europe (PLE), which together provide thousands of nodes for experimentation. Also federated under OneLab are the FUSECO Playground, which includes cloud, M2M, SDN, and mobile broadband; w-iLab.t wireless facilities; and the Virtual Wall testbed of wired networks and applications. Our demo describes the resources offered by the OneLab platforms, and illustrates how any member of the MobiCom community can create an account and start using these platforms today to deploy experiments for mobile and wireless testing.
Keywords: experimental facility, heterogeneous testbed federation, myslice, slice-based federation architecture, unique credential (ID#: 15-6902)
URL: http://doi.acm.org/10.1145/2789168.2789180

 

Georgios Z. Papadopoulos, Antoine Gallais, Guillaume Schreiner, Thomas Noël; “Live Adaptations of Low-power MAC Protocols,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 207–209. doi:10.1145/2789168.2789184
Abstract: This demonstration interactively observes the impact of modifying the preamble and sampling periods in the low-power family of MAC protocols, illustrating in real time the resulting energy consumption and delay performance of each node. To do so, we implemented the ability for users to generate traffic at remote nodes involved in two distinct deployed topologies. Those deployed networks operate either with a statically configured network, employing X-MAC on top of the Contiki OS, or with T-AAD, a lightweight traffic auto-adaptive protocol that allows live and automatic modifications of duty-cycle configurations.
Keywords: MAC layer, bursty traffic, low-power protocols, traffic adaptivity, wireless sensor network (ID#: 15-6903)
URL: http://doi.acm.org/10.1145/2789168.2789184

 

Matteo Pozza, Claudio Enrico Palazzi, Armir Bujari; “Poster: Mobile Data Offloading Testbed,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 212–214. doi:10.1145/2789168.2795159
Abstract: Recent research has proposed swarming protocols as a possible approach to offload the Internet infrastructure when some content can be shared by several users. However, such protocols have generally been evaluated only in simulation. Instead, we present an application platform that allows rapid development and testing of swarming protocols using off-the-shelf smartphones.
Keywords: data offload, mobile, testbed, wireless (ID#: 15-6904)
URL: http://doi.acm.org/10.1145/2789168.2795159

 

Yanzhi Dou, Kexiong (Curtis) Zeng, Yaling Yang; “Poster: Privacy-Preserving Server-Driven Dynamic Spectrum Access System,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 218–220. doi:10.1145/2789168.2795161
Abstract: Dynamic spectrum access (DSA) techniques have been widely accepted as a crucial solution to mitigate the potential spectrum scarcity problem. As a key form of DSA, the government is proposing to release more federal spectrum for sharing with commercial wireless users. However, the flourishing of federal-commercial sharing hinges upon how privacy issues are managed. In current DSA proposals, the sensitive operation parameters of both federal incumbent users (IUs) and commercial secondary users (SUs) need to be shared with the dynamic spectrum access system (SAS) to realize efficient spectrum allocation. Since SAS is not necessarily operated by a trusted third party, the current proposals do not satisfy the privacy requirements of IUs and SUs. To address the privacy issues, this paper presents a privacy-preserving SAS design, which realizes the complex spectrum allocation decision process of DSA through secure computation over ciphertext based on homomorphic encryption; thus none of the IU or SU operation parameters are exposed to SAS.
Keywords: homomorphic encryption, privacy, server-driven dsa (ID#: 15-6905)
URL: http://doi.acm.org/10.1145/2789168.2795161
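
The additively homomorphic building block can be sketched with the open-source python-paillier library (`pip install phe`). The aggregation scenario below is our illustration of the underlying primitive, not the paper's full SAS protocol:

```python
# Core idea: the untrusted SAS can aggregate encrypted operation parameters
# without ever seeing the plaintexts; only the key holder can decrypt.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Secondary users report encrypted interference contributions (scaled ints).
su_reports = [public_key.encrypt(v) for v in (12, 7, 3)]

# Homomorphic sum over ciphertexts: the server learns nothing about
# individual reports, only the encrypted aggregate.
encrypted_total = sum(su_reports[1:], su_reports[0])

# Only the key holder (e.g., a trusted enforcement entity) can decrypt.
print(private_key.decrypt(encrypted_total))   # 22
```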

 

Tan Zhang, Aakanksha Chowdhery, Paramvir (Victor) Bahl, Kyle Jamieson, Suman Banerjee; “The Design and Implementation of a Wireless Video Surveillance System,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 426–438. doi:10.1145/2789168.2790123
Abstract: Internet-enabled cameras pervade daily life, generating a huge amount of data, but most of the video they generate is transmitted over wires and analyzed offline with a human in the loop. The ubiquity of cameras limits the amount of video that can be sent to the cloud, especially on wireless networks where capacity is at a premium. In this paper, we present Vigil, a real-time distributed wireless surveillance system that leverages edge computing to support real-time tracking and surveillance in enterprise campuses, retail stores, and across smart cities. Vigil intelligently partitions video processing between edge computing nodes co-located with cameras and the cloud to save wireless capacity, which can then be dedicated to Wi-Fi hotspots, offsetting their cost. Novel video frame prioritization and traffic scheduling algorithms further optimize Vigil’s bandwidth utilization. We have deployed Vigil across three sites in both whitespace and Wi-Fi networks. Depending on the level of activity in the scene, experimental results show that Vigil allows a video surveillance system to support a geographical area of coverage between five and 200 times greater than an approach that simply streams video over the wireless network. For a fixed region of coverage and bandwidth, Vigil outperforms the default equal throughput allocation strategy of Wi-Fi by delivering up to 25% more objects relevant to a user’s query.
Keywords: edge computing, video surveillance, wireless (ID#: 15-6906)
URL: http://doi.acm.org/10.1145/2789168.2790123

 

Puneet Jain, Justin Manweiler, Romit Roy Choudhury; “Poster: User Location Fingerprinting at Scale,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 260–262. doi:10.1145/2789168.2795175
Abstract: Many emerging mobile computing applications are continuous-vision based. The primary challenge these applications face is computation partitioning between the phone and the cloud. Indoor location information is one piece of metadata that can help these applications make this decision. In this extended abstract, we propose a vision-based scheme to uniquely fingerprint an environment, which can in turn be used to identify a user’s location from the uploaded visual features. Our approach takes into account that the opportunity to identify location is fleeting and that phones are resource constrained; therefore, minimal yet sufficient computation needs to be performed to make the offloading decision. Our work aims to achieve near real-time performance while scaling to buildings of arbitrary sizes. The current work is in preliminary stages but holds promise for the future and may apply to many applications in this area.
Keywords: cloud offloading, continuous vision, localization (ID#: 15-6907)
URL: http://doi.acm.org/10.1145/2789168.2795175

 

Hossein Shafagh, Anwar Hithnawi, Andreas Droescher, Simon Duquennoy, Wen Hu; “Poster: Towards Encrypted Query Processing for the Internet of Things,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 251–253. doi:10.1145/2789168.2795172
Abstract: The Internet of Things (IoT) is envisioned to digitize the physical world, resulting in a digital representation of our proximate living space. The possibility of inferring privacy violating information from IoT data necessitates adequate security measures regarding data storage and communication. To address these privacy and security concerns, we introduce our system that stores IoT data securely in the Cloud database while still allowing query processing over the encrypted data. We enable this by encrypting IoT data with a set of cryptographic schemes such as order-preserving and partially homomorphic encryptions. To achieve this on resource-limited devices, our system relies on optimized algorithms that accelerate partial homomorphic and order-preserving encryptions by 1 to 2 orders of magnitude. Our early results show the feasibility of our system on low-power devices. We envision our system as an enabler of secure IoT applications.
Keywords: computing on encrypted data, data security, encrypted computing, internet of things, system design (ID#: 15-6908)
URL: http://doi.acm.org/10.1145/2789168.2795172

 

Mohammad A. Hoque, Kasperi Saarikoski, Eemil Lagerspetz, Julien Mineraud, Sasu Tarkoma; “Poster: VPN Tunnels for Energy Efficient Multimedia Streaming,” MobiCom '15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 239–241. doi:10.1145/2789168.2795168
Abstract: Minimizing the energy consumption of mobile devices for wireless network access is important. In this article, we analyze the energy efficiency of a new set of applications which use Virtual Private Network (VPN) tunnels for secure communication. First, we discuss the energy efficiency of a number of VPN applications from a large scale deployment of 500 K devices. We next measure the energy consumption of some of these applications with different use cases. Finally, we demonstrate that a VPN tunnel can be instrumented for enhanced energy efficiency with multimedia streaming applications. Our results indicate energy savings of 40% for this class of applications.
Keywords: energy consumption, multimedia streaming, traffic scheduling, virtual private network (ID#: 15-6909)
URL: http://doi.acm.org/10.1145/2789168.2795168
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


International Conferences: MobiHoc 2015, China

 

 

International Conferences: 

Mobile Ad Hoc Networking and Computing 2015

China


The ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc ’15), which addresses wireless networking and computing, was held in Hangzhou, China, June 22–25, 2015, and included the Workshop on Privacy-Aware Mobile Computing (PAMCO). Topics included foundations for privacy-aware mobile computing—e.g., key exchange, distribution and management, location privacy, privacy-preserving data collection, privacy-preserving data aggregation and analytics, privacy issues in wearable computing, data analysis on traffic logs, privacy issues in cellular networks, privacy issues in body-area networks, emerging privacy threats from mobile apps, privacy issues in near-field communication (NFC), Bluetooth security and privacy, secure and privacy-preserving cooperation, jamming and countermeasures, and capacity and security analysis of covert channels.



Qinggang Yue, Zhen Ling, Wei Yu, Benyuan Liu, Xinwen Fu; “Blind Recognition of Text Input on Mobile Devices via Natural Language Processing,” PAMCO ’15 Proceedings of the 2015 Workshop on Privacy-Aware Mobile Computing, June 2015, Pages 19-24. doi:10.1145/2757302.2757304
Abstract: In this paper, we investigate how to retrieve meaningful English text input on mobile devices from recorded videos while the text is illegible in the videos. In our previous work, we were able to retrieve random passwords with high success rate at a certain distance. When the distance increases, the success rate of recovering passwords decreases. However, if the input is meaningful text such as email messages, we can further increase the success rate via natural language processing techniques since the text follows spelling and grammar rules and is context sensitive. The process of retrieving the text from videos can be modeled as noisy channels. We first derive candidate words for each word of the input sentence, model the whole sentence with a Hidden Markov model and then apply the trigram language model to derive the original sentence. Our experiments validate our technique of retrieving meaningful English text input on mobile devices from recorded videos.
Keywords: computer vision, mobile security, natural language processing (ID#: 15-6858)
URL: http://doi.acm.org/10.1145/2757302.2757304
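
The decoding step can be conveyed with a toy dynamic program. The sketch below uses a bigram model instead of the paper's trigram for brevity, and the candidate sets and probabilities are made up for illustration:

```python
# Toy sentence reconstruction: each blurry word yields a candidate set with
# vision confidences; Viterbi-style DP picks the most probable sentence.
import math

# Per-position candidates with (assumed) vision-based likelihoods.
candidates = [
    {"see": 0.4, "sec": 0.6},
    {"you": 0.7, "yon": 0.3},
    {"soon": 0.5, "moon": 0.5},
]
# A tiny bigram language model: P(word | previous word).
bigram = {
    ("<s>", "see"): 0.5, ("<s>", "sec"): 0.1,
    ("see", "you"): 0.6, ("sec", "you"): 0.1, ("see", "yon"): 0.01,
    ("sec", "yon"): 0.01, ("you", "soon"): 0.5, ("you", "moon"): 0.05,
    ("yon", "soon"): 0.05, ("yon", "moon"): 0.05,
}

def viterbi(candidates, bigram, floor=1e-6):
    best = {"<s>": (0.0, ["<s>"])}                  # word -> (log-prob, path)
    for slot in candidates:
        nxt = {}
        for word, vis in slot.items():
            scored = (
                (lp + math.log(bigram.get((prev, word), floor))
                 + math.log(vis), path)
                for prev, (lp, path) in best.items()
            )
            lp, path = max(scored)
            nxt[word] = (lp, path + [word])
        best = nxt
    return max(best.values())[1][1:]

print(viterbi(candidates, bigram))   # ['see', 'you', 'soon']
```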

 

Maya Larson, Chunqiang Hu, Ruinian Li, Wei Li, Xiuzhen Cheng; “Secure Auctions without an Auctioneer via Verifiable Secret Sharing,” in PAMCO ’15 Proceedings of the 2015 Workshop on Privacy-Aware Mobile Computing, June 2015, Pages 1-6. doi:10.1145/2757302.2757305
Abstract: Combinatorial auctions are a research hot spot. They impact people’s daily lives in many applications, such as the spectrum auctions held by the FCC. In such auctions, bidders may want to submit bids for combinations of goods. The challenge is how to protect the privacy of bidding prices and ensure data security in these auctions. To tackle this challenge, we present an approach based on verifiable secret sharing. The approach is to represent the price in the degree of a polynomial; thus the maximum/sum of the degrees of two polynomials can be obtained from the degree of the sum/product of the two polynomials based on secret sharing. This protocol hides the information of bidders (bidding prices) from the auction servers. The auctioneers can obtain their secret shares from bidders without a secure channel. Since it does not need a secure channel, this scheme is more practical and applicable to more scenarios. The scheme provides resistance to collusion attacks, conspiracy attacks, passive attacks, and so on. Compared to [11, 12], our proposed scheme provides authentication without increasing the communication cost.
Keywords: (not provided) (ID#: 15-6859)
URL: http://doi.acm.org/10.1145/2757302.2757305
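
The degree-encoding trick described in the abstract admits a compact worked example. The sketch below uses plain integer arithmetic; the actual scheme operates over a finite field with verifiable shares:

```python
# A bid b becomes a random polynomial of degree b, so deg(f+g) reveals
# max(b1, b2) and deg(f*g) reveals b1+b2, without revealing the bids'
# coefficients. (With positive coefficients, deg(f+g) = max always holds;
# in general it holds unless leading coefficients cancel.)
import random
from numpy.polynomial import polynomial as P

def encode(bid):
    """Random polynomial of degree `bid` (nonzero coefficients)."""
    return [random.randint(1, 9) for _ in range(bid + 1)]

def degree(coeffs):
    coeffs = list(coeffs)
    while coeffs and abs(coeffs[-1]) < 1e-9:   # strip trailing zeros
        coeffs.pop()
    return len(coeffs) - 1

b1, b2 = 5, 3
f, g = encode(b1), encode(b2)

print(degree(P.polyadd(f, g)))   # 5 -> max(b1, b2)
print(degree(P.polymul(f, g)))   # 8 -> b1 + b2
```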

 

Tong Yan, Yachao Lu, Nan Zhang; “Privacy Disclosure from Wearable Devices,” in PAMCO ’15 Proceedings of the 2015 Workshop on Privacy-Aware Mobile Computing, June 2015, Pages 13–18. doi:10.1145/2757302.2757306
Abstract: In recent years, wearable devices have seen an explosive growth in popularity and a rapid enhancement of functionality. Current off-the-shelf wearable devices pack sensors such as a pedometer, gyroscope, accelerometer, altimeter, compass, GPS, and heart rate monitor. These sensors work together to quietly monitor various aspects of a user’s daily life, enabling a wide spectrum of health- and social-related applications. Nevertheless, the data collected by such sensors, even in aggregated form, may cause significant privacy concerns if shared with third-party applications and/or a user’s social connections (as many wearable platforms now support). This paper studies a novel problem: the potential inference of sensitive user behavior from seemingly insensitive sensor outputs. Specifically, we examine whether it is possible to infer the behavioral sequence of a user, such as moving from one place to another, visiting a coffee shop, or grocery shopping, based on the outputs of pedometer sensors (aggregated over certain time intervals, e.g., 1 minute). We demonstrate through real-world experiments that it is often possible to infer such behavior with a high success probability, raising privacy concerns about the sharing of such information as currently supported by various wearable devices.
Keywords: data mining, information retrieval, privacy, time series, wearable devices (ID#: 15-6860)
URL: http://doi.acm.org/10.1145/2757302.2757306
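
A crude sketch of why even minute-level step aggregates are revealing (the labels, thresholds, and trace below are assumptions for illustration, not the paper's inference models):

```python
# Segment minute-aggregated step counts into a behavioral sequence.
steps_per_min = [0, 0, 110, 105, 98, 0, 0, 22, 31, 18, 25, 0, 0]

def label(count):
    if count < 5:
        return "stationary"
    if count > 80:
        return "walking (transit)"
    return "intermittent steps (e.g., browsing a store)"

segments, prev = [], None
for minute, count in enumerate(steps_per_min):
    cur = label(count)
    if cur != prev:                 # record each behavior change
        segments.append((minute, cur))
        prev = cur
print(segments)
# [(0, 'stationary'), (2, 'walking (transit)'), (5, 'stationary'),
#  (7, 'intermittent steps (e.g., browsing a store)'), (11, 'stationary')]
```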

 

Zhongli Liu, Zupei Li, Benyuan Liu, Xinwen Fu, Ioannis Raptis, Kui Ren; “Rise of Mini-Drones: Applications and Issues,” in PAMCO ’15 Proceedings of the 2015 Workshop on Privacy-Aware Mobile Computing, June 2015, Pages 7–12. doi:10.1145/2757302.2757303
Abstract: Miniature (mini) drones are enjoying increasing attention. They have a broad market and many applications. However, a powerful technology often has two ethical sides; miniature drones can be abused, raising security and privacy concerns. The contribution of this paper is two-fold. First, we survey the mini-drones on the market, compare their specifications, such as flight time, maximum payload weight, and price, and review the regulations and issues involved in operating mini-drones. Second, we propose novel aerial localization strategies and compare six different localization strategies for a thorough study of aerial localization by a single drone.
Keywords: (not provided) (ID#: 15-6861)
URL: http://doi.acm.org/10.1145/2757302.2757303

 

Xinwen Fu, Nan Zhang, Program Chairs; “Proceedings of the 2015 Workshop on Privacy-Aware Mobile Computing,” PAMCO ’15 at MobiHoc ’15, Hangzhou, China, June 22–25, 2015, ACM, New York, NY, 2015. ISBN: 978-1-4503-3523-2
Abstract: It is our great pleasure to welcome you to the 2015 ACM MobiHoc Workshop on Privacy-Aware Mobile Computing–PAMCO’15. This is the first year of this workshop, which aims to bring together researchers from mobile computing and security/privacy communities to discuss topics related to the protection of privacy in mobile computing, including both theoretical studies and implementation/experimentations papers, especially analysis of privacy threats from emerging applications in mobile environments — e.g., location-based services, mobile apps, wearable computing, etc.
Keywords: (not provided) (ID#: 15-6862)
URL: http://dl.acm.org/citation.cfm?id=2757302

 

Shanhe Yi, Cheng Li, Qun Li; “A Survey of Fog Computing: Concepts, Applications and Issues,” in Mobidata ’15 Proceedings of the 2015 Workshop on Mobile Big Data, June 2015, Pages 37–42. doi:10.1145/2757384.2757397
Abstract: Despite the increasing usage of cloud computing, some issues remain unsolved due to inherent problems of cloud computing, such as unreliable latency and a lack of mobility support and location awareness. Fog computing can address those problems by providing elastic resources and services to end users at the edge of the network, while cloud computing is more about providing resources distributed in the core network. This survey discusses the definition of fog computing and similar concepts, introduces representative application scenarios, and identifies various issues one may encounter when designing and implementing fog computing systems. It also highlights some opportunities and challenges, as directions for potential future work, in related techniques that need to be considered in the context of fog computing.
Keywords: cloud computing, edge computing, fog computing, mobile cloud computing, mobile edge computing, review (ID#: 15-6863)
URL: http://doi.acm.org/10.1145/2757384.2757397

 

Jian Liu, Yan Wang, Yingying Chen, Jie Yang, Xu Chen, Jerry Cheng; “Tracking Vital Signs During Sleep Leveraging Off-the-Shelf WiFi,” in MobiHoc ’15 Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing, June 2015, Pages 267–276. doi:10.1145/2746285.2746303
Abstract: Tracking human vital signs of breathing and heart rates during sleep is important, as it can help to assess the general physical health of a person and provide useful clues for diagnosing possible diseases. Traditional approaches (e.g., Polysomnography (PSG)) are limited to clinic usage. Recent radio frequency (RF) based approaches require specialized devices or dedicated wireless sensors and are only able to track breathing rate. In this work, we propose to track the vital signs of both breathing rate and heart rate during sleep by using off-the-shelf WiFi without any wearable or dedicated devices. Our system re-uses an existing WiFi network and exploits the fine-grained channel information to capture the minute movements caused by breathing and heart beats. Our system thus has the potential to be widely deployed and perform continuous long-term monitoring. The developed algorithm makes use of the channel information in both the time and frequency domains to estimate breathing and heart rates, and it works well when either one or two persons are in bed. Our extensive experiments demonstrate that our system can accurately capture vital signs during sleep under realistic settings and achieves comparable or even better performance compared to traditional and existing approaches, which strongly indicates that it can provide non-invasive, continuous, fine-grained vital sign monitoring without any additional cost.
Keywords: channel state information (csi), sleep monitoring, vital signs, wifi (ID#: 15-6864)
URL: http://doi.acm.org/10.1145/2746285.2746303
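
The core signal path can be sketched in a few lines: breathing modulates CSI amplitude at roughly 0.2–0.5 Hz, so isolating that band and finding the spectral peak yields breaths per minute. The trace, sampling rate, and band below are synthetic assumptions for illustration:

```python
# Estimate breathing rate from a simulated CSI amplitude trace.
import numpy as np

fs = 50                                  # CSI samples per second
t = np.arange(0, 60, 1 / fs)             # one minute of monitoring
breath = 0.2 * np.sin(2 * np.pi * 0.25 * t)      # 15 breaths/min chest motion
csi = 1 + breath + 0.05 * np.random.randn(len(t))

csi -= csi.mean()                        # remove the static (DC) component
freqs = np.fft.rfftfreq(len(csi), 1 / fs)
spec = np.abs(np.fft.rfft(csi))

band = (freqs > 0.1) & (freqs < 0.7)     # plausible breathing band (Hz)
peak = freqs[band][spec[band].argmax()]  # strongest periodicity in the band
print(round(peak * 60, 1), "breaths per minute")   # ~15.0
```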

 

Yu Cao, Peng Hou, Donald Brown, Jie Wang, Songqing Chen; “Distributed Analytics and Edge Intelligence: Pervasive Health Monitoring at the Era of Fog Computing,” in Mobidata ’15 Proceedings of the 2015 Workshop on Mobile Big Data, June 2015, Pages 43–48. doi:10.1145/2757384.2757398
Abstract: Biomedical research and clinical practice are entering a data-driven era. One of the major applications of biomedical big data research is to utilize inexpensive and unobtrusive mobile biomedical sensors and cloud computing for pervasive health monitoring. However, real-world user experiences with mobile cloud-based health monitoring have been poor, due to factors such as excessive networking latency and long response times. On the other hand, fog computing, a newly proposed computing paradigm, utilizes a collaborative multitude of end-user clients or near-user edge devices to conduct a substantial amount of computing, storage, and communication. This new computing paradigm, if successfully applied to pervasive health monitoring, has great potential to accelerate the discovery of early predictors and novel biomarkers to support smart care decision making in connected health scenarios. In this paper, we employ a real-world pervasive health monitoring application (pervasive fall detection for stroke mitigation) to demonstrate the effectiveness and efficacy of the fog computing paradigm in health monitoring. Falls are a major source of morbidity and mortality among stroke patients; hence, detecting falls automatically and in a timely manner becomes crucial for stroke mitigation in daily life. In this paper, we set out to (1) investigate and develop new fall detection algorithms and (2) design and deploy a real-time fall detection system employing the fog computing paradigm (e.g., distributed analytics and edge intelligence), which splits the detection task between the edge devices (e.g., smartphones attached to the user) and the server (e.g., servers in the cloud). Experimental results show that distributed analytics and edge intelligence, supported by the fog computing paradigm, are very promising solutions for pervasive health monitoring.
Keywords: distributed analytics, edge intelligence, fog computing, mobile computing, pervasive health monitoring (ID#: 15-6866)
URL:  http://doi.acm.org/10.1145/2757384.2757398

 

Jiajia Liu, Nei Kato; “Device-to-Device Communication Overlaying Two-Hop Multi-Channel Uplink Cellular Networks,” in MobiHoc ’15 Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing, June 2015, Pages 307–316. doi:10.1145/2746285.2746311
Abstract: Different from previous works, in this paper, we adopt D2D communication as a routing extension to traditional cellular uplinks thus enabling a two-hop route between a user and the serving BS via a D2D relay. Specifically, a BS establishes a cellular link with a mobile user only if the pilot signal strength received from the user is above a specified threshold; otherwise, the user may establish a D2D link with a neighboring user and connect to a nearby BS in a two-hop manner. We present a stochastic geometry based framework to analyze the coverage probability and average rate in such a two-hop multi-channel uplink cellular network where mobile users adopt the fractional channel inversion power control with maximum transmit power limit. As validated by extensive numerical results, the developed framework enables network designers to efficiently determine the optimal control parameters so as to achieve the optimum system performance. Our results show that employing D2D link based two-hop connection can significantly improve both the network coverage and average rate for uplink traffic.
Keywords: device-to-device communication, fractional power control, multi-channel cellular network, stochastic geometry, uplink (ID#: 15-6867)
URL: http://doi.acm.org/10.1145/2746285.2746311

 

Xi Xiong, Zheng Yang, Longfei Shangguan, Yun Fei, Milos Stojmenovic, Yunhao Liu; “SmartGuide: Towards Single-Image Building Localization with Smartphone,” in MobiHoc ’15 Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing, June 2015, Pages 117–126. doi:10.1145/2746285.2746294
Abstract: We introduce SmartGuide, a lightweight and efficient approach to localize and recognize a distant unknown building. Our approach relies on shooting only a single photo of a target building via a smartphone and on a local 2D Google map. SmartGuide first extracts a partial top-view contour of a building from its side-view photo by applying vanishing points and the Manhattan World Assumption, and then fetches a candidate building set from a local 2D Google map based on the smartphone’s GPS readings. The partial top-view shape, orientation, and distance relative to the camera are used as input parameters in a probability model, which recognizes the best candidate building in the local map. Our model is developed based on kernel density estimation, which helps reduce noise in the smartphone sensors, such as GPS readings and the camera ray direction reported by a noisy accelerometer and compass. Experimental results demonstrate that our approach recognizes buildings ranging from 20 m to 520 m away and achieves 92.7% accuracy in downtown areas where the Manhattan World Assumption is applicable. In addition, the processing time is no more than 6 seconds in 87% of cases. Compared with existing building localization schemes, SmartGuide offers numerous advantages. Our method avoids taking multiple photos, intricate 3D reconstruction, or any initial deployment cost of database construction, making it faster and less labor-intensive than existing solutions.
Keywords: building localization, mobile computing, single image, smartphone (ID#: 15-6868)
URL: http://doi.acm.org/10.1145/2746285.2746294
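
A toy version of the probabilistic matching step (assumed Gaussian noise models and made-up map data, not the authors' exact kernel-density formulation): noisy range and bearing estimates vote for candidate buildings, and the best-scoring candidate wins:

```python
# Score candidate buildings under Gaussian models of sensor noise.
import math

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Candidate buildings from the local 2D map: (distance m, bearing deg).
candidates = {"Library": (180, 42.0), "Tower": (195, 47.5), "Mall": (320, 44.0)}

gps_distance, gps_sigma = 190.0, 25.0       # camera-to-building range estimate
compass_bearing, compass_sigma = 45.0, 5.0  # noisy camera ray direction

scores = {
    name: gaussian(d, gps_distance, gps_sigma)
          * gaussian(b, compass_bearing, compass_sigma)
    for name, (d, b) in candidates.items()
}
print(max(scores, key=scores.get), scores)  # 'Tower' wins in this toy setup
```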

 

Muyuan Li, Haojin Zhu, Zhaoyu Gao, Si Chen, Le Yu, Shangqian Hu, Kui Ren; “All Your Location Are Belong to Us: Breaking Mobile Social Networks for Automated User Location Tracking,” in MobiHoc ’14 Proceedings of the 15th ACM International Symposium on Mobile Ad Hoc Networking and Computing, August 2014, Pages 43–52. doi:10.1145/2632951.2632953
Abstract: Location-based social networks (LBSNs) feature friend discovery by location proximity and have attracted hundreds of millions of users world-wide. While leading LBSN providers claim that their users’ location privacy is well protected, for the first time we show through real-world attacks that these claims do not hold. In our identified attacks, a malicious individual with no more capability than a regular LBSN user can easily break most LBSNs by manipulating location information fed to LBSN client apps and running them as location oracles. We further develop an automated user location tracking system and test it on leading LBSNs including Wechat, Skout, and Momo. We demonstrate its effectiveness and efficiency via a three-week real-world experiment with 30 volunteers and show that we could geo-locate any target with high accuracy and readily recover his/her top 5 locations. Finally, we also develop a framework that explores a grid reference system and location classifications to mitigate the attacks. Our result serves as a critical security reminder about current LBSNs pertaining to a vast number of users.
Keywords: location privacy, mobile social network (ID#: 15-6869)
URL: http://doi.acm.org/10.1145/2632951.2632953
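
At its core, the "location oracle" abuse reduces to classic trilateration. A minimal sketch follows (synthetic coordinates; real attacks must also cope with the apps' coarse distance bands and rate limits):

```python
# By spoofing its own GPS to three positions and reading back distances to
# the victim, an attacker solves for the victim's position via least squares.
import numpy as np

probes = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0]])  # spoofed spots (m)
target = np.array([420.0, 310.0])
d = np.linalg.norm(probes - target, axis=1)      # distances the app reveals

# Linearize: subtract the first circle equation from the others.
A = 2 * (probes[1:] - probes[0])
b = (d[0] ** 2 - d[1:] ** 2
     + np.sum(probes[1:] ** 2, axis=1) - np.sum(probes[0] ** 2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(est)    # ~[420., 310.]
```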

 

Haiming Jin, Lu Su, Danyang Chen, Klara Nahrstedt, Jinhui Xu; “Quality of Information Aware Incentive Mechanisms for Mobile Crowd Sensing Systems,” in MobiHoc ’15 Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing, June 2015, Pages 167–176. doi:10.1145/2746285.2746310
Abstract: Recent years have witnessed the emergence of mobile crowd sensing (MCS) systems, which leverage the public crowd, equipped with various mobile devices, for large scale sensing tasks. In this paper, we study a critical problem in MCS systems, namely, incentivizing user participation. Different from existing work, we incorporate a crucial metric, called users’ quality of information (QoI), into our incentive mechanisms for MCS systems. Due to various factors (e.g., sensor quality and noise), the quality of the sensory data contributed by individual users varies significantly. Obtaining high quality data with little expense is always the ideal of MCS platforms. Technically, we design incentive mechanisms based on reverse combinatorial auctions. We investigate both the single-minded and multi-minded combinatorial auction models. For the former, we design a truthful, individually rational and computationally efficient mechanism that approximately maximizes the social welfare with a guaranteed approximation ratio. For the latter, we design an iterative descending mechanism that achieves close-to-optimal social welfare while satisfying individual rationality and computational efficiency. Through extensive simulations, we validate our theoretical analysis about the close-to-optimal social welfare and fast running time of our mechanisms.
Keywords: crowd sensing, incentive mechanism, quality of information (ID#: 15-6870)
URL:  http://doi.acm.org/10.1145/2746285.2746310
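
For the single-minded setting, a standard greedy cost-effectiveness rule conveys the flavor of winner selection. This is a generic sketch under assumed data; the paper's mechanism additionally guarantees truthful payments and a proven approximation ratio:

```python
# Greedy reverse auction: pick users with the best marginal QoI-weighted
# coverage per unit cost until all sensing tasks are covered.
users = [  # (user, set of covered sensing tasks, QoI weight, claimed cost)
    ("u1", {"t1", "t2"}, 0.9, 4.0),
    ("u2", {"t2", "t3"}, 0.6, 2.0),
    ("u3", {"t1"},       0.8, 1.5),
    ("u4", {"t3", "t4"}, 0.7, 3.0),
]
tasks = {"t1", "t2", "t3", "t4"}

covered, winners = set(), []
while covered != tasks:
    def gain(u):  # marginal QoI-weighted coverage per unit cost
        _, cov, qoi, cost = u
        return qoi * len(cov - covered) / cost
    best = max((u for u in users if u[0] not in winners), key=gain)
    if gain(best) == 0:
        break                     # remaining users add nothing
    winners.append(best[0])
    covered |= best[1]
print(winners, covered)           # ['u2', 'u3', 'u4'] cover all four tasks
```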

 

Divya Saxena, Vaskar Raychoudhury, Nalluri SriMahathi; “SmartHealth-NDNoT: Named Data Network of Things for Healthcare Services,” in MobileHealth ’15 Proceedings of the 2015 Workshop on Pervasive Wireless Healthcare, June 2015, Pages 45–50. doi:10.1145/2757290.2757300
Abstract: In recent years, the healthcare sector has emerged as a major application area of the Internet of Things (IoT). IoT aims to automate healthcare services through remote monitoring of patients using several vital sign sensors. Remotely collected patient records are then conveyed to hospital servers through the user’s smartphone. Healthcare IoT can thus reduce a lot of overhead while allowing people to access healthcare services at any time and from anywhere. However, healthcare IoT exchanges data over the IP-centric Internet, which has vulnerabilities related to security, privacy, and mobility; those features are added to the Internet as external add-ons. To solve this problem, in this paper we propose to use Named Data Networking (NDN), a future Internet paradigm based on Content-Centric Networking (CCN). NDN has built-in support for user mobility, which is well suited to mobile patients and caregivers. NDN also secures the data itself rather than the channel, as the Internet has traditionally done. In this paper, we have developed NDNoT, an IoT solution for smart mobile healthcare using NDN. Our proof-of-concept prototype shows the usability of our proposed architecture.
Keywords: healthcare, internet of things (iot), named data networking (ndn), ndnot, open mhealth architecture (ID#: 15-6871)
URL:  http://doi.acm.org/10.1145/2757290.2757300


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


International Conferences: SACMAT 2015, Vienna

 

 

International Conferences:

Access Control Models and Technologies 2015

Vienna


The 20th ACM Symposium on Access Control Models and Technologies (SACMAT) was held June 1–3, 2015 in Vienna, Austria. The aims of the symposium were to share novel access control solutions that fulfil the needs of heterogeneous applications and environments, and to identify new directions for future research and development. The editors deem works cited here useful to the Science of Security community. 



Lionel Montrieux, Zhenjiang Hu; “Towards Attribute-Based Authorisation for Bidirectional Programming,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 185–196. doi:10.1145/2752952.2752963
Abstract: Bidirectional programming allows developers to write programs that will produce transformations that extract data from a source document into a view. The same transformations can then be used to update the source in order to propagate the changes made to the view, provided that the transformations satisfy two essential properties.  Bidirectional transformations can provide a form of authorisation mechanism. From a source containing sensitive data, a view can be extracted that only contains the information to be shared with a subject. The subject can modify the view, and the source can be updated accordingly, without risk of release of the sensitive information to the subject. However, the authorisation model afforded by bidirectional transformations is limited. Implementing an attribute-based access control (ABAC) mechanism directly in bidirectional transformations would violate the essential properties of well-behaved transformations; it would contradict the principle of separation of concerns; and it would require users to write and maintain a different transformation for every subject they would like to share a view with.  In this paper, we explore a solution to enforce ABAC on bidirectional transformations, using a policy language from which filters are generated to enforce the policy rules.
Keywords: access control, authorization, bidirectional transformation (ID#: 15-6910)
URL: http://doi.acm.org/10.1145/2752952.2752963
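
The interplay of a policy-derived filter with a get/put lens can be sketched as follows. The semantics are deliberately simplified and the policy is made up; real bidirectional languages enforce the well-behavedness laws the abstract mentions:

```python
# A policy-derived filter decides which record fields enter the view;
# `get` extracts the view, `put` writes permitted edits back to the source
# without touching hidden fields.
def allowed_fields(subject, policy):
    return {f for f, rule in policy.items() if rule(subject)}

policy = {
    "name":   lambda s: True,                      # everyone sees names
    "salary": lambda s: s.get("role") == "hr",     # attribute-based rule
}

def get(source, fields):
    return {f: source[f] for f in fields}

def put(source, view, fields):
    updated = dict(source)
    updated.update({f: v for f, v in view.items() if f in fields})
    return updated

source = {"name": "Ada", "salary": 90000}
fields = allowed_fields({"role": "intern"}, policy)
view = get(source, fields)          # {'name': 'Ada'} -- salary filtered out
view["name"] = "Ada L."
print(put(source, view, fields))    # {'name': 'Ada L.', 'salary': 90000}
```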

 

Jun Zhu, Bill Chu, Heather Lipford, Tyler Thomas; “Mitigating Access Control Vulnerabilities through Interactive Static Analysis,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 199–209. doi:10.1145/2752952.2752976
Abstract: Access control vulnerabilities due to programming errors have consistently ranked amongst the top software vulnerabilities. Previous research efforts have concentrated on using automatic program analysis techniques to detect access control vulnerabilities in applications. We report a comparative study of six open source PHP applications and find that implicit assumptions of previous research techniques can significantly limit their effectiveness. We propose a more effective hybrid approach to mitigate access control vulnerabilities. Developers are reminded in situ of potential access control vulnerabilities, where self-review of code can help them discover mistakes. Additionally, developers are prompted for application-specific access control knowledge, providing samples of code that can be thought of as static analysis by example. These examples are turned into code patterns that are used in performing static analysis to detect additional access control vulnerabilities and alert the developer to take corrective action. Our evaluation of six open source applications detected 20 zero-day access control vulnerabilities in addition to finding all access control vulnerabilities detected in previous works.
Keywords: access control vulnerability, secure programming, static analysis (ID#: 15-6911)
URL: http://doi.acm.org/10.1145/2752952.2752976
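
The "static analysis by example" idea can be miniaturized: flag functions that reach a sensitive operation without an earlier access-control call. The pass below uses Python's ast module as a stand-in analyzer; the paper's PHP tooling and code patterns differ:

```python
# Toy missing-access-check detector over a small source snippet.
import ast

SENSITIVE, CHECKS = {"delete_user"}, {"require_admin"}

src = '''
def handler_ok(uid):
    require_admin()
    delete_user(uid)

def handler_bad(uid):
    delete_user(uid)
'''

tree = ast.parse(src)
for fn in [n for n in tree.body if isinstance(n, ast.FunctionDef)]:
    # ast.walk is breadth-first; for flat function bodies like these, the
    # collected calls still appear in statement order.
    calls = [c.func.id for c in ast.walk(fn)
             if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)]
    for i, name in enumerate(calls):
        if name in SENSITIVE and not CHECKS & set(calls[:i]):
            print(f"warning: {fn.name} calls {name} without an access check")
```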

 

Claudio Soriente, Ghassan O. Karame, Hubert Ritzdorf, Srdjan Marinovic, Srdjan Capkun; “Commune: Shared Ownership in an Agnostic Cloud,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 39–50. doi:10.1145/2752952.2752972
Abstract: Cloud storage platforms promise a convenient way for users to share files and engage in collaborations, yet they require all files to have a single owner who unilaterally makes access control decisions. Existing clouds are, thus, agnostic to shared ownership. This can be a significant limitation in many collaborations because, for example, one owner can delete files and revoke access without consulting the other collaborators.  In this paper, we first formally define a notion of shared ownership within a file access control model. We then propose a solution, called Commune, to the problem of distributed enforcement of shared ownership in agnostic clouds, so that access grants require the support of an agreed threshold of owners. Commune can be used in existing clouds without modifications to the platforms. We analyze the security of our solution and evaluate its performance through an implementation integrated with Amazon S3.
Keywords: cloud security, distributed enforcement, shared ownership (ID#: 15-6912)
URL: http://doi.acm.org/10.1145/2752952.2752972
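
A classic way to realize the t-of-n requirement behind shared ownership is Shamir secret sharing, where a file key is split so that no single owner can reconstruct it but any agreed threshold of owners can. The sketch below illustrates only the threshold idea and is not Commune's actual construction.

```python
import random

# Plain Shamir secret sharing over a prime field: five owners, any three of
# whom can jointly recover a file key; fewer than three learn nothing.
P = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte key

def split(secret: int, n: int, t: int):
    """Create n shares, any t of which recover the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

key = random.randrange(P)
shares = split(key, n=5, t=3)           # five owners, threshold of three
assert recover(shares[:3]) == key       # any three owners suffice
assert recover(shares[1:4]) == key
```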

 

Jingwei Li, Anna Squicciarini, Dan Lin, Shuang Liang, Chunfu Jia; “SecLoc: Securing Location-Sensitive Storage in the Cloud,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 51–61. doi:10.1145/2752952.2752965
Abstract: Cloud computing offers a wide array of storage services. While enjoying the benefits of flexibility, scalability and reliability brought by cloud storage, cloud users also face the risk of losing control of their own data, in part because they do not know where their data is actually stored. This raises a number of security and privacy concerns regarding one’s sensitive data, such as health records. For example, according to Canadian laws, data related to personally identifiable information must be stored within Canada. Nevertheless, despite the urgent demand, privacy requirements regarding cloud storage locations have not been well investigated in the current cloud computing market, fostering security and privacy concerns among potential adopters. Aiming to address this emerging critical issue, we propose a novel secure location-sensitive storage framework, called SecLoc, which offers protection for cloud users’ data following storage location restrictions, with minimum management overhead for existing cloud storage services. We conduct security analysis, complexity analysis and experimental evaluation of the proposed SecLoc system. Our results demonstrate both the effectiveness and the efficiency of our mechanism.
Keywords: access control, attribute-based encryption, cloud storage, location sensitive (ID#: 15-6913)
URL: http://doi.acm.org/10.1145/2752952.2752965

 

Zeqing Guo, Weili Han, Liangxing Liu, Wenyuan Xu, Ruiqi Bu, Minyue Ni; “SPA: Inviting Your Friends to Help Set Android Apps,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 221–231. doi:10.1145/2752952.2752974
Abstract: More and more powerful personal smart devices confront users, especially the elderly, with a policy administration burden, forcing them to set personal management policies on these devices. Considering a real case of this issue in Android security, it is hard for users, and even for some programmers, to reliably identify malicious permission requests when they install a third-party application. Motivated by the popularity of mutual assistance among friends (including family members) in the real world, we propose a novel framework for policy administration, referred to as Socialized Policy Administration (SPA for short), to help users manage the policies of widely deployed personal devices. SPA leverages the basic idea that a user may invite his or her friends to help set up applications. In particular, as the number of invited friends increases, the setting result becomes more resilient to a few malicious or unprofessional friends. We define the security properties of SPA and propose an enforcement framework in which users’ friends can help set applications without leaking the friends’ preferences, supported by a privacy-preserving mechanism. In our prototype, we leverage only partially homomorphic encryption cryptosystems to implement our framework, because fully homomorphic encryption is not yet practical enough to deploy in a production service. Based on our prototype and performance evaluation, SPA is promising in its support for the major types of policies in current popular applications with acceptable performance.
Keywords: android, policy administration, policy based management, social computing, socialized policy administration
(ID#: 15-6914)
URL: http://doi.acm.org/10.1145/2752952.2752974
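
The privacy-preserving aggregation that SPA relies on can be illustrated with a toy additively homomorphic (Paillier-style) tally: each friend encrypts a 0/1 approval, ciphertexts are multiplied, and only the decrypted sum is revealed. The parameters below are far too small for real use, the protocol is heavily simplified relative to SPA's design, and Python 3.9+ is assumed (math.lcm, modular-inverse pow).

```python
import math, random

# Toy Paillier-style additively homomorphic tally: each friend encrypts a 0/1
# approval; multiplying ciphertexts adds the plaintexts, so only the decrypted
# SUM of votes is ever revealed, never an individual friend's preference.
# Demo primes only: real deployments need 2048-bit moduli and careful checks.

p, q = 2_147_483_647, 2_147_483_629
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                 # valid because we use g = n + 1

def enc(m: int) -> int:
    r = random.randrange(2, n)       # randomizer; gcd(r, n) = 1 w.h.p.
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def dec(c: int) -> int:
    return (pow(c, lam, n2) - 1) // n * mu % n

votes = [1, 0, 1, 1, 0]              # five friends' private approvals
tally = 1
for v in votes:
    tally = tally * enc(v) % n2      # homomorphic addition of plaintexts
assert dec(tally) == sum(votes)
print("approvals:", dec(tally), "of", len(votes))
```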

 

Carlos E. Rubio-Medrano, Ziming Zhao, Adam Doupe, Gail-Joon Ahn; “Federated Access Management for Collaborative Network Environments: Framework and Case Study,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 125–134. doi:10.1145/2752952.2752977
Abstract: With the advent of various collaborative sharing mechanisms such as Grids, P2P and Clouds, organizations in both the private and public sectors have recognized the benefits of being involved in inter-organizational, multi-disciplinary, collaborative projects that may require diverse resources to be shared among participants. In particular, an environment that makes use of a group of high-performance network facilities to support large-scale collaborative projects urgently needs robust and flexible access control for allowing collaborators to leverage and consume resources, e.g., computing power and bandwidth. In this paper, we propose a federated access management scheme that leverages the notion of attributes. Our approach allows resource-sharing organizations to provide distributed provisioning (publication, location, communication, and evaluation) of both attributes and policies for federated access management purposes. We also provide a proof-of-concept implementation that leverages distributed hash tables (DHT) to traverse chains of attributes and effectively handle the federated access management requirements devised for inter-organizational resource sharing and collaboration.
Keywords: (not provided) (ID#: 15-6915)
URL: http://doi.acm.org/10.1145/2752952.2752977

 

Ha Thanh Le, Cu Duy Nguyen, Lionel Briand, Benjamin Hourte; “Automated Inference of Access Control Policies for Web Applications,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 27–37. doi:10.1145/2752952.2752969
Abstract: In this paper, we present a novel, semi-automated approach to infer access control policies for web-based applications. Our goal is to support the validation of implemented access control policies, even when they have not been clearly specified or documented. We use role-based access control as a reference model. Built on top of a suite of security tools, our approach automatically exercises a system under test and builds access spaces for a set of known users and roles. Then, we apply a machine learning technique to infer access rules. Inconsistent rules are then analysed and fed back into the process for further testing and improvement. Finally, the inferred rules can be validated against pre-specified rules, if they exist. Otherwise, the inferred rules are presented to human experts for validation and for detecting access control issues. We have evaluated our approach on two applications: one open source, the other a proprietary system built by our industry partner. The obtained results are very promising in terms of the quality of the inferred rules and the access control vulnerabilities they helped detect.
Keywords: access control policies, inference, machine learning (ID#: 15-6916)
URL: http://doi.acm.org/10.1145/2752952.2752969
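
The rule-inference step can be sketched with an off-the-shelf decision tree: observed access outcomes over (role, resource) features are fit, and the learned tree is printed as human-readable candidate rules for expert validation. The feature encoding and data below are invented; the paper's pipeline additionally exercises the running system to build these access spaces.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Infer candidate access rules from observed outcomes: a decision tree is
# trained on (role, resource) features and rendered as readable rules.
# Features: [is_admin, is_doctor, resource_is_record, resource_is_config]
X = [
    [1, 0, 1, 0], [1, 0, 0, 1],   # admin: records allowed, config allowed
    [0, 1, 1, 0], [0, 1, 0, 1],   # doctor: records allowed, config denied
    [0, 0, 1, 0], [0, 0, 0, 1],   # anonymous: everything denied
]
y = [1, 1, 1, 0, 0, 0]            # 1 = access granted, 0 = denied

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(
    clf, feature_names=["is_admin", "is_doctor", "res_record", "res_config"]))
```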

 

Syed Zain R. Rizvi, Philip W.L. Fong, Jason Crampton, James Sellwood; “Relationship-Based Access Control for an Open-Source Medical Records System,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 113–134. doi:10.1145/2752952.2752962
Abstract: Inspired by the access control models of social network systems, Relationship-Based Access Control (ReBAC) was recently proposed as a general-purpose access control paradigm for application domains in which authorization must take into account the relationship between the access requestor and the resource owner. The healthcare domain is envisioned to be an archetypical application domain in which ReBAC is sorely needed: e.g., my patient record should be accessible only by my family doctor, but not by all doctors.  In this work, we demonstrate for the first time that ReBAC can be incorporated into a production-scale medical records system, OpenMRS, with backward compatibility to the legacy RBAC mechanism. Specifically, we extend the access control mechanism of OpenMRS to enforce ReBAC policies. Our extensions incorporate and extend advanced ReBAC features recently proposed by Crampton and Sellwood. In addition, we designed and implemented the first administrative model for ReBAC. In this paper, we describe our ReBAC implementation, discuss the system engineering lessons learnt as a result, and evaluate the experimental work we have undertaken. In particular, we compare the performance of the various authorization schemes we implemented, thereby demonstrating the feasibility of ReBAC.
Keywords: administrative model, authorization graph, authorization principal, medical records system, relationship-based access control (ID#: 15-6917)
URL: http://doi.acm.org/10.1145/2752952.2752962
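
A minimal ReBAC-style check, under the assumption that policies are relationship paths from the resource owner to the requestor, might look like the following; the graph and policy encoding are illustrative and do not reflect OpenMRS's actual data model.

```python
# ReBAC in miniature: authorization holds when the requestor is reachable
# from the resource owner via one of the policy's labelled relationship
# paths (e.g., a patient's record is readable by the family doctor only).

EDGES = {
    ("alice", "family_doctor"): {"dr_bob"},
    ("alice", "friend"): {"carol"},
    ("dr_bob", "colleague"): {"dr_dan"},
}

def reachable(start, path):
    """Users reachable from `start` by following the labelled path."""
    frontier = {start}
    for label in path:
        frontier = set().union(*(EDGES.get((u, label), set()) for u in frontier))
    return frontier

def authorized(owner, requestor, policy_paths):
    return any(requestor in reachable(owner, path) for path in policy_paths)

# Policy: the record is accessible to the family doctor, not to all doctors.
record_policy = [("family_doctor",)]
print(authorized("alice", "dr_bob", record_policy))   # True
print(authorized("alice", "dr_dan", record_policy))   # False: only a colleague
```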

 

Weili Han, Yin Zhang, Zeqing Guo, Elisa Bertino; “Fine-Grained Business Data Confidentiality Control in Cross-Organizational Tracking,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 135–145. doi:10.1145/2752952.2752973
Abstract: With the support of Internet of Things (IoT) technologies, tracking systems are being widely deployed in many companies and organizations in order to provide more efficient and trustworthy delivery services. Such systems usually support easy-to-use interfaces through which users can visualize the shipping status and progress of merchandise, based on business data collected directly from the merchandise through sensing technologies. However, these business data may include sensitive business information, which should be strongly protected in cross-organizational scenarios. It is therefore critical for suppliers that the disclosure of such data to unauthorized users be prevented in the open environment of these tracking systems. As business data from different suppliers and organizations are usually associated with the merchandise being shipped, it is also important to support fine-grained confidentiality control. In this paper, we articulate the problem of fine-grained business data confidentiality control in IoT-enabled cross-organizational tracking systems. We then propose a fine-grained confidentiality control mechanism, referred to as xCP-ABE, to address the problem in the context of an open environment. The xCP-ABE mechanism is a novel framework that enables suppliers in tracking systems to selectively authorize specific sets of users to access their sensitive business data, while preserving the confidentiality of the transmission path of goods. We develop a prototype of the xCP-ABE mechanism and evaluate its performance. We also carry out a brief security analysis of the proposed mechanism. Our evaluation and analysis show that our framework is an effective and efficient solution for ensuring the confidentiality of business data in cross-organizational tracking systems.
Keywords: access control, ciphertext-policy attribute-based encryption (cp-abe), cross-organizational, electronic pedigree, fine-grained, internet of things (iot), tracking system (ID#: 15-6918)
URL: http://doi.acm.org/10.1145/2752952.2752973

 

Khalid Bijon, Ram Krishnan, Ravi Sandhu; “Mitigating Multi-Tenancy Risks in IaaS Cloud Through Constraints-Driven Virtual Resource Scheduling,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 63–74. doi:10.1145/2752952.2752964
Abstract: A major concern in the adoption of cloud infrastructure-as-a-service (IaaS) arises from multi-tenancy, where multiple tenants share the underlying physical infrastructure operated by a cloud service provider. A tenant could be an enterprise in the context of a public cloud or a department within an enterprise in the context of a private cloud. Enabled by virtualization technology, the service provider is able to minimize cost by providing virtualized hardware resources such as virtual machines, virtual storage and virtual networks, as a service to multiple tenants where, for instance, a tenant’s virtual machine may be hosted in the same physical server as that of many other tenants. It is well-known that separation of execution environment provided by the hypervisors that enable virtualization technology has many limitations. In addition to inadvertent misconfigurations, a number of attacks have been demonstrated that allow unauthorized information flow between virtual machines hosted by a hypervisor on a given physical server. In this paper, we present attribute-based constraints specification and enforcement as a mechanism to mitigate such multi-tenancy risks that arise in cloud IaaS. We represent relevant properties of virtual resources (e.g., virtual machines, virtual networks, etc.) as their attributes. Conflicting attribute values are specified by the tenant or by the cloud IaaS system as appropriate. The goal is to schedule virtual resources on physical resources in a conflict-free manner. The general problem is shown to be NP-complete. We explore practical conflict specifications that can be efficiently enforced. We have implemented a prototype for virtual machine scheduling in OpenStack, a widely-used open-source cloud IaaS software, and evaluated its performance overhead, resource requirements to satisfy conflicts, and resource utilization.
Keywords: cloud iaas, constraint, multi-tenancy, virtual-resource scheduling, vm co-residency management, vm migration
(ID#: 15-6919)
URL: http://doi.acm.org/10.1145/2752952.2752964
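
The conflict-free placement problem can be pictured with a toy first-fit scheduler: VMs carry attribute values, conflicting values are declared, and a VM is placed only on a host with no conflicting co-resident. This greedy sketch ignores capacities and the NP-completeness result; the paper's evaluation uses a real scheduler inside OpenStack.

```python
# Attribute-based conflict specification and a greedy conflict-free placer.
CONFLICTS = {("tenant", "acme"): {("tenant", "globex")}}   # never co-resident

def conflicts(vm_a, vm_b):
    return any(b in CONFLICTS.get(a, set()) or a in CONFLICTS.get(b, set())
               for a in vm_a["attrs"] for b in vm_b["attrs"])

def schedule(vms, n_hosts):
    hosts = [[] for _ in range(n_hosts)]
    for vm in vms:
        for host in hosts:
            if not any(conflicts(vm, other) for other in host):
                host.append(vm)          # first host with no conflict wins
                break
        else:
            raise RuntimeError(f"no conflict-free host for {vm['name']}")
    return hosts

vms = [
    {"name": "vm1", "attrs": {("tenant", "acme")}},
    {"name": "vm2", "attrs": {("tenant", "globex")}},
    {"name": "vm3", "attrs": {("tenant", "acme")}},
]
for i, host in enumerate(schedule(vms, n_hosts=2)):
    print(f"host{i}:", [vm["name"] for vm in host])
# host0: ['vm1', 'vm3']  host1: ['vm2']
```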

 

David Lorenzi, Pratik Chattopadhyay, Emre Uzun, Jaideep Vaidya, Shamik Sural, Vijayalakshmi Atluri; “Generating Secure Images for CAPTCHAs Through Noise Addition,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 169–172. doi:10.1145/2752952.2753065
Abstract: As online automation, image processing and computer vision become increasingly powerful and sophisticated, methods to secure online assets from automated attacks (bots) are required. As traditional text based CAPTCHAs become more vulnerable to attacks, new methods for ensuring a user is human must be devised. To provide a solution to this problem, we aim to reduce some of the security shortcomings in an alternative style of CAPTCHA — more specifically, the image CAPTCHA. Introducing noise helps image CAPTCHAs thwart attacks from Reverse Image Search (RIS) engines and Computer Vision (CV) attacks while still retaining enough usability to allow humans to pass challenges. We present a secure image generation method based on noise addition that can be used for image CAPTCHAs, along with 4 different styles of image CAPTCHAs to demonstrate a fully functional image CAPTCHA challenge system.
Keywords: (not provided) (ID#: 15-6920)
URL: http://doi.acm.org/10.1145/2752952.2753065
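
The noise-addition idea can be sketched in a few lines of NumPy: salt-and-pepper noise at a level intended to defeat reverse image search and computer vision matching while staying human-recognizable. The noise model and level here are illustrative; choosing them is precisely what the paper's usability and attack experiments are for.

```python
import numpy as np

# Harden a CAPTCHA image by flipping a small fraction of pixels to black
# ("pepper") or white ("salt"), perturbing CV/RIS fingerprints of the image.
def add_salt_and_pepper(img: np.ndarray, amount: float = 0.05) -> np.ndarray:
    noisy = img.copy()
    mask = np.random.rand(*img.shape[:2])
    noisy[mask < amount / 2] = 0          # pepper
    noisy[mask > 1 - amount / 2] = 255    # salt
    return noisy

# Stand-in for a real challenge image (H x W x 3, uint8).
image = np.full((120, 320, 3), 200, dtype=np.uint8)
challenge = add_salt_and_pepper(image, amount=0.08)
print("pixels changed:", int((challenge != image).any(axis=-1).sum()))
```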

 

Jason Crampton, Gregory Gutin, Daniel Karapetyan; “Valued Workflow Satisfiability Problem,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 3–13. doi:10.1145/2752952.2752961
Abstract: A workflow is a collection of steps that must be executed in some specific order to achieve an objective. A computerised workflow management system may enforce authorisation policies and constraints, thereby restricting which users can perform particular steps in a workflow. The existence of policies and constraints may mean that a workflow is unsatisfiable, in the sense that it is impossible to find an authorised user for each step in the workflow and satisfy all constraints. In this paper, we consider the problem of finding the “least bad” assignment of users to workflow steps by assigning a weight to each policy and constraint violation. To this end, we introduce a framework for associating costs with the violation of workflow policies and constraints and define the valued workflow satisfiability problem (Valued WSP), whose solution is an assignment of steps to users of minimum cost. We establish the computational complexity of Valued WSP with user-independent constraints and show that it is fixed-parameter tractable. We then describe an algorithm for solving Valued WSP with user-independent constraints and evaluate its performance, comparing it to that of an off-the-shelf mixed integer programming package.
Keywords: parameterized complexity, valued workflow satisfiability problem, workflow satisfiability (ID#: 15-6921)
URL: http://doi.acm.org/10.1145/2752952.2752961
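
On a tiny instance, Valued WSP can be solved by brute force: every assignment of users to steps is scored by summing weights for authorization violations and for a broken separation-of-duty constraint, and the cheapest plan wins. The weights and workflow below are invented; realistic instances need the paper's fixed-parameter algorithm or a MIP solver.

```python
from itertools import product

STEPS = ["draft", "approve", "pay"]
USERS = ["ann", "bob"]
AUTHORIZED = {"ann": {"draft", "approve", "pay"}, "bob": {"draft", "pay"}}
W_AUTH = 10                      # cost of an unauthorized user on a step
W_SOD = 4                        # cost of one user doing both approve and pay

def cost(plan):
    c = sum(W_AUTH for step, user in zip(STEPS, plan)
            if step not in AUTHORIZED[user])
    if plan[STEPS.index("approve")] == plan[STEPS.index("pay")]:
        c += W_SOD               # separation-of-duty violated
    return c

# Exhaustively score every user-to-step assignment; keep the cheapest.
best = min(product(USERS, repeat=len(STEPS)), key=cost)
print("cheapest plan:", dict(zip(STEPS, best)), "cost:", cost(best))
```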

 

Federica Paci, Nicola Zannone; “Preventing Information Inference in Access Control,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 87–97. doi:10.1145/2752952.2752971
Abstract: Technological innovations like social networks, personal devices and cloud computing, allow users to share and store online a huge amount of personal data. Sharing personal data online raises significant privacy concerns for users, who feel that they do not have full control over their data. A solution often proposed to alleviate users’ privacy concerns is to let them specify access control policies that reflect their privacy constraints. However, existing approaches to access control often produce policies which either are too restrictive or allow the leakage of sensitive information. In this paper, we present a novel access control model that reduces the risk of information leakage. The model relies on a data model which encodes the domain knowledge along with the semantic relations between data. We illustrate how the access control model and the reasoning over the data model can be automatically translated in XACML. We evaluate and compare our model with existing access control models with respect to its effectiveness in preventing leakage of sensitive information and efficiency in authoring policies. The evaluation shows that the proposed model allows the definition of effective access control policies that mitigate the risks of inference of sensitive data while reducing users’ effort in policy authoring compared to existing models.
Keywords: comparison study, inference control, information leakage, semantic approach, xacml (ID#: 15-6922)
URL: http://doi.acm.org/10.1145/2752952.2752971

 

Jafar Haadi Jafarian, Hassan Takabi, Hakim Touati, Ehsan Hesamifard, Mohamed Shehab; “Towards a General Framework for Optimal Role Mining: A Constraint Satisfaction Approach,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 211–220. doi:10.1145/2752952.2752975
Abstract: Role Based Access Control (RBAC) is the most widely used advanced access control model deployed in a variety of organizations. To deploy an RBAC system, one needs to first identify a complete set of roles, including permission role assignments and role user assignments. This process, known as role engineering, has been identified as one of the costliest tasks in migrating to RBAC. Since many organizations already have some form of user permission assignments defined, it makes sense to identify roles from this existing information. This process, known as role mining, has gained significant interest in recent years and numerous role mining techniques have been developed that take into account the characteristics of the core RBAC model, as well as its various extended features and each is based on a specific optimization metric. In this paper, we propose a generic approach which transforms the role mining problem into a constraint satisfaction problem. The transformation allows us to discover the optimal RBAC state based on customized optimization metrics. We also extend the RBAC model to include more context-aware and application specific constraints. These extensions broaden the applicability of the model beyond the classic role mining to include features such as permission usage, hierarchical role mining, hybrid role engineering approaches, and temporal RBAC models. We also perform experiments to show applicability and effectiveness of the proposed approach.
Keywords: access control, constraint satisfaction problem, rbac, role mining, smt solver (ID#: 15-6923)
URL: http://doi.acm.org/10.1145/2752952.2752975
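
The constraint-satisfaction view of role mining can be illustrated on a toy instance: given a user-permission matrix, find the fewest candidate roles whose union exactly reproduces each user's permissions. The exhaustive search below only works at this scale; the paper instead encodes such constraints for an SMT solver and supports custom optimization metrics.

```python
from itertools import combinations

UPA = {                                  # user -> assigned permissions
    "u1": {"read", "write"},
    "u2": {"read"},
    "u3": {"read", "write", "audit"},
}
CANDIDATES = [{"read"}, {"read", "write"}, {"audit"}, {"write", "audit"}]

def covers(roles):
    """Can every user's permission set be written as a union of roles?"""
    def ok(perms):
        usable = [r for r in roles if r <= perms]
        return set().union(*usable) == perms if usable else perms == set()
    return all(ok(perms) for perms in UPA.values())

def mine():
    """Smallest role set (by count) satisfying the coverage constraint."""
    for k in range(1, len(CANDIDATES) + 1):
        for roles in combinations(CANDIDATES, k):
            if covers(roles):
                return roles
    return None

print("optimal roles:", mine())
# ({'read'}, {'read', 'write'}, {'audit'}) for the matrix above
```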

 

Masoud Narouei, Hassan Takabi; “Towards an Automatic Top-down Role Engineering Approach Using Natural Language Processing Techniques,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 157–160. doi:10.1145/2752952.2752958
Abstract: Role Based Access Control (RBAC) is the most widely used model for access control due to the ease of administration as well as the economic benefits it provides. In order to deploy an RBAC system, one must first identify a complete set of roles. This process, known as role engineering, has been identified as one of the costliest tasks in migrating to RBAC. In this paper, we propose a top-down role engineering approach and take the first steps towards using natural language processing techniques to extract policies from unrestricted natural language documents. Most organizations have high-level requirement specifications that include a set of access control policies describing allowable operations for the system. However, it is very time-consuming, labor-intensive, and error-prone to manually sift through these natural language documents to identify and extract access control policies. Our goal is to automate this process to reduce manual effort and human error. We apply natural language processing techniques, more specifically semantic role labeling, to automatically extract access control policies from unrestricted natural language documents, define roles, and build an RBAC model. Our preliminary results are promising: by applying semantic role labeling to automatically identify predicate-argument structure, together with a set of predefined rules on the extracted arguments, we were able to correctly identify access control policies with a precision of 75%, recall of 88%, and F1 score of 80%.
Keywords: natural language processing, privacy policy, role based access control, role engineering, semantic role labeling
(ID#: 15-6924)
URL: http://doi.acm.org/10.1145/2752952.2752958
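
A crude stand-in for the semantic-role-labeling step is a pattern that pulls (subject, action, object) triples out of simple policy sentences, as below; real SRL handles vastly more linguistic variation, and the sentences here are invented.

```python
import re

# Extract candidate (subject, action, object) policy tuples from simple
# natural-language policy sentences; each tuple is a seed for RBAC roles.
PATTERN = re.compile(
    r"(?P<subject>[A-Za-z ]+?)\s+(?:can|may|(?:is|are) allowed to)\s+"
    r"(?P<action>\w+)\s+(?P<object>[a-z ]+)", re.IGNORECASE)

sentences = [
    "A nurse may view patient records",
    "The administrator can delete user accounts",
    "Auditors are allowed to read system logs",
]
for s in sentences:
    m = PATTERN.search(s)
    if m:
        print((m["subject"].strip().lower(), m["action"].lower(),
               m["object"].strip().lower()))
# ('a nurse', 'view', 'patient records'), etc.
```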

 

Rainer Fischer; “A Prototype to Reduce the Amount of Accessible Information,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 147–149. doi:10.1145/2752952.2752953
Abstract: Authorized insiders downloading mass data via their user interface are still a problem. In this paper a prototype to prevent mass data extraction is proposed. Access control models efficiently protect security objects but fail to define subsets of data narrow enough to be harmless if downloaded. Instead of controlling access to security objects, the prototype limits the amount of accessible information, using a heuristic approach to measure the amount of information. The paper describes the implementation of the prototype, an extension of an SAP system, as an example for a large enterprise information system.
Keywords: access control, data leakage protection, sap security, security policy (ID#: 15-6925)
URL: http://doi.acm.org/10.1145/2752952.2752953
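
A toy rendering of the prototype's idea: rather than checking object-level permissions, estimate the amount of information a result set carries and refuse to release it above a threshold. The heuristic (row count times per-column sensitivity weights) and the threshold are invented.

```python
# Limit the amount of accessible information instead of object access:
# a query result is released only if its estimated information amount
# (rows x summed column sensitivity) stays under a budget.
WEIGHTS = {"name": 1.0, "salary": 3.0, "diagnosis": 5.0}
LIMIT = 50.0

def information_amount(rows, columns):
    return len(rows) * sum(WEIGHTS.get(c, 0.5) for c in columns)

def release(rows, columns):
    amount = information_amount(rows, columns)
    if amount > LIMIT:
        raise PermissionError(
            f"download blocked: information amount {amount:.0f} > {LIMIT:.0f}")
    return rows

rows = [(f"emp{i}",) for i in range(30)]
release(rows[:10], ["name"])                # 10 * 1.0 = 10: allowed
try:
    release(rows, ["name", "salary"])       # 30 * 4.0 = 120: blocked
except PermissionError as e:
    print(e)
```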

 

Alessandro Armando, Silvio Ranise, Riccardo Traverso, Konrad Wrona; “A SMT-based Tool for the Analysis and Enforcement of NATO Content-based Protection and Release Policies,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 151–155. doi:10.1145/2752952.2752954
Abstract: NATO is developing a new IT infrastructure for automated information sharing between different information security domains and supporting dynamic and flexible enforcement of the need-to-know principle. In this context, the Content-based Protection and Release (CPR) model has been introduced to support the specification and enforcement of NATO access control policies. While the ability to define fine-grained security policies for a large variety of users, resources, and devices is desirable, their definition, maintenance, and enforcement can be difficult, time-consuming, and error-prone. In this paper, we give an overview of a tool capable of assisting NATO security personnel in these tasks by automatically solving several policy analysis problems of practical interest. The tool leverages state-of-the-art SMT solvers.
Keywords: attribute-based access control, nato information sharing infrastructure, xacml (ID#: 15-6926)
URL: http://doi.acm.org/10.1145/2752952.2752954

 

Nima Mousavi, Mahesh Tripunitara; “Hard Instances for Verification Problems in Access Control,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 161–164. doi:10.1145/2752952.2752959
Abstract: We address the generation and analysis of hard instances for verification problems in access control that are NP-hard. Given the customary assumption that P ≠ NP, we know that such classes exist. We focus on a particular problem, the user-authorization query problem (UAQ) in Role-Based Access Control (RBAC). We show how to systematically generate hard instances for it. We then analyze what we call the structure of those hard instances. Our work brings the important aspect of systematic investigation of hard input classes to access control research.
Keywords: hard instances, intractability, role-based access control, user authorization query (ID#: 15-6927)
URL: http://doi.acm.org/10.1145/2752952.2752959

 

Jason Crampton, Charles Morisset, Nicola Zannone; “On Missing Attributes in Access Control: Non-deterministic and Probabilistic Attribute Retrieval,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 99–109. doi:10.1145/2752952.2752970
Abstract: Attribute Based Access Control (ABAC) is becoming the reference model for the specification and evaluation of access control policies. In ABAC, policies and access requests are defined in terms of attribute name/value pairs. The applicability of an ABAC policy to a request is determined by matching the attributes in the request with the attributes in the policy. Some languages supporting ABAC, such as PTaCL or XACML 3.0, take into account the possibility that some attribute values might not be correctly retrieved when the request is evaluated, and use complex decisions, usually describing all possible evaluation outcomes, to account for missing attributes.  In this paper, we argue that the problem of missing attributes in ABAC can be seen as a non-deterministic attribute retrieval process, and we show that the current evaluation mechanism in PTaCL or XACML can return a complex decision that does not necessarily match the actual possible outcomes. This is problematic for the enforcement mechanism, which needs to resolve the complex decision into a conclusive one. We propose a new evaluation mechanism, explicitly based on non-deterministic attribute retrieval for a given request. We extend this mechanism to probabilistic attribute retrieval and implement a probabilistic policy evaluation mechanism for PTaCL in PRISM, a probabilistic model-checker.
Keywords: missing attribute, policy evaluation, probabilistic model-checking, ptacl (ID#: 15-6928)
URL: http://doi.acm.org/10.1145/2752952.2752970
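
Non-deterministic attribute retrieval can be made concrete by evaluating a request once for every combination of unreliable attributes being present or missing and collecting the decisions actually reachable; the three-valued toy policy below is invented.

```python
from itertools import product

def policy(attrs):
    """Permit doctors, deny everyone else, undecided if role is unknown."""
    role = attrs.get("role")
    if role is None:
        return "indeterminate"
    return "permit" if role == "doctor" else "deny"

def possible_decisions(request, unreliable):
    """Decisions reachable over all subsets of unreliable attributes missing."""
    outcomes = set()
    for present in product([True, False], repeat=len(unreliable)):
        attrs = dict(request)
        for attr, keep in zip(unreliable, present):
            if not keep:
                attrs.pop(attr, None)   # simulate a failed retrieval
        outcomes.add(policy(attrs))
    return outcomes

request = {"role": "doctor", "ward": "oncology"}
print(possible_decisions(request, unreliable=["role"]))
# {'permit', 'indeterminate'}: the enforcement point must resolve this set
```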

 

Marcos Cramer, Jun Pang, Yang Zhang; “A Logical Approach to Restricting Access in Online Social Networks,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 75–86. doi:10.1145/2752952.2752967
Abstract: In popular online social networks nowadays, users can blacklist some of their friends in order to prevent them from accessing resources that other, non-blacklisted friends may access. We identify three independent binary decisions for utilizing users’ blacklists in access control policies, resulting in eight access restrictions. We formally define these restrictions in a hybrid logic for relationship-based access control, and provide syntactical transformations to rewrite a hybrid logic access control formula when fixing an access restriction. This enables a flexible and user-friendly approach to restricting access in social networks. We develop efficient algorithms for enforcing a subset of access control policies with restrictions. The effectiveness of the access restrictions and the efficiency of our algorithms are evaluated on a Facebook dataset.
Keywords: access control, blacklist, hybrid logic, online social networks (ID#: 15-6929)
URL: http://doi.acm.org/10.1145/2752952.2752967

 

Feng Wang, Mathias Kohler, Andreas Schaad; “Initial Encryption of Large Searchable Data Sets Using Hadoop,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 165–168. doi:10.1145/2752952.2752960
Abstract: With the introduction and widespread use of externally hosted infrastructures, secure storage of sensitive data becomes more and more important. Systems exist to store and query encrypted data in a database, but not all applications start with empty tables; many begin with sets of legacy data. Hence, there is a need to transform existing plaintext databases into encrypted form. Existing enterprise databases may contain terabytes of data, and a single machine would require many months for the initial encryption of a large data set. We propose encrypting data in parallel using a Hadoop cluster, a simple five-step process comprising Hadoop setup, target preparation, source data import, data encryption, and finally export to the target. We evaluated our solution on real-world data and report on performance and data consumption. The results show that encrypting data in parallel can be done in a very scalable manner: using a parallelized encryption cluster rather than a single server machine reduces the encryption time from months down to days or even hours.
Keywords: database, hadoop, performance, searchable encryption (ID#: 15-6930)
URL: http://doi.acm.org/10.1145/2752952.2752960
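
The parallel encryption step maps naturally onto Hadoop Streaming: a mapper reads plaintext records from stdin, encrypts the sensitive column, and re-emits them for export. The sketch below uses Fernet from the Python cryptography package purely for illustration; key distribution, searchable encryption, and the surrounding five-step pipeline are simplified away, and the record layout is invented.

```python
#!/usr/bin/env python3
# Hadoop Streaming mapper: turn "id<TAB>plaintext" records into
# "id<TAB>ciphertext" records, in parallel across the cluster.
import sys

from cryptography.fernet import Fernet

# Key pre-generated once with Fernet.generate_key() and shipped to each node.
with open("fernet.key", "rb") as fh:
    cipher = Fernet(fh.read())

for line in sys.stdin:
    record_id, plaintext = line.rstrip("\n").split("\t", 1)
    print(record_id, cipher.encrypt(plaintext.encode()).decode(), sep="\t")

# Launched on the cluster with something like:
#   hadoop jar hadoop-streaming.jar -input /plain -output /enc \
#       -mapper encrypt_mapper.py -file encrypt_mapper.py -file fernet.key
```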

 

Bart Preneel; “Post-Snowden Threat Models,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 1–1. doi:10.1145/2752952.2752978
Abstract: In June 2013 Edward Snowden leaked a large collection of documents that describe the capabilities and technologies of the NSA and its allies. Even to security experts, the scale, nature and impact of some of the techniques revealed were surprising. A major consequence is the increased awareness of the public at large of the existence of highly intrusive mass surveillance techniques. There has also been some impact in the business world, including a growing interest in companies that (claim to) develop end-to-end secure solutions. There is no doubt that large nation states and organized crime have carefully studied the techniques and are exploring which ones they can use for their own benefit. But after two years, there has been little progress in legal or governance measures to address some of the excesses by increasing accountability. Moreover, the security research community seems to have been slow to respond to the new threat landscape. In this lecture we analyze these threats and speculate on how they could be countered.
Keywords: information security, mass surveillance, system security, threat models (ID#: 15-6931)
URL: http://doi.acm.org/10.1145/2752952.2752978

 

Anna Cinzia Squicciarini, Ting Yu; “Privacy and Access Control: How are These Two Concepts Related?,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 197–198. doi:10.1145/2752952.2752980
Abstract: (not provided). Panel description and references available at URL: http://www.sacmat.org/2015/toc.html
Keywords: access control, privacy, security (ID#: 15-6932)
URL: http://doi.acm.org/10.1145/2752952.2752980

 

Jonathan Shahen, Jianwei Niu, Mahesh Tripunitara; “Mohawk+T: Efficient Analysis of Administrative Temporal Role-Based Access Control (ATRBAC) Policies,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 15–26. doi:10.1145/2752952.2752966
Abstract: Safety analysis is recognized as a fundamental problem in access control. It has been studied for various access control schemes in the literature. Recent work has proposed an administrative model for Temporal Role-Based Access Control (TRBAC) policies called Administrative TRBAC (ATRBAC). We address ATRBAC-safety. We first identify that the problem is PSPACE-Complete. This is a much tighter identification of the computational complexity of the problem than prior work, which shows only that the problem is decidable. With this result as the basis, we propose an approach that leverages an existing open-source software tool called Mohawk to address ATRBAC-safety. Our approach is to efficiently reduce ATRBAC-safety to ARBAC-safety, and then use Mohawk. We have conducted a thorough empirical assessment. In the course of our assessment, we came up with a “reduction toolkit,” which allows us to reduce Mohawk+T input instances to instances that existing tools support. Our results suggest that there are some input classes for which Mohawk+T outperforms existing tools, and others for which existing tools outperform Mohawk+T. The source code for Mohawk+T is available for public download.
Keywords: administration, role-based access control, safety analysis, temporal (ID#: 15-6933)
URL: http://doi.acm.org/10.1145/2752952.2752966

 

Marcos Cramer, Diego Agustín Ambrossio, Pieter Van Hertum; “A Logic of Trust for Reasoning about Delegation and Revocation,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 173–184. doi:10.1145/2752952.2752968
Abstract: In ownership-based access control frameworks with the possibility of delegating permissions and administrative rights, chains of delegated accesses will form. There are different ways to treat these delegation chains when revoking rights, which give rise to different revocation schemes. Hagström et al. [8] proposed a framework for classifying revocation schemes, in which the different revocation schemes are defined graph-theoretically; they motivate the revocation schemes in this framework by presenting various scenarios in which the agents have different reasons for revoking. This paper is based on the observation that there are some problems with Hagström et al.’s definitions of the revocation schemes, which have led us to propose a refined framework with new graph-theoretic definitions of the revocation schemes. In order to formally study the merits and demerits of various definitions of revocation schemes, we propose to apply the axiomatic method originating in social choice theory to revocation schemes. For formulating an axiom, i.e. a desirable property of revocation frameworks, we propose a logic, Trust Delegation Logic (TDL), with which one can formalize the different reasons an agent may have for performing a revocation. We show that our refined graph-theoretic definitions of the revocation schemes, unlike Hagström et al.’s original definitions, satisfy the desirable property that can be formulated using TDL.
Keywords: access control, delegation, logic, revocation, trust (ID#: 15-6934)
URL: http://doi.acm.org/10.1145/2752952.2752968
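
The graph-theoretic flavour of revocation schemes can be sketched as follows: delegations form a directed graph rooted at the source of authority, and a "deep" revocation prunes every grant that loses all support from that source, while grants with independent support survive. The scheme semantics below follow the general literature, not the paper's exact refined definitions.

```python
# Delegation edges: (issuer, receiver). 'owner' is the source of authority.
DELEGATIONS = {("owner", "a"), ("a", "b"), ("b", "c"), ("owner", "c")}

def supported(edges, root="owner"):
    """Principals still reachable from the source of authority."""
    reached, frontier = {root}, [root]
    while frontier:
        u = frontier.pop()
        for x, y in edges:
            if x == u and y not in reached:
                reached.add(y)
                frontier.append(y)
    return reached

def deep_revoke(edges, edge):
    """Remove the edge, then prune grants whose issuer lost all support."""
    edges = set(edges) - {edge}
    while True:
        alive = supported(edges)
        dead = {(x, y) for x, y in edges if x not in alive}
        if not dead:
            return edges
        edges -= dead

after = deep_revoke(DELEGATIONS, ("owner", "a"))
print(sorted(after))              # [('owner', 'c')]: c's independent grant survives
print(sorted(supported(after)))   # ['c', 'owner']: a and b lose access
```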

 

Trent Jaeger; “Challenges in Making Access Control Sensitive to the ‘Right’ Contexts,” in SACMAT ’15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies, June 2015, Pages 111–111. doi:10.1145/2752952.2752979
Abstract: Access control is a fundamental security mechanism that both protects processes from attacks and confines compromised processes that may try to propagate an attack. Nonetheless, we still see an ever increasing number of software vulnerabilities. Researchers have long proposed that improvements in access control could prevent many vulnerabilities, with many proposals capturing contextual information to more accurately detect obviously unsafe operations. However, developers are often hesitant to extend their access control mechanisms to use more sensitive access control policies. My experience leads me to propose that it is imperative that an access control system be able to extract context accurately and efficiently and be capable of inferring non-trivial policies. In this talk, I will discuss some recent research that enforces context-sensitive policies by extracting process context, integrating code to extract context from programs, or extracting user context. We find that context-sensitive mechanisms can efficiently prevent some obviously unsafe operations from being authorized, and I will discuss our experiences in inferring access control policies. Based on this research, we are encouraged that future research may enable context-sensitive access control policies to be produced and enforced to prevent vulnerabilities.
Keywords: capabilities, context-sensitive, program analysis (ID#: 15-6935)
URL: http://doi.acm.org/10.1145/2752952.2752979


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

International Conferences: SIGMIS-CPR 2015, Newport Beach, CA

 
SoS Logo

International Conferences:

SIGMIS – Computers and People Research 2015

Newport Beach, CA


The ACM SIGMIS Computers and People Research 2015 conference met at Newport Beach, California on June 4-6, 2015. This year’s conference theme was the Cyber Security Workforce in the Global Context. Topics covered are related to the Hard Problem of human factors in cybersecurity.  



David H. Tobey; “A Vignette-Based Method for Improving Cybersecurity Talent Management through Cyber Defense Competition Design,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 31–39. doi:10.1145/2751957.2751963
Abstract: The preliminary findings are reported from a four-year study of cybersecurity competency assessment and development achieved through the design of cyber defense competitions. The first year of the study focused on identifying the abilities that should indicate aptitude to perform well in the areas of operational security testing and advanced threat response. A recently developed method for Job Performance Modeling (JPM) is applied which uses vignettes — critical incident stories — to guide the elicitation of a holistic description of mission-critical roles grounded in the latest tactics, techniques and protocols defining the current state-of-the-art, or ground truth, in cyber defense. Implications are drawn for design of scoring engines and achievement of game balance in cyber defense competitions as a talent management system.
Keywords: aptitude, competency model, critical incident, cyber defense competition, game balance, job performance model, ksa, talent management, vignette (ID#: 15-6936)
URL: http://doi.acm.org/10.1145/2751957.2751963

 

Leigh Ellen Potter, Gregory Vickers; “What Skills Do You Need to Work in Cyber Security?: A Look at the Australian Market,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 67–72. doi:10.1145/2751957.2751967
Abstract: The demand for cyber security professionals is rising as the incidence of cyber crime and security breaches increases, leading to suggestions of a skills shortage in the technology industry. While supply and demand are factors in the recruitment process for any position, in order to secure the best people in the security field we need to know what skills are required of a security professional in the current cyber security environment. This paper seeks to explore this question by looking at the current state of the Australian industry. Recent job listings in the cyber security area were analysed, and current security professionals in industry were asked for their opinion as to what skills are required in this profession. It was found that each security professional role has its own set of skill requirements; however, there is significant overlap between the roles for many soft skills, including analysis, consulting and process skills, leadership, and relationship management. Both communication and presentation skills were valued. A set of “hard” skills emerged as common across all categories: experience, qualifications and certifications, and technical expertise. These appear to represent the need for a firm background in the security area, as represented by formal study and industry certifications, and supported by solid experience in the industry. Specific technical skills are also required, although the exact nature of these will vary according to the requirements of each role.
Keywords: cyber security, security professional, skills (ID#: 15-6937)
URL:  http://doi.acm.org/10.1145/2751957.2751967

 

Nishtha Kesswani, Sanjay Kumar; “Maintaining Cyber Security: Implications, Cost and Returns,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 161–164. doi:10.1145/2751957.2751976
Abstract: Cyber security is one of the most critical issues faced globally by most countries and organizations. With the ever increasing use of computers and the internet, there has been tremendous growth in cyber-attacks. The attackers target not only high-end companies but also banks and government agencies. As a result, companies and governments across the globe are spending huge amounts of money to create a cyber-secure niche. In every organization, whenever an investment has to be made, everybody is concerned about the return the organization will get from that investment; every investment has to be justified in terms of return. Investments in cyber security are rarely favored by organizations because they do not generate direct returns: the return on investments made in cyber security is measured not in terms of profits and gains, but rather in terms of prevented losses. This paper provides an insight into various established approaches that can be used to measure the return on cyber security investment. Cost-benefit analysis of cyber security investments can help the organization determine whether money is well spent or not.
Keywords: annual loss expectancy approach, cost benefit analysis, gordon and loeb approach, net present value approach
(ID#: 15-6938)
URL:  http://doi.acm.org/10.1145/2751957.2751976
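
A worked example of the annual loss expectancy (ALE) approach mentioned in the abstract, with invented figures: the return on a security investment is the loss it prevents, net of its cost, relative to its cost.

```python
# ALE = single loss expectancy x annual rate of occurrence; the return on
# security investment (ROSI) expresses prevented losses against money spent.
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    return single_loss_expectancy * annual_rate_of_occurrence

ale_before = ale(200_000, 0.30)          # expected breach loss, no control
ale_after = ale(200_000, 0.06)           # the control cuts occurrence 5x
control_cost = 25_000                    # annualized cost of the control

rosi = (ale_before - ale_after - control_cost) / control_cost
print(f"ALE before: {ale_before:,.0f}  after: {ale_after:,.0f}")
print(f"ROSI: {rosi:.0%}")               # 92% with these invented figures
```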

 

Michelle L. Kaarst-Brown, E. Dale Thompson; “Cracks in the Security Foundation: Employee Judgments about Information Sensitivity,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 145–151. doi:10.1145/2751957.2751977
Abstract: Despite the increased focus on IT security, much of our reliance on “information sensitivity classifications” is based on broadly specified technical “access controls” or policies and procedures for the handling of organizational data — many of them developed incrementally over decades. One area ignored in research and practice is how human beings make “sensitivity judgments” or “classify” information they may encounter in everyday activities. This has left what we view as a crack in the IT security foundation. This crack has created a tension between formal IT security classification schema, technical controls, and policy, and the sensitivity judgments that everyday workers must make about the non-coded information they deal with. As noted in government and private reports, a new look at information sensitivity classification is vital to the expanding reach and criticality of information security. Based on a grounded theory study that elicited 188 judgements of sensitive information, we found valuable lessons for IT security in how workers, both in IT and outside of IT, recognize, classify, and react to their human judgments of sensitive information.
Keywords: classification, employee judgments, information sensitivity, it security, security awareness, security judgments
(ID#: 15-6939)
URL:  http://doi.acm.org/10.1145/2751957.2751977

 

Conrad Shayo, Javier Torner, Frank Lin, Jake Zhu, Joon Son; “Is Managing IT Security a Mirage?,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 97–98. doi:10.1145/2751957.2751970
Abstract: The purpose of this panel is to provide a forum to discuss the main IT security issues confronting organizations today. The panelists and attendees will discuss the existing gap between current IT security practices vs. best practices based on survey trends on IT security for the past 5 years, explore popular models used to justify IT security investments, and showcase some of the most popular hacking tools to demonstrate why it is so easy to compromise organizational IT security assets. The panel will conclude by discussing the emerging IT security standards and practices that may help deter, detect, and mitigate the impact of cyber-attacks. As the title suggests, we posit the question: Is Managing IT Security a Mirage?
Keywords: cyber-attacks, cybersecurity, hacking, information system risk, information system security, it vulnerability, ransomware, secure it infrastructure (ID#: 15-6940)
URL:  http://doi.acm.org/10.1145/2751957.2751970

 

Shuyuan Mary Ho, Hengyi Fu, Shashanka S. Timmarajus, Cheryl Booth, Jung Hoon Baeg, Muye Liu; “Insider Threat: Language-Action Cues in Group Dynamics,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 101–104. doi:10.1145/2751957.2751978
Abstract: Language as a symbolic medium plays an important role in virtual communications. Words communicated online as action cues can provide indications of an actor’s behavioral intent. This paper describes an ongoing investigation into the impact of a deceptive insider on group dynamics in virtual team collaboration. An experiment using an online game environment was conducted in 2014. Our findings support the hypothesis that language-action cues of group interactions will change significantly after an insider has been compromised and makes efforts to deceive. Furthermore, the language used in group dynamic interaction will tend to employ more cognition, inclusivity and exclusivity words when interacting with each other and with the focal insider. Future work will employ finely tuned complex Linguistic Inquiry and Word Count dictionaries to identify additional language-action cues for deception.
Keywords: insider threat detection, language-action cues, online deception, trusted human-computer interaction (ID#: 15-6941)
URL:  http://doi.acm.org/10.1145/2751957.2751978
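
The cue-counting idea can be sketched with tiny, invented word lists for cognition, inclusivity, and exclusivity; the study itself uses the full LIWC dictionaries.

```python
from collections import Counter
import re

# Count language-action cue categories in a message, LIWC-style.
CATEGORIES = {
    "cognition": {"think", "know", "because", "consider"},
    "inclusive": {"we", "us", "our", "with"},
    "exclusive": {"but", "without", "except", "exclude"},
}

def cue_counts(message: str) -> Counter:
    tokens = re.findall(r"[a-z']+", message.lower())
    counts = Counter()
    for token in tokens:
        for cat, words in CATEGORIES.items():
            if token in words:
                counts[cat] += 1
    return counts

print(cue_counts("We think the plan works, but without Bob we know it fails."))
# e.g. Counter({'inclusive': 2, 'cognition': 2, 'exclusive': 2})
```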

 

Antoine Lemay, Sylvain P. Leblanc, Tiago de Jesus; “Lessons from the Strategic Corporal: Implications of Cyber Incident Response,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 61–66. doi:10.1145/2751957.2751965
Abstract: With the rise of cyber espionage the role of cyber incident responders is becoming more complex, but the personnel profile of incident handlers has remained constant. In this new environment, the strategic position of companies is being affected by operation personnel, including cyber incident responders, who have little to no awareness of the strategic implications of their technical decisions. In recent decades, the military has gone through a similar situation and has dubbed this new reality the “Strategic Corporal”. This paper analyzes cyber incident response through the theoretical framework of the Strategic Corporal to argue that today’s cyber incident responders fit that profile. The paper looks at three solutions put forward by the military, namely training, communication of the commander’s intent and embracing decentralization, and shows that these are viable solutions to make cyber incident responders ready to meet the current challenge.
Keywords: cyber incident response, cyber responder training, management of cyber responders, strategic impact of cyber decisions (ID#: 15-6942)
URL:  http://doi.acm.org/10.1145/2751957.2751965

 

Rinku Sen, Manojit Chattopadhyay, Nilanjan Sen; “An Efficient Approach to Develop an Intrusion Detection System Based on Multi Layer Backpropagation Neural Network Algorithm: IDS Using BPNN Algorithm,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 105–108. doi:10.1145/2751957.2751979
Abstract: The key success factor of a business is correct and timely information, and the vital resources of the organization should be protected from inside and outside threats. Among the many threats to network security, intrusion has become a crucial cause of loss for many organizations, and many researchers are working to handle the different types of intrusion affecting business. To detect this type of intrusion, our initiative is to use a very popular soft computing tool, namely the backpropagation neural network (BPNN). We have prepared a flexible BPNN architecture to identify intrusions with the help of an anomaly detection methodology. The results we obtained are better than, or on par with, the best published research in this field of study. We used the KDD Cup 99 dataset for our experiments.
Keywords: anomaly detection, artificial neural network, bpnn, intrusion detection system, kdd cup 99 dataset (ID#: 15-6943)
URL:  http://doi.acm.org/10.1145/2751957.2751979
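
A minimal version of such a classifier can be built with scikit-learn's multilayer perceptron, which trains by backpropagation; the four features and six records below are invented stand-ins for preprocessed KDD Cup 99 data.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Anomaly-style intrusion classification with a multilayer backpropagation
# network. Features: duration, bytes sent, bytes received, failed logins.
X = [
    [0.2, 1200, 800, 0], [0.1, 900, 650, 0], [0.3, 1500, 900, 1],   # normal
    [9.5, 40000, 20, 6], [8.7, 52000, 10, 5], [9.9, 61000, 5, 7],   # attack
]
y = [0, 0, 0, 1, 1, 1]   # 0 = normal traffic, 1 = intrusion

scaler = StandardScaler().fit(X)
net = MLPClassifier(hidden_layer_sizes=(8, 4), activation="relu",
                    max_iter=2000, random_state=0)
net.fit(scaler.transform(X), y)

probe = scaler.transform([[9.1, 47000, 12, 6]])
print("intrusion" if net.predict(probe)[0] else "normal")
```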

 

Masoud Hayeri Khyavi, Mina Rahimi; “The Missing Circle of ISMS (LL-ISMS),” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 73–77. doi:10.1145/2751957.2751972
Abstract: Information security management systems (ISMS) are a new area of discussion in various companies and organizations, and many large and small security companies are also considering investing in the topic. However, experience has shown that merely imitating a scientific and technological development and implementing it at the national level has rarely delivered the promised effect, and has instead caused a huge waste of resources. In this paper, we present an idea for the localization of ISMS which, with regard to ISO standards and the importance of this subject, prepares the ground for research and work on ISMS. We introduce a new circle that covers a new level of the ISMS subject.
Keywords: management, security (ID#: 15-6944)
URL:  http://doi.acm.org/10.1145/2751957.2751972

 

Mark G. Graff; “Key Traits of Successful Cyber Security Practitioners,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 21–21. doi:10.1145/2751957.2751983
Abstract: The author’s view, formed over a decades-long career as a cyber security practitioner, is that successful professionals in the field have historically tended to share certain personality traits. Beyond the knack for problem solving and tolerance for late nights and vending machine food common in Information Technology (IT) circles, elements of integrity and character are, for example, often key to achievement in this career niche. The author describes several such traits, illustrating with informal case histories their operation and impact — both positive and negative. Implications for education, training and staffing in this field are also discussed.
Keywords: cyber security, education, management, personality, profession, staffing, training (ID#: 15-6945)
URL:  http://doi.acm.org/10.1145/2751957.2751983

 

Santos M. Galvez, Joshua D. Shackman, Indira R. Guzman, Shuyuan M. Ho; “Factors Affecting Individual Information Security Practices,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 135–144. doi:10.1145/2751957.2751966
Abstract: Data and information within organizations have become important assets that can create a significant competitive advantage and therefore need to be given careful attention. Research from industry has reported that the majority of security-related problems are indirectly caused by employees who disobey the information security policies of their organizations. This study proposes a model to evaluate the factors that influence an individual’s information security practices (IISP) at work. Drawing on social cognitive and control theories, the proposed model includes cognitive, environmental, and control factors as antecedents of IISP. The findings of this study could be used to develop effective security policies and training. They could also be used to develop effective security audits and further recommendations for organizations that are looking to make significant improvements in their information security profiles.
Keywords: control theory, information security behavior, information security practices, iso27002, mandatoriness, security standards, self-efficacy, social cognitive theory (ID#: 15-6946)
URL:  http://doi.acm.org/10.1145/2751957.2751966

 

Mohammad Mohammad; “IT Surveillance and Social Implications in the Workplace,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 79–85. doi:10.1145/2751957.2751959
Abstract: The workplace is where most adults spend roughly half of their waking hours. It is not surprising, therefore, that employment practices affect a broad range of privacy rights. With the exception of polygraph testing, there are few areas of workplace activity that are covered by the American constitution or privacy laws. Accordingly, employers have a great deal of leeway in collecting data on their employees, regulating access to personnel files, and disclosing file contents to outsiders. In addition to the issue of personnel files, workplace privacy involves such practices as polygraph testing, drug testing, computer and telephone monitoring, and interference with personal lifestyle. All of these practices stem from a combination of modern employer concerns (employee theft, drug abuse, productivity, courtesy, and the protection of trade secrets) and technological advances that make it more economical to engage in monitoring and testing. The result for employees, however, is a dramatic increase in workplace surveillance. Unprecedented numbers of workers are urinating into bottles for employer-run drug-testing programs. Thousands of data entry operators have their every keystroke recorded by the very computers on which they are working. Surveillance is so thorough in some offices that employers can check exactly when employees leave their work stations to go to the bathroom and how long they take. A significant step toward resolving these issues can be taken by considering the possibilities and limitations posed by the extended use of surveillance and by developing a model to balance these competing concerns. The proposed model is a master plan, entitled the Monitoring Process Model (MPM), showing employers and employees and their interrelated activities. It draws on a thorough examination of the research literature to advocate justifications for surveillance that weigh company interests against a notion of transactional privacy: a form of privacy that focuses on trust and relationships.
Keywords: monitor, privacy, surveillance, trust (ID#: 15-6947)
URL:  http://doi.acm.org/10.1145/2751957.2751959

 

John R. Magrane, Jr.; “Personal Information Sharing with Major User Concerns in the Online B2C Market: A Social Contract Theory Perspective,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 7–8. doi:10.1145/2751957.2755507
Abstract: The cyber world has seen growth in online business over the past two decades, and e-commerce continues to expand. It has brought ease and comfort to people’s lives, blurring the distinctions between states and regions: mainstream consumers can buy anything from anywhere in the world through web platforms such as Amazon.com. However, the major concern that arises is security apprehension. This research paper studies the willingness of the online shopper to disclose personal information. The study will use a conceptual model to examine customers’ online activities and how variables such as user trust, knowledge sharing behavior, and loyalty intentions influence users’ privacy concerns, further moderated by one’s perceived environmental security in the B2C Internet market. Social Contract Theory (SCT) will be used to analyze the issue from a behavioral perspective, based on human obligations towards one another and on the state as the supreme authority that establishes the principles that maintain the balance of a society.
Keywords: environmental security, knowledge sharing behavior, loyalty, personal information, privacy concerns, trust
(ID#: 15-6948)
URL:  http://doi.acm.org/10.1145/2751957.2755507

 

Tina Francis, Muthiya Madiajagan, Vijay Kumar; “Privacy Issues and Techniques in E-Health Systems,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 113–115. doi:10.1145/2751957.2751981
Abstract: In the present era, mobile and smart devices are in abundance, and a number of services are provided through them. Ubiquitous services are gaining popularity, and ubiquity in healthcare in particular has gained importance in the current decade, as medical costs are not affordable to the common man. Ubiquitous healthcare has scope for seamlessly monitoring patients and identifying their health conditions. However, privacy is at risk when using ubiquitous healthcare, as personal health data are given to third-party individuals for monitoring, storage, and retrieval. In this paper we propose a privacy-preserving model of an e-health system, so as to maintain the security of patient data across the different domains of the e-health system.
Keywords: access control, access controls, cloud computing, cryptography, data encryption, cloud data security, patterns, security, security monitoring, trusted computing (ID#: 15-6949)
URL:  http://doi.acm.org/10.1145/2751957.2751981

 

Glourise M. Haya; “Complexity Reduction in Information Security Risk Assessment,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 5–6. doi:10.1145/2751957.2755506
Abstract: Results of research done by Dlamini et al. [5] clearly show information security was once focused around technical issues. However, over time, that approach transitioned to a more strategic governance model where legal and regulatory compliance, risk management, and digital forensics disciplines became the significant contributors in the domain. This focus has resulted in a proliferation of information security risk assessment models, which on the whole, have not necessarily helped to reduce risks or appropriately respond to security events. This research seeks to develop a new information security risk assessment model through the aggregation of existing models.
Keywords: information security, risk assessment, risk management (ID#: 15-6950)
URL: http://doi.acm.org/10.1145/2751957.2755506

 

Christian Sillaber, Ruth Breu; “Using Stakeholder Knowledge for Data Quality Assessment in IS Security Risk Management Processes,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 153–159. doi:10.1145/2751957.2751960
Abstract: The availability of high-quality documentation of the IS, as well as knowledgeable stakeholders, is an important prerequisite for successful IS security risk management processes. However, little is known about the relationship between stakeholders, their knowledge about the IS, security documentation, and how quality aspects influence the security and risk properties of the IS under investigation. We developed a structured data quality assessment process to identify quality issues in the security documentation of an information system. For this, organizational stakeholders were interviewed about the IS under investigation, and models were created from their descriptions in the context of an ongoing security risk management process. Then, the research model was evaluated in a case study. We found that contradictions between the models created from stakeholder interviews and those created from documentation were a good indicator of potential security risks. The findings indicate that the proposed data quality assessment process provides valuable inputs for the ongoing security and risk management process. While current research considers users the most important resource in security and risk management processes, little is known about the hidden value of the various entities of documentation available at the organizational level. This study highlights the importance of utilizing existing IS security documentation in the security and risk management process and provides risk managers with a toolset for prioritizing documentation-driven security improvement activities.
Keywords: data quality of information system, information system security documentation quality, information systems security risk management (ID#: 15-6951)
URL: http://doi.acm.org/10.1145/2751957.2751960

 

Jordan Shropshire, Art Gowan; “Characterizing the Traits of Top-Performing Security Personnel,” in SIGMIS-CPR ’15 Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research, June 2015, Pages 55–59. doi:10.1145/2751957.2751971
Abstract: Organizational information security is a talent-centric proposition. Information assurance is a product of the combined expertise, attention-to-detail, and creativity of an information security team. A competitive edge can be obtained by hiring the top information security professionals. Therefore, identifying the right people is a mission-critical task. To assist in the candidate selection process, this research analyzes the enduring traits of top security performers. Specifically, it evaluates the Big Five Model of personality and the Six Workplace Values. In a laboratory study, 62 undergraduates majoring in information assurance completed a series of simulations which assessed their ability to solve various information security problems. The characteristics of top information security performers were contrasted against the rest of the cohort. In terms of personality, the top performers have high levels of conscientiousness and openness. With respect to workplace values, the top performers have a stronger preference for theoretical endeavors such as the pursuit of truth.
Keywords: employee attitudes, performance, personality, security (ID#: 15-6952)
URL: http://doi.acm.org/10.1145/2751957.2751971

 

Diana Burley, Indira R. Guzman, Daniel P. Manson, Leigh Ellen Potter; “Proceedings of the 2015 ACM SIGMIS Conference on Computers and People Research,” Newport Beach, CA, June 4–6, 2015. ACM, New York, NY. 2015. ISBN: 978-1-4503-3557-7
Abstract: It is our great pleasure to welcome you to the 2015 ACM SIGMIS Computers and People Research Conference -- CPR ’15. CPR has long been the premier forum for the presentation of research and experiential reports on themes related to developing and managing the information technology (IT) workforce. This year's conference extends that tradition with the theme: Cyber Security Workforce in the Global Context. CPR provides both researchers and practitioners with a unique opportunity to share their perspectives with others interested in the various aspects of building the IT workforce globally.  The call for papers attracted forty-seven submissions from global researchers. Submissions from Australia, Austria, Canada, France, Germany, India, Iran (Islamic Republic of), New Zealand, Pakistan, Singapore, United Arab Emirates, and the United States covered a variety of topics, including gaming and competitions related to information security, digital inequality, cyber security skills, teamwork, surveillance, and security judgment. The program includes five panels on cybersecurity workforce development, an industry panel, one focus group, and a poster session. The doctoral consortium welcomes six Ph.D. students, and we thank the doctoral consortium mentors for their generosity in working to advance the students' research. In addition to the paper sessions, we also encourage participants to attend our keynote speech and invited presentations. These valuable and insightful talks can and will guide us to a better understanding of the future. We are pleased to highlight our keynote address: “Key Traits of Successful Cyber Security Practitioners,” by Mark G. Graff of Tellagraff, LLC (most recently the CISO of NASDAQ and the 2014 Internet Security Executive of the Year for the Northeast United States). (ID#: 15-6953)
URL: http://dl.acm.org/citation.cfm?id=2751957&coll=DL&dl=GUIDE&CFID=546454935&CFTOKEN=60376420


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

International Conferences: WiSec 2015, New York

 

 
SoS Logo

International Conferences:

Security & Privacy in Wireless and Mobile Networks

2015, New York


The 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks (WiSec ’15) was held June 22–26, 2015 in New York. The focus of the conference was on the security and privacy aspects of wireless communications, mobile networks, mobile software platforms, and mobile or wireless applications, including both theoretical and systems contributions. The articles cited here cover privacy, resilience, and metrics.



Pieter Robyns, Peter Quax, Wim Lamotte; “Injection Attacks on 802.11n MAC Frame Aggregation,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 13. doi:10.1145/2766498.2766513
Abstract: The ability to inject packets into a network is known to be an important tool for attackers: it allows them to exploit or probe for potential vulnerabilities residing on the connected hosts. In this paper, we present a novel practical methodology for injecting arbitrary frames into wireless networks, by using the Packet-In-Packet (PIP) technique to exploit the frame aggregation mechanism introduced in the 802.11n standard. We show how an attacker can apply this methodology over a WAN -- without physical proximity to the wireless network and without requiring a wireless interface card. The practical feasibility of our injection method is then demonstrated through a number of proof-of-concept attacks. More specifically, in these proof-of-concepts we illustrate how a host scan can be performed on the network, and how beacon frames can be injected from a remote location. We then both analytically and experimentally estimate the success rate of these attacks in a realistic test setup. Finally, we present several defensive measures that network administrators can put in place in order to prevent exploitation of our frame injection methodology.
Keywords: frame aggregation, injection attack, wireless security (ID#: 15-6872)
URL: http://doi.acm.org/10.1145/2766498.2766513
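To make the threat concrete, here is a minimal Python sketch (using scapy) of the kind of 802.11 beacon frame the paper shows can be injected remotely. It only builds and prints the frame; it is not the authors' PIP tooling, and the MAC address and SSID are invented. Actually transmitting such a frame would additionally require a monitor-mode interface and root privileges.

# Build (but do not send) a forged 802.11 beacon frame with scapy.
from scapy.all import RadioTap, Dot11, Dot11Beacon, Dot11Elt

ap_mac = "02:00:00:00:00:01"  # hypothetical spoofed AP address

beacon = (
    RadioTap()
    / Dot11(type=0, subtype=8,          # management frame, subtype beacon
            addr1="ff:ff:ff:ff:ff:ff",  # broadcast destination
            addr2=ap_mac, addr3=ap_mac)
    / Dot11Beacon(cap="ESS")
    / Dot11Elt(ID="SSID", info=b"FakeNetwork")
)

print(beacon.summary())
print(bytes(beacon).hex())  # the raw bytes an attacker would embed in a payload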

 

Lucky Onwuzurike, Emiliano De Cristofaro; “Danger Is My Middle Name: Experimenting with SSL Vulnerabilities in Android Apps,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 15. doi:10.1145/2766498.2766522
Abstract: This paper presents a measurement study of information leakage and SSL vulnerabilities in popular Android apps. We perform static and dynamic analysis on 100 apps, downloaded at least 10M times, that request full network access. Our experiments show that, although prior work has drawn a lot of attention to SSL implementations on mobile platforms, several popular apps (32/100) accept all certificates and all hostnames, and four actually transmit sensitive data unencrypted. We set up an experimental testbed simulating man-in-the-middle attacks and find that many apps (up to 91% when the adversary has a certificate installed on the victim’s device) are vulnerable, allowing the attacker to access sensitive information, including credentials, files, personal details, and credit card numbers. Finally, we provide a few recommendations to app developers and highlight several open research problems.
Keywords: Android security, information leakage, privacy (ID#: 15-6873)
URL: http://doi.acm.org/10.1145/2766498.2766522
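The "accept all certificates and all hostnames" anti-pattern the paper measures lives in Android/Java TrustManager implementations; the hedged Python sketch below transposes the same vulnerability class to the standard ssl module, to show exactly what such apps disable and what the secure default looks like.

import socket, ssl

host = "example.org"  # placeholder TLS server

# Anti-pattern: the Python analog of a trust-all Android TrustManager.
insecure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
insecure.check_hostname = False        # any hostname accepted
insecure.verify_mode = ssl.CERT_NONE   # any certificate accepted -> MITM-able

# Secure default: verifies the certificate chain and the hostname.
secure = ssl.create_default_context()

with socket.create_connection((host, 443)) as sock:
    with secure.wrap_socket(sock, server_hostname=host) as tls:
        print("negotiated:", tls.version(), "| peer cert verified:", bool(tls.getpeercert()))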

 

Denzil Ferreira, Vassilis Kostakos, Alastair R. Beresford, Janne Lindqvist, Anind K. Dey; “Securacy: An Empirical Investigation of Android Applications’ Network Usage, Privacy and Security,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 11. doi:10.1145/2766498.2766506
Abstract: Smartphone users do not fully know what their apps do. For example, an application’s network usage and underlying security configuration are invisible to users. In this paper we introduce Securacy, a mobile app that explores users’ privacy and security concerns with Android apps. Securacy takes a reactive, personalized approach, highlighting app permission settings that the user has previously stated are concerning, and provides feedback on the use of secure and insecure network communication for each app. We began our design of Securacy by conducting a literature review and in-depth interviews with 30 participants to understand their concerns. We used this knowledge to build Securacy and evaluated its use by another set of 218 anonymous participants who installed the application from the Google Play store. Our results show that access to address book information is by far the biggest privacy concern. Over half (56.4%) of the connections made by apps are insecure, and the destination of the majority of network traffic is North America, regardless of the location of the user. Our app provides unprecedented insight into Android applications’ communication behavior globally, indicating that the majority of apps currently use insecure network connections.
Keywords: applications, context, experience sampling, network, privacy (ID#: 15-6874)
URL: http://doi.acm.org/10.1145/2766498.2766506

 

Karim Emara, Wolfgang Woerndl, Johann Schlichter; “CAPS: Context-Aware Privacy Scheme for VANET Safety Applications,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 21. doi:10.1145/2766498.2766500
Abstract: Preserving location privacy in vehicular ad hoc networks (VANET) is an important requirement for public acceptance of this emerging technology. Many privacy schemes involve changing pseudonyms periodically to avoid linking messages. However, the spatiotemporal information contained in beacons makes vehicles traceable, breaching the driver’s privacy. Therefore, the pseudonym change should be performed in a mix-context to discontinue the spatial and temporal correlation of subsequent beacons. Such a mix-context is commonly accomplished by using a silence period or predetermined locations (e.g., mix-zones). In this paper, we propose a location privacy scheme that lets each vehicle decide when to change its pseudonym and enter a silence period, and when to exit from it, adaptively based on its context. In this scheme, a vehicle monitors the surrounding vehicles and enters silence when it finds one or more neighbors silent. It resumes beaconing with a new pseudonym when its actual state is likely to be mixed with the state of a silent neighbor. We evaluate this scheme against a global multi-target tracking adversary using simulated and realistic vehicle traces and compare it with the random silent period scheme. Furthermore, we evaluate the quality of service of a forward collision warning safety application to ensure the scheme’s applicability in safety applications. We measure the quality of service by estimating the probability of correctly identifying the fundamental factors of that application using Monte Carlo analysis.
Keywords: context-aware privacy, forward collision warning, location privacy, random silent period, safety application
(ID#: 15-6875)
URL: http://doi.acm.org/10.1145/2766498.2766500
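A rough Python sketch of the adaptive decision rule described in the abstract follows; the class name, the threshold, and the idea of an externally supplied "confusion likelihood" estimate are our simplifications for illustration, not details from the paper.

import secrets

class CapsVehicle:
    """Caricature of a CAPS vehicle deciding when to go silent and
    when to resume beaconing under a fresh pseudonym."""
    def __init__(self):
        self.pseudonym = secrets.token_hex(8)
        self.silent = False

    def step(self, any_neighbor_silent, confusion_likelihood):
        # confusion_likelihood: estimated probability (0..1) that a tracker
        # would confuse this vehicle with a silent neighbor if it resumed now.
        if not self.silent and any_neighbor_silent:
            self.silent = True                            # join the silent neighbor
        elif self.silent and confusion_likelihood > 0.5:  # assumed threshold
            self.pseudonym = secrets.token_hex(8)         # unlinkable new identity
            self.silent = False                           # resume beaconing, now mixed
        return self.silent

v = CapsVehicle()
print(v.step(any_neighbor_silent=True, confusion_likelihood=0.2))  # True: enters silence
print(v.step(any_neighbor_silent=True, confusion_likelihood=0.8))  # False: resumes, new pseudonym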

 

Célestin Matte, Jagdish Prasad Achara, Mathieu Cunche; “Device-to-Identity Linking Attack Using Targeted Wi-Fi Geolocation Spoofing,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 20. doi:10.1145/2766498.2766521
Abstract: Today, almost all mobile devices come equipped with Wi-Fi technology, so it is essential to thoroughly study the privacy risks associated with it. Recent works have shown that some Personally Identifiable Information (PII) can be obtained from the radio signals emitted by Wi-Fi equipped devices. However, most of the time, the identity of the subject of those pieces of information remains unknown, and the Wi-Fi MAC address of the device is the only available identifier. In this paper, we show that it is possible for an attacker to get the identity of the subject.  The attack presented in this paper leverages the geolocation information published on some geotagged services, such as Twitter, and exploits the fact that geolocation information obtained through a Wi-Fi-based Positioning System (WPS) can be easily manipulated. We show that geolocation manipulation can be targeted at a single device, and in most cases, it is not necessary to jam real Wi-Fi access points (APs) to mount a successful attack on WPS.
Keywords: 802.11, geolocation, privacy (ID#: 15-6876)
URL: http://doi.acm.org/10.1145/2766498.2766521

 

Xin Chen, Sencun Zhu; “DroidJust: Automated Functionality-Aware Privacy Leakage Analysis for Android Applications,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 5. doi:10.1145/2766498.2766507
Abstract: Android applications (apps for short) can send out users’ sensitive information against users’ intention. Based on stats from Genome and Mobile-Sandboxing, 55.8% and 59.7% of Android malware families, respectively, feature privacy leakage. Prior approaches to detecting privacy leakage on smartphones primarily focused on the discovery of sensitive information flows. However, Android apps also send out users’ sensitive information for legitimate functions. Due to the fuzzy nature of the privacy leakage detection problem, we formulate it as a justification problem, which aims to justify whether a sensitive information transmission in an app serves any purpose, either for intended functions of the app itself or for other related functions. This formulation makes the problem more distinct and objective, and therefore more feasible to solve than before. We propose DroidJust, an automated approach to justifying an app’s sensitive information transmission by bridging the gap between the sensitive information transmission and application functions. We also implement a prototype of DroidJust and evaluate it with over 6000 Google Play apps and over 300 known malware samples collected from VirusTotal. Our experiments show that our tool can effectively and efficiently analyze Android apps with respect to their sensitive information flows and functionalities, and can greatly assist in detecting privacy leakage.
Keywords: Android security, privacy leakage detection, static taint analysis (ID#: 15-6877)
URL: http://doi.acm.org/10.1145/2766498.2766507

 

Elena Pagnin, Anjia Yang, Gerhard Hancke, Aikaterini Mitrokotsa; “HB+DB, Mitigating Man-in-the-Middle Attacks Against HB+ with Distance Bounding,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 3.  doi:10.1145/2766498.2766516
Abstract: Authentication for resource-constrained devices is seen as one of the major challenges in current wireless communication networks. The HB+ protocol performs device authentication based on the learning parity with noise (LPN) problem and simple computational steps that render it suitable for resource-constrained devices such as radio frequency identification (RFID) tags. However, it has been shown that the HB+ protocol, as well as many of its variants, is vulnerable to a simple man-in-the-middle attack. We demonstrate that this attack could be mitigated using physical layer measures from distance-bounding and simple modifications to devices’ radio receivers. Our hybrid solution (HB+DB) is shown to provide both effective distance-bounding using a lightweight HB+-based response function, and resistance against the man-in-the-middle attack on HB+. We provide experimental evaluation of our results as well as a brief discussion on practical requirements for secure implementation.
Keywords: HB-protocol, HB+, distance bounding, physical layer security  (ID#: 15-6878)
URL: http://doi.acm.org/10.1145/2766498.2766516
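For context, the sketch below simulates the standard HB+ round structure (due to Juels and Weis) that HB+DB builds on: the tag answers each challenge with a noisy LPN inner product, and the reader accepts if few rounds disagree. It deliberately omits the paper's distance-bounding extension, and all parameters are illustrative.

import random

K = 64                              # secret length
ETA = 0.125                         # LPN Bernoulli noise rate
N_ROUNDS = 100
THRESHOLD = int(0.25 * N_ROUNDS)    # accept if fewer noisy rounds than this

x = [random.randrange(2) for _ in range(K)]   # shared secret 1
y = [random.randrange(2) for _ in range(K)]   # shared secret 2

def dot(u, v):                      # inner product over GF(2)
    return sum(a & b for a, b in zip(u, v)) % 2

mismatches = 0
for _ in range(N_ROUNDS):
    b = [random.randrange(2) for _ in range(K)]   # tag's blinding vector
    a = [random.randrange(2) for _ in range(K)]   # reader's challenge
    noise = 1 if random.random() < ETA else 0
    z = dot(a, x) ^ dot(b, y) ^ noise             # tag's noisy response
    if z != dot(a, x) ^ dot(b, y):                # reader's check
        mismatches += 1

print("tag accepted:", mismatches < THRESHOLD, f"({mismatches} noisy rounds)")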

 

Marcin Nagy, Thanh Bui, Emiliano De Cristofaro, N. Asokan, Jörg Ott, Ahmad-Reza Sadeghi; “How Far Removed Are You?: Scalable Privacy-Preserving Estimation of Social Path Length with Social PaL,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 18. doi:10.1145/2766498.2766501
Abstract: Social relationships are a natural basis on which humans make trust decisions. Online Social Networks (OSNs) are increasingly often used to let users base trust decisions on the existence and the strength of social relationships. While most OSNs allow users to discover the length of the social path to other users, they do so in a centralized way, thus requiring them to rely on the service provider and reveal their interest in each other.  This paper presents Social PaL, a system supporting the privacy-preserving discovery of arbitrary-length social paths between any two social network users. We overcome the bootstrapping problem encountered in all related prior work, demonstrating that Social PaL allows its users to find all paths of length two and to discover a significant fraction of longer paths, even when only a small fraction of OSN users is in the Social PaL system — e.g., discovering 70% of all paths with only 40% of the users. We implement Social PaL using a scalable server-side architecture and a modular Android client library, allowing developers to seamlessly integrate it into their apps.
Keywords: mobile social networks, privacy, proximity (ID#: 15-6879)
URL: http://doi.acm.org/10.1145/2766498.2766501

 

Meiko Jensen; “Applying the Protection Goals for Privacy Engineering to Mobile Devices,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 26. doi:10.1145/2766498.2774986
Abstract: In this paper, we propose to use a set of common core principles (the protection goals for privacy engineering) for measuring and comparing privacy features of mobile device systems. When utilized as a baseline for mobile phone software development, these protection goals can help developers act in legal compliance independent of the user’s exact jurisdiction.
Keywords: (not provided) (ID#: 15-6880)
URL: http://doi.acm.org/10.1145/2766498.2774986

 

Guqian Dai, Jigang Ge, Minghang Cai, Daoqian Xu, Wenjia Li; “SVM-Based Malware Detection for Android Applications,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 33. doi:10.1145/2766498.2774991
Abstract: In this paper, we study an SVM-based malware detection scheme for Android applications, which integrates both risky permission combinations and vulnerable API calls and uses them as features in the SVM algorithm. Preliminary experiments have validated the proposed malware detection scheme.
Keywords: Android, TF-IDF, malware, support vector machine (SVM) (ID#: 15-6881)
URL: http://doi.acm.org/10.1145/2766498.2774991
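As a toy illustration of the cited scheme's pipeline, the Python sketch below trains an SVM on binary indicators for risky permission combinations and vulnerable API calls. The feature set and the four "apps" are invented; the paper's actual features, TF-IDF weighting, and corpus are not reproduced here.

from sklearn.svm import SVC

FEATURES = ["SEND_SMS+READ_CONTACTS", "INTERNET+READ_SMS",
            "getDeviceId()", "sendTextMessage()", "Runtime.exec()"]

X = [                   # one row per app; 1 = feature present
    [1, 1, 1, 1, 0],    # known malware
    [0, 1, 0, 1, 1],    # known malware
    [0, 0, 0, 0, 0],    # benign
    [0, 1, 1, 0, 0],    # benign
]
y = [1, 1, 0, 0]        # 1 = malicious

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[1, 0, 1, 1, 0]]))   # classify a previously unseen app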

 

Xingmin Cui, Jingxuan Wang, Lucas C. K. Hui, Zhongwei Xie, Tian Zeng, S. M. Yiu; “WeChecker: Efficient and Precise Detection of Privilege Escalation Vulnerabilities in Android Apps,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 25. doi:10.1145/2766498.2766509
Abstract: Due to the rapid increase of Android apps and their wide usage in handling personal data, a precise and large-scale checker is needed to validate apps’ permission flow before they are listed on the market. Several tools have been proposed to detect sensitive data leaks in Android apps, but these tools are not applicable to large-scale analysis since they fail to deal smartly with the arbitrary execution orders of different event handlers. Event handlers are invoked by the framework based on the system state, so we cannot pre-determine their order of execution. Besides, since all exported components can be invoked by an external app, the execution orders of these components are also arbitrary. A naive way to simulate these two types of arbitrary execution orders yields a permutation of all event handlers in an app, with time complexity O(n!), where n is the number of event handlers in the app; this leads to a high analysis overhead when n is big. To give an illustration, CHEX [10] found 50.73 entry points of 44 unique class types per app on average. In this paper we propose an improved static taint analysis to deal with the challenge brought by the arbitrary execution orders without sacrificing high precision. Our analysis does not need to enumerate permutations and achieves a polynomial time complexity. We also propose to unify array and map access with object references by propagating access paths, to reduce the number of false positives due to field-insensitivity and over-approximation of array and map access.  We implement a tool, WeChecker, to detect privilege escalation vulnerabilities in Android apps. WeChecker achieves 96% precision and 96% recall on the state-of-the-art test suite DroidBench (for comparison, the precision and recall of FlowDroid are 86% and 93%, respectively). The evaluation of WeChecker on real apps shows that it is efficient (average analysis time per app: 29.985 s) and fit for large-scale checking.
Keywords: Android, control flow, data flow checking, privilege escalation attack, taint analysis (ID#: 15-6882)
URL: http://doi.acm.org/10.1145/2766498.2766509

 

Daniel T. Wagner, Daniel R. Thomas, Alastair R. Beresford, Andrew Rice; “Device Analyzer: A Privacy-Aware Platform to Support Research on the Android Ecosystem,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 34. doi:10.1145/2766498.2774992
Abstract: Device Analyzer is an Android app available from the Google Play store. It is designed to collect a large range of data from the handset and, with agreement from our contributors, share it with researchers around the world. Researchers can access the data collected, and can also use the platform to support their own user studies. In this paper we provide an overview of the privacy-enhancing techniques used in Device Analyzer, including transparency, consent, purpose, access, withdrawal, and accountability. We also demonstrate the utility of our platform by assessing the security of the Android ecosystem against privilege escalation attacks, determining that 88% of Android devices are, on average, vulnerable to one or more such attacks.
Keywords: (not provided) (ID#: 15-6883)
URL: http://doi.acm.org/10.1145/2766498.2774992

 

Yajin Zhou, Lei Wu, Zhi Wang, Xuxian Jiang; “Harvesting Developer Credentials in Android Apps,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 23. 
doi:10.1145/2766498.2766499
Abstract: Developers often integrate third-party services into their apps. To access a service, an app must authenticate itself to the service with a credential. However, credentials in apps are often not properly or adequately protected, and might be easily extracted by attackers. A leaked credential could pose serious privacy and security threats to both the app developer and app users.  In this paper, we propose CredMiner to systematically study the prevalence of unsafe developer credential uses in Android apps. CredMiner can programmatically identify and recover (obfuscated) developer credentials unsafely embedded in Android apps. Specifically, it leverages data flow analysis to identify the raw form of the embedded credential, and selectively executes the part of the program that builds the credential to recover it. We applied CredMiner to 36,561 apps collected from various Android markets to study the use of free email services and Amazon AWS. There were 237 and 196 apps that used these two services, respectively. CredMiner discovered that 51.5% (121/237) and 67.3% (132/196) of them were vulnerable. In total, CredMiner recovered 302 unique email login credentials and 58 unique Amazon AWS credentials, and verified that 252 and 28 of these credentials were still valid at the time of the experiments, respectively.
Keywords: Amazon AWS, CredMiner, information flow, static analysis (ID#: 15-6884)
URL: http://doi.acm.org/10.1145/2766498.2766499
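CredMiner recovers even obfuscated credentials through data flow analysis and selective execution; the much simpler sketch below only greps decompiled sources for plaintext AWS access key IDs (whose AKIA-prefixed format is public), which is enough to make the underlying threat concrete.

import re, sys, pathlib

AWS_KEY_ID = re.compile(r"AKIA[0-9A-Z]{16}")   # published AWS key-ID format

def scan(root):
    # Walk decompiled .java sources and flag anything shaped like a key ID.
    for path in pathlib.Path(root).rglob("*.java"):
        text = path.read_text(errors="ignore")
        for match in AWS_KEY_ID.finditer(text):
            print(f"{path}: possible embedded credential {match.group()}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")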

 

Sadegh Farhang, Yezekael Hayel, Quanyan Zhu; “Physical Layer Location Privacy Issue in Wireless Small Cell Networks,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 32. doi:10.1145/2766498.2774990
Abstract: High data rates are essential for next-generation wireless networks to support a growing number of computing devices and networking services. Small cell base station (SCBS) technology (e.g., picocells, microcells, femtocells) is a cost-effective solution to address this issue. However, one challenging issue with the increasingly dense network is the need for a distributed and scalable access point association protocol. In addition, the reduced cell size makes it easy for an adversary to map out the geographical locations of mobile users, and hence breach their location privacy. To address these issues, we establish a game-theoretic framework to develop a privacy-preserving stable matching algorithm that captures the large scale and heterogeneous nature of 5G networks. We show that without the privacy-preserving mechanism, an attacker can infer the location of the users by observing wireless connections and the knowledge of physical-layer system parameters. The protocol presented in this work provides a decentralized differentially private association algorithm which guarantees privacy to a large number of users in the network. We evaluate our algorithm using case studies, and demonstrate the tradeoff between privacy and system-wide performance for different privacy requirements and a varying number of mobile users in the network. Our simulation results corroborate the finding that the total number of mobile users should be lower than the overall network capacity to achieve desirable levels of privacy and QoS.
Keywords: (not provided) (ID#: 15-6885)
URL: http://doi.acm.org/10.1145/2766498.2774990

 

Dan Ping, Xin Sun, Bing Mao; “TextLogger: Inferring Longer Inputs on Touch Screen Using Motion Sensors,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 24. doi:10.1145/2766498.2766511
Abstract: Today’s smartphones are equipped with precise motion sensors like accelerometer and gyroscope, which can measure tiny motion and rotation of devices. While they make mobile applications more functional, they also bring risks of leaking users’ privacy. Researchers have found that tap locations on screen can be roughly inferred from motion data of the device. They mostly utilized this side-channel for inferring short input like PIN numbers and passwords, with repeated attempts to boost accuracy. In this work, we study further for longer input inference, such as chat record and e-mail content, anything a user ever typed on a soft keyboard. Since people increasingly rely on smartphones for daily activities, their inputs directly or indirectly expose privacy about them. Thus, it is a serious threat if their input text is leaked. To make our attack practical, we utilize the shared memory side-channel for detecting window events and tap events of a soft keyboard. The up or down state of the keyboard helps triggering our Trojan service for collecting accelerometer and gyroscope data. Machine learning algorithms are used to roughly predict the input text from the raw data and language models are used to further correct the wrong predictions. We performed experiments on two real-life scenarios, which were writing emails and posting Twitter messages, both through mobile clients. Based on the experiments, we show the feasibility of inferring long user inputs to readable sentences from motion sensor data. By applying text mining technology on the inferred text, more sensitive information about the device owners can be exposed.
Keywords: edit distance model, keystroke inference using motion sensors, language model, machine learning, shared memory side-channel, side-channel attacks, smartphone security (ID#: 15-6886)
URL: http://doi.acm.org/10.1145/2766498.2766511
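The feature-extraction step such inference attacks share can be summarized in a few lines: segment the accelerometer stream into windows around detected taps and compute summary statistics. The window size and feature choices below are assumptions for illustration, not the paper's exact design.

import numpy as np

def tap_features(window: np.ndarray) -> np.ndarray:
    # window: (n, 3) array of x/y/z accelerometer samples around one tap.
    magnitude = np.linalg.norm(window, axis=1)
    return np.concatenate([
        window.mean(axis=0), window.std(axis=0),               # per-axis statistics
        [magnitude.max(), magnitude.max() - magnitude.min()],  # impact profile
    ])

rng = np.random.default_rng(0)
window = rng.normal(size=(50, 3))        # stand-in for real sensor data
print(tap_features(window).round(3))     # 8-dimensional feature vector per tap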

 

Daibin Wang, Haixia Yao, Yingjiu Li, Hai Jin, Deqing Zou, Robert H. Deng; “CICC: A Fine-Grained, Semantic-Aware, and Transparent Approach to Preventing Permission Leaks for Android Permission Managers,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 6. doi:10.1145/2766498.2766518
Abstract: Android’s permission system offers an all-or-nothing installation choice for users. To make it more flexible, users may choose a popular app tool, called permission manager, to selectively grant or revoke an app’s permissions at runtime. A fundamental requirement for such permission manager is that the granted or revoked permissions should be enforced faithfully. However, we discover that none of existing permission managers meet this requirement due to permission leaks. To address this problem, we propose CICC, a fine-grained, semantic-aware, and transparent approach for any permission managers to defend against the permission leaks. Compared to existing solutions, CICC is fine-grained because it detects the permission leaks using call-chain information at the component instance level, instead of at the app level or component level. The fine-grained feature enables it to generate a minimal impact on the usability of running apps. CICC is semantic-aware in a sense that it manages call-chains in the whole lifecycle of each component instance. CICC is transparent to users and app developers, and it requires minor modification to permission managers. Our evaluation shows that CICC incurs relatively low performance overhead and power consumption.
Keywords: Android, call-chain, permission leaks, permission manager (ID#: 15-6887)
URL: http://doi.acm.org/10.1145/2766498.2766518

 

David Förster, Frank Kargl, Hans Löhr; “A Framework for Evaluating Pseudonym Strategies in Vehicular Ad-Hoc Networks,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 19. doi:10.1145/2766498.2766520
Abstract: The standard approach to privacy-friendly authentication in vehicular ad-hoc networks is the use of pseudonym certificates. The level of location privacy users can enjoy under the threat of an attacker depends on the attacker’s coverage and strategy as well as on the users’ strategy for changing their pseudonym certificates.  With this paper, we propose a generic framework for evaluation and comparison of different pseudonym change strategies with respect to the privacy level they provide under the threat of a realistic, local, passive attacker. To illustrate the applicability of this framework, we propose a new tracking strategy that achieves unprecedented success in vehicle tracking and thus lowers the achievable location privacy significantly. We use this attacker as a means to evaluate different pseudonym change strategies and highlight the need for more research in this direction.
Keywords: location privacy, pseudonym systems, vehicular ad-hoc networks (ID#: 15-6888)
URL: http://doi.acm.org/10.1145/2766498.2766520

 

Daniel Steinmetzer, Matthias Schulz, Matthias Hollick; “Lockpicking Physical Layer Key Exchange: Weak Adversary Models Invite the Thief,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 1. doi:10.1145/2766498.2766514
Abstract: Physical layer security schemes for wireless communications are currently crossing the chasm from theory to practice. They promise information-theoretical security, for instance by guaranteeing the confidentiality of wireless transmissions. Examples include schemes utilizing artificial interference—that is, ‘jamming for good’—to enable secure physical layer key exchange or other security mechanisms. However, only little attention has been paid to adjusting the employed adversary models during this transition from theory to practice. Typical assumptions give the adversary antenna configurations and transceiver capabilities similar to all other nodes: single-antenna eavesdroppers are the norm. We argue that these assumptions are perilous and ‘invite the thief’. In this work, we evaluate the security of a representative practical physical layer security scheme, which employs artificial interference to secure physical layer key exchange. Departing from the standard single-antenna eavesdropper, we utilize a more realistic multi-antenna eavesdropper and propose a novel approach that detects artificial interference. This facilitates a practical attack, effectively ‘lockpicking’ the key exchange by exploiting the diversity of the jammed signals. Using simulation and real-world software-defined radio (SDR) experimentation, we quantify the impact of increasingly strong adversaries. We show that our approach reduces the secrecy capacity of the scheme by up to 97% compared to single-antenna eavesdroppers. Our results demonstrate the risk unrealistic adversary models pose in current practical physical layer security schemes.
Keywords: OFDM, SDR, WARP, artificial interference, friendly jamming, key exchange, physical layer security (ID#: 15-6889)
URL: http://doi.acm.org/10.1145/2766498.2766514

 

Max Maass, Uwe Müller, Tom Schons, Daniel Wegemer, Matthias Schulz; “NFCGate: An NFC Relay Application for Android,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 27. doi:10.1145/2766498.2774984
Abstract: Near Field Communication (NFC) is a technology widely used for security-critical applications like access control or payment systems. Many of these systems rely on the security assumption that the card has to be in close proximity to communicate with the reader. We developed NFCGate, an Android application capable of relaying NFC communication between card and reader using two rooted but otherwise unmodified Android phones. This enables us to increase the distance between card and reader, eavesdrop on, and even modify the exchanged data. The application should work for any system built on top of ISO 14443-3 that is not hardened against relay attacks, and was successfully tested with a popular contactless card payment system and an electronic passport document.
Keywords: Android, near field communication, relay attack (ID#: 15-6890)
URL: http://doi.acm.org/10.1145/2766498.2774984

 

Roberto Gallo, Patricia Hongo, Ricardo Dahab, Luiz C. Navarro, Henrique Kawakami, Kaio Galvão, Glauber Junqueira, Luander Ribeiro; “Security and System Architecture: Comparison of Android Customizations,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 12. doi:10.1145/2766498.2766519
Abstract: Smartphone manufacturers frequently customize Android distributions so as to create competitive advantages by adding, removing and modifying packages and configurations. In this paper we show that such modifications have deep architectural implications for security. We analysed five different distributions: Google Nexus 4, Google Nexus 5, Sony Z1, Samsung Galaxy S4 and Samsung Galaxy S5, all running OS versions 4.4.X (except for Samsung S4 running version 4.3). Our conclusions indicate that serious security issues such as expanded attack surface and poorer permission control grow sharply with the level of customization.
Keywords: Android customizations, permissions, security architecture (ID#: 15-6891)
URL: http://doi.acm.org/10.1145/2766498.2766519

 

Wanqing You, Kai Qian, Minzhe Guo, Prabir Bhattacharya, Ying Qian, Lixin Tao; “A Hybrid Approach for Mobile Security Threat Analysis,” in WiSec ’15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 28. doi:10.1145/2766498.2774987
Abstract: Research on effective and efficient mobile threat analysis has become an emerging and important topic in the cybersecurity research area. Static analysis and dynamic analysis constitute two of the most popular types of techniques for security analysis and evaluation; nevertheless, each has its strengths and weaknesses. To leverage the benefits of both approaches, we propose a hybrid approach that integrates static and dynamic analysis for detecting security threats in mobile applications. The key to this approach is the unification of data states and software execution on critical test paths. The approach consists of two phases. In the first phase, a pilot static analysis is conducted to identify potential critical attack paths based on Android APIs and existing attack patterns. In the second phase, a dynamic analysis follows the identified critical paths to execute the program in a limited and focused manner. Attacks are detected by checking the conformance of the detected paths with existing attack patterns. The method reports the types of detected attack scenarios based on the types of sensitive data that may be compromised, such as web browser cookies.
Keywords: Android application analysis, data path tracing, dynamic analysis, static analysis, symbolic execution (ID#: 15-6892)
URL: http://doi.acm.org/10.1145/2766498.2774987


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Publications of Interest

SoS Logo

Publications of Interest

The Publications of Interest section contains bibliographical citations, abstracts if available, and links on specific topics and research problems of interest to the Science of Security community.

How recent are these publications?

These bibliographies include recent scholarly research on topics which have been presented or published within the past year. Some represent updates from work presented in previous years; others are new topics.

How are topics selected?

The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness to current researchers.

How can I submit or suggest a publication?

Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.

Submissions and suggestions may be sent to: news@scienceofsecurity.net

(ID#:15-7668)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Asymmetric Encryption 2015

 

 
SoS Logo

Asymmetric Encryption

2015


Asymmetric, or public-key, encryption is a cornerstone of cybersecurity. The research presented here looks at key distribution, compares symmetric and asymmetric security, and evaluates cryptographic algorithms, among other approaches. For the Science of Security community, encryption is a primary element for resiliency, compositionality, metrics, and behavior. The work here was published in 2015.



Ahmad, S.; Alam, K.M.R.; Rahman, H.; Tamura, S., “A Comparison Between Symmetric and Asymmetric Key Encryption Algorithm Based Decryption Mixnets,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp. 1–5, 5–7 Jan. 2015. doi:10.1109/NSysS.2015.7043532
Abstract: This paper presents a comparison between symmetric and asymmetric key encryption algorithm based decryption mixnets through simulation. Mix-servers involved in a decryption mixnet receive independently and repeatedly encrypted messages as their input, then successively decrypt and shuffle them to generate a new, altered output from which the messages are finally regained. Thus mixnets ensure unlinkability and anonymity between the senders and the receiver of messages. Both symmetric (e.g., one-time pad, AES) and asymmetric (e.g., RSA and ElGamal cryptosystems) key encryption algorithms can be exploited to accomplish decryption mixnets. This paper evaluates both symmetric (e.g., ESEBM: enhanced symmetric key encryption based mixnet) and asymmetric (e.g., RSA and ElGamal based) key encryption algorithm based decryption mixnets. They are evaluated here against several criteria: the number of messages traversing the mixnet, the number of mix-servers involved, and the key length of the underlying cryptosystem. Finally, the mixnets are compared on the basis of the computation time required under the above criteria while sending messages anonymously.
Keywords: electronic messaging; message authentication; public key cryptography; AES; ElGamal based decryption mixnet; RSA based decryption mixnet; asymmetric key encryption algorithm based decryption mixnet; message encryption; message sending; onetime pad; symmetric key encryption algorithm based decryption mixnet; Algorithm design and analysis; Encryption; Generators; Public key; Receivers;  Servers; Anonymity; ElGamal; Mixnet; Privacy; Protocol; RSA; Symmetric key encryption algorithm (ID#: 15-7432)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043532&isnumber=7042935
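The layering that all decryption mixnets share is easy to demonstrate. In the hedged Python sketch below, Fernet (an authenticated symmetric scheme from the cryptography package) stands in for the paper's symmetric and asymmetric candidates: the sender wraps each message once per mix-server, and each server strips its layer and shuffles the batch.

import random
from cryptography.fernet import Fernet

server_keys = [Fernet.generate_key() for _ in range(3)]   # one key per mix-server

def wrap(message: bytes) -> bytes:
    # Encrypt for the last server first, so server 0 strips the outer layer.
    for key in reversed(server_keys):
        message = Fernet(key).encrypt(message)
    return message

batch = [wrap(m) for m in (b"vote:A", b"vote:B", b"vote:C")]
for key in server_keys:                                   # each server in turn
    batch = [Fernet(key).decrypt(c) for c in batch]       # strips one layer
    random.shuffle(batch)                                 # and breaks linkage

print(batch)   # plaintexts emerge in an order unlinkable to the senders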

 

Aggarwal, K.; Verma, H.K., “Hash_RC6 — Variable Length Hash Algorithm Using RC6,” in Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, vol., no., pp. 450–456, 19–20 March 2015. doi:10.1109/ICACEA.2015.7164747
Abstract: In this paper, we present a hash algorithm using RC6 that can generate a hash value of variable length. Hash algorithms play a major part in cryptographic security, as they are used to check the integrity of a received message. It is possible to build a hash algorithm from a symmetric block cipher; the main idea is that if the underlying block algorithm is secure, then the generated hash function will also be secure [1]. As RC6 is secure against various linear and differential attacks, the algorithm presented here will also be secure against these attacks. The algorithm presented here can use a variable number of rounds to generate the hash value, and can also have a variable block size.
Keywords: cryptography; Hash_RC6 - variable length hash algorithm; cryptographic security; differential attacks algorithm; generated hash function; linear attack algorithm; received message; symmetric block algorithm; symmetric block cipher; Ciphers; Computers; Encryption; Receivers; Registers; Throughput; Access Control; Asymmetric Encryption; Authentication; Confidentiality; Cryptography; Data Integrity; Hash; Non-Repudiation; RC6; Symmetric Encryption (ID#: 15-7433)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164747&isnumber=7164643
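The generic "hash from a block cipher" idea the abstract relies on is the Davies-Meyer style chain h_i = E_{m_i}(h_{i-1}) XOR h_{i-1}. RC6 is not available in common Python libraries, so the sketch below substitutes AES-128 purely to show the construction; it is not the paper's Hash_RC6 and has a fixed 128-bit output.

from Crypto.Cipher import AES   # PyCryptodome

BLOCK = 16
IV = bytes(BLOCK)               # assumed all-zero initial chaining value

def block_cipher_hash(message: bytes) -> bytes:
    if len(message) % BLOCK:                       # zero-pad to a whole block
        message += bytes(BLOCK - len(message) % BLOCK)
    h = IV
    for i in range(0, len(message), BLOCK):
        m_i = message[i:i + BLOCK]                 # message block acts as the key
        e = AES.new(m_i, AES.MODE_ECB).encrypt(h)
        h = bytes(a ^ b for a, b in zip(e, h))     # feed-forward XOR
    return h

print(block_cipher_hash(b"check my integrity").hex())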

 

Saleh, Mohammed A.; Tahir, Nooritawati Md.; Hisham, Ezril; Hashim, Habibah, “An Analysis and Comparison for Popular Video Encryption Algorithms,” in Computer Applications & Industrial Electronics (ISCAIE), 2015 IEEE Symposium on, vol., no., pp. 90–94, 12–14 April 2015. doi:10.1109/ISCAIE.2015.7298334
Abstract: The security of video in the communication field has become a major concern, especially after the rapid development of multimedia technology (the Internet and mobile devices). Since multimedia data transmission has grown with widespread Internet use around the world, video protection techniques have become necessary to keep such information inaccessible to unauthorized parties and malicious attackers. Researchers have designed different types of encryption algorithms to secure multimedia data; these algorithms have their strengths and weaknesses. In this paper, we focus on introducing and comparing three popular encryption algorithms, DES, RSA and AES, in order to choose which encryption algorithm can be used to exchange video safely while maintaining the balance between security and computational time.
Keywords: Algorithm design and analysis; Classification algorithms; Encryption; Standards; Streaming media; Advanced Encryption Standard (AES). Encryption algorithms comparison; Asymmetric Encryption; Data Encryption Standard (DES); Rivest-Shamir-Adleman (RSA); Symmetric encryption (ID#: 15-7434)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7298334&isnumber=7298288
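A back-of-the-envelope version of the paper's comparison can be run with PyCryptodome, as in the sketch below. Absolute timings depend on the machine and library, so only the relative ordering is meaningful; note that a 2048-bit RSA-OAEP operation can encrypt at most roughly 190 bytes, which is precisely why asymmetric ciphers are not used for bulk video data.

import time
from Crypto.Cipher import AES, DES, PKCS1_OAEP
from Crypto.PublicKey import RSA

data = b"\x00" * 64 * 1024           # 64 KiB of test data
rsa_key = RSA.generate(2048)

def bench(label, fn, reps=20):
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    print(f"{label}: {(time.perf_counter() - t0) / reps * 1e3:.2f} ms")

bench("AES", lambda: AES.new(b"k" * 16, AES.MODE_CBC, iv=b"i" * 16).encrypt(data))
bench("DES", lambda: DES.new(b"k" * 8, DES.MODE_CBC, iv=b"i" * 8).encrypt(data))
bench("RSA", lambda: PKCS1_OAEP.new(rsa_key.publickey()).encrypt(data[:190]))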

 

Thomas, M.; Panchami, V., “An Encryption Protocol for End-to-End Secure Transmission of SMS,” in Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, vol., no., pp. 1–6, 19–20 March 2015. doi:10.1109/ICCPCT.2015.7159471
Abstract: Short Message Service (SMS) is a process of transmitting short messages over the network. SMS is used in daily life applications, including mobile commerce and mobile banking, and is a robust communication channel for transmitting information. SMS follows a store-and-forward way of transmitting messages. Private information like passwords, account numbers, passport numbers, and license numbers is also sent through messages. The traditional messaging service does not provide security, since the information contained in the SMS travels as plain text from one mobile phone to the other. This paper explains an efficient encryption protocol for securely transmitting confidential SMS from one mobile user to another, serving the cryptographic goals of confidentiality, authentication, and integrity. The Blowfish encryption algorithm gives confidentiality to the message, the EasySMS protocol is used to gain authentication, and the MD5 hashing algorithm helps to achieve integrity of the messages. The Blowfish algorithm utilizes less battery power compared to other encryption algorithms. The protocol prevents various attacks, including SMS disclosure, replay attacks, man-in-the-middle attacks, and over-the-air modification.
Keywords: cryptographic protocols; data integrity; data privacy; electronic messaging; message authentication; mobile radio; Blowfish encryption algorithm; SMS disclosure; encryption protocol; end-to-end secure transmission; man-in-the middle attack; message authentication; message confidentiality; message integrity; mobile phone; over the air modification; replay attack; short message service; Authentication; Encryption; Mobile communication; Protocols; Throughput; Asymmetric Encryption; Cryptography; Secure Transmission; Symmetric Encryption (ID#: 15-7435)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159471&isnumber=7159156
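The abstract's building blocks are easy to combine in a sketch: Blowfish-CBC for confidentiality plus an MD5 digest for integrity, shown below with PyCryptodome. This is not a reimplementation of the EasySMS authentication protocol, the key is a placeholder, and MD5 is kept only to mirror the paper's choice (it is weak by modern standards).

import hashlib
from Crypto.Cipher import Blowfish
from Crypto.Util.Padding import pad, unpad

KEY = b"placeholder-shared-key"   # assumed pre-shared between the two phones

def protect_sms(plaintext: bytes) -> bytes:
    digest = hashlib.md5(plaintext).digest()            # 16-byte integrity tag
    cipher = Blowfish.new(KEY, Blowfish.MODE_CBC)
    body = cipher.encrypt(pad(digest + plaintext, Blowfish.block_size))
    return cipher.iv + body                             # prepend the 8-byte IV

def recover_sms(blob: bytes) -> bytes:
    iv, body = blob[:8], blob[8:]
    cipher = Blowfish.new(KEY, Blowfish.MODE_CBC, iv=iv)
    payload = unpad(cipher.decrypt(body), Blowfish.block_size)
    digest, plaintext = payload[:16], payload[16:]
    if hashlib.md5(plaintext).digest() != digest:
        raise ValueError("integrity check failed")
    return plaintext

print(recover_sms(protect_sms(b"OTP is 493021")))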

 

Chhotaray, S.K.; Chhotaray, A.; Rath, G.S., “A New Method of Generating Public Key Matrix and Using It for Image Encryption,” in Signal Processing and Integrated Networks (SPIN), 2015 2nd International Conference on, vol., no., pp. 453–458, 19–20 Feb. 2015. doi:10.1109/SPIN.2015.7095272
Abstract: It is very difficult to find the inverse of a matrix in a Galois field using standard matrix inversion algorithms. Hence, any block-based encryption process involving a matrix as a key will take a considerable amount of time for decryption. The inverse of a self-invertible matrix is the matrix itself, so if these matrices are used for encryption, the computational time of the decryption algorithm reduces significantly. In this paper, a new method of generating self-invertible matrices is presented. In addition, a new method of generating sparse matrices based on a polynomial function, together with a process for inverting such a matrix without standard matrix inversion algorithms, is also presented. The product of these two types of matrices constitutes the public key matrix, whereas the matrices individually act as the private keys. This matrix has a large domain and can also be used to design an asymmetric encryption technique. The inverse of the key matrix can be calculated easily by the receiver, provided the components of the key (i.e., the self-invertible and sparse matrices) are known. This public key is used to encrypt images using a standard image encryption algorithm, tested with various gray-scale images. After encryption, the images are found to be completely scrambled. The image encryption process has very low computational complexity, which is evident from comparison with AES(128). Moreover, since the number of possible key matrices is huge, brute-force attack becomes very difficult.
Keywords: Galois fields; computational complexity; image processing; matrix inversion; public key cryptography; sparse matrices; AES(128); Galois field; asymmetric encryption technique; block-based encryption process; decryption algorithm; gray-scale image; image encryption process; polynomial function; public key matrix; self-invertible matrix; standard matrix inversion algorithm; Algorithm design and analysis; Encryption; Public key; Signal processing algorithms; Sparse matrices; Standards (ID#: 15-7436)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7095272&isnumber=7095159
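One well-known way to generate a self-invertible matrix over a prime field is the block construction A = [[A11, a(I - A11)], [a^-1 (I + A11), -A11]], which satisfies A·A = I (mod p) for any invertible scalar a. The sketch below verifies this with NumPy; it illustrates the concept only and does not reproduce the paper's new generation method or its sparse-matrix component.

import numpy as np

P = 251                              # prime modulus, so scalar inverses exist
rng = np.random.default_rng(1)

def self_invertible(n: int, alpha: int = 3) -> np.ndarray:
    a11 = rng.integers(0, P, size=(n, n))
    alpha_inv = pow(alpha, -1, P)    # modular inverse of the free scalar
    eye = np.eye(n, dtype=np.int64)
    top = np.hstack([a11, (alpha * (eye - a11)) % P])
    bottom = np.hstack([(alpha_inv * (eye + a11)) % P, (-a11) % P])
    return np.vstack([top, bottom])

A = self_invertible(4)               # 8x8 key matrix
assert np.array_equal((A @ A) % P, np.eye(8, dtype=np.int64))
print("self-invertible key matrix mod", P, "verified: A @ A == I")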

 

Idzikowska, E., “Faults Detection Schemes for PP-2 Cipher,” in Military Communications and Information Systems (ICMCIS), 2015 International Conference on, vol., no., pp. 1–4, 18–19 May 2015. doi:10.1109/ICMCIS.2015.7158695
Abstract: Hardware implementations of cryptographic systems are becoming more and more popular, due to new market needs and to reduce costs. However, system security may be seriously compromised by implementation attacks, such as side channel analysis or fault analysis. Fault-based side-channel cryptanalysis is very effective against symmetric and asymmetric encryption algorithms. Although hardware and time redundancy based Concurrent Error Detection (CED) architectures can be used to thwart such attacks, they entail significant overheads. In this paper we investigate systematic approaches to low-cost CED techniques for symmetric encryption algorithm PP-2, based on inverse relationships that exist between encryption and decryption at algorithm level, round level, and operation level. We show architectures that explore tradeoffs among performance penalty, area overhead, and fault detection latency.
Keywords: cryptography; error detection; fault diagnosis; redundancy; CED architectures; PP-2 cipher; algorithm level decryption; asymmetric encryption algorithms; cryptographic systems; fault analysis; fault detection latency; fault detection schemes; fault-based side-channel cryptanalysis; hardware implementations; implementation attacks; low-cost CED techniques; operation level decryption; round level decryption; side channel analysis; symmetric encryption algorithm; system security; time redundancy based concurrent error detection architectures; Ciphers; Encryption; Fault detection; Hardware; Redundancy; Registers; CED; PP-2; error detection latency; fault detection; hardware redundancy; time redundancy (ID#: 15-7437)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158695&isnumber=7158667
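The algorithm-level inverse-relationship check the paper explores can be expressed in a few lines: run the inverse operation concurrently and compare with the input. PP-2 is not available in common libraries, so AES stands in below; the paper's round-level and operation-level variants apply the same idea at finer granularity, trading area and latency.

from Crypto.Cipher import AES

KEY = b"\x01" * 16        # placeholder key

def encrypt_with_ced(block: bytes) -> bytes:
    ct = AES.new(KEY, AES.MODE_ECB).encrypt(block)
    # Concurrent error detection: decrypt the result and compare with the
    # input; a mismatch indicates a (possibly attacker-induced) datapath fault.
    if AES.new(KEY, AES.MODE_ECB).decrypt(ct) != block:
        raise RuntimeError("fault detected during encryption")
    return ct

print(encrypt_with_ced(b"sixteen byte blk").hex())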

 

Touati, L.; Challal, Y., “Batch-Based CP-ABE with Attribute Revocation Mechanism for the Internet of Things,” in Computing, Networking and Communications (ICNC), 2015 International Conference on, vol., no., pp. 1044–1049, 16–19 Feb. 2015. doi:10.1109/ICCNC.2015.7069492
Abstract: Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is an extremely powerful asymmetric encryption mechanism that enables fine-grained access control. However, there is no efficient solution to the key/attribute revocation problem in the CP-ABE scheme. The key revocation problem is very important in a dynamic environment like the Internet of Things (IoT), where billions of things are connected together and cooperate without human intervention. Existing solutions are not efficient due to their overhead (traffic) and complexity (big access trees); other solutions require powerful semi-trusted proxies to re-encrypt data. The solution proposed in this paper, called Batch-Based CP-ABE, reduces the complexity and the overhead, and does not require extra nodes in the system. We propose to split the time axis into intervals (time slots) and to send only the necessary key parts to allow the secret keys to be refreshed. An analysis is conducted on how to choose the best time slot duration in order to maximize system performance and minimize average waiting time.
Keywords: Internet of Things; authorisation; computational complexity; public key cryptography; Internet-of-things; asymmetric encryption mechanism; attribute revocation mechanism; average waiting time minimization; batch-based CP-ABE scheme; best time slot duration; ciphertext-policy attribute-based encryption; complexity reduction; data re-encryption; fine-grained access control; key revocation problem; public key encryption mechanism; semi trusted proxies; system performance maximization; Complexity theory; Encryption; Internet of things; Polynomials; Wireless networks; Access Control; Attribute Revocation; Batch-Based; CP-ABE (ID#: 15-7438)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069492&isnumber=7069279
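The time-slot mechanism in the abstract — splitting the time axis into intervals and refreshing key material per slot — can be illustrated with simple hash-based key derivation. The sketch below derives a per-slot key from a long-term secret and the current slot index; the slot duration and the HMAC-based derivation are illustrative assumptions, not the paper's CP-ABE construction.

import hashlib, hmac, time

SLOT_SECONDS = 3600  # illustrative slot duration; the paper analyzes how to pick this

def slot_index(t=None, slot=SLOT_SECONDS):
    """Map wall-clock time onto a time-slot number."""
    return int(t if t is not None else time.time()) // slot

def slot_key(master_secret: bytes, idx: int) -> bytes:
    """Derive key material valid only for slot idx (a hash-based stand-in
    for the per-slot key parts distributed in Batch-Based CP-ABE)."""
    return hmac.new(master_secret, b"slot|%d" % idx, hashlib.sha256).digest()

master = b"long-term attribute-authority secret"
k_now = slot_key(master, slot_index())
k_next = slot_key(master, slot_index() + 1)    # sent ahead of time to refresh keys
assert k_now != k_next                         # a revoked user never learns future slots' keys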

 

Shahzad, F., “Safe Haven in the Cloud: Secure Access Controlled File Encryption (SAFE) System,” in Science and Information Conference (SAI), 2015, vol., no., pp. 1329–1334, 28–30 July 2015. doi:10.1109/SAI.2015.7237315
Abstract: The evolution of cloud computing has revolutionized how computing is abstracted and utilized on remote third-party infrastructure. It is now feasible to try out novel ideas over the cloud with no or very low initial cost. There are challenges in adopting cloud computing, but with these obstacles come opportunities for research in several aspects of cloud computing. One of the main issues is the security and privacy of information stored and processed at the cloud provider's systems. In this work, a practical system (called SAFE) is designed and implemented to securely store/retrieve users' files on third-party cloud storage systems using well-established cryptographic techniques. It utilizes client-side, multilevel, symmetric/asymmetric encryption and decryption operations to provide policy-based access control and assured deletion of remotely hosted client files. SAFE is a generic application which can be extended to support any cloud storage provider as long as there is an API which supports basic file upload and download operations.
Keywords: application program interfaces; authorisation; client-server systems; cloud computing; computer network security; cryptography; data privacy; outsourcing; API; SAFE system; client-side-multilevel asymmetric encryption operation; client-side-multilevel symmetric encryption operation; client-side-multilevel-asymmetric decryption operation; client-side-multilevel-symmetric decryption operation; cloud computing; cloud provider systems; cloud storage provider; cryptographic techniques; data security; file download operation; file upload operation; information privacy; policy-based access control; remote third-party infrastructure; remotely hosted client file deletion; secure access controlled file encryption system; third-party cloud storage systems; user file retrieval; user file storage; Access control; Cloud computing; Encryption; Java; Servers; Assured deletion; Cryptography; Data privacy; Secure storage (ID#: 15-7439)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237315&isnumber=7237120
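The client-side hybrid encryption SAFE describes — a symmetric key for the file, wrapped with an asymmetric key — follows a standard pattern. The sketch below, using the widely available Python cryptography package, encrypts a buffer with AES-GCM and wraps the AES key with RSA-OAEP; the key sizes are illustrative, and SAFE's policy-based access control and assured-deletion logic are not modeled, so this is a generic sketch rather than the SAFE implementation.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Client's long-term asymmetric key pair (sizes are illustrative).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Encrypt the file with a fresh symmetric key, then wrap that key.
file_bytes = b"contents destined for third-party cloud storage"
file_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(file_key).encrypt(nonce, file_bytes, None)
wrapped_key = public_key.encrypt(file_key, oaep)   # only the key owner can unwrap

# Retrieval: unwrap the symmetric key, then decrypt the file.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == file_bytes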

 

Chatterjee, S.; Gupta, A.K.; Sudhakar, G.V., “An Efficient Dynamic Fine Grained Access Control Scheme for Secure Data Access in Cloud Networks,” in Electrical, Computer and Communication Technologies (ICECCT), 2015 IEEE International Conference on, vol., no., pp. 1–8, 5–7 March 2015. doi:10.1109/ICECCT.2015.7226107
Abstract: To assign access privileges to a particular authorized user without disclosing his/her identity, allowing access to relevant information while protecting sensitive information from unauthorized access, fine-grained access control for cloud networks is essential. Recently, many fine-grained access control schemes for cloud environments have been proposed in the literature using a promising cryptographic solution called attribute-based encryption (ABE). But in a real-time scenario, most of them suffer from serious drawbacks, as they are incapable of fulfilling essential security, performance, and functionality requirements like user anonymity, user revocation, attribute revocation, and user collusion resilience. Moreover, these schemes use asymmetric key encryption, which requires higher computational cost. In this paper, we present an efficient and secure fine-grained access control scheme for cloud networks using symmetric key encryption. Our scheme achieves fine-grained access control over any type of cloud network and also ensures that any legitimate user can access only the information he/she is permitted to access, without compromising user identity. The proposed scheme is resilient against strong attacks such as replay and user collusion attacks. Moreover, our scheme efficiently supports user and attribute revocation. Furthermore, the proposed scheme is lightweight because it uses symmetric key encryption and decryption algorithms. Finally, we show that our scheme requires lower computation costs and provides higher security compared to other related schemes.
Keywords: authorisation; cloud computing; cryptography; ABE; asymmetric key encryptions; attribute-based encryption; attributes revocation; cloud networks; computational cost; cryptographic solution; data access security; dynamic fine grained access control scheme; functionality requirement; performance requirement; replay attack; security requirement; symmetric key decryption algorithm; symmetric key encryption algorithm; user anonymity; user collusion resilience attack; users revocation; Computational modeling; Cryptography; Diseases; Attribute based encryption; Bilinear maps; Cloud object; Elliptic curve cryptography; Fine grained access control; Group based access control (ID#: 15-7440)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7226107&isnumber=7225915

 

Emmart, N.; Weems, C., “Pushing the Performance Envelope of Modular Exponentiation Across Multiple Generations of GPUs,” in Parallel and Distributed Processing Symposium (IPDPS), 2015 IEEE International, vol., no., pp. 166–176, 25–29 May 2015. doi:10.1109/IPDPS.2015.69
Abstract: Multiprecision modular exponentiation is a key operation in popular encryption schemes such as RSA, but is computationally expensive. Contexts such as handling many secure web connections in a server can demand higher rates of exponent operations than a traditional multicore can support. Graphics processors offer an opportunity to accelerate batches of exponent calculations both by executing them in parallel as well as through parallelizing the operations within the multiprecision arithmetic itself. However, obtaining performance close to the theoretical peak can be extremely challenging. Furthermore, each new generation of GPU architecture can require a substantially different approach to achieve maximum performance. In this paper we show how we improve modular exponentiation performance over prior results by factors ranging from 2.6 to 24, across generations of NVIDIA GPUs, from compute capability 1.1 onward. Of particular interest is the parameter space that must be searched to find the optimal configuration of memory layout, launch geometry, and algorithm for each architecture at different problem sizes. Our efforts have resulted in a set of tools for generating library functions in the PTX assembly language and searching to find these optima. From our experience it can be argued that a new programming paradigm is needed to achieve full performance potential on core library components as GPUs evolve through multiple generations.
Keywords: assembly language; graphics processing units; software libraries; GPU architecture; NVIDIA GPU; PTX assembly language; RSA; compute capability; core library components; encryption schemes; exponent operations; graphics processing unit; graphics processors; launch geometry; library functions; memory layout; multiprecision modular exponentiation performance; multiprocessing arithmetic; optimal configuration; secure Web connections; Computational modeling; Computer architecture; Generators; Graphics processing units; Load modeling; Message systems; Registers; GPU accelerated modular exponentiation; SSL acceleration with GPUs; asymmetric cryptography on GPUs; modular exponentiation (ID#: 15-7441)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161506&isnumber=7161257
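The operation the paper accelerates, multiprecision modular exponentiation, is at heart the left-to-right square-and-multiply loop. The sketch below shows that loop in plain Python, checked against the built-in pow; the GPU work parallelizes the underlying multiprecision multiplications, which this scalar sketch does not attempt.

def modexp(base, exponent, modulus):
    """Left-to-right square-and-multiply: one squaring per exponent bit,
    plus one extra multiply per set bit."""
    result = 1
    base %= modulus
    for bit in bin(exponent)[2:]:                      # scan exponent bits MSB first
        result = (result * result) % modulus           # square
        if bit == "1":
            result = (result * base) % modulus          # conditional multiply
    return result

# RSA-style sanity check against Python's built-in modular exponentiation.
m, e, n = 0x1234567890ABCDEF, 65537, (2**127 - 1)
assert modexp(m, e, n) == pow(m, e, n)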

 

Bhave, Aparna; Jajoo, S.R., “Secure Communication in Wireless Sensor Networks Using Hybrid Encryption Scheme and Cooperative Diversity Technique,” in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, vol., no., pp. 1–6, 9–10 Jan. 2015. doi:10.1109/ISCO.2015.7282235
Abstract: A Wireless Sensor Network (WSN) is a versatile sensing system suitable for a wide variety of applications. Power efficiency, security, and reliability are the major areas of concern in designing WSNs[3][7]. Moreover, one of the most important issues in WSN design is assuring the reliability of the collected data, which often involves security issues in the wireless communications. This project focused mainly on the development of a hybrid encryption scheme which combines symmetric and asymmetric encryption algorithms for secure key exchange and enhanced ciphertext security. The paper compares the performance, in terms of bit error rate, of symmetric, asymmetric, and hybrid encryption schemes implemented in wireless sensor networks. Test results show a decrease in bit error rate using the hybrid encryption scheme compared to the symmetric and asymmetric schemes alone. Increasing the number of sensors further reduces the bit error rate and improves performance. Alamouti space-time block codes are the most widely used transmission mechanism in WSNs. Extended space-time block codes (ECBSTBC) yield better signal-to-noise ratio improvement than a sensor selection scheme; the proposed system uses ECBSTBC codes for transmission[8].
Keywords: Elliptic curve cryptography; Indexes; Reliability; Resource management; Wireless sensor networks; AES; ECBSTBC; ECC; Hybrid Encryption; WSN (ID#: 15-7442)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282235&isnumber=7282219

 

Cui, Baojiang; Xi, Tao, “Security Analysis of Openstack Keystone,” in Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, vol., no., pp. 283–288, 8–10 July 2015.  doi:10.1109/IMIS.2015.44
Abstract: As a base platform for cloud computing, OpenStack has been getting more and more attention. Keystone is the key component of OpenStack. We analyze the security issues of Keystone, identify some of its vulnerabilities, and then propose a new authentication model using both symmetric and asymmetric encryption. Security testing of the new model shows that it is much safer than the original one.
Keywords: Analytical models; Authentication; Cloud computing; Computational modeling; Encryption; Servers; Open Stack; authentication model; cloud computing; keystone; security (ID#: 15-7443)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284961&isnumber=7284886

 

Prasanna M., D.; Roopa, S.R., “SSO-Key Distribution Center Based Implementation Using Serpent Encryption Algorithm for Distributed Network (Securing SSO in Distributed Network),” in Advance Computing Conference (IACC), 2015 IEEE International, vol., no., pp. 425–429, 12–13 June 2015. doi:10.1109/IADCC.2015.7154743
Abstract: The network of things is expanding day by day, and with it, security, flexibility, and ease of use have become user concerns. Several techniques are available to fulfill users' demands, among them Single Sign On (SSO) and cryptographic techniques like RSA-VES and Serpent. In this paper an effort is made to provide all the mentioned facilities to the user. Single Sign On (SSO) authenticates the user only once and allows the user to access multiple services, which makes the system very easy to use and also provides the flexibility of using multiple programs or applications. The combination of the cryptographic algorithms Serpent (symmetric encryption) and RSA-VES (asymmetric encryption), both regarded as secure cryptographic algorithms, is used together with a "session time", which makes communication very secure and reliable.
Keywords: public key cryptography; RSA-VES; SSO-key distribution center; Serpent encryption algorithm; cryptography techniques; distributed network; securing SSO; single sign on; Authentication; Ciphers; Encryption; Public key; Servers; authorization; distributed computer networks; information security; private key; public key; single sign-on (SSO); symmetric key (ID#: 15-7444)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154743&isnumber=7154658
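The "session time" the abstract mentions — bounding how long a single sign-on remains valid — is commonly realized with an expiring, integrity-protected token. The sketch below is a minimal HMAC-signed token with an expiry timestamp, using only the Python standard library; it illustrates the session-time idea only, and the KDC_SECRET name and token layout are hypothetical, not the paper's KDC, Serpent, or RSA-VES construction.

import base64, hashlib, hmac, json, time

KDC_SECRET = b"key-distribution-center secret"   # hypothetical shared secret

def issue_token(user: str, lifetime_s: int = 300) -> bytes:
    """Issue a token that expires after lifetime_s seconds of session time."""
    payload = json.dumps({"sub": user, "exp": int(time.time()) + lifetime_s}).encode()
    tag = hmac.new(KDC_SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload) + b"." + base64.urlsafe_b64encode(tag)

def verify_token(token: bytes) -> str:
    p64, t64 = token.split(b".")                 # b64 alphabet never contains '.'
    payload = base64.urlsafe_b64decode(p64)
    expected = hmac.new(KDC_SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(t64)):
        raise ValueError("bad signature")
    claims = json.loads(payload)
    if time.time() > claims["exp"]:
        raise ValueError("session time expired")
    return claims["sub"]

tok = issue_token("alice")
assert verify_token(tok) == "alice"              # accepted while the session time holds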

 

Lei Zhang; Qianhong Wu; Domingo-Ferrer, J.; Bo Qin; Zheming Dong, “Round-Efficient and Sender-Unrestricted Dynamic Group Key Agreement Protocol for Secure Group Communications,” in Information Forensics and Security, IEEE Transactions on, vol. 10, no. 11, pp. 2352–2364, Nov. 2015. doi:10.1109/TIFS.2015.2447933
Abstract: Modern collaborative and group-oriented applications typically involve communications over open networks. Given the openness of today’s networks, communications among group members must be secure and, at the same time, efficient. Group key agreement (GKA) is widely employed for secure group communications in modern collaborative and group-oriented applications. This paper studies the problem of GKA in identity-based cryptosystems with an emphasis on round-efficient, sender-unrestricted, member-dynamic, and provably secure key escrow freeness. The problem is resolved by proposing a one-round dynamic asymmetric GKA protocol which allows a group of members to dynamically establish a public group encryption key, while each member has a different secret decryption key in an identity-based cryptosystem. Knowing the group encryption key, any entity can encrypt to the group members so that only the members can decrypt. We construct this protocol with a strongly unforgeable stateful identity-based batch multisignature scheme. The proposed protocol is shown to be secure under the k -bilinear Diffie-Hellman exponent assumption.
Keywords: cryptographic protocols; digital signatures; private key cryptography; public key cryptography; collaborative group-oriented applications; group member communication; identity-based cryptosystem; identity-based cryptosystems; k-bilinear Diffie-Hellman exponent assumption; one-round dynamic asymmetric GKA protocol; public group encryption key; round-efficient sender-unrestricted dynamic group key agreement protocol; round-efficient sender-unrestricted member-dynamic provably secure key escrow freeness; secret decryption key; secure group communications; strongly unforgeable stateful identity-based batch multisignature scheme; Collaboration; Encryption; Games; Protocols; Receivers; Communication security; asymmetric group key agreement; communication security; identity-based cryptography; key management (ID#: 15-7445)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7128688&isnumber=7235010

 

Sagar, V.; Kumar, K., “A Symmetric Key Cryptography Using Genetic Algorithm and Error Back Propagation Neural Network,” in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, vol., no., pp. 1386–1391, 11–13 March 2015. doi: (not provided)
Abstract: In a conventional security mechanism, cryptography is the process of hiding information and data from unauthorized access. It offers the unique possibility of certifiably secure data transmission among users at different remote locations. Cryptography is used to achieve availability, privacy, and integrity over different networks. Usually, there are two categories of cryptography, i.e., symmetric and asymmetric. In this paper, we propose a new symmetric key algorithm based on a genetic algorithm (GA) and an error back propagation neural network (EBP-NN). The genetic algorithm is used for encryption and the neural network for the decryption process. Consequently, this paper proposes a simple, secure cryptographic algorithm for communication over public computer networks.
Keywords: backpropagation; computer network security; cryptography; genetic algorithms; neural nets; EBP-NN; GA; certifiably secure data transmission; cryptographic secure algorithm; data hiding; data integrity; data privacy; decryption process; error back propagation neural network; genetic algorithm; information hiding; public computer networks; remote locations; symmetric key cryptography; unauthorized access; Artificial neural networks; Encryption; Genetic algorithms; Neurons; Receivers; symmetric key (ID#: 15-7446)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100476&isnumber=7100186

 

Chung, Eric; Joy, Joshua; Gerla, Mario, “DiscoverFriends: Secure Social Network Communication in Mobile Ad Hoc Networks,” in Wireless Communications and Mobile Computing Conference (IWCMC), 2015 International, vol., no., pp. 7–12, 24–28 Aug. 2015. doi:10.1109/IWCMC.2015.7288929
Abstract: This paper presents a secure communication application called DiscoverFriends. Its purpose is to communicate with a group of online friends while bypassing their respective social networking servers in a mobile ad hoc network environment. DiscoverFriends leverages Bloom filters and a hybrid encryption technique with a self-organized public-key management scheme to securely identify friends and provide authentication. Firstly, Bloom filters provide a space-efficient means of security for friend discovery. Secondly, a combination of asymmetric and symmetric encryption algorithms combines their benefits to provide increased security at lower computational cost. Thirdly, a self-organized public-key management scheme helps authenticate users using a trust graph in an infrastructureless setting. With the use of Wi-Fi Direct technology, an initiator is able to establish an ad hoc network to which friends can connect within the application. DiscoverFriends was analyzed under two threat models: replay attacks and eavesdropping by a common friend. Finally, the paper evaluates the application based on storage usage and processing.
Keywords: Encryption; Facebook; IEEE 802.11 Standard; Public key; Servers; Ad hoc networks; Mobile communication; Security; Social computing (ID#: 15-7447)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288929&isnumber=7288920
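The space-efficient friend discovery that DiscoverFriends builds on Bloom filters can be sketched directly. Below is a minimal Bloom filter using double hashing over a single SHA-256 digest; the filter size and hash count are illustrative parameters that a real deployment would tune for the target false-positive rate.

import hashlib

class BloomFilter:
    """Minimal Bloom filter: k bit positions derived from one SHA-256 digest."""
    def __init__(self, m_bits=1024, k_hashes=5):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _indexes(self, item: bytes):
        d = hashlib.sha256(item).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1    # double hashing: h1 + i*h2
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item: bytes):
        for i in self._indexes(item):
            self.bits[i // 8] |= 1 << (i % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[i // 8] & (1 << (i % 8)) for i in self._indexes(item))

friends = BloomFilter()
friends.add(b"alice@example.org")
assert b"alice@example.org" in friends         # members always match
assert b"mallory@example.org" not in friends   # non-members match only with small probability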

 

Anju, S.; Joseph, J., “Location Based Service Applications to Secure Locations with Dual Encryption,” in Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, vol., no., pp.1–4, 19–20 March 2015. doi:10.1109/ICIIECS.2015.7193061
Abstract: Location Based Service Applications (LBSAs) are becoming a part of our lives. Through these applications, users can interact with the physical world and get the data they want (e.g., Foursquare). But such applications can be misused in many ways, extracting users' personal information and leading to many threats. To improve location privacy we use the LocX technique, in which a location and its related data are encrypted before being stored on different servers. Thus a third party cannot track the location from the server, and the server itself cannot see the location. In addition, to improve the security of location points and data points, we introduce a dual encryption method into LocX: asymmetric keys, i.e., a public key and the user's private key, are used to encrypt the data, whereas plain LocX uses random, inexpensive symmetric keys.
Keywords: data privacy; mobile computing; mobility management (mobile radio); private key cryptography; public key cryptography; Foursquare; LBSA; LocX random inexpensive symmetric keys; LocX technique; dual encryption method; location based service applications; location privacy; personal information; public key; user private key; Encryption; Indexes; Privacy; Public key; Servers; Asymmetric; Encrypt; Location Privacy (ID#: 15-7448)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193061&isnumber=7192777
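The core LocX idea summarized above — the server stores only transformed location points, so it never sees real coordinates — can be illustrated with a secret rotation-plus-shift of the coordinates. In the sketch below, the transform parameters play the role of the user's secret; this is a geometric illustration of location transformation under assumed parameters, not the published LocX protocol.

import math, secrets

def make_secret():
    """Per-user secret transform: a random rotation angle and offset."""
    return {"theta": secrets.randbelow(360) * math.pi / 180.0,
            "dx": secrets.randbelow(10**6), "dy": secrets.randbelow(10**6)}

def to_server(x, y, s):
    """Rotate then translate; only these virtual coordinates reach the server."""
    xr = x * math.cos(s["theta"]) - y * math.sin(s["theta"])
    yr = x * math.sin(s["theta"]) + y * math.cos(s["theta"])
    return xr + s["dx"], yr + s["dy"]

def from_server(u, v, s):
    """Invert the transform with the user's secret."""
    u, v = u - s["dx"], v - s["dy"]
    x = u * math.cos(-s["theta"]) - v * math.sin(-s["theta"])
    y = u * math.sin(-s["theta"]) + v * math.cos(-s["theta"])
    return x, y

secret = make_secret()
u, v = to_server(48.8584, 2.2945, secret)      # stored at the location server
x, y = from_server(u, v, secret)
assert abs(x - 48.8584) < 1e-6 and abs(y - 2.2945) < 1e-6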

 

Jain, V.; Sharma, P.; Sharma, S., “Cryptographic Algorithm on Multicore Processor: A Review,” in Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, vol., no., pp. 241–244, 19–20 March 2015. doi:10.1109/ICACEA.2015.7164703
Abstract: Cryptography involves different algorithms that contribute to the security of programs. Cryptographic algorithms are divided into two categories: symmetric and asymmetric. There are many challenges in implementing cryptographic algorithms, especially throughput in terms of execution time, so it is important that they run with minimal encryption and decryption time, improving time efficiency. In this paper, we study and analyze the performance of different cryptographic algorithms on multicore processors, and we explore the performance of sequential and parallel implementations of cryptographic algorithms on multicore processors. This review summarizes different research papers on cryptography and briefly describes some cryptographic tools.
Keywords: cryptography; multiprocessing systems; parallel processing; asymmetric cryptography algorithms; cryptographic algorithm; cryptographic tools; decryption time; encryption time; multicore processor; parallel implementation; sequential implementation; time execution; Algorithm design and analysis; Encryption; Graphics processing units; Multicore processing; Parallel processing; Software algorithms; AES; DES; RSA; core; parallelism (ID#: 15-7449)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164703&isnumber=7164643
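The sequential-versus-parallel comparison the review discusses can be reproduced in miniature: independent data blocks can be encrypted on separate cores. The sketch below splits a buffer into chunks and encrypts each with AES-GCM in a process pool, using the Python cryptography package; the chunk size, key handling, and per-chunk nonce scheme are simplifications for illustration only.

import os
from concurrent.futures import ProcessPoolExecutor
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_chunk(args):
    """Encrypt one independent chunk (runs in a worker process)."""
    key, index, chunk = args
    nonce = index.to_bytes(12, "big")          # unique nonce per chunk under this key
    return AESGCM(key).encrypt(nonce, chunk, None)

def parallel_encrypt(key, data, chunk_size=1 << 20):
    jobs = [(key, i, data[off:off + chunk_size])
            for i, off in enumerate(range(0, len(data), chunk_size))]
    with ProcessPoolExecutor() as pool:        # one worker per core by default
        return list(pool.map(encrypt_chunk, jobs))

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=128)
    blobs = parallel_encrypt(key, os.urandom(8 << 20))   # 8 MiB spread across cores
    print(len(blobs), "chunks encrypted in parallel")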

 

Mathur, R.; Agarwal, S.; Sharma, V., “Solving Security Issues in Mobile Computing Using Cryptography Techniques — A Survey,” in Computing, Communication & Automation (ICCCA), 2015 International Conference on, vol., no., pp. 492–497, 15–16 May 2015. doi:10.1109/CCAA.2015.7148427
Abstract: Advancements in wireless networking have initiated the idea of mobile computing, where the user does not have to be bound to a fixed physical location in order to exchange information. The benefits of on-the-move connectivity are many, but there exist serious networking and security issues that need to be solved before the full benefits of mobile computing can be realized. In this paper, we discuss the security problems arising from the technological advances in mobile computing as well as their solutions. Using cryptographic techniques, information can be given adequate security over the air. Encryption of data takes place using symmetric or asymmetric cryptography algorithms depending on the area of application and the level of security required. The paper presents a comparative survey of the AES, DES, IDEA, RC2, Blowfish, and RSA encryption algorithms, with their advantages and disadvantages over different parameters. Finally, we draw conclusions about security solutions based on these algorithms that may be built upon to enhance information and network security in the future.
Keywords: cryptography; mobile computing; AES; BLOWFISH; DES; IDEA; RC2; RSA encrypting algorithm; asymmetric cryptography algorithm; cryptographic technique; data encryption; mobile computing; security issue; wireless networking; Ciphers; Encryption; Mobile communication; Mobile computing; Receivers; Cryptography; DSA; Mobile computing; RC-2; RSA; Security issues (ID#: 15-7450)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148427&isnumber=7148334

 

Zhe Fan; Byron Choi; Jianliang Xu; Bhowmick, Sourav S., “Asymmetric Structure-Preserving Subgraph Queries for Large Graphs,” in Data Engineering (ICDE), 2015 IEEE 31st International Conference on, vol., no., pp. 339–350, 13–17 April 2015. doi:10.1109/ICDE.2015.7113296
Abstract: One fundamental type of query for graph databases is the subgraph isomorphism query (a.k.a. subgraph query). Due to the computational hardness of subgraph queries coupled with the cost of managing massive graph data, outsourcing the query computation to a third-party service provider has been an economical and scalable approach. However, confidentiality is known to be an important attribute of Quality of Service (QoS) in Query as a Service (QaaS). In this paper, we propose the first practical private approach for subgraph query services, asymmetric structure-preserving subgraph query processing, where the data graph is publicly known and the query structure/topology is kept secret. Unlike previous methods for subgraph queries, this paper proposes a series of novel optimizations that exploit only graph structures, not the queries. Further, we propose a robust query encoding and adopt a novel cyclic-group-based encryption so that query processing is transformed into a series of private matrix operations. Our experiments confirm that our techniques are efficient and the optimizations are effective.
Keywords: graph theory; matrix algebra; optimisation; query processing; QaaS; asymmetric structure-preserving subgraph query processing; graph databases; novel cyclic group based encryption; novel optimizations; private matrix operations; quality of service; query as a service; robust query encoding; subgraph isomorphism queries; third-party service provider; Cascading style sheets; Computational modeling; Encoding; Encryption; Optimization; Privacy (ID#: 15-7451)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113296&isnumber=7113253

 

Mundhenk, P.; Steinhorst, S.; Lukasiewycz, M.; Fahmy, S.A.; Chakraborty, S., “Lightweight Authentication for Secure Automotive Networks,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015, vol., no., pp. 285–288, 9–13 March 2015. doi: (not provided)
Abstract: We propose a framework to bridge the gap between secure authentication in automotive networks and on the internet. Our proposed framework allows runtime key exchanges with minimal overhead for resource-constrained in-vehicle networks. It combines symmetric and asymmetric cryptography to establish secure communication and enable secure updates of keys and software throughout the lifetime of the vehicle. For this purpose, we tailor authentication protocols for devices and authorization protocols for streams to the automotive domain. As a result, our framework natively supports multicast and broadcast communication. We show that our lightweight framework is able to initiate secure message streams fast enough to meet the real-time requirements of automotive networks.
Keywords: Internet; authorisation; automobiles; computer network security; cryptographic protocols; asymmetric cryptography; authentication protocols; authorization protocols; broadcast communication; lightweight authentication; multicast communication; resource-constrained in-vehicle networks; runtime key exchanges; secure authentication; secure automotive networks; secure message streams; Authentication; Authorization; Automotive engineering; Encryption; Vehicles (ID#: 15-7452)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092398&isnumber=7092347

 

Saguansakdiyotin, N.; Hiranvanichakorn, P., “Dynamic Broadcast Encryption Based on Braid Groups,” in Defence Technology (ACDT), 2015 Asian Conference on, vol., no., pp. 119–126, 23–25 April 2015. doi:10.1109/ACDT.2015.7111596
Abstract: Broadcast encryption is a scheme in which a sender encrypts messages for a designated group of receivers and sends the ciphertexts by broadcast over the networks. A dynamic broadcast encryption environment must support conditions in which someone can join a group, members can leave a group, a group can join other groups, and a group can be split into smaller groups dynamically. In this paper, we propose a dynamic broadcast encryption scheme based on the braid groups cryptosystem, an alternative public key cryptosystem that can reduce computational cost. Join, leave, merge, and partition protocols are given in our scheme to deal with the dynamic environment. Our scheme has advantages over schemes using a symmetric group key: the sender can be inside or outside the group, and the problem of distributing a secret key is eliminated.
Keywords: broadcast communication; cost reduction; cryptographic protocols; braid groups cryptosystem; ciphertext; computational cost reduction; dynamic broadcast encryption scheme; protocol; secret key distribution; Barium; Elliptic curve cryptography; Encryption; Protocols; Asymmetric Group Key Agreement; Braid Groups; Broadcast Encryption (ID#: 15-7453)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7111596&isnumber=7111567

 

Selim, B.; Chan Yeob Yeun, “Key Management for the MANET: A Survey,” in Information and Communication Technology Research (ICTRC), 2015 International Conference on, vol., no., pp. 326–329, 17–19 May 2015. doi:10.1109/ICTRC.2015.7156488
Abstract: Mobile Ad Hoc Networks (MANETs) are a spontaneous network of mobile devices that do not rely on any kind of fixed infrastructure. In these networks, all the network operations are carried out by nodes themselves. The self-organizing nature of MANETs makes them suitable for many applications and hence, considerable effort has been put into securing this type of networks. Secure communication in a network is determined by the reliability of the key management scheme, which is responsible for generating, distributing and maintaining encryption/decryption keys among the nodes. In this paper we investigate key management schemes for MANETs. We give an overview of available key management schemes for symmetric key, asymmetric key, group key and hybrid key cryptography.
Keywords: cryptography; telecommunication network management; telecommunication security; MANET key management; asymmetric key cryptography; encryption-decryption keys; group key cryptography; hybrid key cryptography; key management scheme; mobile ad hoc networks; mobile devices; network operations; secure communication; symmetric key cryptography; Encryption; Mobile ad hoc networks; Peer-to-peer computing; Public key; Servers; Key management; MANET; asymmetric key; group key; symmetric key (ID#: 15-7454)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7156488&isnumber=7156393

 

Adnan, Syed Farid Syed; Isa, Mohd Anuar Mat; Rahman, Khairul Syazwan Ali; Muhamad, Mohd Hanif; Hashim, Habibah, “Simulation of RSA and ElGamal Encryption Schemes Using RF Simulator,” in Computer Applications & Industrial Electronics (ISCAIE), 2015 IEEE Symposium on, vol., no., pp. 124–128, 12–14 April 2015. doi:10.1109/ISCAIE.2015.7298340
Abstract: Sensor nodes commonly rely on wireless transmission media such as radio frequency (RF) and typically run on top of the CoAP and TFTP protocols, which do not provide any security mechanisms. One method of securing sensor node communication over RF is to implement a lightweight encryption scheme. In this paper, an RF Simulator developed in our previous publication, which simulates lightweight security protocols for RF device communication using the Rivest-Shamir-Adleman (RSA) and ElGamal encryption schemes, is presented. The RF Simulator can be used for fast trialing and debugging of any new wireless security protocol before the actual or experimental implementation of the protocol in physical devices. In our previous work, we showed that the RF Simulator can support a cryptographer or engineer in performing quick product test and development for the Diffie-Hellman Key Exchange (DHKE) and Advanced Encryption Standard (AES) protocols. In this work, we present the simulation results of implementing the RSA and ElGamal encryption schemes using the SW-ARQ protocol in sensor node RF communication. The simulation was performed on the same testbed as the previous works, comprising HP DC7800 PCs and ARM Raspberry Pi boards.
Keywords: Encryption; Error analysis; Protocols; Radio frequency; Wireless sensor networks; Asymmetric; Cryptography; ElGamal; IOT; Lightweight; Privacy; RF; RSA; Radio Frequency; Raspberry Pi; Simulation; Simulator; Stop and Wait ARQ; TFTP; Trivial File Transfer Protocol; Trust; UBOOT; Wi-Fi (ID#: 15-7455)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7298340&isnumber=7298288
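The ElGamal scheme the simulator exercises can be written out in a few lines of textbook arithmetic. The sketch below uses a small Mersenne prime for readability; real deployments need far larger parameters and proper message encoding, so this is a didactic illustration of the scheme the paper simulates, not a secure implementation.

import secrets

p = 2305843009213693951        # the Mersenne prime 2^61 - 1 (toy size, illustration only)
g = 3

def keygen():
    x = secrets.randbelow(p - 2) + 1               # private key
    return x, pow(g, x, p)                         # (private x, public y = g^x mod p)

def encrypt(y, m):
    k = secrets.randbelow(p - 2) + 1               # fresh ephemeral randomness per message
    return pow(g, k, p), (m * pow(y, k, p)) % p    # (c1, c2)

def decrypt(x, c1, c2):
    return (c2 * pow(c1, p - 1 - x, p)) % p        # c1^(p-1-x) = c1^(-x) mod p

x, y = keygen()
m = int.from_bytes(b"node-07", "big")              # 7-byte payload, guaranteed < p
c1, c2 = encrypt(y, m)
assert decrypt(x, c1, c2) == m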

 

Hoa Quoc Le; Hung Phuoc Truong; Hoang Thien Van; Thai Hoang Le, “A New Pre-Authentication Protocol in Kerberos 5: Biometric Authentication,” in Computing & Communication Technologies—Research, Innovation, and Vision for the Future (RIVF), 2015 IEEE RIVF International Conference on, vol., no., pp. 157–162, 25–28 Jan. 2015. doi:10.1109/RIVF.2015.7049892
Abstract: Kerberos is a well-known network authentication protocol that allows nodes to communicate over a non-secure network connection. After Kerberos is used to prove the identity of objects in the client-server model, it encrypts all of their communications in the following steps to assure privacy and data integrity. In this paper, we modify the initial authentication exchange in Kerberos 5 by using biometric data and asymmetric cryptography. The proposed method creates a new pre-authentication protocol that makes Kerberos 5 more secure and overcomes the limitation of password-based authentication in Kerberos 5. It becomes very difficult for a user to repudiate having accessed the application. Moreover, the mechanism of user authentication is more convenient. This method is a strong authentication scheme that resists several attacks.
Keywords: cryptographic protocols; data integrity; data privacy; message authentication; Kerberos 5; asymmetric cryptography; attacks; authentication exchange; biometric authentication; biometric data; client-server model; encryption; network authentication protocol; nonsecure network connection; objects identity; password-based authentication; preauthentication protocol; privacy; user authentication; Authentication; Bioinformatics; Cryptography; Fingerprint recognition; Protocols; Servers; Kerberos; biometric; cryptography; fingerprint (ID#: 15-7456)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7049892&isnumber=7049862
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Digital Signatures and Privacy 2015

 

 
SoS Logo

Digital Signatures and Privacy

2015


A digital signature is one of the most common ways to authenticate. Using a mathematical scheme, the signature assures the reader that the message was created and sent by a known sender. But not all signature schemes are secure. The research challenge is to find new and better ways to protect, transfer, and utilize digital signatures. The articles cited here were published in 2015, and discuss both theory and practice related to privacy issues.



Jalalzai, M.H.; Shahid, W.B.; Iqbal, M.M.W., “DNS Security Challenges and Best Practices to Deploy Secure DNS with Digital Signatures,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 280–285, 13–17 Jan. 2015. doi:10.1109/IBCAST.2015.7058517
Abstract: This paper discusses DNS security vulnerabilities and best practices for addressing DNS security challenges. The Domain Name System (DNS) is the foundation of the internet, translating user-friendly domain names, via Resource Records (RR), into corresponding IP addresses and vice versa. Nowadays DNS services are used not merely for translating domain names: DNS is also used to block spam and to support email authentication such as DKIM and the more recent DMARC, and the TXT records found in DNS are mainly about improving the security of services. Virtually every internet application uses DNS; if it does not work properly, internet communication as a whole breaks down. The security of DNS infrastructure is therefore one of the core requirements for any organization in the current cyber security arena. DNS is a favorite target for attackers because of the damage a successful attack can cause: a breach in DNS security affects the trustworthiness of the whole internet, and organizations whose DNS infrastructure is vulnerable and compromised lose revenue and face downtime, customer dissatisfaction, privacy loss, legal challenges, and more. Although DNS has become the largest distributed database, the only goal at the time of its design was to provide a scalable and available name resolution service; its security perspectives were not focused on and were overlooked. A number of security flaws therefore exist, and additional mechanisms are urgently required to address the known vulnerabilities, the most important being DNS data integrity and availability. For this purpose we introduce a cryptographic framework, configured on an open-source platform by incorporating DNSSEC with the BIND DNS software, which addresses the integrity and availability issues of DNS by establishing a DNS chain of trust using digitally signed DNS data.
Keywords: Internet; computer network security; cryptography; data integrity; data privacy; digital signatures; distributed databases; public domain software; Bind DNS software; DKIM; DMARC; DNS availability issues; DNS chain; DNS data integrity; DNS design; DNS infrastructures; DNS security; DNS security vulnerabilities; DNS services; DNSSEC; IP addresses; Internet application; Internet communication; Internet trustworthiness; cryptographic framework; customer dissatisfaction; cyber security arena; digitally signed DNS data; distributed database; domain name system; email authentication; index TXT services; named based resource records; open source platform; privacy loss; secure DNS; security flaws; user friendly domains; Best practices; Computer crime; Cryptography; Internet; Servers; Software; DNS Security; DNS Vulnerabilities; Digital Signatures; Network and Computer Security; PKI (ID#: 15-7413)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058517&isnumber=7058466
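The chain-of-trust mechanism the paper deploys rests on one primitive: zone data signed with a private key and verified against the published public key. The sketch below signs a serialized resource record with RSA and verifies it, using the Python cryptography package; the record layout and key size are simplified stand-ins for real RRSIG/DNSKEY processing in BIND/DNSSEC.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Zone-signing key pair (the public half would be published as a DNSKEY record).
zsk = rsa.generate_private_key(public_exponent=65537, key_size=2048)

record = b"www.example.org. 3600 IN A 192.0.2.10"   # simplified RR serialization
signature = zsk.sign(record, padding.PKCS1v15(), hashes.SHA256())  # RRSIG stand-in

# A validating resolver checks the signature before trusting the answer.
zsk.public_key().verify(signature, record, padding.PKCS1v15(), hashes.SHA256())
print("record validated: safe to use")

# A tampered record (e.g., a redirected A record) fails validation.
try:
    zsk.public_key().verify(signature, b"www.example.org. 3600 IN A 203.0.113.66",
                            padding.PKCS1v15(), hashes.SHA256())
except InvalidSignature:
    print("tampered record rejected")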

 

Vegh, L.; Miclea, L., “A Simple Scheme for Security and Access Control in Cyber-Physical Systems,” in Control Systems and Computer Science (CSCS), 2015 20th International Conference on, vol., no., pp. 294–299, 27–29 May 2015. doi:10.1109/CSCS.2015.13
Abstract: In a time when technology changes continuously, and what you need today to run a certain system might not be needed tomorrow, security is a constant requirement. No matter what systems we have or how we structure them, and no matter what means of digital communication we use, we are always interested in aspects like security, safety, and privacy. Cyber-physical systems are an example of this ever-advancing technology. We propose a complex security architecture that integrates several established methods such as cryptography, steganography, and digital signatures. This architecture is designed not only to ensure the security of communication by transforming data into secret code; it is also designed to control access to the system and to detect and prevent cyber attacks.
Keywords: authorisation; cryptography; digital signatures; steganography; access control; cyber attacks; cyber-physical system; security architecture; security requirement; system security; Computer architecture; Digital signatures; Encryption; Public key; access control; cyber-physical systems; digital signatures; multi-agent systems  (ID#: 15-7414)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168445&isnumber=7168393

 

Cattaneo, Giuseppe; Catuogno, Luigi; Petagna, Fabio; Roscigno, Gianluca, “Reliable Voice-Based Transactions over VoIP Communications,” in Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, vol., no., pp. 101–108, 8–10 July 2015. doi:10.1109/IMIS.2015.20
Abstract: Nowadays, plenty of sensitive transactions are provided through call centers, such as banking operations, goods purchases, and contract signing. Besides communication confidentiality, two major issues arise in this scenario: (1) each peer should be assured of the identity of the other, and (2) each peer should be guaranteed that the other cannot cheat about the communication contents. Current telecommunication (TLC) networks offer built-in mechanisms, or allow several others, to enhance the security and reliability of human conversations, leveraging strong authentication mechanisms and cryptography. However, in most cases these solutions require complex deployments, mainly based on proprietary technologies which are often characterized by high costs and low flexibility. In this paper we present a solution for strong peer authentication and non-repudiability of human conversations over Voice over IP (VoIP) networks. Our solution achieves low costs and high interoperability as it is built on top of open standard technologies. The authentication and key-agreement mechanism is based on X.509 digital certificates and fully PKCS#11-compliant cryptographic tokens. As proof of concept, we present and discuss a prototype implementation.
Keywords: Authentication; Cryptography; Digital signatures; Protocols; Prototypes; Standards; Non-repudiable Communication; Peer Authentication; Privacy; Smart Card; VoIP (ID#: 15-7415)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284934&isnumber=7284886

 

Qu, F.; Wu, Z.; Wang, F.-Y.; Cho, W., “A Security and Privacy Review of VANETs,” in Intelligent Transportation Systems, IEEE Transactions on, vol. 16, no. 6, pp. 2985–2996, Dec. 2015. doi:10.1109/TITS.2015.2439292
Abstract: Vehicular ad hoc networks (VANETs) have stimulated interest in both academic and industry settings because, once deployed, they would bring a new driving experience to drivers. However, communicating in an open-access environment makes security and privacy issues a real challenge, which may affect the large-scale deployment of VANETs. Researchers have proposed many solutions to these issues. We start this paper by providing background information on VANETs and classifying the security threats that challenge them. After clarifying the requirements that proposed solutions to security and privacy problems in VANETs should meet, we present, on the one hand, the general secure process and point out the authentication methods involved; a detailed survey of these authentication algorithms, followed by discussion, comes afterward. On the other hand, privacy-preserving methods are reviewed, and the tradeoff between security and privacy is discussed. Finally, we provide an outlook on how to detect and revoke malicious nodes more efficiently, and on challenges that have yet to be solved.
Keywords: Authentication; Cryptography; Digital signatures; Privacy; Vehicles; Vehicular ad hoc networks; VANETs; privacy; security; survey (ID#: 15-7416)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7127003&isnumber=4358928

 

Bos, J.W.; Costello, C.; Naehrig, M.; Stebila, D., “Post-Quantum Key Exchange for the TLS Protocol from the Ring Learning with Errors Problem,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 553–570, 17–21 May 2015. doi:10.1109/SP.2015.40
Abstract: Lattice-based cryptographic primitives are believed to offer resilience against attacks by quantum computers. We demonstrate the practicality of post-quantum key exchange by constructing cipher suites for the Transport Layer Security (TLS) protocol that provide key exchange based on the ring learning with errors (R-LWE) problem; we accompany these cipher suites with a rigorous proof of security. Our approach ties lattice-based key exchange together with traditional authentication using RSA or elliptic curve digital signatures: the post-quantum key exchange provides forward secrecy against future quantum attackers, while authentication can be provided using RSA keys that are issued by today’s commercial certificate authorities, smoothing the path to adoption. Our cryptographically secure implementation, aimed at the 128-bit security level, reveals that the performance price when switching from non-quantum-safe key exchange is not too high. With our R-LWE cipher suites integrated into the OpenSSL library and using the Apache web server on a 2-core desktop computer, we could serve 506 RLWE-ECDSA-AES128-GCM-SHA256 HTTPS connections per second for a 10 KiB payload. Compared to elliptic curve Diffie-Hellman, this means an 8 KiB increase in handshake size and a reduction in throughput of only 21%. This demonstrates that provably secure post-quantum key exchange can already be considered practical.
Keywords: cryptographic protocols; digital signatures; public key cryptography; quantum cryptography; 2-core desktop computer; Apache Web server; R-LWE cipher suites; RLWE-ECDSA-AES128-GCM-SHA256 HTTPS; RSA keys; TLS protocol; authentication; commercial certificate authority; elliptic curve Diffie-Hellman; elliptic curve digital signatures; handshake size; lattice-based cryptographic primitives; lattice-based key exchange; nonquantum-safe key exchange; open SSL library; post-quantum key exchange; quantum attackers; quantum computers; ring learning with error problem; security level; transport layer security protocol; Authentication; Computers; Cryptography; Lattices; Protocols; Quantum computing; Transport Layer Security (TLS); key exchange; learning with errors; post-quantum (ID#: 15-7417)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163047&isnumber=7163005

 

Arifeen, F.U.; Siddiqui, R.A.; Ashraf, S.; Waheed, S., “Inter-Cloud Authentication through X.509 for Defense Organization,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 299–306, 13–17 Jan. 2015. doi:10.1109/IBCAST.2015.7058520
Abstract: Over recent years of research in cloud computing, different approaches have been adopted for Inter-Cloud Authentication, and these approaches give successful results in identifying authentic requests. Defense organizations communicate with each other through legitimate requests, and to establish security and privacy, a PKI-based authentication model is needed. This paper presents a new approach to implementing cloud-based PKI authentication inside the existing infrastructure of a defense organization. As security is the prime concern for any organization and its implementation requirements vary from organization to organization, each organization embraces its own policies to implement it. Understanding each other’s security policies is a huge barrier and a challenge for existing IT infrastructure. Inter-Cloud Authentication is made possible through this PKI-based model, which ensures all five security services, i.e., confidentiality, integrity, authentication, digital signature, and non-repudiation. This PKI model provides a multi-domain environment between various defense organizations and their Data Centers (DC) for facilitation and resource provisioning inside the cloud platform. The model utilizes the existing network infrastructure, which carries high intercommunication traffic between the various Data Centers of defense organizations. In this model, a nationwide Certification Authority (CA) is implemented in the Inter-Cloud infrastructure, and all other Data Centers intercommunicate through this mechanism, with different authentication approaches for legitimate access through X.509 Certificates.
Keywords: cloud computing; computer centres; computer network security; data integrity; data privacy; digital signatures; organisational aspects; public key cryptography; telecommunication traffic; IT infrastructure; PKI based authentication model; X.509 certification authority; cloud based PKI authentication; cloud platform; data center; data confidentiality; defense organization; digital signature; intercloud authentication; intercloud infrastructure; intercommunication traffic; multidomain atmosphere; network infrastructure; non-repudiation; resource provisioning; security policies; security services; Hardware; Organizations; Public key cryptography; Software; Virtual private networks; Certification Authority (CA); Data Centers; Inter-Cloud; Master CA; Public Key Infrastructure (PKI); VPN; X.509 Certificate Services (ID#: 15-7418)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058520&isnumber=7058466
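The nationwide CA at the root of the proposed model boils down to issuing X.509 certificates. The sketch below creates a self-signed CA certificate with the Python cryptography package — the same shape of artifact the paper's Certification Authority would distribute; the subject name, validity period, and key size are placeholder values.

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Defense Inter-Cloud Root CA")])
now = datetime.datetime.utcnow()

ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)                          # self-signed root: issuer == subject
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)
print(ca_cert.subject.rfc4514_string())            # data centers would pin this root certificate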

 

Rashid, F.; Miri, A.; Woungang, I., “A Secure Video Deduplication Scheme in Cloud Storage Environments Using H.264 Compression,” in Big Data Computing Service and Applications (BigDataService), 2015 IEEE First International Conference on, vol., no., pp. 138–146, March 30 2015–April 2 2015. doi:10.1109/BigDataService.2015.15
Abstract: Due to the rapidly increasing amounts of digital data produced worldwide, multi-user cloud storage systems are becoming very popular, and Internet users are approaching cloud storage providers (CSPs) to upload their data to the clouds. Among these data, digital videos are fairly huge in terms of storage cost and size, and techniques that can help reduce the cloud storage cost and size are always desired. This paper argues that data deduplication can ease the problem of BigData storage by identifying and removing duplicate copies from cloud storage. Although deduplication maximizes the storage space and minimizes the storage costs, it comes with serious issues of data privacy and security. Though users desire to save some cost by allowing the CSP to deduplicate their data, they do not want the CSP to weaken the privacy of their data. In this paper, a scheme is proposed that achieves secure video deduplication in cloud storage environments. Its design consists of embedding a partial convergent encryption along with a unique signature generation scheme into an H.264 video compression scheme. The partial convergent encryption scheme is meant to ensure that the proposed scheme is secure against a semi-honest CSP; the unique signature generation scheme is meant to enable classification of the encrypted compressed video data in such a way that deduplication can be efficiently performed on it. Experimental results and security analysis are provided to validate the stated goals.
Keywords: Big Data; cloud computing; cryptography; data compression; digital signatures; video coding; Big Data storage; CSP; H.264 video compression; cloud storage provider; data reduplication; partial convergent encryption scheme; signature generation scheme; video deduplication scheme security; Cloud computing; Compression algorithms; Encryption; Streaming media; Transforms; BigData security; cloud storage provider; group of pictures (GOP); partial convergent encryption; signature generation; video deduplication (ID#: 15-7419)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7184874&isnumber=7184847
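The partial convergent encryption at the heart of the scheme rests on a simple property: deriving the key from the content itself makes identical plaintexts produce identical ciphertexts, so the CSP can deduplicate without reading the data. The sketch below shows plain convergent encryption with AES-GCM from the Python cryptography package; the paper's scheme is only partially convergent and works per GOP inside H.264, which this file-level sketch does not model.

import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(data: bytes):
    """Key and nonce are derived from the content, so equal inputs give
    equal ciphertexts -- exactly what server-side deduplication needs."""
    key = hashlib.sha256(data).digest()            # content-derived key
    nonce = hashlib.sha256(key).digest()[:12]      # deterministic nonce (safe: one message per key)
    return AESGCM(key).encrypt(nonce, data, None), key, nonce

video_a = b"H.264 bitstream uploaded by user A"
video_b = b"H.264 bitstream uploaded by user A"    # duplicate copy from user B

ct_a, key_a, nonce_a = convergent_encrypt(video_a)
ct_b, key_b, nonce_b = convergent_encrypt(video_b)
assert ct_a == ct_b                                # CSP stores one copy, never sees plaintext
assert AESGCM(key_a).decrypt(nonce_a, ct_a, None) == video_a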

 

Jung Yeon Hwang; Liqun Chen; Hyun Sook Cho; DaeHun Nyang, “Short Dynamic Group Signature Scheme Supporting Controllable Linkability,” in Information Forensics and Security, IEEE Transactions on, vol. 10, no. 6, pp.1109–1124, June 2015. doi:10.1109/TIFS.2015.2390497
Abstract: The controllable linkability of group signatures introduced by Hwang et al. enables an entity who has a linking key to find whether or not two group signatures were generated by the same signer, while preserving anonymity. This functionality is very useful in many applications that require linkability but still need anonymity, such as Sybil attack detection in a vehicular ad hoc network and privacy-preserving data mining. In this paper, we present a new group signature scheme supporting controllable linkability. The major advantage of this scheme is that the signature length is very short, even shorter than that of the best-known group signature scheme without linkability support. We have implemented our scheme on both a Linux machine with an Intel Core2 Quad and an iPhone 4. We compare the results with a number of existing group signature schemes. We also prove security features of our scheme, such as anonymity, traceability, non-frameability, and linkability, under a random oracle model.
Keywords: data privacy; digital signatures; Intel Core2 Quad; Linux machine; anonymity feature; anonymity preservation; controllable linkability; iPhone4; linkability feature; linking key; nonframeability feature; random oracle model; security features; short dynamic group signature scheme; signature length; traceability feature; Indexes; Joining processes; Privacy; Protocols; Public key; Synchronous digital hierarchy; Anonymity; Group signature; Linkability; group signature; linkability; privacy (ID#: 15-7420)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7006753&isnumber=7084215

 

Jun Zhou; Xiaodong Lin; Xiaolei Dong; Zhenfu Cao, “PSMPA: Patient Self-Controllable and Multi-Level Privacy-Preserving Cooperative Authentication in Distributed m-Healthcare Cloud Computing System,” in Parallel and Distributed Systems, IEEE Transactions on, vol. 26, no. 6, pp.1693–1703, June 1 2015. doi:10.1109/TPDS.2014.2314119
Abstract: A distributed m-healthcare cloud computing system significantly facilitates efficient patient treatment for medical consultation by sharing personal health information among healthcare providers. However, it brings about the challenge of keeping both the data confidentiality and patients’ identity privacy simultaneously. Many existing access control and anonymous authentication schemes cannot be straightforwardly applied. To solve the problem, in this paper, a novel authorized accessible privacy model (AAPM) is established. Patients can authorize physicians by setting an access tree supporting flexible threshold predicates. Then, based on it, by devising a new technique of attribute-based designated verifier signature, a patient self-controllable multi-level privacy-preserving cooperative authentication scheme (PSMPA) realizing three levels of security and privacy requirements in the distributed m-healthcare cloud computing system is proposed. The directly authorized physicians, the indirectly authorized physicians, and the unauthorized persons in medical consultation can respectively decipher the personal health information and/or verify patients’ identities by satisfying the access tree with their own attribute sets. Finally, the formal security proof and simulation results illustrate that our scheme can resist various kinds of attacks and far outperforms previous ones in terms of computational, communication, and storage overhead.
Keywords: authorisation; cloud computing; data privacy; digital signatures; health care; mobile computing; patient treatment; AAPM; PSMPA; access tree; attribute sets; attribute-based designated verifier signature; authorized accessible privacy model; data confidentiality; distributed m-healthcare cloud computing system; formal security proof; healthcare providers; medical consultation; patient identity privacy; patient self-controllable and multilevel privacy-preserving cooperative authentication; personal health information sharing; privacy requirement; security requirement; threshold predicates; Authentication; Cloud computing; Computational modeling; Medical services; Privacy; Public key; access control; distributed cloud computing; m-healthcare system; security and privacy (ID#: 15-7421)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779640&isnumber=7106033

 

Han Yiliang; Lu Wanyi, “Attribute Based Generalized Signcryption for Online Social Network,” in Control Conference (CCC), 2015 34th Chinese, pp. 6434–6439, 28–30 July 2015. doi:10.1109/ChiCC.2015.7260653
Abstract: Online social networks have brought varied and flexible security demands. Attribute based generalized signcryption (ABGSC) can provide combined or separate confidentiality and authentication adaptively, and eliminate the bottleneck of traditional public key encryption. We propose an attribute based generalized signcryption with non-monotonic access structures, which can perform signcryption, encryption and signature adaptively. The non-monotonic access structure is used to realize the “OR”, “AND”, “NEG” and “Threshold” operations; the inner product is used to achieve constant-size ciphertext. Under the encryption mode the ciphertext length is 2|G|+nm, under the signature mode it is 3|G|+nm, and under the signcryption mode it is 5|G|+nm, so the efficiency improves greatly. Under the q-DBDHE assumption in the standard model, the scheme is proved confidential under the signcryption and encryption modes, and unforgeable under the signcryption and signature modes.
Keywords: digital signatures; public key cryptography; social networking (online); ABGSC; attribute based generalized signcryption; authentication; cipher text length; combined confidentiality; constant cipher text; encryption mode; nonmonotonic access structure; online social network; public key encryption; q-DBDHE assumption; separate confidentiality; signature mode; signcryption mode; Ciphers; Encryption; Games; Privacy; Social network services; attribute based encryption; generalized signcryption; signcryption (ID#: 15-7422)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7260653&isnumber=7259602

 

Hefeeda, M.; ElGamal, T.; Calagari, K.; Abdelsadek, A., “Cloud-Based Multimedia Content Protection System,” in IEEE Transactions on Multimedia, vol. 17, no. 3, pp. 420–433, March 2015. doi:10.1109/TMM.2015.2389628
Abstract: We propose a new design for large-scale multimedia content protection systems. Our design leverages cloud infrastructures to provide cost efficiency, rapid deployment, scalability, and elasticity to accommodate varying workloads. The proposed system can be used to protect different multimedia content types, including 2-D videos, 3-D videos, images, audio clips, songs, and music clips. The system can be deployed on private and/or public clouds. Our system has two novel components: (i) a method to create signatures of 3-D videos, and (ii) a distributed matching engine for multimedia objects. The signature method creates robust and representative signatures of 3-D videos that capture the depth signals in these videos; it is computationally efficient to compute and compare, and it requires little storage. The distributed matching engine achieves high scalability and is designed to support different multimedia objects. We implemented the proposed system and deployed it on two clouds: the Amazon cloud and our private cloud. Our experiments with more than 11,000 3-D videos and 1 million images show the high accuracy and scalability of the proposed system. In addition, we compared our system to the protection system used by YouTube; our results show that the YouTube protection system fails to detect most copies of 3-D videos, while our system detects more than 98% of them. This comparison shows the need for the proposed 3-D signature method, since the state-of-the-art commercial system was not able to handle 3-D videos.
Keywords: cloud computing; data privacy; digital signatures; image matching; video signal processing; 3D video handling; 3D video signature; Amazon cloud; YouTube protection system; cloud infrastructure; cloud-based multimedia content protection system; distributed matching engine; multimedia content types; private cloud; public cloud; Cloud computing; Engines; Multimedia communication; Streaming media; Three-dimensional displays; Videos; 3-D video; cloud applications; depth signatures; video copy detection; video fingerprinting (ID#: 15-7423)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7005542&isnumber=7041253

 

Alzahrani, A.J.; Ghorbani, A.A., “Real-Time Signature-Based Detection Approach for SMS Botnet,” in Privacy, Security and Trust (PST), 2015 13th Annual Conference on, vol., no., pp. 157–164, 21–23 July 2015. doi:10.1109/PST.2015.7232968
Abstract: As an open platform for mobile electronic devices, Android is experiencing a steady growth in the number of published applications (apps). Features of the Android platform have caught the attention of malicious users who have targeted the Short Message Service (SMS) to abuse its permissions. Various types of attack, referred to as botnets, can be executed without the user’s knowledge by taking advantage of SMS messages, such as sending text message spam, transferring all command and control (C&C) instructions, launching denial-of-service (DoS) attacks, sending premium-rate SMS messages, or distributing malicious applications via URLs embedded in text messages. In this paper, we propose a real-time signature-based detection mechanism to combat SMS botnets, in which we first apply pattern-matching detection approaches for incoming and outgoing SMS text messages, and then use rule-based techniques to label unknown SMS messages as suspicious or normal. This approach was evaluated using over 12,000 test messages. It was able to detect all 747 malicious SMS messages in the dataset (100% detection rate with no false negatives). It also flagged 351 SMS messages as suspicious.
Keywords: computer crime; computer network security; digital signatures; electronic messaging; invasive software; mobile computing; pattern matching; smart phones; Android platform; C&C instructions; DoS attacks; SMS botnets; SMS messages labelling; URL; attack types; command and control instructions; denial-of-service attacks; malicious applications distribution; malicious users; mobile electronic devices; pattern-matching detection; premium-rate SMS messages; real-time signature-based detection approach; rule-based techniques; short message service; text message spam; Feature extraction; Malware; Mobile communication; Pattern matching; Smart phones; Android; Botnet Detection; Mobile Malware; SMS (ID#: 15-7424)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232968&isnumber=7232940
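
As an editorial illustration of the two-stage pipeline the abstract describes (signature matching against known-malicious patterns, then rule-based labeling of unknown messages), the following minimal Python sketch may help; the signature patterns and scoring rules are hypothetical stand-ins, not the authors’ actual detection set.

import re

# Hypothetical signature patterns for known-malicious SMS (illustrative only).
SIGNATURES = [
    re.compile(r"https?://\S*\.(ru|cn)/[a-z0-9]{8}", re.I),  # suspicious embedded URLs
    re.compile(r"\bpremium\b.*\breply\b", re.I),             # premium-rate lures
]

# Hypothetical heuristics for labeling unknown messages as suspicious.
def rule_score(msg):
    score = 0
    if "http" in msg.lower():
        score += 1                           # embedded URL
    if sum(c.isdigit() for c in msg) > 10:
        score += 1                           # long digit runs (premium numbers)
    if len(msg) > 140:
        score += 1                           # unusually long message
    return score

def classify(msg):
    # Stage 1: pattern matching against known botnet signatures.
    if any(p.search(msg) for p in SIGNATURES):
        return "malicious"
    # Stage 2: rule-based labeling of unknown messages.
    return "suspicious" if rule_score(msg) >= 2 else "normal"

print(classify("Win big! Text PREMIUM now http://evil.cn/ab12cd34"))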

 

Xinyi Huang; Liu, J.K.; Shaohua Tang; Yang Xiang; Kaitai Liang; Li Xu; Jianying Zhou, “Cost-Effective Authentic and Anonymous Data Sharing with Forward Security,” in IEEE Transactions on Computers, vol. 64, no. 4, pp. 971–983, April 1 2015. doi:10.1109/TC.2014.2315619
Abstract: Data sharing has never been easier with the advances of cloud computing, and an accurate analysis on the shared data provides an array of benefits to both the society and individuals. Data sharing with a large number of participants must take into account several issues, including efficiency, data integrity and privacy of data owner. Ring signature is a promising candidate to construct an anonymous and authentic data sharing system. It allows a data owner to anonymously authenticate his data which can be put into the cloud for storage or analysis purpose. Yet the costly certificate verification in the traditional public key infrastructure (PKI) setting becomes a bottleneck for this solution to be scalable. Identity-based (ID-based) ring signature, which eliminates the process of certificate verification, can be used instead. In this paper, we further enhance the security of ID-based ring signature by providing forward security: If a secret key of any user has been compromised, all previous generated signatures that include this user still remain valid. This property is especially important to any large scale data sharing system, as it is impossible to ask all data owners to reauthenticate their data even if a secret key of one single user has been compromised. We provide a concrete and efficient instantiation of our scheme, prove its security and provide an implementation to show its practicality.
Keywords: cloud computing; data analysis; digital signatures; public key cryptography; storage management; ID-based ring signature; PKI; analysis purpose; anonymous data sharing; certificate verification; cost-effective authentic data sharing; forward security; identity-based ring signature; public key infrastructure; shared data analysis; storage; Data handling; Educational institutions; Information management; Public key; Smart grids; Authentication; data sharing; smart grid (ID#: 15-7425)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6782632&isnumber=7056411

 

Yan Liu; Xiaoming Hu; Xiaojun Zhang; Jian Wang; Yinchun Yang, “Efficient Strong Designated Verifier Proxy Signature Scheme with Low Cost,” in Advanced Communication Technology (ICACT), 2015 17th International Conference on, vol., no., pp. 568–572, 1–3 July 2015. doi:10.1109/ICACT.2015.7224860
Abstract: A designated verifier proxy signature is a special proxy signature in which only the designated verifier can verify its validity. So far, numerous strong designated verifier proxy signature (DVPST) schemes have been proposed. However, many of them have been shown to be vulnerable to forgery attacks or to have high computational cost. In 2012, Lin et al. proposed a highly efficient and strong DVPST scheme in the random oracle model. In this paper, however, we show that Lin et al.’s strong DVPST scheme does not satisfy unforgeability. To overcome this problem, based on the hardness of the discrete logarithm problem, we present a new strong DVPST scheme. We also make a detailed analysis and comparison of security and efficiency with other related schemes, including Lin et al.’s scheme. The analysis shows that our scheme not only has excellent performance in terms of computation cost and communication cost but also possesses unforgeability, non-transferability and privacy of the signer’s identity.
Keywords: computational complexity; data privacy; digital signatures; DVPST scheme; communication cost; computation cost; designated verifier proxy signature scheme; discrete logarithm problem hardness; forgery attack; random oracle model; signer identity privacy; Computational efficiency; Computers; Forgery; Privacy; Public key; Voltage control; information security; proxy signature; strong designated verifier signature (ID#: 15-7426)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7224860&isnumber=7224736

 

Jun Zhou; Zhenfu Cao; Xiaolei Dong; Xiaodong Lin, “TR-MABE: White-Box Traceable and Revocable Multi-Authority Attribute-Based Encryption and Its Applications to Multi-Level Privacy-Preserving E-Healthcare Cloud Computing Systems,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 2398–2406, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218628
Abstract: Cloud-assisted e-healthcare systems significantly facilitate patients in outsourcing their personal health information (PHI) for medical treatment of high quality and efficiency. Unfortunately, a series of unaddressed security and privacy issues dramatically impedes their practicability and popularity. In e-healthcare systems, it is expected that only the primary physicians responsible for a patient’s treatment can both access the PHI content and verify the real identity of the patient. Secondary physicians participating in medical consultation and/or research tasks, however, are only permitted to view or use the content of the protected PHI, while unauthorized entities cannot obtain anything. Existing work mainly focuses on patients’ conditional identity privacy by exploiting group signatures, which are very computationally costly. In this paper, we propose a white-box traceable and revocable multi-authority attribute-based encryption scheme named TR-MABE to efficiently achieve multilevel privacy preservation without introducing additional special signatures. It can efficiently prevent secondary physicians from learning the patient’s identity. Also, it can efficiently track the physicians who leak secret keys used to protect the patient’s identity and PHI. Finally, a formal security proof and extensive simulations demonstrate the effectiveness and practicability of our proposed TR-MABE in e-healthcare cloud computing systems.
Keywords: cloud computing; cryptography; data privacy; digital signatures; health care; medical information systems; PHI; TR-MABE encryption; cloud-assisted e-healthcare systems; e-healthcare cloud computing systems; electronic health care; formal security proof; group signatures; medical consultation; medical research; medical treatment; multilevel privacy-preserving e-healthcare; patient identity; patient treatment; patients conditional identity privacy; personal health information; privacy issue; security issue; white-box traceable revocable multiauthority attribute-based encryption; Access control; Cloud computing; Encryption; Medical services; Privacy; Cloud computing system; attribute-based encryption; multi-authority; traceability and revocability (ID#: 15-7427)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218628&isnumber=7218353

 

Zhang, A.; Chen, J.; Hu, R.Q.; Qian, Y., “SeDS: Secure Data Sharing Strategy for D2D Communication in LTE-Advanced Networks,” in IEEE Transactions on Vehicular Technology, vol. PP, no. 99, pp.1–1, 23 March 2015. doi:10.1109/TVT.2015.2416002
Abstract: Security and availability are two crucial issues in Device-to-Device (D2D) communication, given its fast development in 4G LTE-Advanced networks. In this paper, we propose a secure data sharing protocol, which merges the advantages of public key cryptography and symmetric encryption, to achieve data security in D2D communication. Specifically, a public-key-based digital signature combined with the mutual authentication mechanism of the cellular network guarantees entity authentication, transmission non-repudiation, traceability, data authority, and integrity. Meanwhile, symmetric encryption is employed to ensure data confidentiality. A salient feature of the proposed protocol is that it can detect free-riding attacks by keeping a record of the current status of the user equipments (UEs) and realize reception non-repudiation through key hint transmission between the UE and the evolved NodeB, thus improving system availability. Furthermore, various delay models are established for different application scenarios to find the optimal initial service providers and achieve a tradeoff between cost and availability. Extensive analysis and simulations demonstrate that the proposed protocol is indeed an efficient and practical solution for secure data sharing in D2D communication.
Keywords: Authentication; Availability; Data privacy; Encryption; Indexes; Protocols; D2D; LTE-Advanced network; availability (ID#: 15-7428)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7065294&isnumber=4356907
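
The protocol above combines a public-key digital signature (for authenticity and non-repudiation) with symmetric encryption (for confidentiality). A generic sign-then-encrypt sketch in that spirit, written against the pyca/cryptography library, follows; it is not the SeDS protocol itself, and key distribution, the eNB’s role, and the free-riding record are omitted.

# Generic sign-then-encrypt sketch: signature for non-repudiation,
# symmetric AEAD cipher for confidentiality. Illustrative assumptions:
# the session key is pre-agreed, and there is no cellular handshake.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

signer = Ed25519PrivateKey.generate()              # transmitter's signing key
session_key = AESGCM.generate_key(bit_length=128)  # shared symmetric key (assumed)

def send(data):
    sig = signer.sign(data)                        # non-repudiable signature
    nonce = os.urandom(12)
    ct = AESGCM(session_key).encrypt(nonce, data + sig, None)  # confidentiality
    return nonce, ct

def receive(nonce, ct):
    pt = AESGCM(session_key).decrypt(nonce, ct, None)
    data, sig = pt[:-64], pt[-64:]                 # Ed25519 signatures are 64 bytes
    signer.public_key().verify(sig, data)          # raises InvalidSignature on tampering
    return data

print(receive(*send(b"shared content block")))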

 

Yu, C.-M.; Chen, C.-Y.; Chao, H.-C., “Privacy-Preserving Multikeyword Similarity Search over Outsourced Cloud Data,” in IEEE Systems Journal, vol. PP, no. 99, pp.1–10, March 2015. doi:10.1109/JSYST.2015.2402437
Abstract: The amount of data generated by individuals and enterprises is rapidly increasing. With the emerging cloud computing paradigm, the data and the corresponding complex management tasks can be outsourced to the cloud for management flexibility and cost savings. Unfortunately, as the data could be sensitive, direct data outsourcing raises the problem of privacy leakage. Encryption can be applied before outsourcing, with the concern that operations can still be accomplished by the cloud. We consider multikeyword similarity search over outsourced cloud data. In particular, considering text data only, multiple keywords are specified by the user. The cloud returns the files containing more than a threshold number of input keywords or similar keywords, where similarity is defined according to the edit distance metric. We propose three solutions, where a blind signature provides user access privacy, and a novel use of the Bloom filter’s bit pattern speeds up the search task at the cloud side. Our final design is secure against insider threats and efficient in terms of search time at the cloud side. Performance evaluation and analysis demonstrate the practicality of our proposed solutions.
Keywords: Authorization; Data privacy; Educational institutions; Encryption; Keyword search; Privacy; Cloud computing; counterintelligence; outsourced data; privacy; similarity search (ID#: 15-7429)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7055228&isnumber=4357939
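
The speedup the authors obtain from the Bloom filter’s bit pattern rests on a standard structure worth recalling: a set of keywords is hashed into a fixed-size bit array, and membership tests need only bit lookups. The sketch below is a generic, unencrypted Bloom filter shown purely to illustrate the bit-pattern idea; the hash choices and sizes are arbitrary assumptions.

import hashlib

class BloomFilter:
    # Minimal Bloom filter: each keyword sets k bit positions derived from
    # hashes; a lookup checks those positions. False positives are possible,
    # false negatives are not.
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, word):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{word}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, word):
        for p in self._positions(word):
            self.bits |= 1 << p

    def might_contain(self, word):
        return all(self.bits >> p & 1 for p in self._positions(word))

bf = BloomFilter()
for kw in ["privacy", "cloud", "search"]:
    bf.add(kw)
print(bf.might_contain("cloud"), bf.might_contain("botnet"))  # True False (w.h.p.)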

 

Xiaochuan Lin; Ruizhong Wei, “Vector Signature for Face Recognition,” in Computer Supported Cooperative Work in Design (CSCWD), 2015 IEEE 19th International Conference on, vol., no., pp. 413–418, 6–8 May 2015. doi:10.1109/CSCWD.2015.7230995
Abstract: In this paper, we propose a vector signature scheme for face recognition. Using the signature, both the database size and the communication bandwidth can be reduced, and the privacy of the face image is also improved. An experimental implementation shows the potential of the new proposal.
Keywords: data privacy; digital signatures; face recognition; communication bandwidth; database size; face image privacy; face recognition; vector signature; Calibration; Databases; Euclidean distance; Active Appearance Model; Face recognition; Signature (ID#: 15-7430)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7230995&isnumber=7230917

 

Petit, J.; Schaub, F.; Feiri, M.; Kargl, F., “Pseudonym Schemes in Vehicular Networks: A Survey,” in IEEE Communications Surveys & Tutorials,  vol. 17, no. 1, pp. 228–255, Firstquarter 2015. doi:10.1109/COMST.2014.2345420
Abstract: Safety-critical applications in cooperative vehicular networks require authentication of nodes and messages. Yet, privacy of individual vehicles and drivers must be maintained. Pseudonymity can satisfy both security and privacy requirements. Thus, a large body of work emerged in recent years, proposing pseudonym solutions tailored to vehicular networks. In this survey, we detail the challenges and requirements for such pseudonym mechanisms, propose an abstract pseudonym lifecycle, and give an extensive overview and categorization of the state of the art in this research area. Specifically, this survey covers pseudonym schemes based on public key and identity-based cryptography, group signatures and symmetric authentication. We compare the different approaches, give an overview of the current state of standardization, and identify open research challenges.
Keywords: data privacy; digital signatures; intelligent transportation systems; public key cryptography; vehicular ad hoc networks; abstract pseudonym lifecycle; cooperative vehicular networks; group signatures; identity-based cryptography; intelligent transport systems; message authentication; node authentication; privacy requirements; pseudonym mechanisms; pseudonym solutions; public key cryptography; safety-critical applications; security requirements; symmetric authentication; vehicular networks; Authentication; Licenses; Privacy; Tutorials; Vehicles; Vehicular ad hoc networks; Anonymity; ITS; V2X communications; VANET; authentication; privacy; pseudonym; unlinkability; untraceability (ID#: 15-7431)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6873216&isnumber=7061782 
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Efficient Encryption 2015

 

 
SoS Logo

Efficient Encryption

2015

 
The term “efficient encryption” generally refers to the speed of an algorithm, that is, the time needed to complete the calculations to encrypt or decrypt a coded text. The research cited here takes a broader view, looking at hardware and software as well as power consumption. The research relates to cyber physical systems, resilience, and composability. The works cited here appeared in 2015.



Wu, Q.; Qin, B.; Zhang, L.; Domingo-Ferrer, J.; Farras, O.; Manjón, J.A., “Contributory Broadcast Encryption with Efficient Encryption and Short Ciphertexts,” in IEEE Transactions on Computers, vol. 65, no. 2, pp. 466–479, February 2016. doi:10.1109/TC.2015.2419662
Abstract: Broadcast encryption (BE) schemes allow a sender to securely broadcast to any subset of members but require a trusted party to distribute decryption keys. Group key agreement (GKA) protocols enable a group of members to negotiate a common encryption key via open networks so that only the group members can decrypt the ciphertexts encrypted under the shared encryption key, but a sender cannot exclude any particular member from decrypting the ciphertexts. In this paper, we bridge these two notions with a hybrid primitive referred to as contributory broadcast encryption (ConBE). In this new primitive, a group of members negotiate a common public encryption key while each member holds a decryption key. A sender seeing the public group encryption key can limit the decryption to a subset of members of his choice. Following this model, we propose a ConBE scheme with short ciphertexts. The scheme is proven to be fully collusion-resistant under the decision n-Bilinear Diffie-Hellman Exponentiation (BDHE) assumption in the standard model. Of independent interest, we present a new BE scheme that is aggregatable. The aggregatability property is shown to be useful to construct advanced protocols.
Keywords: Encryption; Games; Protocols; Public key; Receivers; Broadcast encryption; contributory broadcast encryption; group key agreement; provable security (ID#: 15-7648)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7079389&isnumber=4358213

 

Wang Jing; Huang Chuanhe; Yang Kan; Wang Jinhai; Wang Xiaomao; Chen Xi, “MAVP-FE: Multi-Authority Vector Policy Functional Encryption with Efficient Encryption and Decryption,” in China Communications, vol. 12, no. 6, pp. 126–140, June 2015. doi:10.1109/CC.2015.7122471
Abstract: In the cloud, data access control is a crucial way to ensure data security. Functional encryption (FE) is a novel cryptographic primitive supporting fine-grained access control of encrypted data in the cloud. In FE, every ciphertext is specified with an access policy; a decryptor can access the data if and only if his secret key matches the access policy. However, FE cannot be directly applied to construct an access control scheme due to the exposure of the access policy, which may contain sensitive information. In this paper, we deal with the policy privacy issue and present a mechanism named multi-authority vector policy (MAVP), which provides hidden and expressive access policies for FE. Firstly, each access policy is encoded as a matrix, and decryptors can only obtain the matched result from the matrix in MAVP. Then, we design a novel functional encryption scheme based on the multi-authority spatial policy (MAVP-FE), which can support privacy-preserving yet non-monotone access policies. Moreover, we greatly improve the efficiency of encryption and decryption in MAVP-FE by shifting the major computation of clients to the outsourced server. Finally, the security and performance analysis shows that our MAVP-FE is secure and efficient in practice.
Keywords: authorisation; cloud computing; cryptography; data privacy; storage management; MAVP-FE; access policy; ciphertext; cloud storage; cryptographic primitive; data access control; data security; decryption; decryptor; encrypted data; fine-grained access control; multiauthority spatial policy; multiauthority vector policy functional encryption; policy privacy; privacy-preserving; secret key; Access control; Data privacy; Encryption; Iron; Privacy; functional encryption; hidden access policy; efficiency (ID#: 15-7649)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7122471&isnumber=7122467

 

Ping Wang; Xi Zhang; Genshe Chen, “Efficient Quantum-Error Correction for QoS Provisioning over QKD-Based Satellite Networks,” in Wireless Communications and Networking Conference (WCNC), 2015 IEEE, vol., no., pp. 2262–2267, 9–12 March 2015. doi:10.1109/WCNC.2015.7127819
Abstract: Quantum cryptography is one of the most promising technologies for guaranteeing absolute security in communications over various advanced networks, including fiber networks and wireless networks. In particular, quantum key distribution is an efficient encryption scheme for implementing secure satellite communications between satellites and ground stations. However, it faces many new challenges, such as high attenuation, low polarization-preserving capability, and extreme sensitivity to the environment. In order to guarantee the quality of service (QoS) of quantum communications over 3D satellite networks, we need to focus on the security problem and throughput efficiency by correcting the errors resulting from environmental and adversarial influences. To overcome these problems, we model the noisy quantum channel and implement an efficient quantum error correction scheme to ensure security and increase quantum throughput efficiency in QKD-based satellite networks. The simulation results show that our proposed efficient QEC scheme for QoS guarantees outperforms existing quantum error correction schemes in terms of security and quantum throughput efficiency.
Keywords: quantum cryptography; satellite communication; 3D satellite networks; QKD-based satellite networks; QoS provisioning; efficient encryption scheme; efficient quantum-error correction scheme; fiber networks; low polarization-preserving capability; quantum cryptography; quantum key distribution; wireless networks; Error correction; Quality of service; Satellite communication; Satellites; Security; Throughput; Quantum communications; quality of service (QoS); quantum error correction (QEC); quantum key distribution (QKD); quantum throughput efficiency; satellite networks security (ID#: 15-7650)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7127819&isnumber=7127309

 

Azzaz, Mohamed Salah; Hadjem, Tarek; Tanougast, Camel, “A Novel Parametric Discrete Chaos-Based Switching System for Image Encryption,” in Computer, Information and Telecommunication Systems (CITS), 2015 International Conference on, vol., no., pp. 1–4, 15–17 July 2015. doi:10.1109/CITS.2015.7297718
Abstract: This paper presents an efficient encryption technique for images. The designed chaos-based key generator provides random and complex dynamic behavior and can change it automatically via a random-like switching rule. The proposed encryption scheme is called PDCSS (Parametric Discrete Chaos-based Switching System). The performance of this technique was evaluated in terms of data security. The originality of this new scheme is that it allows low-cost image encryption for embedded systems applications. Simulation results have shown the effectiveness of this technique, which is thereafter ready for a hardware implementation.
Keywords: Chaotic communication; Correlation; Encryption; Entropy; Logistics; Chaos; encryption; image; security (ID#: 15-7651)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7297718&isnumber=7297712
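
The abstract does not give the maps or the switching rule, but the flavor of a chaos-based keystream with a random-like switching rule can be sketched with logistic maps; the two parameter values and the parity-based switching rule below are illustrative assumptions only, not the PDCSS design.

# Sketch of a chaotic keystream with switching between two logistic-map
# regimes. The switching rule (parity of a state digit) and parameters
# are assumptions for illustration.
def chaotic_keystream(x0=0.612345, n=16):
    x, params = x0, (3.99, 3.87)         # two chaotic regimes of the logistic map
    out = []
    for _ in range(n):
        r = params[int(x * 1e6) & 1]     # random-like switching rule
        x = r * x * (1.0 - x)            # logistic map: x <- r*x*(1-x)
        out.append(int(x * 256) & 0xFF)  # quantize state to a key byte
    return out

key = chaotic_keystream()
plaintext = bytes(range(16))             # stand-in for an image block
cipher = bytes(p ^ k for p, k in zip(plaintext, key))
print(cipher.hex())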

 

Thomas, M.; Panchami, V., “An Encryption Protocol for End-to-End Secure Transmission of SMS,” in Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, vol., no., pp. 1–6, 19–20 March 2015. doi:10.1109/ICCPCT.2015.7159471
Abstract: Short Message Service (SMS) is a process of transmitting short messages over the network. SMS is used in daily life applications, including mobile commerce and mobile banking. It is a robust communication channel for transmitting information and follows a store-and-forward model of message transmission. Private information such as passwords, account numbers, passport numbers, and license numbers is also sent by message. The traditional messaging service does not provide security for the message, since the information contained in the SMS is transmitted as plain text from one mobile phone to the other. This paper explains an efficient encryption protocol for securely transmitting confidential SMS from one mobile user to another, which serves the cryptographic goals of confidentiality, authentication and integrity. The Blowfish encryption algorithm gives confidentiality to the message, the EasySMS protocol is used to gain authentication, and the MD5 hashing algorithm helps achieve integrity of the messages. The Blowfish algorithm consumes less battery power than other encryption algorithms. The protocol prevents various attacks, including SMS disclosure, replay attacks, man-in-the-middle attacks and over-the-air modification.
Keywords: cryptographic protocols; data integrity; data privacy; electronic messaging; message authentication; mobile radio; Blowfish encryption algorithm; SMS disclosure; encryption protocol; end-to-end secure transmission; man-in-the middle attack; message confidentiality; message integrity; mobile phone; over the air modification; replay attack; short message service; Authentication; Encryption; Mobile communication; Protocols; Throughput; Asymmetric Encryption; Cryptography; Secure Transmission; Symmetric Encryption (ID#: 15-7652)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159471&isnumber=7159156
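
A minimal sketch of the confidentiality and integrity layers named in the abstract (Blowfish for encryption, MD5 for an integrity tag), written against PyCryptodome, follows; the EasySMS authentication handshake is omitted, and the padding and key handling are illustrative assumptions rather than the paper’s exact construction.

from Crypto.Cipher import Blowfish
from Crypto.Util.Padding import pad, unpad
import hashlib, os

key = os.urandom(16)                     # pre-shared Blowfish key (assumed)

def protect_sms(text):
    digest = hashlib.md5(text.encode()).digest()   # MD5 integrity tag (16 bytes)
    iv = os.urandom(8)                             # Blowfish block size is 8 bytes
    cipher = Blowfish.new(key, Blowfish.MODE_CBC, iv)
    ct = cipher.encrypt(pad(text.encode() + digest, Blowfish.block_size))
    return iv + ct

def open_sms(blob):
    iv, ct = blob[:8], blob[8:]
    pt = unpad(Blowfish.new(key, Blowfish.MODE_CBC, iv).decrypt(ct),
               Blowfish.block_size)
    text, digest = pt[:-16], pt[-16:]
    assert hashlib.md5(text).digest() == digest, "integrity check failed"
    return text.decode()

print(open_sms(protect_sms("PIN 4321 do not share")))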

 

Beck, Martin, “Randomized Decryption (RD) Mode of Operation for Homomorphic Cryptography — Increasing Encryption, Communication and Storage Efficiency,” in Computer Communications Workshops (INFOCOM WKSHPS), 2015 IEEE Conference on, vol., no., pp. 220–226, April 26 2015–May 1 2015.  doi:10.1109/INFCOMW.2015.7179388
Abstract: Consider a client who wants to outsource storage and computation of sensitive information to a not fully trusted third party. Secure computation algorithms like homomorphic encryption are typically used to solve this issue, but they introduce overhead through randomization and thus ciphertext expansion. Furthermore, encryption may be infeasible for small, resource-constrained devices. We present a mode of operation for homomorphic cryptographic systems in which pseudo-random values are decrypted and used as a pseudo one-time pad to construct a stream cipher. As a result, efficient encryption, transmission and storage of sensitive data are achieved. Most importantly, the resulting ciphertexts can be trivially transformed into a homomorphic encryption of the concealed data. The resulting scheme is proven to be as secure as the underlying pseudo-random number generator and homomorphic cryptographic system. A performance evaluation shows the benefits and costs of our approach.
Keywords: cryptography; storage management; trusted computing; RD mode; ciphertext expansion; communication; homomorphic cryptography; homomorphic encryption; pseudorandom values; randomization; randomized decryption; secure computation algorithms; sensitive information; storage efficiency; stream cipher; trusted third party; Ciphers; Encryption; Generators; Polynomials; Servers (ID#: 15-7653)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7179388&isnumber=7179273

 

Kaikai Liu; Min Li; Xiaolin Li, “Hiding Media Data via Shaders: Enabling Private Sharing in the Clouds,” in Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on, vol., no., pp. 122–129, June 27 2015–July 2 2015. doi:10.1109/CLOUD.2015.26
Abstract: In the era of clouds and social networks, mobile devices exhibit much more powerful abilities for big media data storage and sharing. However, many users are still reluctant to share or store their data via clouds due to the potential leakage of confidential or private information. Although some cloud services provide storage encryption and access protection, privacy risks are still high, since the protection is not always adequately conducted end-to-end. Most customers are aware of the danger of letting data control out of their hands, e.g., storing them on YouTube, Flickr, Facebook, or Google+. Because of substantial practical and business needs, existing cloud services are restricted to the desired formats, e.g., video and photo, without allowing arbitrary encrypted data. In this paper, we propose a format-compliant end-to-end privacy-preserving scheme for media sharing and storage with considerations for big data, clouds, and mobility. To realize efficient encryption for big media data, we jointly achieve format compliance, compression independence and correlation preservation via multi-channel chained solutions under the guideline of the Markov cipher. The encryption and decryption process is integrated into an image/video filter via a GPU shader for display-to-display full encryption. The proposed scheme makes big media data sharing and storage safer and easier in the clouds.
Keywords: Big Data; cloud computing; cryptography; data encapsulation; data privacy; social networking (online); GPU Shader; Markov cipher; big media data storage; cloud networks; cloud services; format-compliant end-to-end privacy-preserving scheme; image filter; media data hiding; multi-channel chained solutions; private sharing; social networks; video filter; Data privacy; Encryption; Image coding; Media; Privacy; Chaotic Mapping; Cloud; Encryption; Format-Compliant; Media Data (ID#: 15-7654)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7214036&isnumber=7212169

 

de Clercq, R.; Roy, S.S.; Vercauteren, F.; Verbauwhede, I., “Efficient Software Implementation of Ring-LWE Encryption,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015, vol., no., pp. 339–344, 9–13 March 2015. doi: (not provided)
Abstract: Present-day public-key cryptosystems such as RSA and Elliptic Curve Cryptography (ECC) will become insecure when quantum computers become a reality. This paper presents the new state of the art in efficient software implementations of a post-quantum secure public-key encryption scheme based on the ring-LWE problem. We use a 32-bit ARM Cortex-M4F microcontroller as the target platform. Our contribution includes optimization techniques for fast discrete Gaussian sampling and efficient polynomial multiplication. Our implementation beats all known software implementations of ring-LWE encryption by a factor of at least 7. We further show that our scheme beats ECC-based public-key encryption schemes by at least one order of magnitude. At medium-term security we require 121 166 cycles per encryption and 43 324 cycles per decryption, while at a long-term security we require 261 939 cycles per encryption and 96 520 cycles per decryption. Gaussian sampling is done at an average of 28.5 cycles per sample.
Keywords: Gaussian processes; optimisation; public key cryptography; sampling methods; ARM Cortex-M4F microcontroller; ECC; RSA; decryption; elliptic curve cryptography; fast discrete Gaussian sampling; medium-term security; optimization techniques; polynomial multiplication; post-quantum secure public-key encryption scheme; public-key cryptosystems; quantum computers; ring-LWE encryption; software implementation; word length 32 bit; Encryption; Gaussian distribution; Indexes; Polynomials; Registers; Software; Table lookup; discrete Gaussian sampling; number theoretic transform; post-quantum secure; public-key encryption; ring learning with errors (ring-LWE); software implementation (ID#: 15-7655)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092411&isnumber=7092347

 

Verma, S.; Pillai, P.; Yim Fun Hu, “Energy-Efficient Privacy Homomorphic Encryption Scheme for Multi-Sensor Data in WSNs,” in Communication Systems and Networks (COMSNETS), 2015 7th International Conference on, vol., no., pp. 1–6, 6–10 Jan. 2015. doi:10.1109/COMSNETS.2015.7098719
Abstract: Recent advancements in wireless sensor hardware allow sensing multiple data types, such as temperature, pressure and humidity, using a single hardware unit, thus defining multi-sensor data communication in wireless sensor networks (WSNs). The in-processing technique of data aggregation is crucial in energy-efficient WSNs; however, the requirement of end-to-end data confidentiality may prove to be a challenge. End-to-end data confidentiality along with data aggregation is possible with the implementation of a special type of encryption scheme called privacy homomorphic (PH) encryption. This paper proposes an optimized PH encryption scheme for WSN-integrated networks handling multi-sensor data. The proposed scheme ensures lightweight payloads and significant energy and bandwidth savings along with lower latencies. A performance analysis of the proposed scheme with respect to the existing scheme is presented, together with the working principle of the multi-sensor data framework and the corresponding packet structures and processes. It can be concluded that the scheme decreases the payload size by 56.86% and spends an average energy of 8–18 mJ at the aggregator node for sensor node counts varying from 10 to 50, thereby ensuring scalability of the WSN, unlike the existing scheme.
Keywords: cryptography; data privacy; telecommunication computing; telecommunication network reliability; wireless sensor networks; PH encryption schemes; WSN scalability; aggregator node; bandwidth consumption; data aggregation; end-to-end data confidentiality; energy 8 mJ to 18 mJ; energy consumption; energy-efficient privacy homomorphic encryption scheme; humidity; in-processing technique; light-weight payloads; multisensor data communication; packet structures; performance analysis; pressure; sensor nodes; single hardware unit; temperature; Bandwidth; Cryptography; Informatics; Tin; Wireless sensor networks; contiki-OS; energy-efficient WSNs (ID#: 15-7656)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098719&isnumber=7098633
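
The abstract does not spell out the PH construction, but the way additively homomorphic encryption enables aggregation over ciphertexts can be shown with a classic additive stream cipher in the style of Castelluccia et al.’s WSN scheme; the paper’s actual scheme may differ.

# Additively homomorphic aggregation sketch: each node encrypts as
# c_i = (m_i + k_i) mod M, the aggregator sums ciphertexts only, and the
# sink removes the summed keystream. Keys/readings here are illustrative.
import random

M = 2**16                                # modulus large enough for the aggregate sum

def node_encrypt(reading, key):
    return (reading + key) % M

keys = [random.randrange(M) for _ in range(5)]   # per-node keys shared with the sink
readings = [21, 22, 19, 25, 23]                  # e.g., temperature samples

ciphertexts = [node_encrypt(m, k) for m, k in zip(readings, keys)]
aggregate_ct = sum(ciphertexts) % M              # aggregator never sees plaintexts

aggregate = (aggregate_ct - sum(keys)) % M       # sink decrypts the sum
print(aggregate, sum(readings))                  # both 110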

 

Gupta, S.; Jain, A., “Efficient Image Encryption Algorithm Using DNA Approach,” in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, vol., no., pp. 726–731, 11–13 March 2015. doi: (not provided)
Abstract: DNA computing is a new computational field that harnesses the immense parallelism, high-density information storage and low power dissipation of DNA, bringing both challenges and opportunities to conventional cryptography. In recent years, many image encryption algorithms using DNA approaches have been proposed, but many are not secure as such. In this regard, this paper proposes an improved and efficient algorithm to encrypt a grayscale image of any size based on the DNA sequence addition operation. The original image is encrypted in two phases. In the first phase, the intermediate cipher is obtained by addition of the DNA sequence matrix and a masking matrix. In the second phase, pixel values are scrambled to make the cipher more robust. The results of simulated experiments and a security analysis of the proposed image encryption algorithm, evaluated via histogram analysis and key sensitivity analysis, show that the scheme not only attains good encryption but can also withstand exhaustive and statistical attacks.
Keywords: biocomputing; cryptography; image processing; sensitivity analysis; DNA computing; DNA masking matrix; DNA sequence addition operation; DNA sequence matrix; exhaustive attack; grayscale image; histogram analysis; image encryption algorithm; intermediate cipher; key sensitivity analysis; security analysis; statistical attack; Algorithm design and analysis; DNA; Encryption; Histograms; Image coding; Matrix converters; DNA encoding; DNA sequence addition and subtraction; chaotic maps; image encryption (ID#: 15-7657)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100345&isnumber=7100186
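
The first phase (DNA encoding plus sequence addition) can be sketched as follows; the 2-bit coding rule (00=A, 01=C, 10=G, 11=T) and the modulo-4 addition table are common choices in the DNA-cryptography literature and are assumptions here, as is the keyed mask generator.

import random

BASES = "ACGT"

def byte_to_dna(b):                      # 8 bits -> 4 bases, 2 bits per base
    return [BASES[(b >> s) & 3] for s in (6, 4, 2, 0)]

def dna_to_byte(seq):
    b = 0
    for base in seq:
        b = (b << 2) | BASES.index(base)
    return b

def dna_add(a, b):                       # modulo-4 addition on base indices
    return BASES[(BASES.index(a) + BASES.index(b)) % 4]

random.seed(7)                           # key: seed of the mask generator (assumed)
pixels = [52, 200, 17, 255]              # stand-in grayscale pixels
mask = [random.choice(BASES) for _ in range(4 * len(pixels))]

encoded = [base for p in pixels for base in byte_to_dna(p)]
cipher = [dna_add(a, m) for a, m in zip(encoded, mask)]
print([dna_to_byte(cipher[i:i + 4]) for i in range(0, len(cipher), 4)])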

 

Wenfeng Zhao; Yajun Ha; Alioto, M., “AES Architectures for Minimum-Energy Operation and Silicon Demonstration in 65nm with Lowest Energy per Encryption,” in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on, vol., no., pp. 2349–2352, 24–27 May 2015. doi:10.1109/ISCAS.2015.7169155
Abstract: Lightweight encryption circuits are crucial to ensure adequate information security in emerging millimeter-scale platforms for the Internet of Things, which are required to deliver moderately high throughput under stringent area and energy budgets. This requires the adoption of specialized AES accelerators, as they offer orders of magnitude energy improvements over microcontroller-based implementations. In this paper, we present the architectural exploration of lightweight AES accelerators with the goal of minimizing the energy consumption. Also, the lower bound of the number of cycles per encryption in lightweight AES designs is estimated as a function of the number of available S-boxes. Combined with sub-/near-threshold circuit techniques, we present a low-cost ultra energy-efficient AES encryption core for cubic-millimeter platforms. Our test chip achieves high energy efficiency of 0.83 pJ/bit at 0.32 V, which outperforms the state-of-the-art low-cost AES designs by 7×.
Keywords: CMOS integrated circuits; cryptography; low-power electronics; Internet of Things; S-boxes; architectural exploration; cubic-millimeter platforms; energy consumption; information security; lightweight AES accelerators; lightweight AES designs; lightweight encryption circuits; low-cost ultra energy-efficient AES encryption core; millimeter-scale platforms; size 65 nm; sub-near-threshold circuit techniques; voltage 0.32 V; Clocks; Computer architecture; Delays; Encryption; Logic gates; Throughput; Transforms; Advanced Encryption Standard; energy-efficient architecture; sub-/near-threshold operation; ultra-low energy (ID#: 15-7658)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169155&isnumber=7168553

 

Mingchu Li; Wei Jia; Cheng Guo; Weifeng Sun; Xing Tan, “LPSSE: Lightweight Phrase Search with Symmetric Searchable Encryption in Cloud Storage,” in Information Technology - New Generations (ITNG), 2015 12th International Conference on, vol., no., pp. 174–178, 13–15 April 2015. doi:10.1109/ITNG.2015.33
Abstract: The security of cloud storage has drawn increasing concern. In searchable encryption, many previous solutions let users retrieve documents containing a single keyword or conjunctive keywords by storing encrypted documents with data indexes. However, searching documents for a phrase, i.e., consecutive keywords, remains an open problem. In this paper, using relative positions, we propose an efficient scheme, LPSSE, with symmetric searchable encryption that supports encrypted phrase searches in cloud storage. Our scheme is based on the non-adaptive security definition of R. Curtmola and has lower transmission and storage costs than existing systems. Furthermore, we combine components of current efficient search engines with our functions to complete a prototype. The experimental results also show that our scheme LPSSE is practical and efficient.
Keywords: cloud computing; cryptography; storage management; LPSSE scheme; cloud storage security; data indexes; document retrieval; encrypted document storage; lightweight phrase search with symmetric searchable encryption; nonadaptive security; search engines; Arrays; Cloud computing; Encryption; Indexes; Servers; Cloud storage; Lightweight searchable encryption scheme; Phrase search; Searchable encryption; Symmetry (ID#: 15-7659)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113468&isnumber=7113432
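
The relative-position idea can be illustrated in plaintext form: an inverted index records word positions, and a phrase matches when the positions line up consecutively. The sketch below shows only this matching logic; the actual LPSSE scheme stores the index in encrypted form.

from collections import defaultdict

docs = {1: "secure cloud storage search", 2: "cloud search over secure storage"}

index = defaultdict(lambda: defaultdict(list))   # word -> doc -> positions
for doc_id, text in docs.items():
    for pos, word in enumerate(text.split()):
        index[word][doc_id].append(pos)

def phrase_search(phrase):
    words = phrase.split()
    hits = []
    for doc_id in index[words[0]]:
        for start in index[words[0]][doc_id]:
            # every later word must occur at the matching relative position
            if all(start + i in index[w].get(doc_id, [])
                   for i, w in enumerate(words)):
                hits.append(doc_id)
                break
    return hits

print(phrase_search("secure storage"))   # [2]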

 

Emura, Keita; Kanaoka, Akira; Ohta, Satoshi; Takahashi, Takeshi, “A KEM/DEM-Based Construction for Secure and Anonymous Communication,” in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 3, no., pp. 680–681, 1–5 July 2015. doi:10.1109/COMPSAC.2015.54
Abstract: Public key infrastructure has been widely used, but its certificate must be removed when the corresponding public key is sent via an anonymous communication channel in order to maintain anonymity. This is because the certificate contains information about the key holder, which contradicts anonymity. A secure and anonymous communication protocol was proposed to address this issue, in which end-to-end encryption and anonymous authentication are achieved simultaneously. It applies identity-based encryption (IBE) for packet encryption. However, because IBE requires heavy pairing computations, this protocol is inefficient, approximately 20 times slower than SSL communication. In this paper, we propose a more efficient secure and anonymous communication protocol that achieves the same security level as the IBE-based protocol. The protocol is exempted from pairing computation when establishing a secure channel by applying hybrid encryption instead of IBE. We implement the protocol and show that it is more efficient (overall, approximately 1.2 times faster) than the IBE-based protocol. In particular, the decryption algorithm of our protocol is several hundred times faster than that of the IBE-based protocol.
Keywords: Authentication; Communication channels; Encryption; Identity-based encryption; Protocols (ID#: 15-7660)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273462&isnumber=7273299
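
Hybrid encryption here means a KEM/DEM construction: a public-key step encapsulates a fresh symmetric key, and a fast symmetric cipher carries the data. The sketch below uses X25519 with HKDF and AES-GCM from the pyca/cryptography library as a generic stand-in; it is not the authors’ protocol, and the anonymity and authentication layers are omitted.

# Generic KEM/DEM sketch: X25519 key encapsulation + AES-GCM data
# encapsulation, avoiding heavy public-key work per packet.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
import os

recipient = X25519PrivateKey.generate()

def kem_dem_encrypt(recipient_pub, msg):
    eph = X25519PrivateKey.generate()            # KEM: ephemeral DH share
    shared = eph.exchange(recipient_pub)
    key = HKDF(hashes.SHA256(), 16, None, b"kem-dem").derive(shared)
    nonce = os.urandom(12)
    return eph.public_key(), nonce, AESGCM(key).encrypt(nonce, msg, None)

def kem_dem_decrypt(eph_pub, nonce, ct):
    shared = recipient.exchange(eph_pub)
    key = HKDF(hashes.SHA256(), 16, None, b"kem-dem").derive(shared)
    return AESGCM(key).decrypt(nonce, ct, None)

print(kem_dem_decrypt(*kem_dem_encrypt(recipient.public_key(), b"hello")))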

 

Gopularam, B.P.; Nalini, N., “On the Optimization of Key Revocation Schemes for Network Telemetry Data Distribution,” in Advance Computing Conference (IACC), 2015 IEEE International, vol., no., pp. 536–540, 12–13 June 2015. doi:10.1109/IADCC.2015.7154765
Abstract: Consider a cloud deployment where the organizational network of a tenant contains routers and switches sharing network telemetry data on a regular basis. Among the different ways of managing networks, flow-based network monitoring is the most sought-after approach because of its accuracy and economies of scale. In the event of host compromise, the device credentials are revoked, thereby disabling its ability to read future communications. Broadcast encryption techniques with a strong key revocation mechanism can be used in this context. Waters et al. [?] propose one such broadcast encryption scheme, which facilitates efficient sharing using small keys; the related attribute-based encryption scheme uses a dual encryption technique and is capable of handling non-monotonous access structures, again with small keys. In this paper we experiment with broadcast encryption and attribute-based encryption schemes on real-time network telemetry data and provide a detailed performance analysis. Though the original scheme provides smaller keys, a few changes to the algorithm improve the performance and efficiency and make it acceptable for large-scale usage. We found the optimized scheme to be 20% more performant than the initial scheme.
Keywords: IP networks; cloud computing; computer network management; computer network performance evaluation; computer network security; cryptography; data privacy; private key cryptography; telecommunication network routing; telecommunication traffic; attribute-based encryption scheme; broadcast encryption schemes; cloud deployment; device credential revocation; dual-encryption technique; efficiency improvement; key revocation scheme optimization; network telemetry data distribution; networks flow-based network monitoring management; nonmonotonous access structure handling; organizational network; performance improvement; routers; small-size key sharing; switches; Encryption; Libraries; Measurement; Optimization; Telemetry; Attribute Based Encryption; Broadcast Encryption; Key Revocation; Log privacy (ID#: 15-7661)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154765&isnumber=7154658

 

Kanbara, Yusuke; Teruya, Tadanori; Kanayama, Naoki; Nishide, Takashi; Okamoto, Eiji, “Software Implementation of a Pairing Function for Public Key Cryptosystems,” in IT Convergence and Security (ICITCS), 2015 5th International Conference on, vol., no., pp. 1–5, 24–27 Aug. 2015. doi:10.1109/ICITCS.2015.7293019
Abstract: In recent years, various protocols using pairing operations, such as ID-based encryption and functional encryption, have appeared. These protocols could not be realized using conventional public key encryption; hence, pairing plays an important role in modern society. However, implementing an efficient pairing library requires deep mathematical knowledge and is a non-trivial task. In order to solve this problem, we released a pairing library called TEPLA (University of Tsukuba Elliptic Curve and Pairing Library). This library can compute pairings, finite field arithmetic and elliptic curve operations. TEPLA is implemented using Beuchat et al.’s algorithm from PAIRING 2010. A year later, Aranha et al. proposed a new method of computing pairings that is faster than Beuchat et al.’s algorithm by about 28%–34%. In this work, we implement a pairing library using Aranha et al.’s algorithm from EUROCRYPT 2011 to demonstrate its speed, and we offer the library as open source software.
Keywords: Electronic mail; Elliptic curves; Encryption; Jacobian matrices; Libraries; Protocols (ID#: 15-7662)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7293019&isnumber=7292885

 

Rawat, Aditya; Gupta, Ipshita; Goel, Yash; Sinha, Nishith, “Permutation Based Image Encryption Algorithm Using Block Cipher Approach,” in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, vol., no., pp. 1877–1882, 10–13 Aug. 2015. doi:10.1109/ICACCI.2015.7275892
Abstract: Encryption is a process of hiding significant data so as to prevent unauthorized access and ensure the confidentiality of data. It is widely used to transmit data across networks, ensuring secure communication. This paper aims at improving the security and efficiency of image encryption by using a highly efficient shuffle-based encryption algorithm and an equivalent decryption algorithm based on random values obtained from a pseudorandom number generator. Due to the immense number of possible instances of the encrypted image that can be generated by shuffling the pixels as blocks (or on a pixel-by-pixel basis), the algorithm proves to be highly impervious to brute force attacks. The proposed algorithm has been examined using multiple analysis methods that support its robustness.
Keywords: Algorithm design and analysis; Arrays; Correlation; Encryption; Generators; Analysis codes; Block Cipher; Confusion and Diffusion based shuffling; Image Encryption & Decryption; Pseudorandom Numbers Generators (ID#: 15-7663)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275892&isnumber=7275573
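
The core shuffle can be sketched in a few lines: a keyed PRNG fixes a permutation of pixel positions, and decryption applies its inverse. NumPy is used for brevity; the seeding scheme is an assumption, and the paper’s block-wise variant applies the same idea per block.

import numpy as np

def encrypt(pixels, key):
    # The key seeds the PRNG that defines the pixel permutation.
    perm = np.random.default_rng(key).permutation(pixels.size)
    return pixels.ravel()[perm].reshape(pixels.shape)

def decrypt(cipher, key):
    perm = np.random.default_rng(key).permutation(cipher.size)
    flat = np.empty(cipher.size, dtype=cipher.dtype)
    flat[perm] = cipher.ravel()          # invert the permutation
    return flat.reshape(cipher.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # stand-in 4x4 image
assert np.array_equal(decrypt(encrypt(img, key=42), key=42), img)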

 

Chandrasekaran, J.; Jayaraman, T.S., “A Fast and Secure Image Encryption Algorithm Using Number Theoretic Transforms and Discrete Logarithms,” in Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015 IEEE International Conference on, vol., no., pp. 1–5, 19–21 Feb. 2015. doi:10.1109/SPICES.2015.7091491
Abstract: Many Internet applications, such as video conferencing, military image databases, personal online photograph albums and cable television, require a fast and efficient way of encrypting images for storage and transmission. In this paper, discrete logarithms are used for the generation of random keys, and the Number Theoretic Transform (NTT) is used as a transformation technique prior to encryption. The implementation of the NTT is simple, as it uses arithmetic for real sequences. Encryption and decryption involve the simple and reversible XOR operation of image pixels with the random keys based on discrete logarithms, generated independently at the transmitter and receiver. Experimental results with the standard benchmark test images of the USC-SIPI database confirm the enhanced key sensitivity and strong resistance of the algorithm to brute force attack and statistical cryptanalysis. The computational complexity of the algorithm in terms of number of operations and number of rounds is very small in comparison with other image encryption algorithms. The randomness of the generated keys has been tested and found to be in accordance with the statistical test suite for security requirements of cryptographic modules recommended by the National Institute of Standards and Technology (NIST).
Keywords: computational complexity; cryptography; image processing; number theory; statistical analysis; transforms; Internet; NTT; USC-SIPI database; brute force attack; cryptographic modules; decryption; discrete logarithms; enhanced key sensitivity; fast image encryption algorithm; image pixels; number theoretic transforms; random keys generation; receiver; reversible XOR operation; secure image encryption algorithm; standard benchmark test images; statistical cryptanalysis; transmitter; Chaotic communication; Ciphers; Correlation; Encryption; Transforms; Discrete Logarithms; Image Encryption; Number Theoretic Transforms (ID#: 15-7664)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7091491&isnumber=7091354
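
The key generation and XOR stage can be sketched as below: successive powers of a generator modulo a prime form the keystream, which is reduced to bytes and XORed with pixel values. The NTT pre-transform is omitted, and the toy parameters (p = 257, with primitive root g = 3) are illustrative assumptions.

# Discrete-log keystream + reversible XOR. Both ends derive the same
# keystream from a shared secret exponent (assumed pre-agreed).
p, g = 257, 3
secret = 123                             # shared secret exponent (assumed key)

def keystream(n):
    x = pow(g, secret, p)                # same start value at both ends
    for _ in range(n):
        yield x & 0xFF
        x = (x * g) % p                  # successive powers of g mod p

pixels = bytes([10, 200, 33, 148])       # stand-in image row
cipher = bytes(px ^ k for px, k in zip(pixels, keystream(len(pixels))))
plain = bytes(cx ^ k for cx, k in zip(cipher, keystream(len(cipher))))
assert plain == pixels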

 

Petcher, Adam; Morrisett, Greg, “A Mechanized Proof of Security for Searchable Symmetric Encryption,” in Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, vol., no., pp. 481–494, 13–17 July 2015. doi:10.1109/CSF.2015.36
Abstract: We present a mechanized proof of security for an efficient Searchable Symmetric Encryption (SSE) scheme completed in the Foundational Cryptography Framework (FCF). FCF is a Coq library for reasoning about cryptographic schemes in the computational model that features a small trusted computing base and an extensible design. Through this effort, we provide the first mechanized proof of security for an efficient SSE scheme, and we demonstrate that FCF is well-suited to reasoning about such complex protocols.
Keywords: cryptographic protocols; inference mechanisms; theorem proving; trusted computing; Coq library; FCF; SSE scheme; cryptographic scheme; foundational cryptography framework; protocol; reasoning; searchable symmetric encryption; security mechanized proof; Databases; Encryption; Games; Semantics; Servers (ID#: 15-7665)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243749&isnumber=7243713

 

Harikrishnan, T.; Babu, C., “Cryptanalysis of Hummingbird Algorithm with Improved Security and Throughput,” in VLSI Systems, Architecture, Technology and Applications (VLSI-SATA), 2015 International Conference on, vol., no., pp. 1–6, 8–10 Jan. 2015. doi:10.1109/VLSI-SATA.2015.7050460
Abstract: Hummingbird is a lightweight authenticated cryptographic encryption algorithm suitable for resource-constrained devices like RFID tags, smart cards and wireless sensors. The key issue in designing such a cryptographic algorithm is to deal with the trade-off among security, cost and performance and to find an optimal cost-performance ratio. This paper is an attempt to find an efficient hardware implementation of the Hummingbird cryptographic algorithm with improved security and throughput by adding hash functions. We have implemented an encryption and decryption core on a Spartan 3E and compared the results with existing lightweight cryptographic algorithms. The experimental results show that this algorithm achieves higher security and throughput with improved area compared to the existing algorithms.
Keywords: cryptography; telecommunication security; Hash functions; RFID tags; Spartan 3E; decryption core; hummingbird algorithm cryptanalysis; hummingbird cryptographic algorithm; lightweight authenticated cryptographic encryption algorithm; optimal cost-performance ratio; resource constrained devices; security; smart cards; wireless sensors; Authentication; Ciphers; Logic gates; Protocols; Radiofrequency identification; FPGA Implementation; Lightweight Cryptography; Mutual authentication protocol; Security analysis (ID#: 15-7666)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7050460&isnumber=7050449


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Embedded System Security 2015

 

 
SoS Logo

Embedded System Security

2015


Embedded systems security aims for comprehensive security across hardware, platform software (including operating systems and hypervisors), software development processes, data protection protocols (both networking and storage), and cryptography. Critics say embedded device manufacturers often lack maturity when it comes to designing secure embedded systems. They say vendors in the embedded device and critical infrastructure market are starting to conduct classic threat modeling and risk analysis on their equipment, but they have not matured to the point of developing formal secure development standards. Research is beginning to bridge the gap between promise and performance, as the articles cited here suggest. For the Science of Security, this research addresses resilience, composability, and metrics. The work cited here was published in 2015.



Yuan-Wei Tseng; Chong-Yu Liao; Tzung-Huei Hung, “An Embedded System with Realtime Surveillance Application,” in Next-Generation Electronics (ISNE), 2015 International Symposium on, vol., no., pp. 1–4, 4–6 May 2015. doi:10.1109/ISNE.2015.7132031
Abstract: To reduce the manpower and response time of surveillance systems at low cost, this paper constructs an ARM-based embedded system dedicated to unattended realtime moving-target detection. The comprehensive procedures for building up an embedded system, such as setting up the cross-compilation environment, migrating the bootloader, migrating the Linux-2.6 kernel, fabricating and migrating the root file system, and setting up peripheral device drivers, are presented, along with the image background subtraction algorithm for moving-target detection and tracking. In this embedded system, for two consecutive 640×480 image frames captured by the camera, if the difference in the 8-bit gray level of a pixel at the same position is greater than 32, that pixel is marked as a “moving pixel”. When there are more than 500 moving pixels between two consecutive frames, the camera is triggered to take pictures because the system assumes that a moving “invader” has appeared. The system then transfers the invader’s pictures to “The Cloud” through WiFi to prevent them from being destroyed by the invader. The constructed embedded system can be used in a security system and, with proper modifications, in other applications.
Keywords: cameras; cloud computing; embedded systems; image capture; image motion analysis; microcontrollers; object detection; object tracking; video surveillance; wireless LAN; ARM-based embedded system; Bootloader; Linux-2.6 kernel; The Cloud; WiFi; camera; consecutive image frames; cross-compilation; image background subtraction; image frame capture; moving invader; moving target detection technology; moving target tracking technology; peripheral driving devices; realtime surveillance application; root file system; surveillance systems; unattended realtime moving target detection; Cameras; Embedded systems; Kernel; Linux; Program processors; Surveillance; ARM; Linux; embedded system; moving target detection; realtime (ID#: 15-7494)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7132031&isnumber=7131937
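
The frame-differencing rule quoted in the abstract is simple enough to sketch directly. The following Python fragment is an illustrative reconstruction using NumPy, not the authors’ code; only the two thresholds come from the abstract. It marks a pixel as “moving” when its 8-bit gray value changes by more than 32 between consecutive frames, and raises the alarm when more than 500 pixels move:

import numpy as np

MOVE_THRESHOLD = 32      # gray-level change that marks a "moving pixel"
ALARM_THRESHOLD = 500    # moving-pixel count that triggers the camera

def detect_invader(prev_frame, curr_frame):
    """Frame-differencing detector for two 640x480 8-bit grayscale frames."""
    diff = np.abs(prev_frame.astype(np.int16) - curr_frame.astype(np.int16))
    moving_pixels = int(np.count_nonzero(diff > MOVE_THRESHOLD))
    return moving_pixels > ALARM_THRESHOLD

prev = np.zeros((480, 640), dtype=np.uint8)
curr = prev.copy()
curr[100:140, 200:240] = 200                 # a 40x40 "invader" appears
print(detect_invader(prev, curr))            # True: 1600 pixels changed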

 

Papp, Dorottya; Zhendong Ma; Buttyan, Levente, “Embedded Systems Security: Threats, Vulnerabilities, and Attack Taxonomy,” in Privacy, Security and Trust (PST), 2015 13th Annual Conference on, vol., no., pp. 145–152, 21–23 July 2015. doi:10.1109/PST.2015.7232966
Abstract: Embedded systems are the driving force for technological development in many domains such as automotive, healthcare, and industrial control in the emerging post-PC era. As more and more computational and networked devices are integrated into all aspects of our lives in a pervasive and “invisible” way, security becomes critical for the dependability of all smart or intelligent systems built upon these embedded systems. In this paper, we conduct a systematic review of the existing threats and vulnerabilities in embedded systems based on publicly available data. Moreover, based on this information, we derive an attack taxonomy for embedded systems. We envision that the findings in this paper provide valuable insight into the threat landscape facing embedded systems. This knowledge can be used for better understanding and identification of security risks in system analysis and design.
Keywords: embedded systems; security of data; attack taxonomy; embedded system threat; embedded system vulnerabilities; embedded systems security; security risk identification; system analysis; system design; Authentication; Cryptography; Embedded systems; Protocols; Taxonomy (ID#: 15-7495)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232966&isnumber=7232940

 

Sadeghi, A.-R.; Wachsmann, C.; Waidner, M., “Security and Privacy Challenges in Industrial Internet of Things,” in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, vol., no., pp. 1–6, 8–12 June 2015. doi:10.1145/2744769.2747942
Abstract: Today, embedded, mobile, and cyberphysical systems are ubiquitous and used in many applications, from industrial control systems and modern vehicles to critical infrastructure. Current trends and initiatives, such as “Industrie 4.0” and the Internet of Things (IoT), promise innovative business models and novel user experiences through strong connectivity and effective use of the next generation of embedded devices. These systems generate, process, and exchange vast amounts of security-critical and privacy-sensitive data, which makes them attractive targets of attacks. Cyberattacks on IoT systems are very critical since they may cause physical damage and even threaten human lives. The complexity of these systems and the potential impact of cyberattacks give rise to new threats. This paper gives an introduction to Industrial IoT systems, the related security and privacy challenges, and an outlook on possible solutions towards a holistic security framework for Industrial IoT systems.
Keywords: Internet of Things; data privacy; embedded systems; industrial control; mobile computing; security of data; Industrie 4.0; business models; cyberattacks; cyberphysical system; embedded system; industrial Internet of Things; industrial IoT systems; industrial control systems; mobile system; privacy-sensitive data; security-critical data; user experiences; Computer architecture; Privacy; Production facilities; Production systems; Security; Software (ID#: 15-7496)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167238&isnumber=7167177

 

Yong Up Lee; Sang-Myeong Lee; Jeong-Uk Park, “Two Embedded System Design Techniques for Wireless Remote Monitoring Service,” in Digital Information, Networking, and Wireless Communications (DINWC), 2015 Third International Conference on, vol., no., pp. 121–126, 3–5 Feb. 2015. doi:10.1109/DINWC.2015.7054229
Abstract: In order to upgrade conventional remote monitoring services, this paper proposes two embedded system design and implementation methods for a wireless remote monitoring service that provides wireless image observation over a temporary ad-hoc network. The first method is based on an embedded system design technique for a nearly real-time wireless image observation service and achieves a maximum transmission rate of 1 fps (frame per second) for 160×128 pixel images. The second technique uses the embedded system for an ordinary wireless long-time observation service, with a wireless image transmission rate of 0.33 fps.
Keywords: ad hoc networks; computerised instrumentation; embedded systems; embedded system design techniques; real-time wireless image observation application service; temporary ad-hoc network; transmission rate capability; wireless image observation; wireless image transmission rate capability; wireless long-time observation application service; wireless remote monitoring service; Ad hoc networks; Cameras; Communication system security; Remote monitoring; Wireless communication; Wireless sensor networks; ad-hoc networking; embedded system design; implementation technique; performance analysis; wireless remote monitoring (ID#: 15-7497)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7054229&isnumber=7054206

 

Wang Zai-ying; Chen Liu, “Design of Mobile Phone Video Surveillance System for Home Security Based on Embedded System,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 5856–5859, 23–25 May 2015. doi:10.1109/CCDC.2015.7161856
Abstract: With the rapid development of the national economy and rising living standards, people’s awareness of home security is increasing day by day, and demand for convenient, mobile, real-time alarm video terminals is rising rapidly. Building on embedded network technology and the wide adoption of intelligent mobile phones, this paper puts forward a home security design consisting of an embedded camera as the monitoring front end and an intelligent mobile phone as the monitoring terminal, realizing video monitoring through the mobile device. For the monitoring front end, an S3C2440 microprocessor is selected as the hardware core of the embedded system and Linux as the embedded operating system, whose function is to encode and compress the real-time image. The network part of the system adopts China Unicom’s WCDMA technology and the RTP/RTCP protocols, which support streaming media transmission, to pack and transmit data. The intelligent mobile phone serves as the monitoring terminal, receiving and displaying the data. Preliminary analysis and verification show that the design is reasonable and achieves the desired requirements.
Keywords: Linux; alarm systems; code division multiple access; embedded systems; home automation; media streaming; microprocessor chips; mobile computing; mobile handsets; transport protocols; video surveillance; China Unicom; Linux RTP/RTCP protocol; S3C2440 microprocessor; WCDMA technology; embedded camera; embedded network technology; embedded operating system; embedded system; hardware core; home security; intelligent mobile phone; living standard; mobile device; mobile phone video surveillance system; monitoring front; monitoring terminal; national economy; real-time alarm video terminal; real-time image; speedy development; streaming media; video monitoring; Decoding; Kernel; Linux; Mobile handsets; Monitoring; Servers; Streaming media; Embedded; Home Security; Intelligent Mobile Phone; RTP/RTCP; S3C2440; WCDMA (ID#: 15-7498)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161856&isnumber=7161655
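
The RTP layer such a system relies on is compact enough to sketch. The fragment below is a minimal illustration using only the Python standard library (the field values and the use of payload type 26 for motion JPEG are assumptions for the example, not details from the paper); it packs the fixed 12-byte RTP header of RFC 3550 around a video payload, as the phone-side receiver would expect:

import struct

def rtp_packet(seq, timestamp, ssrc, payload, payload_type=26):
    """Build a minimal RTP packet (RFC 3550): version 2, no padding,
    no extension, no CSRC list, marker bit clear. PT 26 is motion JPEG."""
    byte0 = 2 << 6                          # version field = 2
    byte1 = payload_type & 0x7F
    header = struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
    return header + payload

frame = b"\xff\xd8..."                      # compressed image data (placeholder)
pkt = rtp_packet(seq=1, timestamp=90000, ssrc=0x1234ABCD, payload=frame)
print(pkt[:12].hex())                       # the 12-byte fixed header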

 

Tripathy, A.K.; Chopra, S.; Bosco, S.; Shetty, S.; Sayyed, F., “Travolution — An Embedded System in Passenger Car for Road Safety,” in Technologies for Sustainable Development (ICTSD), 2015 International Conference on, vol., no., pp. 1–6, 4–6 Feb. 2015. doi:10.1109/ICTSD.2015.7095885
Abstract: Each year, there are thousands of highway deaths and tens of thousands of serious injuries due to “run-off-road” accidents, caused by everything from simple driver inattentiveness and fatigue to callousness and drunk driving. Simple sensors embedded in vehicles can provide features such as automatic collision notification, vehicle security, and speed control, giving impetus to an efficient road safety system. The features proposed in this work are: automatic collision notification, which notifies the victim’s relatives; red-light traffic control, which ensures the vehicle does not run a signal; speed control, which adjusts speed in different zones; horn control, which prevents honking in horn-prohibited zones; alcohol detection, which detects drunk driving; and vehicle security, which is used to prevent theft.
Keywords: automobiles; collision avoidance; driver information systems; road accidents; road safety; road traffic control; velocity control; alcohol detection; automatic collision notification; driver inattentiveness; drunk driving; embedded system; horn control; passenger car; red light traffic control; run-off-road accident; speed control; travolution; vehicle security; GSM; Modems; Receivers; Relays; Sensors; Switches; Vehicles; Collision Notification; Embedded System; GPS (Global Positioning System); GSM (Global System for Mobile Communication); Road safety (ID#: 15-7499)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7095885&isnumber=7095833

 

Agosta, G.; Barenghi, A.; Pelosi, G.; Scandale, M., “Information Leakage Chaff: Feeding Red Herrings to Side Channel Attackers,” in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, vol., no., pp. 1–6, 8–12 June 2015. doi:10.1145/2744769.2744859
Abstract: A prominent threat to embedded systems security is represented by side-channel attacks: they have proven effective in breaching confidentiality, violating trust guarantees and IP protection schemes. State-of-the-art countermeasures reduce the leaked information to prevent the attacker from retrieving the secret key of the cipher. We propose an alternate defense strategy augmenting the regular information leakage with false targets, quite like chaff countermeasures against radars, hiding the correct secret key among a volley of chaff targets. This in turn feeds the attacker with a large amount of invalid keys, which can be used to trigger an alarm whenever the attack attempts a content forgery using them, thus providing a reactive security measure. We realized a LLVM compiler pass able to automatically apply the proposed countermeasure to software implementations of block ciphers. We provide effectiveness and efficiency results on an AES implementation running on an ARM Cortex-M4 showing performance overheads comparable with state-of-the-art countermeasures.
Keywords: cryptography; program compilers; trusted computing; AES implementation; ARM Cortex-M4; IP protection schemes; LLVM compiler; confidentiality breaching; content forgery; defense strategy; embedded system security; information leakage chaff; reactive security measure; side channel attackers; software implementations; trust guarantees; Ciphers; Correlation; Optimization; Software; Switches; Embedded Security; Side Channel Attacks; Software Countermeasures (ID#: 15-7500)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167217&isnumber=7167177
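
The chaff idea amounts to planting honeytoken keys: any attempt to use a decoy key recovered through the side channel is itself evidence of an attack. A minimal sketch of the reactive check is given below; it is illustrative only, since the paper realizes the countermeasure as an LLVM compiler pass over block-cipher implementations rather than an explicit lookup:

import secrets

REAL_KEY = secrets.token_bytes(16)
# Decoy ("chaff") keys deliberately leaked through the side channel.
CHAFF_KEYS = {secrets.token_bytes(16) for _ in range(32)}

def raise_alarm():
    print("ALERT: content forgery attempted with a chaff key")

def check_key_usage(candidate):
    """Reactive check: using a chaff key reveals a side-channel attacker."""
    if candidate in CHAFF_KEYS:
        raise_alarm()
        return "alarm"
    return "ok" if candidate == REAL_KEY else "reject"

print(check_key_usage(REAL_KEY))                  # "ok"
print(check_key_usage(next(iter(CHAFF_KEYS))))    # triggers the alarm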

 

Yoshikawa, M.; Sugioka, K.; Nozaki, Y.; Asahi, K., “Secure in-Vehicle Systems Against Trojan Attacks,” in Computer and Information Science (ICIS), 2015 IEEE/ACIS 14th International Conference on, vol., no., pp. 29–33, June 28 2015–July 1 2015. doi:10.1109/ICIS.2015.7166565
Abstract: Recently, driving support technologies, such as inter-vehicle and road-to-vehicle communication, have come into practical use. However, it has been pointed out that when a vehicle is connected to an external network, its safety is threatened. As a result, the security of vehicle control systems, which greatly affects vehicle safety, has become more important than ever, and ensuring the security of in-vehicle systems is now as much a priority as ensuring conventional safety. The present study proposes a controller area network (CAN) communications method that uses a lightweight cipher to realize secure in-vehicle systems, and constructs an evaluation system using a field-programmable gate array (FPGA) board and a radio-controlled car to verify the proposed method.
Keywords: controller area networks; cryptographic protocols; field programmable gate arrays; invasive software; vehicular ad hoc networks; CAN communication method; FPGA; Trojan attack; controller area network communication method; field-programmable gate array; inter-vehicle communication technology; lightweight cipher; radio-controlled car; road-to-vehicle communication technology; vehicle control system security; Authentication; Ciphers; Encryption; Radiation detectors; Safety; Vehicles; CAN communication; Embedded system; Lightweight block cipher; Security (ID#: 15-7501)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166565&isnumber=7166553
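
The paper leaves the choice of lightweight cipher unspecified, but the shape of the scheme is easy to illustrate: classical CAN data frames carry at most 8 bytes, which matches the 64-bit block of a cipher such as XTEA, used here purely as a stand-in. A sketch of per-frame payload encryption in Python:

import struct

MASK = 0xFFFFFFFF

def xtea_encrypt_block(block, key, rounds=32):
    """Encrypt one 8-byte block with XTEA (128-bit key); the block size
    matches the 8-byte data field of a classical CAN frame."""
    v0, v1 = struct.unpack(">2I", block)
    k = struct.unpack(">4I", key)
    s, delta = 0, 0x9E3779B9
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + k[s & 3]))) & MASK
        s = (s + delta) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + k[(s >> 11) & 3]))) & MASK
    return struct.pack(">2I", v0, v1)

key = bytes(range(16))                    # hypothetical shared 128-bit key
payload = b"SPEED=88"                     # 8-byte CAN payload
print(xtea_encrypt_block(payload, key).hex())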

 

Bobade, S.D.; Mankar, V.R., “VLSI Architecture for an Area Efficient Elliptic Curve Cryptographic Processor for Embedded Systems,” in Industrial Instrumentation and Control (ICIC), 2015 International Conference on, vol., no., pp. 1038–1043, 28–30 May 2015. doi:10.1109/IIC.2015.7150899
Abstract: Elliptic curve cryptography has established itself as a perfect cryptographic tool for embedded environments because of its compact key sizes and security strength on par with any other standard public key algorithm. Several FPGA implementations of ECC processors suited to embedded systems have been proposed, with a prime focus on space and time complexity. In this paper, we have modified the double point multiplication algorithm and replaced the traditional Karatsuba multiplier in the ECC processor with a novel modular multiplier. The designed modular multiplier follows a systolic approach to processing the words: instead of processing the vector polynomial bit by bit or fully in parallel, the proposed multiplier recursively processes data as 16-bit words. When employed in the ECC processor, this multiplier drastically reduces total area utilization. The complete modular multiplier and ECC processor module are synthesized and simulated using the Xilinx 14.4 software. Experimental findings show a remarkable improvement in area efficiency compared with other such architectures.
Keywords: VLSI; computational complexity; embedded systems; field programmable gate arrays; multiplying circuits; public key cryptography; ECC processor; FPGA implementations; VLSI architecture; Xilinx 14.4 software; area efficient elliptic curve cryptographic processor; cryptographic tool; double point multiplication algorithm; embedded environment; embedded system; field programmable gate array; modular multiplier; public key algorithms; security strength; space complexities; systolic approach; time complexities; total area utilization vector polynomial bit; words processing; Encryption; Integrated circuits; Latches; Elliptic Curve Cryptography; double point multiplication; finite field multiplier; public key Cryptography; security (ID#: 15-7502)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7150899&isnumber=7150576
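
The operation such an ECC core accelerates is scalar (point) multiplication. As a point of reference, the textbook double-and-add algorithm over a toy prime-field curve is sketched below; this is illustrative only, since the paper works with a modified double point multiplication algorithm and binary-field hardware:

P, A = 17, 2   # toy curve y^2 = x^3 + 2x + 2 over GF(17)

def ec_add(p1, p2):
    """Affine point addition; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    """Left-to-right double-and-add; hardware ECC cores accelerate this loop."""
    result = None
    for bit in bin(k)[2:]:
        result = ec_add(result, result)          # double
        if bit == "1":
            result = ec_add(result, point)       # add
    return result

print(scalar_mult(9, (5, 1)))   # 9 * (5,1) on the toy curve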

 

Raj, M.M.E.; Julian, A., “Design and Implementation of Anti-Theft ATM Machine Using Embedded Systems,” in Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, vol., no., pp. 1–5, 19–20 March 2015. doi:10.1109/ICCPCT.2015.7159316
Abstract: Automated teller machine (ATM) security is the field of study that aims at solutions providing multiple points of protection against physical and electronic theft from ATMs, and at protecting their installations. From anti-skimming defense systems to silent alarm systems, integrated ATM video surveillance cameras, and ATM monitoring options, security specialists help operators get more out of ATM security and loss prevention systems. The implementation here is achieved using machine-to-machine (M2M) communications technology, a topic that has recently attracted much attention; it provides real-time monitoring and control without the need for human intervention. The M2M platform suggests a new system architecture for positioning and monitoring applications with wider coverage and higher communication efficiency. The aim of the proposed work is to implement a low-cost standalone embedded web server (EWS) based on an ARM11 processor and the Linux operating system using a Raspberry Pi. It offers a robust networking solution with a wide range of application areas over the internet: the web server can run on an embedded system with limited resources to serve embedded web pages to a browser. The setup is proposed for ATM security and comprises modules for shutter lock authentication, web-enabled control, sensors, and camera control.
Keywords: Web sites; automatic teller machines; computer crime; computerised monitoring; data protection; embedded systems; message authentication; software architecture; video cameras; video surveillance; ARM11 processor; ATM loss prevention systems; ATM monitoring; ATM security; ATM video surveillance cameras; EWS; Linux operating system; M2M communications technology; M2M platform; Raspberry Pi; Web browser; Web enabled control; anti-skimming defend systems; anti-theft ATM machine; automated teller machines security; camera control; communication efficiency; electronic theft; embedded Web page; embedded Web server; installations protection; machine-to-machine communications technology; monitoring applications; physical theft; positioning applications; real-time monitoring; sensors; shutter lock authentication; silent indicate systems; system architecture; Computers; Monitoring; Online banking; Radio frequency; Radiofrequency identification; Security; Web servers; Embedded System; M2M; RF Communication; Web Server (ID#: 15-7503)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159316&isnumber=7159156

 

Shinde, A.S.; Bendre, V., “An Embedded Fingerprint Authentication System,” in Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, vol., no., pp. 205–208, 26–27 Feb. 2015. doi:10.1109/ICCUBEA.2015.45
Abstract: Fingerprint authentication is one of the most reliable and widely used personal identification methods. However, manual fingerprint authentication is so tedious, inaccurate, time-consuming, and costly that it cannot meet today’s increasing performance requirements; an automatic fingerprint authentication system (AFAS) is widely needed. It plays an essential role in forensic and civilian applications such as criminal identification, access control, and ATM card verification. This paper describes the design and implementation of an embedded fingerprint authentication system that operates in two stages: minutia extraction and minutia matching. The present technological era demands reliable and cost-effective personal authentication systems for a large number of everyday applications where information security and privacy are required, and biometric authentication techniques combined with embedded systems technologies offer a compelling solution to this need. This paper explains the hardware-software co-design responsible for matching two fingerprint minutiae sets and suggests the use of reconfigurable architectures for automatic fingerprint authentication. Moreover, it describes the implementation of the fingerprint algorithm on a Spartan-6 FPGA as a suitable portable and low-cost device. The experimental results show that the system meets the response-time requirements of an automatic fingerprint authentication system at high speed using hardware-software co-design.
Keywords: data privacy; digital forensics; embedded systems; field programmable gate arrays; hardware-software codesign; message authentication; AFAS; ATM card verification; Spartan-6 FPGA; access control; and applications; automatic fingerprint authentication system; biometrics authentication techniques; criminal identification; daily use applications; embedded system; field programmable gate array; fingerprint minutiae sets; forensic applications; hardware-software codesign; manual fingerprint authentication; minutia extraction; minutia matching; personal identification method; privacy performance; reconfigurable architectures; response time requirements; security performance; Authentication; Coprocessors; Databases; Field programmable gate arrays; Fingerprint recognition; Hardware; Portable computers; Biometrics; Embedded system; Reconfigurable; fingerprint; matching; minutia (ID#: 15-7504)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155835&isnumber=7155781
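
The minutia-matching stage can be illustrated with a toy tolerance-box matcher. The sketch below is a simplification (a real matcher would first align the two sets and use more robust pairing); it counts minutiae, each given as (x, y, theta), that agree within distance and angle tolerances:

import math

def match_score(set_a, set_b, d_tol=10.0, a_tol=math.radians(15)):
    """Fraction of minutiae in set_a with a compatible partner in set_b."""
    used, matched = set(), 0
    for (xa, ya, ta) in set_a:
        for j, (xb, yb, tb) in enumerate(set_b):
            if j in used:
                continue
            dist = math.hypot(xa - xb, ya - yb)
            dang = abs((ta - tb + math.pi) % (2 * math.pi) - math.pi)
            if dist <= d_tol and dang <= a_tol:
                used.add(j)
                matched += 1
                break
    return matched / max(len(set_a), len(set_b), 1)

enrolled = [(100, 120, 0.5), (150, 80, 1.2), (60, 200, 2.8)]
probe    = [(102, 118, 0.6), (151, 83, 1.1), (10, 10, 0.0)]
print(match_score(enrolled, probe) > 0.6)   # accept/reject decision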

 

Jaiswal, A.S.; Baporikar, V., “Embedded Wireless Data Acquisition System for Unmanned Vehicle in Underwater Environment,” in Underwater Technology (UT), 2015 IEEE, vol., no., pp. 1–6, 23–25 Feb. 2015. doi:10.1109/UT.2015.7108223
Abstract: Underwater robots can record data that is difficult for humans to gather. In recent years, robotic underwater vehicles have become useful to a variety of industrial and civil sectors in exploring bodies of water, and they are used extensively by the scientific community to study the ocean, fresh water, and the underwater environment. ZigBee is an efficient and effective wireless network standard for wireless control and monitoring applications, and an alternative technology that has changed connectivity between communicating systems. The objective of this model is to design a wireless underwater robot for security purposes and to better understand water and its environment with electronics, motion control, and a sensor system. This paper presents an implemented model of an embedded wireless data acquisition system using ZigBee, controlled by a PIC microcontroller programmed in embedded C. A wireless rotating camera captures images and video, while sonar, depth, and temperature sensors acquire data and transmit it to the user’s computer over ZigBee. A DC motor, controlled wirelessly by the user, moves the robot. In our implementation, the PIC acts as the central data acquisition system, controlling and acquiring data from the different subsystems of an unmanned underwater vehicle. This new use of ZigBee as a medium for data acquisition will be useful for cleaning, monitoring, and understanding clean and unclean underwater environments.
Keywords: Zigbee; data acquisition; geophysical equipment; geophysical techniques; remotely operated vehicles; wireless sensor networks; Central Data Acquisition System; PIC microcontroller; Zigbee data acquisition; civil sectors; clean underwater environment; communicating systems; depth sensor; embedded C language; embedded wireless data acquisition system; fresh water; industrial sectors; motion control; record data; robotic underwater vehicles; scientific community; sensor system; sonar sensor; temperature sensor; underwater environment; underwater robots; unmanned underwater vehicle; water bodies; wireless control application; wireless monitoring application; wireless network standard; wireless rotating camera; wireless underwater robot; Acoustics; Communication system security; DC motors; Monitoring; Process control; Rivers; Wireless communication; Embedded system PIC; Zigbee data acquisition; ZigBee; wireless network (ID#: 15-7505)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7108223&isnumber=7108213

 

Khandal, D.; Somwanshi, D., “A Novel Cost Effective Access Control and Auto Filling Form System Using QR Code,” in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, vol., no., pp. 1–5, 10–13 Aug. 2015. doi:10.1109/ICACCI.2015.7275575
Abstract: QR codes store information in two-dimensional grids that can be decoded quickly. The work proposed here extends quick response (QR) code encoding and decoding to design a new articulated user authentication and access control mechanism, and also proposes a new simultaneous registration system for offices and organizations. The proposed system retrieves a candidate’s information from their QR identification code and transfers the data to a digital application form, while granting authentication to authorized QR images from the database. The system can improve quality of service and thus increase the productivity of any organization.
Keywords: QR codes; authorisation; cryptography; decoding; image coding; information retrieval; information storage; quality of service; QR identification code; articulated user authentication design; authorized QR image; auto filling form system; candidate information retrieval; cost effective access control system; data transfer; decoding implementation; digital application form; encoding implementation; information storage; offices; organizations; quality of service improvement; quick response code; registration system; two-dimensional grid; Decoding; Handwriting recognition; IEC; ISO; Image recognition; Magnetic resonance imaging; Monitoring; Authentication; Automated filling form; Code Reader; Embedded system; Encoding-Decoding; Proteus; Security (ID#: 15-7506)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275575&isnumber=7275573
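
The paper does not spell out its authentication scheme, but one plausible realization is to sign the candidate record inside the QR payload so that a forged code cannot auto-fill the form. A minimal sketch with Python’s standard library follows; the shared key and record fields are hypothetical:

import base64, hashlib, hmac, json

SECRET = b"registrar-secret"     # hypothetical key shared with the issuing office

def make_qr_payload(record):
    """Encode candidate data plus an HMAC tag as the QR code's text content."""
    body = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).digest()
    return base64.b64encode(body + tag).decode()

def verify_and_fill(payload):
    """Return the form fields only if the HMAC tag checks out."""
    raw = base64.b64decode(payload)
    body, tag = raw[:-32], raw[-32:]
    if not hmac.compare_digest(tag, hmac.new(SECRET, body, hashlib.sha256).digest()):
        return None                      # forged or corrupted QR code
    return json.loads(body)              # fields for the auto-filled form

qr_text = make_qr_payload({"name": "A. Candidate", "id": "EMP-0042"})
print(verify_and_fill(qr_text))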

 

Strobel, D.; Bache, F.; Oswald, D.; Schellenberg, F.; Paar, C., “SCANDALee: A Side-ChANnel-based DisAssembLer Using Local Electromagnetic Emanations,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015, vol., no., pp. 139–144, 9–13 March 2015. doi: (not provided)
Abstract: Side-channel analysis has become a well-established topic in the scientific community and industry over the last one and a half decades. Somewhat surprisingly, the vast majority of work on side-channel analysis has been restricted to the “use case” of attacking cryptographic implementations through the recovery of keys. In this contribution, we show how side-channel analysis can be used for extracting code from embedded systems based on a CPU’s electromagnetic emanation. There are many applications within and outside the security community where this is desirable. In cryptography, it can, e.g., be used for recovering proprietary ciphers and security protocols. Another broad application field is general security and reverse engineering, e.g., for detecting IP violations of firmware or for debugging embedded systems when there is no debug interface or it is proprietary. A core feature of our approach is that we take localized electromagnetic measurements that are spatially distributed over the IC being analyzed. Given these multiple inputs, we model code extraction as a classification problem that we solve with supervised learning algorithms, applying a variant of linear discriminant analysis to distinguish between the multiple classes. In contrast to previous approaches, which reported instruction recognition rates between 40-70%, our approach detects more than 95% of all instructions for test code, and close to 90% for real-world code. The methods are thus very relevant for use in practice. Our method performs dynamic code recognition, which has both advantages (only the program parts that are actually executed are observed) and limitations (rare code executions are difficult to observe).
Keywords: cryptographic protocols; firmware; learning (artificial intelligence); program debugging; reverse engineering; SCANDALee; classification problem; cryptography; dynamic code recognition; embedded system debugging; firmware IP violation detection; general security; linear discriminant analysis; local electromagnetic emanations; localized electromagnetic measurements; proprietary ciphers; security protocols; side-channel analysis; side-channel-based disassembler; supervised learning algorithm; Algorithm design and analysis; Clocks; Feature extraction; Position measurement; Probes; Reverse engineering; Security (ID#: 15-7507)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092372&isnumber=7092347
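
The classification step maps naturally onto an off-the-shelf LDA implementation. The sketch below trains scikit-learn’s LinearDiscriminantAnalysis to label instructions, with synthetic data standing in for the spatially distributed EM measurements; the instruction set and feature dimensions are invented for illustration:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
INSTRUCTIONS = ["MOV", "ADD", "MUL", "LDR"]
N_FEATURES = 64                      # e.g., samples concatenated from several probes
centers = rng.normal(size=(len(INSTRUCTIONS), N_FEATURES))

def fake_traces(n_per_class):
    """Synthetic stand-in for EM traces: one Gaussian cloud per instruction."""
    X, y = [], []
    for label in range(len(INSTRUCTIONS)):
        X.append(centers[label] + 0.5 * rng.normal(size=(n_per_class, N_FEATURES)))
        y += [label] * n_per_class
    return np.vstack(X), np.array(y)

X_train, y_train = fake_traces(200)
X_test, y_test = fake_traces(50)
clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
print("instruction recognition rate:", clf.score(X_test, y_test))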

 

Rivière, L.; Bringer, J.; Thanh-Ha Le; Chabanne, H., “A Novel Simulation Approach for Fault Injection Resistance Evaluation on Smart Cards,” in Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, vol., no., pp. 1–8, 13–17 April 2015. doi:10.1109/ICSTW.2015.7107460
Abstract: Physical perturbations are performed against embedded systems that can contain valuable data. Such devices, and in particular smart cards, are targeted because potential attackers hold them. Embedded system security must hold against intentional hardware failures that can result in software errors: with malicious intent, an attacker could exploit such errors to find out secret data or disrupt a transaction. Simulation techniques help to point out fault injection vulnerabilities at an early stage of the development process. This paper proposes a generic fault injection simulation tool whose particularity is to embed the injection mechanism into the smart card source code. By its embedded nature, the Embedded Fault Simulator (EFS) allows us to perform fault injection simulations and side-channel analyses simultaneously, making it possible to achieve combined attacks and multiple-fault attacks and to perform backward analyses. We appraise our approach on real, modern, and complex smart card systems under data and control-flow fault models, and illustrate the EFS capabilities by performing a practical combined attack on an Advanced Encryption Standard (AES) implementation.
Keywords: cryptography; fault simulation; smart cards; AES; EFS; advanced encryption standard; backward analyses; complex smart card systems; control flow fault models; embedded fault simulator; fault injection resistance evaluation; fault injection simulations; generic fault injection simulation tool; multiple fault attacks; side-channel analyses; smart card source code; Data models; Hardware; Object oriented modeling; Registers; Security; Smart cards; Software; Fault injection; Physical attack; combined attack; data modification; embedded systems; instruction skip; side-channel attack; smart card (ID#: 15-7508)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107460&isnumber=7107396
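
The instruction-skip fault model the paper exercises can be mimicked on a toy interpreter: simulate every possible single skip and ask which one defeats a security check. The following sketch is a drastic simplification (the EFS embeds its injector in real smart card source code), but it conveys the idea of such a backward analysis:

def run(program, pin_ok, skip=None):
    """Execute a toy instruction list; optionally skip one instruction (fault)."""
    granted = False
    for i, instr in enumerate(program):
        if i == skip:
            continue                      # injected fault: instruction skip
        if instr == "check_pin" and not pin_ok:
            return False                  # normal path: wrong PIN aborts
        if instr == "grant_access":
            granted = True
    return granted

program = ["init", "check_pin", "grant_access"]

# Backward analysis: which single skips let a wrong PIN through?
for i, instr in enumerate(program):
    if run(program, pin_ok=False, skip=i):
        print(f"skipping instruction {i} ({instr}) bypasses the PIN check")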

 

Ambrose, J.A.; Ragel, R.G.; Jayasinghe, D.; Tuo Li; Parameswaran, S., “Side Channel Attacks in Embedded Systems: A Tale of Hostilities and Deterrence,” in Quality Electronic Design (ISQED), 2015 16th International Symposium on, vol., no., pp. 452–459, 2–4 March 2015. doi:10.1109/ISQED.2015.7085468
Abstract: Security of embedded computing systems is becoming paramount as these devices become more ubiquitous, contain personal information, and are increasingly used for financial transactions. Side channel attacks, in particular, have been effective in obtaining the secret keys which protect information. In this paper we classify side channel attacks, demonstrate a selection of them, and classify the popular countermeasures. The paper paints an overall picture for a researcher or practitioner who seeks to understand, or begin to work in, the area of side channel attacks in embedded systems.
Keywords: embedded systems; security of data; embedded computing system; embedded system; financial transaction; personal information; security; side channel attack; Algorithm design and analysis; Correlation; Embedded systems; Encryption; Power demand; Timing (ID#: 15-7509)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7085468&isnumber=7085355
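
Among the attacks such a survey covers, correlation power analysis (CPA) illustrates the basic side-channel workflow well. The following toy sketch is not from the paper: it assumes a simple Hamming-weight leakage of plaintext XOR key rather than a full cipher round, and recovers a key byte by correlating modeled leakage against measured power:

import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)])   # Hamming-weight table

def cpa_recover_key_byte(plaintexts, traces):
    """Rank key guesses by correlation between modeled and measured leakage."""
    corrs = [np.corrcoef(HW[plaintexts ^ g], traces)[0, 1] for g in range(256)]
    return int(np.argmax(corrs))

rng = np.random.default_rng(0)
key_byte = 0x3C
pts = rng.integers(0, 256, size=500)
traces = HW[pts ^ key_byte] + 0.8 * rng.normal(size=500)  # noisy leakage
print(hex(cpa_recover_key_byte(pts, traces)))             # 0x3c expected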

 

Ghosh, S.; Das, S.J.; Paul, R.; Chakrabarti, A., “Multicore Encryption and Authentication on a Reconfigurable Hardware,” in Recent Trends in Information Systems (ReTIS), 2015 IEEE 2nd International Conference on, vol., no., pp. 173–177, 9–11 July 2015. doi:10.1109/ReTIS.2015.7232873
Abstract: Security has always been the toughest challenge in data communication; at the same time, it is the biggest necessity in transmitting confidential data, since sensitive data are often at stake when deployed on a network. Embedded system design is a very popular research activity, with a wide range of applications including security and surveillance, personal digital assistants, biomedical systems, and mobile and pervasive communication gadgets, along with great speed compared to popular software designs. Most embedded system applications involve data communication between multiple parties, and sensor technology requires physically secured systems, which can be achieved with cryptographic and hashing algorithms. However, executing encryption and hashing together on a single processor costs system efficiency and speed. To overcome this shortcoming, a multi-core system capable of executing authentication and encryption in parallel is proposed. In this proposal, an encryption algorithm and a hash algorithm are placed on the two ARM Cortex processors of a ZYNQ 7020-clg484 FPGA board using the ISE 14.4 design suite. The true parallel execution of both algorithms increases system throughput, and soft-core IPs (RS232 and Ethernet) are placed in the FPGA region to handle realtime data.
Keywords: cryptography; data communication; data privacy; field programmable gate arrays; message authentication; parallel processing; ARM cortex processor; Ethernet; ISE 14.4 design suite; RS232; ZYNQ 7020-clg484 FPGA board; confidential data transmission; cryptographic algorithm; embedded system applications; embedded system design; hashing algorithm; multicore authentication; multicore encryption; parallel implementation; physically secured systems; reconfigurable hardware; security; sensor technology; soft core IPs; Algorithm design and analysis; Authentication; Encryption; Field programmable gate arrays; Hardware; Throughput (ID#: 15-7510)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232873&isnumber=7232836

 

Davi, L.; Hanreich, M.; Paul, D.; Sadeghi, A.-R.; Koeberl, P.; Sullivan, D.; Arias, O.; Jin, Y., “HAFIX: Hardware-Assisted Flow Integrity eXtension,” in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, vol., no., pp. 1–6, 8–12 June 2015. doi:10.1145/2744769.2744847
Abstract: Code-reuse attacks like return-oriented programming (ROP) pose a severe threat to modern software on diverse processor architectures. Designing practical and secure defenses against code-reuse attacks is highly challenging and currently subject to intense research. However, no secure and practical system-level solutions exist so far, since a large number of proposed defenses have been successfully bypassed. To tackle this attack, we present HAFIX (Hardware-Assisted Flow Integrity Extension), a defense against code-reuse attacks exploiting backward edges (returns). HAFIX provides fine-grained and practical protection, and serves as an enabling technology for future control-flow integrity instantiations. This paper presents the implementation and evaluation of HAFIX for the Intel® Siskiyou Peak and SPARC embedded system architectures, and demonstrates its security and efficiency in code-reuse protection while incurring only 2% performance overhead.
Keywords: data protection; software reusability; HAFIX; Intel Siskiyou Peak; ROP; SPARC embedded system architectures; backward edges; code-reuse attacks; code-reuse protection; control-flow integrity instantiations; hardware-assisted flow integrity extension; processor architectures; return-oriented programming; Benchmark testing; Computer architecture; Hardware; Pipelines; Program processors; Random access memory; Registers (ID#: 15-7511)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167258&isnumber=7167177
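
HAFIX enforces its backward-edge policy with dedicated instructions and hardware state, but the invariant it protects resembles that of a shadow stack: a return may only transfer control to the address recorded by the matching call. A software analogue of that invariant, illustrative only and not the HAFIX mechanism itself:

shadow_stack = []

def call(return_addr, target):
    """On call: record the legitimate return address in protected state."""
    shadow_stack.append(return_addr)
    return target

def ret(addr_from_stack):
    """On return: a mismatch with the recorded address signals a ROP attempt."""
    expected = shadow_stack.pop()
    if addr_from_stack != expected:
        raise RuntimeError("control-flow violation: corrupted return address")
    return addr_from_stack

call(0x4005C0, 0x400800)           # enter callee
ret(0x4005C0)                      # legitimate return
call(0x4005C0, 0x400800)
try:
    ret(0x400F00)                  # attacker-supplied gadget address
except RuntimeError as e:
    print(e)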

 

Sixing Lu; Minjun Seo; Lysecky, R., “Timing-Based Anomaly Detection in Embedded Systems,” in Design Automation Conference (ASP-DAC), 2015 20th Asia and South Pacific, vol., no., pp. 809–814, 19–22 Jan. 2015. doi:10.1109/ASPDAC.2015.7059110
Abstract: Recent research has demonstrated that many systems are vulnerable to numerous types of malicious activity. As the pervasiveness of embedded systems with network connectivity continues to increase, embedded systems security has become a critical challenge. However, most existing techniques for detecting malware utilize software-based methods that incur significant performance overheads that are often not feasible in embedded systems. In this paper, we present an overview of a novel method for non-intrusively detecting malware in embedded system. The proposed technique utilizes timing requirements to improve detection performance and provide increased resilience to mimicry attacks.
Keywords: embedded systems; invasive software; object detection; timing circuits; detection performance improvement; embedded system security; malicious activity; mimicry attacks; network connectivity; nonintrusively detecting malware; performance overheads; software-based methods; timing-based anomaly detection; Embedded systems; Hardware; Malware; Monitoring; Runtime; Timing (ID#: 15-7512)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7059110&isnumber=7058915
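
The underlying idea of timing-based detection can be sketched as learned per-section execution-time bounds that are checked at run time. The fragment below is a minimal illustration (the slack parameter and sample values are invented; the paper’s monitor operates non-intrusively rather than as in-process code):

class TimingMonitor:
    """Flag a code section whose measured execution time leaves learned bounds."""
    def __init__(self, slack=0.05):
        self.bounds = {}                  # section -> (lo, hi)
        self.slack = slack

    def train(self, section, samples):
        lo, hi = min(samples), max(samples)
        self.bounds[section] = (lo * (1 - self.slack), hi * (1 + self.slack))

    def check(self, section, measured):
        lo, hi = self.bounds[section]
        return lo <= measured <= hi       # False -> possible malware

mon = TimingMonitor()
mon.train("crypto_loop", [10.1, 10.3, 10.2, 10.4])   # microseconds, say
print(mon.check("crypto_loop", 10.25))   # True: within the learned profile
print(mon.check("crypto_loop", 14.90))   # False: anomalous timing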

 

Lu, Zhaojun; Pei, Gen; Liu, Bojun; Liu, Zhenglin, “Hardware Implementation of Negative Selection Algorithm for Malware Detection,” in Electron Devices and Solid-State Circuits (EDSSC), 2015 IEEE International Conference on, vol., no., pp. 301–304, 1–4 June 2015. doi:10.1109/EDSSC.2015.7285110
Abstract: Detecting malware has been an important unsolved issue in the information security field. The negative selection algorithm, one of the core algorithms of artificial immune systems, can be applied to detect malware, and the negative selection algorithm based on binary coding is one of its most basic and important detection models. However, applications of the negative selection algorithm exist mainly in software and network systems; at present there is no ready-made approach for applying it to detect malicious attacks on embedded systems. This paper proposes an approach that adds a hardware immune mechanism to an embedded processor to defend against malicious attacks, and improves the traditional negative selection algorithm so that, in further work, the algorithm can actually be applied to malware detection for embedded systems.
Keywords: Conferences; Electron devices; Solid state circuits; AIS; Detection; Hardware Immune Mechanism; Malware; NSA
(ID#: 15-7513)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7285110&isnumber=7285012
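
The binary-coded negative selection model the paper builds on is compact enough to sketch. In the censoring phase, random detectors that match any “self” string under the r-contiguous-bits rule are discarded; the surviving detectors then flag non-self samples. A toy Python version follows, with all parameters chosen purely for illustration:

import random

L, R = 12, 6    # string length and r-contiguous matching threshold

def matches(a, b):
    """r-contiguous-bits rule: a and b agree on at least R consecutive positions."""
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        if run >= R:
            return True
    return False

def censor(self_set, n_detectors, rng=random.Random(1)):
    """Negative selection: keep random detectors that match no 'self' string."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = "".join(rng.choice("01") for _ in range(L))
        if not any(matches(cand, s) for s in self_set):
            detectors.append(cand)
    return detectors

self_set = {"000011110000", "000011111111"}    # normal-behavior signatures
detectors = censor(self_set, 20)
sample = "101010101010"
print(any(matches(d, sample) for d in detectors))  # True if a detector covers it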

 

Sujitha, R.; Devipriya, A., “Automatic Identification of Accidents and to Improve Notification Using Emerging Technologies,” in Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, vol., no., pp. 1–4, 25–27 Feb. 2015. doi:10.1109/ICSNS.2015.7292412
Abstract: New communication technologies integrated into modern vehicles offer better assistance to people injured in traffic accidents, and recent studies show how hybrid communication capabilities can improve the overall rescue process. In a variety of areas there is a need for a system capable of identifying accidents and characterizing their severity using the KDD process. The system proposed here considers the most relevant variables that can characterize the severity of an accident (such as vehicle speed, vehicle location, and accelerometer readings) using embedded systems. It consists of wireless network devices, namely a Global Positioning System (GPS) receiver and ZigBee; GPS determines the location of the vehicle. The proposed system contains a single-board embedded system equipped with GPS and ZigBee, along with a microcontroller installed in the vehicle’s on-board unit (OBU). Based on vehicle motion, a report is generated for the emergency services. If only a small accident has occurred, or if there is no serious danger to anyone’s life, the alert message can be cancelled by the driver or anyone nearby using a switch, to avoid sending the message and so save the valuable time of the medical rescue team. A fast and accurate estimate of accident severity gives the emergency services reliable facts as soon as possible and helps save lives.
Keywords: Accelerometers; Accidents; Databases; Emergency services; Global Positioning System; Servers; Vehicles; GPS; OBU; VANET; ZigBee (ID#: 15-7514)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292412&isnumber=7292366

 

Balasundaram, Anuradha; Chenniappan, Vivekanandan, “Optimal Code Layout for Reducing Energy Consumption in Embedded Systems,” in Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, vol., no., pp. 1–5, 25–27 Feb. 2015. doi:10.1109/ICSNS.2015.7292406
Abstract: Most microprocessors spend the majority of their time waiting for data to be transferred from the slow memory devices connected to them, resulting in the memory wall problem. The main aim of this paper is to reduce the memory wall problem by increasing not only processor speed but also memory speed, which can be achieved by placing an efficient, small memory near the processor so that the energy efficiency of the system improves. Such a memory is called scratch pad memory (SPM). SPM and cache memory both play a vital role in improving system efficiency, and repositioning code between on-chip and off-chip memory improves the utilization of a multiprocessing embedded system. An optimal code layout is developed to place the code in memory so as to prevent cache conflicts and misses. Many researchers have discussed using SPM or cache memory to improve system efficiency, but combining the two has not been done. In this work, both SPM and cache memory are combined in the proposed meta-heuristic technique: a two-stage meta-heuristic model in which, along with the SPM, a cache code layout is developed to place the code, yielding better performance than the other two models considered, namely an ILP model and a heuristic model. It is found that the two-stage meta-heuristic model is more efficient and consumes less energy than the other two models.
Keywords: Cache memory; Embedded systems; Energy consumption; Layout; Memory management; Random access memory; System-on-chip; Heuristic; ILP; Memory wall problem; Meta heuristics; Scratch Pad Memory (ID#: 15-7515)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292406&isnumber=7292366
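
At its core, the placement problem the paper attacks is knapsack-like: pack the most profitable code blocks into a small on-chip memory. A greedy toy heuristic conveys the flavor; the paper’s ILP and two-stage meta-heuristic models are far more sophisticated, and the block sizes and access counts below are invented:

def place_in_spm(blocks, capacity):
    """Greedy SPM placement: most frequently executed bytes first.
    blocks: list of (name, size_in_bytes, access_count)."""
    order = sorted(blocks, key=lambda b: b[2] / b[1], reverse=True)
    placed, used = [], 0
    for name, size, hits in order:
        if used + size <= capacity:
            placed.append(name)
            used += size
    return placed

blocks = [("hot_loop", 512, 90000), ("isr", 256, 30000), ("init", 2048, 10)]
print(place_in_spm(blocks, capacity=1024))   # ['hot_loop', 'isr']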
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Expert Systems Security 2015

 

 
SoS Logo

Expert Systems Security

2015


Expert systems have the potential for efficiency, scalability, and economy in systems security. The research work cited here looks at a range of systems including SCADA, Internet of Things, and other cyber physical systems. The works address scalability, resilience, and measurement.



Rani, C.; Goel, S., “CSAAES: An Expert System for Cyber Security Attack Awareness,” in Computing, Communication & Automation (ICCCA), 2015 International Conference on, vol., no., pp. 242–245, 15–16 May 2015. doi:10.1109/CCAA.2015.7148381
Abstract: Today the internet is used by almost everyone, from individuals to organizations, and with this vast usage a lot of information is exposed online and available to hackers. Many attacks on computer systems therefore occur through the internet; these attacks may destroy the information on a particular system or use the system to perform further attacks. Users need protection from these attacks, but while a user may notice problems in the functioning of a computer, they often have no means of identifying and solving them. Knowledge about the different types of attacks and their effects on a system is available from various sources, as is guidance on handling them, but identifying which attack is being performed on a given computer system is difficult. The expert system designed here can identify which type of attack is being performed on a system, its symptoms, and the countermeasures to resolve it. It is a platform for cyber attack security awareness among internet users.
Keywords: Internet; expert systems; security of data; CSAAES expert system; Internet; cyber security attack awareness; information exposure; Computer crime; Computers; Expert systems; Internet; Software; attacks; countermeasures; expert system; security; security framework (ID#: 15-7382)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148381&isnumber=7148334
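
The style of inference such a system performs can be sketched as symptom-to-attack rules with attached countermeasures. The fragment below is a toy illustration (the rule base is hypothetical, not the CSAAES knowledge base): an attack is reported when all of its symptoms appear among the observations.

RULES = {
    "denial of service":  ({"slow network", "service unavailable"},
                           "rate-limit traffic and enable SYN cookies"),
    "phishing":           ({"suspicious email", "credential prompt"},
                           "verify sender domain; reset exposed passwords"),
    "malware infection":  ({"high cpu", "unknown processes", "popups"},
                           "isolate host and run an offline scan"),
}

def diagnose(observed):
    """Return every attack whose symptom set is fully covered by observations."""
    return [(attack, advice)
            for attack, (symptoms, advice) in RULES.items()
            if symptoms <= observed]

for attack, advice in diagnose({"slow network", "service unavailable", "high cpu"}):
    print(f"{attack}: {advice}")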

 

Yost, J., “The March of IDES: A History of the Intrusion Detection Expert System,” in IEEE Annals of the History of Computing, vol. PP, no. 99, pp. 1–1, 13 July 2015. doi:10.1109/MAHC.2015.41
Abstract: This paper examines the pre-history and history of early intrusion detection expert systems by focusing on the first such system, the Intrusion Detection Expert System, or IDES, which was developed in the second half of the 1980s at SRI International (and on SRI’s follow-on Next Generation Intrusion Detection Expert System, or NIDES, in the early-to-mid 1990s). It also presents and briefly analyzes the outsized contribution of women scientists to leadership in this area of computer security research and development, contrasting it with the largely male-led work on “high-assurance” operating system design, development, and standard-setting.
Keywords: Communities; Computer security; Computers; Expert systems; History; Intrusion detection (ID#: 15-7383)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155454&isnumber=5255174

 

Neelam, Sahil; Sood, Sandeep; Mehmi, Sandeep; Dogra, Shikha, “Artificial Intelligence for Designing User Profiling System for Cloud Computing Security: Experiment,” in Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, pp. 51–58, 19–20 March 2015. doi:10.1109/ICACEA.2015.7164645
Abstract: In cloud computing security, the existing mechanisms (anti-virus programs, authentication, firewalls) are not able to withstand the dynamic nature of threats. A user profiling system, which registers users’ activities in order to analyze their behavior, augments the security system to work in a proactive and reactive manner and provides enhanced security. This paper focuses on designing a user profiling system for the cloud environment using artificial intelligence techniques, studies the behavior of the user profiling system, and proposes a new hybrid approach that delivers a comprehensive user profiling system for cloud computing security.
Keywords: artificial intelligence; authorisation; cloud computing; firewalls; antivirus programs; artificial intelligence techniques; authentications; cloud computing security; cloud environment; proactive manner; reactive manner; user activities; user behavior; user profiling system; Artificial intelligence; Cloud computing; Computational modeling; Fuzzy logic; Fuzzy systems; Genetic algorithms; Security; Artificial Intelligence; Artificial Neural Networks; Cloud Computing; Datacenters; Expert Systems; Genetics; Machine Learning; Multi-tenancy; Networking Systems; Pay-as-you-go Model (ID#: 15-7384)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164645&isnumber=7164643

 

Li Zeng Xin; Rong Xin Yan, “Accounting Information System Risk Assessment Algorithm Based on Analytic Hierarchy Process,” in Measuring Technology and Mechatronics Automation (ICMTMA), 2015 Seventh International Conference on, vol., no., pp. 72–75, 13–14 June 2015. doi:10.1109/ICMTMA.2015.25
Abstract: So far, there has been little research on accounting information system risk assessment in our country. In order to provide a basis for meeting the security needs of accounting information systems, reduce security risk and financial losses, and improve work efficiency, a model of enterprise accounting information system risk assessment based on the Analytic Hierarchy Process is proposed. The analytic hierarchy process model is applied to the risk assessment of one corporate accounting information system. It can be concluded that the proposed method obtains good risk assessment results and offers strong operability and effectiveness for risk assessment of enterprise accounting information systems.
Keywords: accounting; analytic hierarchy process; information systems; risk management; security of data; accounting information system security needs; analytic hierarchy process model; corporate accounting information system; enterprise accounting information system risk assessment method; financial loss; work efficiency; Analytic hierarchy process; Expert systems; Indexes; Risk management; Security; Software; Analytic Hierarchy Process; accounting information system; risk assessment (ID#: 15-7385)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7263517&isnumber=7263490
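
The AHP computation behind such an assessment is a short exercise in linear algebra: derive a priority vector from a pairwise comparison matrix via its principal eigenvector, then check consistency. A worked sketch follows; the criteria and the judgments in the matrix are hypothetical:

import numpy as np

# Hypothetical pairwise comparisons of three risk criteria on Saaty's 1-9 scale,
# e.g., asset value vs. threat likelihood vs. control weakness.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector

n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)                  # consistency index
cr = ci / 0.58                                # random index RI = 0.58 for n = 3
print("weights:", weights.round(3), "CR:", round(cr, 3))  # CR < 0.1 -> acceptable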

 

Halim, Shamimi A.; Annamalai, Muthukkaruppan; Ahmad, Mohd Sharifuddin; Ahmad, Rashidi, “Domain Expert Maintainable Inference Knowledge of Assessment Task,” in IT Convergence and Security (ICITCS), 2015 5th International Conference on, vol., no., pp. 1–5, 24–27 Aug. 2015. doi:10.1109/ICITCS.2015.7292974
Abstract: Inference and domain knowledge are the foundation of a knowledge-based system (KBS). Inference knowledge describes the steps or rules used to perform a task inference, making reference to the domain knowledge that is used. Inference knowledge is typically acquired from domain experts and communicated to the system developers to be implemented in a KBS. The explicit representation of inference knowledge eases the maintenance of evolving knowledge; however, the involvement of knowledge engineers and software developers during the maintenance phase gives rise to several problems during the system’s life-cycle. In this paper, we provide a possible way of using rule templates to abstract the inference knowledge into higher conceptual categories that are amenable to domain experts. Backed by a rule-editing user interface designed to instantiate the rule templates, the responsibility for maintaining the inference knowledge can be assigned to the domain experts, i.e., the originators of the knowledge. The paper demonstrates the feasibility of the idea by making a case of inference knowledge applied to assessment tasks such as triage decision making. Five rule templates to represent the inference knowledge of assessment tasks are proposed; we validated them through case studies in several domains and tasks, as well as through usability testing.
Keywords: Biological system modeling; Decision making; Expert systems; Knowledge engineering; Maintenance engineering; Medical services (ID#: 15-7386)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292974&isnumber=7292885

 

Rummukainen, L.; Oksama, L.; Timonen, J.; Vankka, J., “Situation Awareness Requirements for a Critical Infrastructure Monitoring Operator,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no., pp. 1–6, 14–16 April 2015. doi:10.1109/THS.2015.7225326
Abstract: This paper presents a set of situation awareness (SA) requirements for an operator who monitors critical infrastructure (CI). The requirements consist of pieces of information that the operator needs in order to be successful in their work. The purpose of this research was to define a common requirement base that can be used when designing a CI monitoring system or a user interface to support SA. The requirements can also be used during system or user interface evaluation, and as a guide for what aspects to emphasize when training new CI monitoring operators. To create the SA requirements, goal-directed task analysis (GDTA) was conducted. For GDTA, nine interview sessions were held during the research. For a clear understanding of a CI monitoring operator’s work, all interviewees were subject matter experts (SMEs) and had extensive experience in CI monitoring. Before the interviews, a day-long observation session was conducted to gather initial input for the GDTA goal hierarchy and the SA requirements. GDTA identified three goals an operator must achieve in order to be successful in their work, and they were used to define the final SA requirements. As a result, a hierarchy diagram was constructed that includes three goals: monitoring, analysis and internal communication, and external communication. The SA requirements for a CI monitoring operator include information regarding ongoing incidents in the environment and the state of systems and services in the operator’s organization.
Keywords: critical infrastructures; expert systems; task analysis; user interfaces; CI monitoring operator; CI monitoring system; GDTA goal hierarchy; critical infrastructure monitoring operator; goal-directed task analysis; hierarchy diagram; observation session; requirement base; situation awareness requirement; subject matter expert; user interface evaluation; Context; Industries; Interviews; Monitoring; Organizations; Security; User interfaces (ID#: 15-7387)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225326&isnumber=7190491

 

Esmaily, Jamal; Moradinezhad, Reza; Ghasemi, Jamal, “Intrusion Detection System Based on Multi-Layer Perceptron Neural Networks and Decision Tree,” in Information and Knowledge Technology (IKT), 2015 7th Conference on, vol., no., pp. 1–5, 26–28 May 2015. doi:10.1109/IKT.2015.7288736
Abstract: The growth of internet attacks is a major problem for today’s computer networks. Hence, implementing security methods to prevent such attacks is crucial for any computer network. With the help of Machine Learning and Data Mining techniques, Intrusion Detection Systems (IDS) are able to diagnose attacks and system anomalies more effectively. However, most of the methods studied in this field, including rule-based expert systems, are unable to successfully identify attacks whose patterns differ from the expected ones. By using Artificial Neural Networks (ANNs), it is possible to identify attacks and classify the data even when the dataset is nonlinear, limited, or incomplete. In this paper, a method based on the combination of the Decision Tree (DT) algorithm and a Multi-Layer Perceptron (MLP) ANN is proposed, which is able to identify attacks with high accuracy and reliability.
Keywords: Algorithm design and analysis; Classification algorithms; Clustering algorithms; Decision trees; Intrusion detection; Neural networks; Support vector machines; Decision Tree; Intrusion Detection Systems; Machine Learning; Neural Networks (ID#: 15-7388)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288736&isnumber=7288662
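
The DT-plus-MLP combination described above can be made concrete with a brief sketch. The following Python fragment is our illustration, not the authors’ code: the paper’s exact fusion scheme and dataset are not reproduced, so placeholder data and a simple soft-voting ensemble stand in for both.

```python
# Sketch: combine a decision tree and an MLP for intrusion detection via
# soft voting (averaged class probabilities). Data is a random placeholder
# shaped like a 41-feature KDD-style intrusion dataset.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X = np.random.rand(1000, 41)           # placeholder feature vectors
y = np.random.randint(0, 2, 1000)      # 0 = normal, 1 = attack (placeholder)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=10)),
                ("mlp", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300))],
    voting="soft")                     # average the two models' probabilities
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```

Soft voting lets an attack flagged confidently by either model still be caught, which is one plausible reading of how a tree and a network can complement each other.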

 

Desnitsky, V.A.; Kotenko, I.V.; Nogin, S.B., “Detection of Anomalies in Data for Monitoring of Security Components in the Internet of Things,” in Soft Computing and Measurements (SCM), 2015 XVIII International Conference on, vol., no., pp. 189–192, 19–21 May 2015. doi:10.1109/SCM.2015.7190452
Abstract: The increasing urgency and expansion of information systems implementing the Internet of Things (IoT) concept underscore the importance of investigating protection mechanisms against a wide range of information security threats. Such investigation is made more complex by the low degree of structuring and formalization of expert knowledge on IoT systems. The paper presents an approach to eliciting expert knowledge on the detection of anomalies in data and to using that knowledge as input for automated means of monitoring the security components of IoT systems.
Keywords: Internet of Things; information systems; monitoring; security of data;  IoT concept; data anomalies; information security threats; information systems; security components; Intelligent sensors; Monitoring; Security; Sensor systems; Software; Temperature sensors; IoT system testing; anomaly detection; expert knowledge; information security; internet of things (ID#: 15-7389)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7190452&isnumber=7190390

 

Baker, T.; Mackay, M.; Shaheed, A.; Aldawsari, B., “Security-Oriented Cloud Platform for SOA-Based SCADA,” in Cluster, Cloud and Grid Computing (CCGrid), 2015 15th IEEE/ACM International Symposium on, vol., no., pp. 961–970, 4–7 May 2015. doi:10.1109/CCGrid.2015.37
Abstract: During the last 10 years, experts in critical infrastructure security have been increasingly directing their focus and attention to the security of control structures such as Supervisory Control and Data Acquisition (SCADA) systems in the light of the move toward Internet-connected architectures. However, this more open architecture has resulted in an increasing level of risk being faced by these systems, especially as they became offered as services and utilised via Service Oriented Architectures (SOA). For example, the SOA-based SCADA architecture proposed by the AESOP project concentrated on facilitating the integration of SCADA systems with distributed services on the application layer of a cloud network. However, whilst each service specified various security goals, such as authorisation and authentication, the current AESOP model does not attempt to encompass all the necessary security requirements and features of the integrated services. This paper presents a concept for an innovative integrated cloud platform to reinforce the integrity and security of SOA-based SCADA systems that will apply in the context of Critical Infrastructures to identify the core requirements, components and features of these types of system. The paper uses the SmartGrid to highlight the applicability and importance of the proposed platform in a real world scenario.
Keywords: SCADA systems; cloud computing; critical infrastructures; distributed processing; security of data; service-oriented architecture; SCADA; SOA; cloud network; critical infrastructure security; distributed service; security-oriented cloud platform; service oriented architecture; supervisory control and data acquisition; Authorization; Cloud computing; Computer architecture; Monitoring; SCADA Service-oriented architecture; Cloud Computing; Critical Infrastructure; (ID#: 15-7390)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7152582&isnumber=7152455

 

Yamaguchi, F.; Maier, A.; Gascon, H.; Rieck, K., “Automatic Inference of Search Patterns for Taint-Style Vulnerabilities,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 797–812, 17–21 May 2015. doi:10.1109/SP.2015.54
Abstract: Taint-style vulnerabilities are a persistent problem in software development, as the recently discovered “Heartbleed” vulnerability strikingly illustrates. In this class of vulnerabilities, attacker-controlled data is passed unsanitized from an input source to a sensitive sink. While simple instances of this vulnerability class can be detected automatically, more subtle defects involving data flow across several functions or project-specific APIs are mainly discovered by manual auditing. Different techniques have been proposed to accelerate this process by searching for typical patterns of vulnerable code. However, all of these approaches require a security expert to manually model and specify appropriate patterns in practice. In this paper, we propose a method for automatically inferring search patterns for taint-style vulnerabilities in C code. Given a security-sensitive sink, such as a memory function, our method automatically identifies corresponding source-sink systems and constructs patterns that model the data flow and sanitization in these systems. The inferred patterns are expressed as traversals in a code property graph and enable efficient searching for unsanitized data flows, across several functions as well as with project-specific APIs. We demonstrate the efficacy of this approach in different experiments with 5 open-source projects. The inferred search patterns reduce the amount of code to inspect for finding known vulnerabilities by 94.9% and also enable us to uncover 8 previously unknown vulnerabilities.
Keywords: application program interfaces; data flow analysis; public domain software; security of data; software engineering; C code; attacker-controlled data; automatic inference; code property graph; data flow; data security; inferred search pattern; memory function; open-source project; project-specific API; search pattern; security-sensitive sink; sensitive sink; software development; source-sink system; taint-style vulnerability; Databases; Libraries; Payloads; Programming; Security; Software; Syntactics; Clustering; Graph Databases; Vulnerabilities (ID#: 15-7391)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163061&isnumber=7163005
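
The core search idea above, traversing data flows and flagging source-to-sink paths that never pass through a sanitizer, can be shown with a toy sketch. The real system operates on code property graphs with much richer traversals; the graph, node names, and sanitizer set below are invented for illustration.

```python
# Sketch: model code as a tiny data-flow graph and report any path from an
# attacker-controlled source to a memory sink that skips every sanitizer.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("recv_input", "parse_len"),       # attacker-controlled source
    ("parse_len", "memcpy_call"),      # flows straight into the sink
    ("recv_input", "check_bounds"),
    ("check_bounds", "memcpy_call"),   # sanitized alternative path
])
sanitizers = {"check_bounds"}          # invented sanitizer node

for path in nx.all_simple_paths(g, "recv_input", "memcpy_call"):
    if not sanitizers.intersection(path):
        print("unsanitized flow:", " -> ".join(path))
```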

 

Younis, A.A.; Malaiya, Y.K., “Comparing and Evaluating CVSS Base Metrics and Microsoft Rating System,” in Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, vol., no., pp. 252–261, 3–5 Aug. 2015. doi:10.1109/QRS.2015.44
Abstract: Evaluating the accuracy of vulnerability security risk metrics is important because incorrectly assessing a vulnerability as more critical than it is could waste the limited resources available, while ignoring a vulnerability incorrectly assessed as not critical could lead to a high-impact breach. In this paper, we compare and evaluate the performance of the CVSS Base metrics and the Microsoft Rating system. The CVSS Base metrics are the de facto standard currently used to measure the severity of individual vulnerabilities. The Microsoft Rating system, developed by Microsoft, has been used for some of the most widely used systems. Microsoft software vulnerabilities have been assessed by both the Microsoft metrics and the CVSS Base metrics, which makes their comparison feasible. The two approaches, the technical analysis approach (Microsoft) and the expert opinions approach (CVSS), differ significantly. To conduct this study, we examine 813 vulnerabilities of Internet Explorer and Windows 7. The two software systems were selected because they have a rich history of publicly available vulnerabilities and differ significantly in functionality and size. The presence of actual exploits is used for evaluating them. The results show that exploitability metrics in either system do not correlate strongly with the existence of exploits, and have a high false positive rate.
Keywords: Internet; security of data; software metrics; CVSS base metrics; Internet Explorer; Microsoft rating system; Microsoft software vulnerabilities; Windows 7; expert opinions approach; exploitability metrics; publicly available vulnerabilities; technical analysis approach; vulnerability security risk metrics; Browsers; Indexes; Internet; Measurement; Security; Software; CVSS Base Metrics; Empirical Software Engineering; Exploits; Microsoft Exploitability Index; Microsoft Rating System; Risk assessment; Severity; Software Vulnerability (ID#: 15-7392)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272940&isnumber=7272893
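
The evaluation idea, checking how well an exploitability score tracks the actual existence of exploits, can be sketched in a few lines. The scores and labels below are invented placeholders rather than the paper’s data, and the point-biserial correlation is one reasonable choice of statistic, not necessarily the authors’ exact analysis.

```python
# Sketch: correlate a continuous vulnerability score with the binary
# presence of a real exploit, and compute a simple false positive rate.
import numpy as np
from scipy.stats import pointbiserialr

cvss_exploitability = np.array([10.0, 8.6, 10.0, 4.9, 8.6, 10.0, 3.9, 8.6])
exploit_exists      = np.array([1,    0,   1,    0,   0,   1,    0,   1])

r, p = pointbiserialr(exploit_exists, cvss_exploitability)
print(f"point-biserial r = {r:.2f} (p = {p:.3f})")

threshold = 8.0                        # assumed "rated exploitable" cutoff
fp_rate = np.mean((cvss_exploitability >= threshold) & (exploit_exists == 0))
print(f"false positive rate at threshold {threshold}: {fp_rate:.2f}")
```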

 

Afzal, Z.; Lindskog, S., “Automated Testing of IDS Rules,” in Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, vol., no., pp. 1–2, 13–17 April 2015. doi:10.1109/ICSTW.2015.7107461
Abstract: As technology becomes ubiquitous, new vulnerabilities are being discovered at a rapid rate. Security experts continuously find ways to detect attempts to exploit those vulnerabilities. The outcome is an extremely large and complex rule set used by Intrusion Detection Systems (IDSs) to detect and prevent the vulnerabilities. The rule sets have become so large that it seems infeasible to verify their precision or identify overlapping rules. This work proposes a methodology consisting of a set of tools that will make rule management easier.
Keywords: program testing; security of data; IDS rules; automated testing; intrusion detection systems; Conferences; Generators; Intrusion detection; Payloads; Protocols; Servers; Testing (ID#: 15-7393)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107461&isnumber=7107396

 

Enache, A.-C.; Ioniţă, M.; Sgârciu, V., “An Immune Intelligent Approach for Security Assurance,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1–5, 8–9 June 2015. doi:10.1109/CyberSA.2015.7166116
Abstract: Information Security Assurance implies ensuring the integrity, confidentiality and availability of critical assets for an organization. The large number of events to monitor in a system that is fluid in terms of topology and the variety of new hardware or software overwhelms monitoring controls. Furthermore, the multi-faceted nature of today’s cyber threats makes them difficult even for security experts to handle and keep up-to-date with. Hence, automatic “intelligent” tools are needed to address these issues. In this paper, we describe a ‘work in progress’ contribution on an intelligence-based approach to mitigating security threats. The main contribution of this work is an anomaly-based IDS model with active response that combines artificial immune systems and swarm intelligence with the SVM classifier. Test results for the NSL-KDD dataset show the proposed approach can outperform the standard classifier in terms of attack detection rate and false alarm rate, while reducing the number of features in the dataset.
Keywords: artificial immune systems; pattern classification; security of data; support vector machines; NSL-KDD dataset; SVM classifier; anomaly based IDS model; artificial immune system; asset availability; asset confidentiality; asset integrity; attack detection rate; cyber threats; false alarm rate; immune intelligent approach; information security assurance; intrusion detection system; security threats mitigation; swarm intelligence; Feature extraction; Immune system; Intrusion detection; Particle swarm optimization; Silicon; Support vector machines; Binary Bat Algorithm; Dendritic Cell Algorithm; IDS; SVM (ID#: 15-7394)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166116&isnumber=7166109

 

Sundarkumar, G. Ganesh; Ravi, Vadlamani; Nwogu, Ifeoma; Govindaraju, Venu, “Malware Detection via API Calls, Topic Models and Machine Learning,” in Automation Science and Engineering (CASE), 2015 IEEE International Conference on, vol., no., pp. 1212–1217, 24–28 Aug. 2015. doi:10.1109/CoASE.2015.7294263
Abstract: Dissemination of malicious code, also known as malware, poses severe challenges to cyber security. Malware authors embed software in seemingly innocuous executables, unknown to a user. The malware subsequently interacts with security-critical OS resources on the host system or network in order to destroy their information or to gather sensitive information such as passwords and credit card numbers. Malware authors typically use Application Programming Interface (API) calls to perpetrate these crimes. We present a model that uses text mining and topic modeling to detect malware based on the types of API call sequences. We evaluated our technique on two publicly available datasets. We observed that Decision Tree and Support Vector Machine yielded significant results. We performed a t-test with respect to sensitivity for the two models and found that, statistically, there is no significant difference between them. We recommend the Decision Tree as it yields ‘if-then’ rules, which could be used as an early warning expert system.
Keywords: Feature extraction; Grippers; Sensitivity; Support vector machines; Text mining; Trojan horses (ID#: 15-7395)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7294263&isnumber=7294025
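
A plausible version of this pipeline, treating each sample’s API-call trace as a document, learning topic mixtures, and feeding them to a decision tree, is sketched below. This is an assumed reconstruction with toy traces, not the authors’ implementation.

```python
# Sketch: API-call "documents" -> bag-of-calls counts -> LDA topic mixtures
# -> decision tree classifier whose 'if-then' rules are human-readable.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.tree import DecisionTreeClassifier

traces = ["CreateFile WriteFile RegSetValue CreateRemoteThread",
          "CreateFile ReadFile CloseHandle",
          "RegSetValue CreateRemoteThread VirtualAllocEx WriteProcessMemory",
          "ReadFile CloseHandle GetSystemTime"]
labels = [1, 0, 1, 0]                  # 1 = malware, 0 = benign (toy labels)

counts = CountVectorizer().fit_transform(traces)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
X = lda.fit_transform(counts)          # per-sample topic mixtures

clf = DecisionTreeClassifier().fit(X, labels)
print(clf.predict(X))                  # the fitted tree encodes if-then rules
```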

 

Szpyrka, M.; Szczur, A.; Bazan, J.G.; Dydo, L., “Extracting of Temporal Patterns from Data for Hierarchical Classifiers Construction,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp. 330–335, 24–26 June 2015. doi:10.1109/CYBConf.2015.7175955
Abstract: A method for automatically extracting temporal patterns from learning data to construct hierarchical behavioral-pattern-based classifiers is considered in the paper. The presented approach can be used to complement the knowledge provided by experts or to discover the knowledge automatically if no expert knowledge is accessible. A formal description of temporal patterns is provided, and an algorithm for automatic pattern extraction and evaluation is described. A system for packet-based network traffic anomaly detection is used to illustrate the ideas considered.
Keywords: computer network security; data mining; learning (artificial intelligence); pattern classification; temporal logic; automatic pattern extraction; data temporal pattern extraction; hierarchical behavioral pattern; hierarchical classifier construction; knowledge discovery; learning data; packet-based network traffic anomaly detection; Clustering algorithms; Data mining; Decision trees; Entropy; Petri nets; Ports (Computers); Servers; LTL logic; feature extraction; hierarchical classifiers; network anomaly detection; temporal patterns (ID#: 15-7396)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175955&isnumber=7175890

 

Kobashi, T.; Yoshizawa, M.; Washizaki, H.; Fukazawa, Y.; Yoshioka, N.; Okubo, T.; Kaiya, H., “TESEM: A Tool for Verifying Security Design Pattern Applications by Model Testing,” in Software Testing, Verification and Validation (ICST), 2015 IEEE 8th International Conference on, vol., no., pp. 1–8, 13–17 April 2015. doi:10.1109/ICST.2015.7102633
Abstract: Because software developers are not necessarily security experts, identifying potential threats and vulnerabilities in the early stage of the development process (e.g., the requirement- or design-phase) is insufficient. Even if these issues are addressed at an early stage, it does not guarantee that the final software product actually satisfies security requirements. To realize secure designs, we propose extended security patterns, which include requirement-and design-level patterns as well as a new model testing process. Our approach is implemented in a tool called TESEM (Test Driven Secure Modeling Tool), which supports pattern applications by creating a script to execute model testing automatically. During an early development stage, the developer specifies threats and vulnerabilities in the target system, and then TESEM verifies whether the security patterns are properly applied and assesses whether these vulnerabilities are resolved.
Keywords: formal specification; program testing; program verification; security of data; software tools; TESEM; design-level pattern; development process; development stage; model testing; requirement-level pattern; security design pattern application verification; software product; test driven secure modeling tool; threat specification; vulnerability specification; Generators; Security; Software; Systematics; Testing; Unified modeling language (ID#: 15-7397)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7102633&isnumber=7102573

 

Olabelurin, A.; Veluru, S.; Healing, A.; Rajarajan, M., “Entropy Clustering Approach for Improving Forecasting in DDoS Attacks,” in Networking, Sensing and Control (ICNSC), 2015 IEEE 12th International Conference on, vol., no., pp. 315–320, 9–11 April 2015. doi:10.1109/ICNSC.2015.7116055
Abstract: Volume anomalies such as distributed denial-of-service (DDoS) attacks have been around for ages, but with advances in technology they have become stronger, shorter, and the weapon of choice for attackers. Digital forensic analysis of intrusions using alerts generated by existing intrusion detection systems (IDS) faces major challenges, especially for IDS deployed in large networks. In this paper, the concept of automatically sifting through a huge volume of alerts to distinguish the different stages of a DDoS attack is developed. The proposed novel framework is purpose-built to analyze multiple logs from the network for proactive forecasting and timely detection of DDoS attacks, through a combined approach of the Shannon-entropy concept and a clustering algorithm over relevant feature variables. Experimental studies on a cyber-range simulation dataset from the project’s industrial partners show that the technique is able to distinguish precursor alerts for DDoS attacks, as well as the attack itself, with a very low false positive rate (FPR) of 22.5%. Application of this technique greatly assists security experts in network analysis to combat DDoS attacks.
Keywords: computer network security; digital forensics; entropy; forecasting theory; pattern clustering; DDoS attacks; FPR; IDS; Shannon-entropy concept; clustering algorithm; cyber-range simulation dataset; digital forensic analysis; distributed denial-of-service; entropy clustering approach; false positive rate; forecasting; intrusion detection system; network analysis; proactive forecast; project industrial partner; volume anomaly; Algorithm design and analysis; Clustering algorithms; Computer crime; Entropy; Feature extraction; Ports (Computers); Shannon entropy; alert management; distributed denial-of-service (DDoS) detection; k-means clustering analysis; network security; online anomaly detection (ID#: 15-7398)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116055&isnumber=7115994
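
The entropy-plus-clustering idea lends itself to a small sketch: compute the Shannon entropy of a feature’s distribution per time window (destination ports are an assumed choice of feature here), then cluster the windows so anomalous ones separate out. This is our illustration, not the paper’s framework.

```python
# Sketch: low entropy of destination ports in a window (one port dominating)
# is a DDoS-like signal; k-means over per-window entropies isolates it.
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def shannon_entropy(items):
    counts = np.array(list(Counter(items).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

windows = [
    [80, 443, 22, 80, 8080, 53],       # normal mixed traffic
    [80, 80, 80, 80, 80, 80],          # one port dominates: DDoS-like
    [443, 53, 80, 25, 110, 22],
]
features = np.array([[shannon_entropy(w)] for w in windows])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(features.ravel(), labels)        # the low-entropy window clusters apart
```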

 

Solic, K.; Velki, T.; Galba, T., “Empirical Study on ICT System’s Users’ Risky Behavior and Security Awareness,” in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, vol., no., pp. 1356–1359, 25–29 May 2015. doi:10.1109/MIPRO.2015.7160485
Abstract: In this study the authors gathered information on ICT users from different areas of Croatia, with different knowledge, experience, workplace, age and gender backgrounds, in order to examine the current situation in the Republic of Croatia (n=701) regarding ICT users’ potentially risky behavior and security awareness. To gather the desired data, the validated Users’ Information Security Awareness Questionnaire (UISAQ) was used. The analysis outcome represents results for ICT users in Croatia regarding 6 subareas (mean of items): Usual risky behavior (x1=4.52), Personal computer maintenance (x2=3.18), Borrowing access data (x3=4.74), Criticism on security in communications (x4=3.48), Fear of losing data (x5=2.06), and Rating importance of backup (x6=4.18). In this work a comparison between users regarding demographic variables (age, gender, professional qualification, occupation, managerial job position and institution category) is given. Perhaps the most interesting finding is the percentage of questioned users who revealed their password for their professional e-mail system (28.8%). This information should alert security experts and security managers in enterprises, government institutions and also schools and faculties. The results of this study should be used to develop solutions and induce actions aimed at increasing awareness among Internet users of information security and privacy issues.
Keywords: Internet; data privacy; electronic mail; risk analysis; security of data; ICT system; Internet users; Republic of Croatia; UISAQ; age; enterprises; experience; gender background; government institutions; institution category; job position; knowledge; occupation; personal computer maintenance; privacy issues; professional e-mail system; professional qualification; security awareness; security experts; security managers; user information security awareness questionnaire; user risky behavior; working place; Electronic mail; Government; Information security; Microcomputers; Phase change materials; Qualifications (ID#: 15-7399)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160485&isnumber=7160221

 

Bermudez, I.; Tongaonkar, A.; Iliofotou, M.; Mellia, M.; Munafò, M.M., “Automatic Protocol Field Inference for Deeper Protocol Understanding,” in IFIP Networking Conference (IFIP Networking), 2015, pp. 1–9, 20–22 May 2015. doi:10.1109/IFIPNetworking.2015.7145307
Abstract: Security tools have evolved dramatically in recent years to combat the increasingly complex nature of attacks, but to be effective these tools need to be configured by experts who understand network protocols thoroughly. In this paper we present FieldHunter, which automatically extracts fields and infers their types, providing this much-needed information to security experts for keeping pace with the increasing rate of new network applications and their underlying protocols. FieldHunter collects application messages from multiple sessions and then, by applying statistical correlations, infers the types of the fields. These statistical correlations can be between different messages or other associations with meta-data such as message length, client or server IPs. Our system is designed to extract and infer fields from both binary and textual protocols. We evaluated FieldHunter on real network traffic collected in ISP networks on three different continents. FieldHunter was able to extract security-relevant fields and infer their nature for well-documented network protocols (such as DNS and MSNP), for protocols whose specifications are not publicly available (such as SopCast), and for malware (such as Ramnit).
Keywords: Internet; invasive software; meta data; statistical analysis; telecommunication traffic; transport protocols; DNS; FieldHunter; ISP network; Internet protocol; Internet service provider; MSNP; Microsoft notification protocol; Ramnit; SopCast; automatic protocol field inference; binary protocol; client IP; domain name system; field extraction; malware; message length; metadata; network protocol; network traffic; protocol understanding; security tool; server IP; statistical correlation; textual protocol; Correlation; Entropy; IP networks; Protocols; Radiation detectors; Security; Servers (ID#: 15-7400)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7145307&isnumber=7145285

 

Oprea, A.; Zhou Li; Ting-Fang Yen; Chin, S.H.; Alrwais, S., “Detection of Early-Stage Enterprise Infection by Mining Large-Scale Log Data,” in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, vol., no., pp. 45–56, 22–25 June 2015. doi:10.1109/DSN.2015.14
Abstract: Recent years have seen the rise of sophisticated attacks including advanced persistent threats (APT) which pose severe risks to organizations and governments. Additionally, new malware strains appear at a higher rate than ever before. Since many of these malware evade existing security products, traditional defenses deployed by enterprises today often fail at detecting infections at an early stage. We address the problem of detecting early-stage APT infection by proposing a new framework based on belief propagation inspired from graph theory. We demonstrate that our techniques perform well on two large datasets. We achieve high accuracy on two months of DNS logs released by Los Alamos National Lab (LANL), which include APT infection attacks simulated by LANL domain experts. We also apply our algorithms to 38TB of web proxy logs collected at the border of a large enterprise and identify hundreds of malicious domains overlooked by state-of-the-art security products.
Keywords: Internet; belief networks; business data processing; data mining; graph theory; invasive software; APT infection attacks; DNS logs; LANL; Los Alamos National Lab; Web proxy logs; advanced persistent threats; belief propagation; early-stage APT infection; early-stage enterprise infection detection; large-scale log data mining; malware strains; security products; Belief propagation; Electronic mail; IP networks; Malware; Servers; System-on-chip; Advanced Persistent Threats; Belief Propagation; Data Analysis (ID#: 15-7401)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266837&isnumber=7266818
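
A greatly simplified stand-in for the propagation idea is sketched below: suspicion seeded at a known-bad domain spreads over a bipartite host-domain graph, so domains contacted by likely-infected hosts accumulate suspicion. Real belief propagation uses proper message passing; the update rule, edges, and seeds here are invented.

```python
# Sketch: iterative score spreading on a host-domain contact graph.
import networkx as nx

g = nx.Graph()
g.add_edges_from([("hostA", "evil.example"), ("hostA", "cdn.example"),
                  ("hostB", "evil.example"), ("hostB", "rare.example"),
                  ("hostC", "cdn.example")])
score = {n: 0.0 for n in g}
score["evil.example"] = 1.0            # seed from threat intelligence

for _ in range(5):                     # a few propagation rounds
    nxt = {}
    for n in g:
        neigh = list(g.neighbors(n))
        nxt[n] = max(score[n], 0.5 * sum(score[m] for m in neigh) / len(neigh))
    score = nxt

print(sorted(score.items(), key=lambda kv: -kv[1])[:3])
```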

 

Antunes, N.; Vieira, M., “On the Metrics for Benchmarking Vulnerability Detection Tools,” in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, vol., no., pp. 505–516, 22–25 June 2015. doi:10.1109/DSN.2015.30
Abstract: Research and practice show that the effectiveness of vulnerability detection tools depends on the concrete use scenario. Benchmarking can be used for selecting the most appropriate tool, helping to assess and compare alternative solutions, but its effectiveness largely depends on the adequacy of the metrics. This paper studies the problem of selecting the metrics to be used in a benchmark for software vulnerability detection tools. First, a large set of metrics is gathered and analyzed according to the characteristics of a good metric for the vulnerability detection domain. Afterwards, the metrics are analyzed in the context of specific vulnerability detection scenarios to understand their effectiveness and to select the most adequate one for each scenario. Finally, an MCDA algorithm together with experts’ judgment is applied to validate the conclusions. Results show that although some of the metrics traditionally used, like precision and recall, are adequate in some scenarios, other scenarios require alternative metrics that are seldom used in the benchmarking area.
Keywords: invasive software; software metrics; MCDA algorithm; alternative metrics; benchmarking vulnerability detection tool; software vulnerability detection tool; Benchmark testing; Concrete; Context; Measurement; Security; Standards; Automated Tools; Benchmarking; Security Metrics; Software Vulnerabilities; Vulnerability Detection (ID#: 15-7402)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266877&isnumber=7266818


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Facial Recognition 2015

 

 
SoS Logo

Facial Recognition

2015


Facial recognition tools have long been the stuff of action-adventure films. In the real world, they present opportunities and complex problems being examined by researchers. The research works cited here, presented or published in 2015, address various techniques and issues, such as the use of TDM, PCA and Markov models, application of keystroke dynamics to facial thermography, multiresolution alignment, and sparse representation.



Wan, Qianwen; Panetta, Karen; Agaian, Sos, “Autonomous Facial Recognition Based on the Human Visual System,” in Imaging Systems and Techniques (IST), 2015 IEEE International Conference on, vol., no., pp. 1–6, 16–18 Sept. 2015. doi:10.1109/IST.2015.7294580
Abstract: This paper presents a real-time facial recognition system utilizing our human visual system algorithms coupled with logarithm Logical Binary Pattern feature descriptors and our region weighted model. The architecture can quickly find and rank the closest matches of a test image to a database of stored images. There are many potential applications for this work, including homeland security applications such as identifying persons of interest, and robot vision applications such as search and rescue missions. This new method significantly improves the performance of the previous Local Binary Pattern method. For our prototype application, we supplied the system with testing images and found their best matches in the database of training images. In addition, the results were further improved by weighting the contribution of the most distinctive facial features. The system evaluates and selects the best matching image using the chi-squared statistic.
Keywords: Databases; Face; Face recognition; Feature extraction; Histograms; Training; Visual systems; Facial Recognition; Human Visual System; Pattern; Region Weighting; Similarity (ID#: 15-7359)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7294580&isnumber=7294454
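
The matching step can be sketched compactly. The fragment below uses plain uniform LBP histograms and the chi-squared statistic; the authors’ logarithm-based descriptor and region-weighted model are not reproduced.

```python
# Sketch: LBP histogram per face crop, then chi-squared distance between
# histograms; the smallest distance over a gallery would be the best match.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(img, P=8, R=1):
    lbp = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi_squared(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

a = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in face crops
b = (np.random.rand(64, 64) * 255).astype(np.uint8)
print("chi-squared distance:", chi_squared(lbp_hist(a), lbp_hist(b)))
```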

 

Wodo, W.; Zientek, S., “Biometric Linkage Between Identity Document Card and Its Holder Based on Real-Time Facial Recognition,” in Science and Information Conference (SAI), 2015, pp. 1380–1383, 28–30 July 2015. doi:10.1109/SAI.2015.7237322
Abstract: Access control systems based on RFID cards are very popular in many companies and public institutions. There is an assumption that if one has a card and passes verification, access is granted. Such an approach entails a few threats: the use of cards in the possession of unauthorized people (stolen or just borrowed), and the risk of card cloning. We strongly believe that prevention is a better way to obtain the desired security level. The usability of the system is essential, but it has to be pragmatic; that is why we accept in this case a higher False Acceptance Rate while simultaneously obtaining a lower False Rejection Rate. We aim to discourage, in a significant way, any attempts at stealing or borrowing access cards from third parties or cloning them. Our solution verifies the biometric linkage between the signed facial image of the document holder embedded in a personal identity document and the user’s facial image captured by a camera during use of the card. In order to avoid any attempts at real-time facial substitution, we introduced a depth camera and infrared flash. Our goal is to compare the similarity of the faces of the document holder and the user, but at the same time to prevent misuse of this high-quality digital data. To support that goal we process the captured image in the reader and send it to the card for matching (match-on-card).
Keywords: authorisation; biometrics (access control); cameras; face recognition; radiofrequency identification; RFID cards; access control systems; biometric linkage; camera; false acceptance rate; false rejection rate; high quality digital data; identity document card; infrared flash; personal identity document; real-time facial recognition; real-time facial substitution; security level; signed facial image; user facial image; Ash; Cameras; Couplings; Databases; Face detection; Face recognition; Radiofrequency identification; MRTD; biometrics; credentials; facial recognition; personal data protection; smart card (ID#: 15-7360)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237322&isnumber=7237120

 

Sgroi, Amanda; Garvey, Hannah; Bowyer, Kevin; Flynn, Patrick, “Location Matters: A Study of the Effects of Environment on Facial Recognition for Biometric Security,” in Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, vol. 02, pp. 1–7, 4–8 May 2015. doi:10.1109/FG.2015.7284812
Abstract: The term “in the wild” has become wildly popular in face recognition research. The term refers generally to use of datasets that are somehow less controlled or more realistic. In this work, we consider how face recognition accuracy varies according to the composition of the dataset on which the decision threshold is learned and the dataset on which performance is then measured. We identify different acquisition locations in the FRVT 2006 dataset, examine face recognition accuracy for within-environment image matching and cross-environment image matching, and suggest a way to improve biometric systems that encounter images taken in multiple locations. We find that false non-matches are more likely to occur when the gallery and probe images are acquired in different locations, and that false matches are more likely when the gallery and probe images were acquired in the same location. These results show that measurements of face recognition accuracy are dependent on environment.
Keywords: Accuracy; Face; Face recognition; Indoor environments; Lighting; Probes; Security (ID#: 15-7361)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284812&isnumber=7284806

 

Shaukat, Arslan; Aziz, Mansoor; Akram, Usman, “Facial Expression Recognition Using Multiple Feature Sets,” in IT Convergence and Security (ICITCS), 2015 5th International Conference on, vol., no., pp. 1–5, 24–27 Aug. 2015. doi:10.1109/ICITCS.2015.7292981
Abstract: Over the years, human facial expression recognition has always been a challenging problem in computer vision systems. In this paper, we have worked towards recognizing facial expressions from the images in the JAFFE database. From the literature, a set of features has been identified as potentially useful for recognizing facial expressions. We therefore propose to use a combination of three different types of features: Scale Invariant Feature Transform (SIFT), Gabor wavelets and Discrete Cosine Transform (DCT). Some pre-processing steps are applied before extracting these features. A Support Vector Machine (SVM) with a radial basis kernel function is used for classifying facial expressions. We evaluate our results on the JAFFE database under the same experimental setup followed in the literature. Experimental results show that our proposed methodology gives better results than existing work in the literature so far.
Keywords: Databases; Discrete cosine transforms; Face; Face recognition; Feature extraction; Image recognition; Support vector machines (ID#: 15-7362)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292981&isnumber=7292885
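
The classification stage is standard enough to sketch. Below, an RBF-kernel SVM is trained on stand-in feature vectors; the SIFT/Gabor/DCT extraction itself is not shown, and the dimensions and hyperparameters are assumptions.

```python
# Sketch: scale fused feature vectors, then classify the seven JAFFE
# expression classes with an RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.random.rand(210, 300)           # stand-in for fused SIFT+Gabor+DCT
y = np.random.randint(0, 7, 210)       # 7 expression classes (toy labels)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
model.fit(X, y)
print(model.predict(X[:5]))
```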

 

Pattabhi Ramaiah, N.; Ijjina, E.P.; Mohan, C.K., “Illumination Invariant Face Recognition Using Convolutional Neural Networks,” in Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015 IEEE International Conference on, vol., no., pp. 1–4, 19–21 Feb. 2015. doi:10.1109/SPICES.2015.7091490
Abstract: Face is one of the most widely used biometric in security systems. Despite its wide usage, face recognition is not a fully solved problem due to the challenges associated with varying illumination conditions and pose. In this paper, we address the problem of face recognition under non-uniform illumination using deep convolutional neural networks (CNN). The ability of a CNN to learn local patterns from data is used for facial recognition. The symmetry of facial information is exploited to improve the performance of the system by considering the horizontal reflections of the facial images. Experiments conducted on Yale facial image dataset demonstrate the efficacy of the proposed approach.
Keywords: biometrics (access control); face recognition; neural nets; security; CNN; Yale facial image dataset; biometric; deep convolutional neural networks; horizontal reflections; illumination invariant face recognition; nonuniform illumination; security systems; Face; Face recognition; Lighting; Neural networks; Pattern analysis; Training; biometrics; convolutional neural networks; facial recognition; non-uniform illumination (ID#: 15-7363)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7091490&isnumber=7091354
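
The symmetry trick, augmenting the training set with a horizontal reflection of every face, is easy to illustrate. The small network below is an assumed architecture with synthetic data, not the paper’s CNN or the Yale dataset.

```python
# Sketch: mirror each face along the width axis, double the training set,
# and train a small CNN for subject identification.
import numpy as np
import tensorflow as tf

X = np.random.rand(100, 32, 32, 1).astype("float32")   # toy face crops
y = np.random.randint(0, 15, 100)                      # 15 subject labels

X_aug = np.concatenate([X, X[:, :, ::-1, :]])          # horizontal reflections
y_aug = np.concatenate([y, y])

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(15, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X_aug, y_aug, epochs=2, verbose=0)
```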

 

Xiaoxia Li, “Application of Biometric Identification Technology for Network Security in the Network and Information Era, Which Will Greatly Change the Life-Style of People,” in Networking, Sensing and Control (ICNSC), 2015 IEEE 12th International Conference on, vol., no., pp. 566–569, 9–11 April 2015. doi:10.1109/ICNSC.2015.7116099
Abstract: The global revolution in information and information technology is playing a decisive role in social change. The Internet has become the most effective way to transmit information, which makes network security critical. How to guarantee network security has become a serious and worrying problem. Biometric identification technology has several advantages, including universality, uniqueness, stability and resistance to theft. Compared with other biometric identification methods, such as fingerprint recognition, palm recognition, facial recognition, signature recognition, iris recognition and retina recognition, gene recognition has the advantages of exclusiveness, immutability, convenience and a large amount of information, and is thought to be the most important method of biometric identification. With the development of modern technology, the fusion of biological technology and information technology has become an inevitable trend. Biometric identification technology will necessarily replace traditional identification technology and greatly change people’s life-style in the near future.
Keywords: Internet; biometrics (access control); security of data; biological technology; biometric identification technology; facial recognition; fingerprint recognition; gene recognition; information technology; information transmission; iris recognition; network security; palm recognition; retina recognition; signature recognition; social change; Computer viruses; DNA; Encyclopedias; Internet; Servers; traditional identification technology (ID#: 15-7364)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116099&isnumber=7115994

 

Reney, Dolly; Tripathi, Neeta, “An Efficient Method to Face and Emotion Detection,” in Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, vol., no., pp. 493–497, 4–6 April 2015. doi:10.1109/CSNT.2015.155
Abstract: Face detection and emotion detection are current topics in the security field that provide solutions to various challenges. Traditional challenges include facial images captured under uncontrolled settings, such as varying poses, different lighting and expressions for face recognition, and different sound frequencies for emotion recognition. For any face and emotion detection system, the database is the most important part for the comparison of face features and sound Mel-frequency components. For database creation, features of the face are calculated and stored in the database. This database is then used for the evaluation of face and emotion by different algorithms. In this paper we implement an efficient method to create a face and emotion feature database, which is then used for face and emotion recognition of a person. For detecting faces in the input image we use the Viola-Jones face detection algorithm, and a KNN classifier is used to evaluate face and emotion detection.
Keywords: Classification algorithms; Databases; Detectors; Face; Face detection; Face recognition; Feature extraction; Face Detection; Facial Expression Recognition; Feature Extraction; KNN Classifier; Mel Frequency Component; Viola-Jones algorithm (ID#: 15-7365)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7279967&isnumber=7279856
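
The detection front end is widely available: OpenCV ships a Haar-cascade implementation of Viola-Jones. The sketch below finds the face region that would then feed feature extraction and the KNN stage; the file name and parameters are illustrative.

```python
# Sketch: Viola-Jones face detection via OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("subject.jpg")        # hypothetical input image
if img is None:
    raise SystemExit("subject.jpg not found")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face_crop = gray[y:y + h, x:x + w]  # this crop would feed the KNN stage
    print("face at", (x, y, w, h))
```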

 

Khare, S., “Finger Gesture and Pattern Recognition Based Device Security System,” in Signal Processing and Communication (ICSC), 2015 International Conference on, vol., no., pp. 443–447, 16–18 March 2015. doi:10.1109/ICSPCom.2015.7150694
Abstract: This research aims to introduce a hand gesture recognition based system that recognizes real-time gestures in natural environments and compares patterns with an image database, matching image pairs to trigger the unlocking of mobile devices. Security for mobile devices has been a major concern in the past, and methods like draw-pattern unlock, passcodes, and facial and voice recognition technologies have already been employed to a fair extent, but these are quite susceptible to hacks and a greater ratio of recognition failure errors (especially voice and facial). A next step in HMI would be a fingertip-tracking based unlocking mechanism, which would employ minimalistic hardware like a webcam or smartphone front camera. Image acquisition through MATLAB is followed by conversion to grayscale and the application of an optimal filter for edge detection, utilized in different conditions for optimal results in recognizing fingertips to a precise level of accuracy. The pattern is traced at 60 fps for tracking and tracing, and then cross-referenced with the training image by deploying neural networks for improved recognition efficiency. Data is registered in real time and the device is unlocked at the instant the SSIM takes a value above a predefined threshold percentage or number. The aforementioned mechanism is employed in applications via a user-friendly GUI frontend, with computational modelling through MATLAB for the backend.
Keywords: gesture recognition; image motion analysis; mobile handsets; neural nets; security; GUI frontend; MATLAB; SSIM; computational modelling; device security system; draw pattern unlock; edge detection; facial recognition technologies; failure error recognition; finger gesture; fingertip tracking; hand image acquisition; image database; image pair matching; mobile devices security systems; mobile devices unlocking; neural networks deployment; optimal filter; passcodes; pattern recognition; smartphone front camera; unlocking mechanism; voice recognition technologies; webcam; Biological neural networks; Pattern matching; Security; Training; Computer vision; HMI (Human Machine Interface); ORB;SIFT (Scale Invariant Feature Transform); SSIM (Structural Similarity Index Measure); SURF (Speed Up Robust Features) (ID#: 15-7366)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7150694&isnumber=7150604
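
The unlock decision itself reduces to a threshold on SSIM. The paper works in MATLAB; the following is an equivalent Python sketch with synthetic pattern images and an assumed threshold value (the paper’s threshold is not stated here).

```python
# Sketch: compare a traced gesture pattern with the enrolled training
# pattern via SSIM and unlock when the similarity clears a threshold.
import numpy as np
from skimage.metrics import structural_similarity as ssim

enrolled = np.random.rand(120, 120)                      # enrolled pattern
attempt = enrolled + 0.05 * np.random.randn(120, 120)    # noisy re-trace

score = ssim(enrolled, attempt, data_range=attempt.max() - attempt.min())
THRESHOLD = 0.80                       # assumed value, for illustration only
print("unlock" if score > THRESHOLD else "reject", f"(SSIM={score:.2f})")
```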

 

Taleb, I.; Mammar, M.O., “Parking Access System Based on Face Recognition,” in Programming and Systems (ISPS), 2015 12th International Symposium on, vol., no., pp. 1–5, 28–30 April 2015. doi:10.1109/ISPS.2015.7244982
Abstract: The human face plays an important role in our social interaction, conveying people’s identity. Using the human face as a key to security, biometric face recognition technology has received significant attention. Face recognition technology is very popular and widely used because it does not require any kind of physical contact between the users and the device. A camera scans the user’s face and matches it to a database for verification. Furthermore, it is easy to install and does not require any expensive hardware. Facial recognition technology is used widely in a variety of security systems such as physical access control or computer user accounts. In this paper, we present a vehicle access control system for parking facilities based on a camera installed at the parking entrance. First, we use a non-adaptive method to detect the moving object, and we propose an algorithm to detect and recognize the face of the driver who wants to enter the parking facility and verify whether entry is allowed. We use the Viola-Jones method for face detection, and we propose a new technique based on the PCA and LDA algorithms for face recognition.
Keywords: access control; face recognition; image motion analysis; object detection; principal component analysis; traffic engineering computing; LDA algorithm; PCA; Viola-Jones method; access control vehicle system; biometric face recognition technology; computer user accounts; face detection; human face; moving object detection; nonadaptive method; parking access system; physical access control; security; social interaction; Access control; Databases; Face; Face detection; Face recognition; Principal component analysis; Vehicles; Linear Discriminant Analysis (LDA); Moving object; Principal Component Analysis (PCA) (ID#: 15-7367)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7244982&isnumber=7244951
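
The recognition back end can be sketched as a PCA projection followed by LDA and a nearest-class decision, one common reading of the PCA-plus-LDA combination the abstract names. The data and dimensions below are synthetic.

```python
# Sketch: PCA for dimensionality reduction, LDA for class separation,
# then predict which authorized driver a probe face belongs to.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

X = np.random.rand(60, 1024)           # 60 flattened 32x32 face images (toy)
y = np.repeat(np.arange(6), 10)        # 6 authorized drivers, 10 images each

model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
model.fit(X, y)

probe = np.random.rand(1, 1024)        # face captured at the parking entrance
print("predicted driver id:", model.predict(probe)[0])
```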

 

Mohan, M.; R. Prem Kumar; Agrawal, R.; Sharma, S.; Dutta, M.K.; Travieso, C.M.; Alonso-Hernandez, J.B., “Finger Vein Recognition Using Integrated Responses of Texture Features,” in Bioinspired Intelligence (IWOBI), 2015 4th International Work Conference on, vol., no., pp. 209–214, 10–12 June 2015. doi:10.1109/IWOBI.2015.7160168
Abstract: The finger vein recognition system is a secure and reliable system with the advantage of robustness against malicious attacks. This biometric feature is more convenient to operate than other biometric features such as facial and iris recognition systems. The paper proposes a unique technique to find the local and global features using Integrated Responses of Texture (IRT) features from finger veins, which improves the overall accuracy of the system and is invariant to rotations. The segmentation of the region of interest at different resolution levels makes the system highly efficient. The lower-resolution data gives the overall global features and the higher-resolution data gives the distinct local features. The complete feature set is descriptive in nature and reduces the Equal Error Rate to 0.523%. The Multi-Support Vector Machine (Multi-SVM) is used to classify and match the obtained results. The experimental results indicate that the system is highly accurate, with an accuracy of 94%.
Keywords: biometrics (access control); feature extraction; fingerprint identification; image texture; security of data; support vector machines; vein recognition; IRT features; Multi-SVM; biometric feature; equal error rate; facial recognition system; finger vein recognition system; global features; integrated responses; integrated responses of texture; iris recognition system; malicious attacks; multi-support vector machine; reliable system; secure system; texture features; Accuracy; Databases; Feature extraction; Histograms; Thumb; Veins; Integrated Responses; Local Binary Pattern; Multi-Support Vector Machine (Multi- SVM); Pyramid Levels (ID#: 15-7368)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160168&isnumber=7160134

 

Matzner, S.; Heredia-Langner, A.; Amidan, B.; Boettcher, E.J.; Lochtefeld, D.; Webb, T., “Standoff Human Identification Using Body Shape,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no., pp. 1–6, 14–16 April 2015. doi:10.1109/THS.2015.7225300
Abstract: The ability to identify individuals is a key component of maintaining safety and security in public spaces and around critical infrastructure. Monitoring an open space is challenging because individuals must be identified and re-identified from a standoff distance non-intrusively, making methods like fingerprinting and even facial recognition impractical. We propose using body shape features as a means for identification from standoff sensing, either complementing other identifiers or as an alternative. An important challenge in monitoring open spaces is reconstructing identifying features when only a partial observation is available, because of view-angle limitations and occlusion or subject pose changes. To address this challenge, we investigated the minimum number of features required for a high probability of correct identification, and we developed models for predicting a key body feature, height, from a limited set of observed features. We found that any set of nine randomly selected body measurements was sufficient to correctly identify an individual in a dataset of 4041 subjects. For predicting height, anthropometric measures were investigated for correlation with height. Their correlation coefficients and associated linear models are reported. These results, a sufficient number of features for identification and height prediction from a single feature, contribute to developing systems for standoff identification when views of a subject are limited.
Keywords: biomedical measurement; body sensor networks; correlation methods; height measurement; probability; shape measurement; anthropometric measurement; associated linear model; body shape; correlation coefficient; facial recognition; feature reconstruction; fingerprinting; open space monitoring; probability; safety; security; sensor; standoff human identification; view-angle limitation; Correlation; Elbow; Feature extraction; Length measurement; Neck; Shape; Shoulder; anthropometrics; biometrics; feature selection (ID#: 15-7369)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225300&isnumber=7190491
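
The height-prediction step amounts to fitting a linear model from an observed body measurement to height. The measurements and values below are illustrative, not the paper’s reported coefficients.

```python
# Sketch: predict height from a single anthropometric feature with a
# one-variable linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression

shoulder_width_cm = np.array([[38.0], [41.5], [44.0], [46.5], [49.0]])
height_cm = np.array([158.0, 165.0, 172.0, 179.0, 186.0])

model = LinearRegression().fit(shoulder_width_cm, height_cm)
print("r^2 =", model.score(shoulder_width_cm, height_cm))
print("predicted height:", model.predict([[45.0]])[0], "cm")
```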

 

Dong Yi; Zhen Lei; Li, Stan Z., “Shared Representation Learning for Heterogenous Face Recognition,” in Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, vol.1, no., pp. 1–7, 4–8 May 2015. doi:10.1109/FG.2015.7163093
Abstract: After intensive research, heterogenous face recognition is still a challenging problem. The main difficulties are owing to the complex relationship between heterogenous face image spaces. The heterogeneity is always tightly coupled with other variations, which makes the relationship of heterogenous face images highly nonlinear. Many excellent methods have been proposed to model the nonlinear relationship, but they are apt to overfit to the training set due to limited samples. Inspired by the unsupervised algorithms in deep learning, this paper proposes a novel framework for heterogeneous face recognition. We first extract Gabor features at some localized facial points, and then use Restricted Boltzmann Machines (RBMs) to learn a shared representation locally to remove the heterogeneity around each facial point. Finally, the shared representations of local RBMs are connected together and processed by PCA. The near infrared (NIR) to visible (VIS) face recognition problem and two databases are selected to evaluate the performance of the proposed method. On the CASIA HFB database, we obtain results comparable to state-of-the-art methods. On a more difficult database, CASIA NIR-VIS 2.0, we outperform other methods significantly.
Keywords: Boltzmann machines; face recognition; infrared imaging; learning (artificial intelligence); principal component analysis; CASIA HFB database; CASIA NIR-VIS 2.0; PCA; RBM; deep learning; heterogenous face image spaces; heterogenous face recognition; near infrared; nonlinear relationship; restricted Boltzmann machines; shared representation learning; training set; unsupervised algorithms; Databases; Face; Face recognition; Feature extraction; Principal component analysis; Standards; Training (ID#: 15-7370)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163093&isnumber=7163073

 

Varghese, Ashwini Ann; Cherian, Jacob P; Kizhakkethottam, Jubilant J, “Overview on Emotion Recognition System,” in Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, vol., no., pp. 1–5, 25–27 Feb. 2015. doi:10.1109/ICSNS.2015.7292443
Abstract: Human emotion recognition plays an important role in interpersonal relationships. The automatic recognition of emotions has long been an active research topic, and several advances have been made in this field. Emotions are reflected in speech, hand and body gestures, and facial expressions. Hence, extracting and understanding emotion is of high importance for human-machine interaction. This paper describes the advances made in this field and the various approaches used for the recognition of emotions. The main objective of the paper is to propose a real-time implementation of an emotion recognition system.
Keywords: Active appearance model; Emotion recognition; Face; Face recognition; Feature extraction; Speech; Speech recognition; Active Appearance Model; Decision level function; Facial Action Encoding; Feature level fusion; Hidden Markov Model; State Sequence ML classifier; affective states (ID#: 15-7371)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292443&isnumber=7292366

 

Brady, K., “Robust Face Recognition-Based Search and Retrieval Across Image Stills and Video,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no., pp. 1–8, 14–16 April 2015. doi:10.1109/THS.2015.7225320
Abstract: Significant progress has been made in addressing face recognition channel, sensor, and session effects in both still images and video. These effects include the classic PIE (pose, illumination, expression) variation, as well as variations in other characteristics such as age and facial hair. While much progress has been made, there has been little formal work in characterizing and compensating for the intrinsic differences between faces in still images and video frames. These differences include that faces in still images tend to have neutral expressions and frontal poses, while faces in videos tend to have more natural expressions and poses. Typically faces in videos are also blurrier, have lower resolution, and are framed differently than faces in still images. Addressing these issues is important when comparing face images between still images and video frames. Also, face recognition systems for video applications often rely on legacy face corpora of still images and associated meta data (e.g. identifying information, landmarks) for development, which are not formally compensated for when applied to the video domain. In this paper we will evaluate the impact of channel effects on face recognition across still images and video frames for the search and retrieval task. We will also introduce a novel face recognition approach for addressing the performance gap across these two respective channels. The datasets and evaluation protocols from the Labeled Faces in the Wild (LFW) still image and YouTube Faces (YTF) video corpora will be used for the comparative characterization and evaluation. Since the identities of subjects in the YTF corpora are a subset of those in the LFW corpora, this enables an apples-to-apples comparison of in-corpus and cross-corpora face comparisons.
Keywords: face recognition; pose estimation; social networking (online); video retrieval; LFW; YTF; YouTube faces; classic PIE variation; frontal poses; image retrieval; image search; image stills; labeled faces in the wild; legacy face corpora; neutral expressions; robust face recognition; video frames; Face recognition; Gabor filters; Lighting; Gabor features; computer vision; formatting; pattern recognition (ID#: 15-7372)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225320&isnumber=7190491

 

Chia-Chin Tsao; Yan-Ying Chen; Yu-Lin Hou; Hsu, W.H., “Identify Visual Human Signature in Community via Wearable Camera,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, vol., no.,
pp. 2229–2233, 19–24 April 2015. doi:10.1109/ICASSP.2015.7178367
Abstract: With the increasing popularity of wearable devices, information becomes much more easily available. However, personal information sharing still poses great challenges because of privacy issues. We propose the idea of a Visual Human Signature (VHS), which can represent each person uniquely even when captured in different views/poses by wearable cameras. We evaluate the performance of multiple effective modalities for recognizing an identity, including facial appearance, visual patches, facial attributes and clothing attributes. We propose to emphasize significant dimensions and apply weighted voting fusion to incorporate the modalities and improve VHS recognition. By jointly considering multiple modalities, the VHS recognition rate can reach 51% in frontal images and 48% in a more challenging environment, and our approach surpasses the average-fusion baseline by 25% and 16%, respectively. We also introduce the Multiview Celebrity Identity Dataset (MCID), a new dataset containing hundreds of identities with different views and clothing for comprehensive evaluation.
Keywords: cameras; image recognition; security of data; sensor fusion; wearable computers; MCID; VHS recognition; clothing attributes; facial appearance; facial attributes; information sharing; multiview celebrity identity dataset; visual human signature; visual patches; wearable camera; wearable devices; weighted voting fusion; Clothing; Communities; Databases; Face; Feature extraction; Robustness; Visualization; Human Attributes; Visual Human Signature; Wearable Device; Weighted Voting
(ID#: 15-7373)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178367&isnumber=7177909
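The weighted voting fusion step described in this abstract can be sketched in a few lines of Python. The modality names, weights, and scores below are invented for illustration and are not values from the paper:

import numpy as np

def weighted_voting_fusion(modal_scores, weights):
    """Fuse per-modality identity scores by weighted voting.
    modal_scores: dict mapping modality name -> scores over candidate identities."""
    fused = None
    for modality, scores in modal_scores.items():
        s = np.asarray(scores, dtype=float)
        s = s / (s.sum() + 1e-12)               # normalize each modality's votes
        fused = weights[modality] * s if fused is None else fused + weights[modality] * s
    return int(np.argmax(fused))                # index of the predicted identity

# Hypothetical scores for three candidate identities from two modalities:
scores = {"face": [0.7, 0.2, 0.1], "clothing": [0.3, 0.5, 0.2]}
print(weighted_voting_fusion(scores, {"face": 0.7, "clothing": 0.3}))   # -> 0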

 

Anjusree V.K.; Darsan, Gopu, “Interactive Email System Based on Blink Detection,” in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, vol., no., pp. 1274–1277, 10–13 Aug. 2015. doi:10.1109/ICACCI.2015.7275788
Abstract: This work develops an interactive email application. An email account is considered personal private property nowadays, yet it is not easy for people with disabilities to use ordinary devices to check their email, and users need more interaction with their email. The interactive technology proposed here is based on eye blink detection, so persons with disabilities can also use the system efficiently. The system is divided into two modules. First, to use such a system in a secure manner, a secure login module is vital. Face recognition is used for login because it can serve as a security module with a low failure rate and high reliability. For face recognition, the fisherface algorithm is used; it performs quickly and handles variations in lighting and facial expression well. Second is a tracking phase, which supports interaction with the email system after the email is loaded and is based on eye blink detection. In this phase a threshold-based approach detects whether a blink occurred and interprets blinks as control commands for interacting with the system. This application helps people check their email faster and more interactively without touching any device.
Keywords: Algorithm design and analysis; Computers; Correlation; Electronic mail; Face; Face recognition; Feature extraction; Blink Detection; Face Detection; Fisherface (ID#: 15-7374)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275788&isnumber=7275573
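The fisherface algorithm mentioned above is commonly realized as PCA for dimensionality reduction followed by linear discriminant analysis (LDA). A minimal sketch with scikit-learn, using synthetic vectors in place of real face images:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for flattened face images: 100 samples, 5 subjects.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1024))
y = rng.integers(0, 5, size=100)

# PCA avoids the within-class scatter singularity; LDA then maximizes class separation.
fisherface = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
fisherface.fit(X, y)
print(fisherface.predict(X[:3]))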

 

Babutain, Khalid; Alaklobi, Saied; Alghamdi, Anwar; Sasi, Sreela, “Automated Surveillance of Computer Monitors in Labs,” in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, vol., no., pp. 1026–1030, 10–13 Aug. 2015. doi:10.1109/ICACCI.2015.7275745
Abstract: Object detection and recognition remain challenging, and they find application in surveillance systems. University computer labs may or may not have surveillance video cameras, and even when a camera is present it may lack an intelligent software system for automatic monitoring. A system for Automated Surveillance of Computer Monitors in Labs was designed and developed for automatic detection of any missing monitors. The system can also detect the person responsible for removing a monitor. If this is an authorized person, the system displays the name and facial image from the database of employees. If the person is unauthorized, the system generates an alarm, displays that person’s face, and instantly sends an automated email with the facial image to the university’s security department. The simulation results confirm that this automated system could be used for monitoring any computer lab.
Keywords: Computers; Face; Face recognition; Object detection; Security; Surveillance; Face Detection and Recognition; Missing Monitor Detection; Object Detection and Recognition; Scanning Technique; Surveillance Systems (ID#: 15-7375)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275745&isnumber=7275573

 

Alkandari, A.; Aljaber, S.J., “Principle Component Analysis Algorithm (PCA) for Image Recognition,” in Computing Technology and Information Management (ICCTIM), 2015 Second International Conference on, vol., no., pp. 76–80, 21–23 April 2015. doi:10.1109/ICCTIM.2015.7224596
Abstract: This paper aims mainly to highlight the importance of algorithmic computing for identifying a facial image without human intervention. Life in the current era requires increased security and speed in the search for information, and among the most important capabilities is recognizing and identifying a person by his or her face. The Principle Component Analysis (PCA) algorithm is a useful statistical technique for finding patterns in high-dimensional data; it has found application in face recognition and image compression, where it is used to reduce the dimension of feature vectors so that images can be recognized more effectively.
Keywords: data compression; face recognition; principal component analysis; PCA; dimension vector reduction; facial image identification; image compression; image recognition; person identification; person recognition; principle component analysis algorithm; security level; statistical technique; Algorithm design and analysis; Face; Face recognition; Feature extraction; Image recognition; Principal component analysis; Training; Image analysis; Principle Component Analysis algorithm (PCA) (ID#: 15-7376)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7224596&isnumber=7224583
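As a rough illustration of how PCA reduces high-dimensional image vectors for recognition, here is a sketch in Python with NumPy; the gallery size, image dimensions, and nearest-neighbour matching step are illustrative assumptions, not details from the paper:

import numpy as np

def pca_fit(X, k):
    """X: rows are flattened face images. Returns the mean and top-k components."""
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal components directly.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_project(x, mean, components):
    return components @ (x - mean)

# Enrollment: project a gallery of known faces; recognition: nearest neighbour.
gallery = np.random.rand(20, 4096)              # 20 flattened 64x64 images (synthetic)
mean, comps = pca_fit(gallery, k=10)
codes = np.array([pca_project(g, mean, comps) for g in gallery])
probe = gallery[7] + 0.01 * np.random.rand(4096)
dists = np.linalg.norm(codes - pca_project(probe, mean, comps), axis=1)
print("best match:", int(np.argmin(dists)))     # expected: 7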

 

Shalin Eliabeth S; Thomas, Bino; Kizhakkethottam, Jubilant J; “Analysis of Effective Biometric Identification on Monozygotic Twins,” in Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, vol., no., pp.1–6, 25–27 Feb. 2015. doi:10.1109/ICSNS.2015.7292444
Abstract: One of the major challenges that biometric detection systems face is how to distinguish between monozygotic, or identical, twins, and the number of multiple births has been increasing over the past two decades. Identical twins cannot be distinguished based on DNA, so the lack of a proper identification system can complicate many criminal cases. This paper focuses on different biometric identification technologies based on the features of face, fingerprint, palm print, iris, retina and voice for the verification of identical twins. The analysis shows that face detection based on facial marks is the most efficient approach for identifying identical twins. An automatic facial mark detector based on the fast radial symmetry transform supports the proper identification of different facial marks, because manual annotation of facial marks does not provide proper results. The other features (fingerprint, palm print, iris, retina, etc.) are not unique for identical twins.
Keywords: Face; Face recognition; Fingerprint recognition; Iris recognition; Retina; Transforms; Biometric Identification; Face detection; Facial mark detection; Monozygotic twins; Multibiometric system (ID#: 15-7377)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292444&isnumber=7292366

 

Zhencheng Hu; Uchida, Naoko; Yanming Wang; Yanchao Dong, “Face Orientation Estimation for Driver Monitoring with a Single Depth Camera,” in Intelligent Vehicles Symposium (IV), 2015 IEEE, pp. 958–963, 28 June–01 July 2015. doi:10.1109/IVS.2015.7225808
Abstract: Careless driving is a major factor in most traffic accidents. In the last decade, research on estimating face orientation and tracking facial features through consecutive images has shown very promising results for determining a driver’s level of concentration. Image sources can be a monochrome camera, a stereo camera, or a depth camera. In this paper, we propose a novel approach to facial feature and face orientation detection using a single uncalibrated depth camera, achieved by switching the IR depth pattern emitter. With this simple setup, we obtain both a depth image and an infrared image in a continuously alternating grab mode. The infrared images are employed for facial feature detection and tracking, while the depth information is used for face region detection and face orientation estimation. The system is useful not only for driver monitoring but also for other human-interface applications such as security and avatar systems.
Keywords: accidents; estimation theory; face recognition; feature extraction; monitoring; traffic engineering computing; careless driving; consequence images; driver monitoring system; face orientation estimation; facial features detection; monochrome camera; single depth camera; traffic accidents; Cameras; Estimation; Face; Facial features; Feature extraction; Lighting; Mouth
(ID#: 15-7378)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225808&isnumber=7225648

 

Xun Gong; Zehua Fu; Xinxin Li; Lin Feng, “A Two-Stage Estimation Method for Depth Estimation of Facial Landmarks,” in Identity, Security and Behavior Analysis (ISBA), 2015 IEEE International Conference on, vol., no., pp. 1–6, 23–25 March 2015. doi:10.1109/ISBA.2015.7126355
Abstract: To address the problem of 3D face modeling from a set of landmarks on images, the traditional feature-based morphable model, using face class-specific information, makes direct use of these 2D points to infer a dense 3D face surface. However, the unknown depth of the landmarks degrades accuracy considerably. A promising solution is to predict the depth of the landmarks first. Based on this idea, a two-stage estimation method is proposed to compute the depth of landmarks from two images. The estimated 3D landmarks are then applied in a deformation algorithm to produce a precise, dense 3D facial shape. Test results on synthesized images with known ground truth show that the proposed two-stage estimation method can obtain landmark depth both effectively and efficiently, and further that reconstruction accuracy is greatly enhanced by the estimated 3D landmarks. Reconstruction results on real-world photos are rather realistic.
Keywords: face recognition; image reconstruction; 3D face modeling; 3D landmarks; deformation algorithm; dense 3D face surface; depth estimation; face class-specific information; facial landmarks; feature-based morphable model;  precise 3D dense facial shape; synthesized images; two-stage estimation method; Computational modeling; Estimation; Face; Image reconstruction; Shape; Solid modeling; Three-dimensional displays (ID#: 15-7379)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7126355&isnumber=7126341

 

Srividhya, K.; Manikanthan, S.V., “An Android Based Secure Access Control Using ARM and Cloud Computing,” Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, vol., no., pp. 1486–1489, 26–27 Feb. 2015. doi:10.1109/ECS.2015.7124833
Abstract: Biometrics in the cloud infrastructure improves the security of the system. Physical characteristics used in biometrics include the fingerprint, facial structure, iris pattern, voice, etc. Any of these characteristics can be used to identify persons and authenticate them. This paper describes enrollment and identification for a system that permits access only to persons approved by higher officials. The physical traits are scanned using an Android mobile phone, and the enroll and recognize operations are carried out with the help of cloud computing. An LPC2148 ARM processor controls the overall system. The primary goal is to make the system as secure and reliable as possible; in this system, there is no need for a password.
Keywords: authorisation; biometrics (access control); cloud computing; mobile computing; smart phones; ARM processor; Android mobile phone; access control; cloud infrastructure biometrics; system security; Authentication; Cloud computing; Databases; Fingerprint recognition; Smart phones; authentication; enrollment and identification (ID#: 15-7380)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124833&isnumber=7124722

 

Bin Yang; Junjie Yan; Zhen Lei; Li, Stan Z., “Fine-Grained Evaluation on Face Detection in the Wild,” in Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, vol.1, no., pp.1–7, 4–8 May 2015. doi:10.1109/FG.2015.7163158
Abstract: Current evaluation datasets for face detection, which is of great value in real-world applications, are still somewhat out-of-date. We propose a new face detection dataset, MALF (short for Multi-Attribute Labelled Faces), which contains 5,250 images collected from the Internet and ~12,000 labelled faces. The MALF dataset has two main features: 1) It is the largest dataset for evaluation of face detection in the wild, and the annotation of multiple facial attributes makes fine-grained performance analysis possible. 2) To reveal the ‘true’ performance of algorithms in practice, MALF adopts an evaluation metric that puts stress on the recall rate at a relatively low false alarm rate. Besides providing a large dataset for face detection evaluation, this paper also collects more than 20 state-of-the-art algorithms, both from academia and industry, and conducts a fine-grained comparative evaluation of these algorithms, which can be considered a summary of past advances made in face detection. The dataset and up-to-date results of the evaluation can be found at http://www.cbsr.ia.ac.cn/faceevaluation/.
Keywords: Internet; face recognition; object detection; MALF; face detection dataset; fine-grained comparative evaluation; multiattribute labelled faces; multiple facial attribute annotation; recall rate; relatively low false alarm rate; Algorithm design and analysis; Benchmark testing; Detectors; Face; Face detection; Measurement; Object detection (ID#: 15-7381)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163158&isnumber=7163073
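The evaluation metric the abstract emphasizes, recall at a relatively low false alarm rate, can be illustrated roughly as follows; the pooled-detections interface and the false-positive budget are assumptions for the sketch, not the MALF protocol itself:

import numpy as np

def recall_at_fppi(scores, is_true_face, n_images, fppi=0.1):
    """Recall when the score threshold admits at most fppi false positives per image.
    scores and is_true_face describe all detections pooled over the test set."""
    order = np.argsort(scores)[::-1]            # highest-confidence detections first
    fp_budget = int(fppi * n_images)
    tp = fp = 0
    for i in order:
        if is_true_face[i]:
            tp += 1
        else:
            fp += 1
            if fp > fp_budget:                  # false alarm budget exhausted
                break
    return tp / max(1, int(np.sum(is_true_face)))

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
labels = np.array([1, 1, 0, 1, 0])              # 1 = correct detection, 0 = false alarm
print(recall_at_fppi(scores, labels, n_images=10, fppi=0.1))   # budget of 1 false positive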


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Forward Error Correction 2015

 

 
SoS Logo

Forward Error Correction

2015


Controlling errors in data transmission in noisy or lossy circuits is a problem often solved by channel coding or forward error correction. Security resilience can be impacted by loss or noise. The articles cited here are related to this Science of Security concern. This research was presented in 2015 and recovered on October 19, 2015.



Demirdogen, Ibrahim; Lei Li; Chunxiao Chigan, “FEC Driven Network Coding Based Pollution Attack Defense in Cognitive Radio Networks,” in Wireless Communications and Networking Conference Workshops (WCNCW), 2015 IEEE, vol., no.,
pp. 259–268, 9–12 March 2015. doi:10.1109/WCNCW.2015.7122564
Abstract: A relay-featured cognitive radio network scenario is considered in the absence of a direct link between the secondary user (SU) and the secondary base station (S-BS). In this realistic deployment scenario, the relay node can be subjected to pollution attacks. A forward error correction (FEC) driven network coding (NC) method is employed as a defense mechanism in this paper. Using the proposed method, pollution attacks are efficiently defended against. Bit error rate (BER) measurements are used to quantify network reliability. Furthermore, in the absence of any attack, the proposed method can contribute to network performance by improving BER. Simulation results underline that our mechanism is superior to existing FEC driven NC methods such as low density parity check (LDPC).
Keywords: cognitive radio; error statistics; forward error correction; network coding; parity check codes; relay networks (telecommunication); telecommunication network reliability; telecommunication security; BER; FEC driven network coding based pollution attack defense; LDPC; bit error rate measurements; low density parity check; network performance; network reliability quantification; relay featured cognitive radio network scenario; secondary base station; secondary user; Bit error rate; Conferences; Forward error correction; Network coding; Pollution; Relays; Reliability (ID#: 15-7476)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7122564&isnumber=7122513

 

de la Fuente, A.; Lentisco, C.M.; Bellido, L.; R. Perez Leal; Pastor, E.; A. Garcia Armada., “Analysis of the Impact of FEC Techniques on a Multicast Video Streaming Service over LTE,” in Networks and Communications (EuCNC), 2015 European Conference on, vol., no., pp. 219–223, June 29 2015–July 2 2015. doi:10.1109/EuCNC.2015.7194072
Abstract: In a multicast video streaming service the same multimedia content is sent to a mass audience using only one multicast stream. In multicast video streaming over a cellular network, due to the nature of the multicast communication, from a source to multiple recipients, and due to the characteristics of the radio channel, different for each receiver, transmission errors are addressed at the application level by using Forward Error Correction (FEC) techniques. However, in order to protect the communication over the radio channel, FEC techniques are also applied at the physical layer. Another important technique to improve the communication of the radio channel is the use of a single-frequency network. This paper analyzes the performance of a video streaming service over a cellular network taking into account the combined impact of different factors that affect the transmission, both the physical deployment of the service and the two levels of FEC.
Keywords: Long Term Evolution; cellular radio; forward error correction; multicast communication; telecommunication security; video streaming; wireless channels; FEC techniques; LTE; application level; cellular network; communication protection; forward error correction techniques; multicast video streaming service; multimedia content; radio channel characteristics; single-frequency network; source-to-multiple recipients; transmission errors; Decoding; Encoding; Forward error correction; Modulation; Robustness; Streaming media; Unicast (ID#: 15-7477)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194072&isnumber=7194024

 

Saravanan, R.; Saminadan, V.; Thirunavukkarasu, V., “VLSI Implementation of BER Measurement for Wireless Communication System,” in Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, vol., no., pp. 1–5, 19–20 March 2015. doi:10.1109/ICIIECS.2015.7193074
Abstract: This paper presents the Bit Error Rate (BER) performance of a wireless communication system. The complexity of modern wireless communication systems is increasing at a fast pace, making it challenging to design the hardware of such systems. The proposed system consists of a MIMO transmitter and MIMO receiver along with a realistic fading channel. To make data transmission over the channel more secure, a Crypto-System with Embedded Error Control (CSEEC) is used. The system supports data security and reliability using forward error correction (FEC) codes: security is provided through a new symmetric encryption algorithm, and reliability is provided by the FEC codes. The system aims to speed up the encryption and encoding operations and to reduce the hardware dedicated to each of these operations, allowing users to achieve more secure and reliable communication. The proposed BER measurement system consumes low power compared to existing systems. The advantage of VLSI-based BER measurement is that it can be used in real-time applications and provides a single-chip solution.
Keywords: MIMO communication; VLSI; cryptography; error statistics; fading channels; forward error correction; radio receivers; radio transmitters; telecommunication control; BER measurement communication system; CSEEC; FEC codes; MIMO receiver; MIMO transmitter; VLSI implementation; bit error rate; cryptosystem with embedded error control; data reliability; data security; data transmission; fading channel; forward error correction codes; symmetric encryption algorithm; wireless communication system; Bit error rate; Encryption; Receivers; Very large scale integration; Wireless communication; BER; Crypto-System; (ID#: 15-7478)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193074&isnumber=7192777

 

Yejian Chen, “Superimposed Pilots Based Secure Communications for a System with Multiple Antenna Arrays,” in Vehicular Technology Conference (VTC Spring), 2015 IEEE 81st, vol., no., pp. 1–5, 11–14 May 2015. doi:10.1109/VTCSpring.2015.7146116
Abstract: In this paper, we investigate secure communications by introducing superimposed pilots for a multiple antenna system. The superimposed pilots enable trellis-based joint channel tracking and data detection for the user of interest. Further, by adjusting the power ratio between the data symbol and the superimposed pilot symbol, a secure capacity region can be established. The user of interest can select an appropriate Forward Error Correction (FEC) code rate to prevent any possible eavesdropping. We present the achievable secure capacity region for the multiple antenna system and verify it via Monte Carlo simulation.
Keywords: Monte Carlo methods; antenna arrays; error correction codes; forward error correction; telecommunication security; FEC code rate; Monte Carlo simulation; data detection; data symbol; eavesdropping; forward error correction code; multiple antenna array; power ratio; secure capacity region; secure communication; superimposed pilot symbol; trellis-based joint channel tracking; Binary phase shift keying; Communication system security; Decoding; MIMO; Noise; Security; Wireless communication (ID#: 15-7479)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7146116&isnumber=7145573

 

Liang Tang; Ambrose, J.A.; Kumar, A.; Parameswaran, S., “Dynamic Reconfigurable Puncturing for Secure Wireless Communication,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015, vol., no., pp. 888–891, 9–13 March 2015. doi: (not provided)
Abstract: The ubiquity of wireless devices has created security concerns about the information being transferred. It is critical to protect the secret information in every layer of wireless communication to thwart any type of attack. A dynamic reconfigurable puncturing-based security mechanism, named RePunc, is proposed in this paper to provide an extra level of security at the physical layer. RePunc utilizes the puncturing feature of Forward Error Correction (FEC) to insert secure information in the punctured positions of the standard information-encoded data. The puncturing patterns are dynamically changed and passed as a secret key from the sender to the receiver. An eavesdropper will not be able to detect the transmission of the secure information, since the inserted secure information will be processed as channel noise by the eavesdropper’s receiver. However, the rightful receiver will be able to successfully decode the secure packets by differentiating the secure information from the standard information before FEC decoding. A case study of a RePunc implementation for WiFi communication is presented, showing extremely high security complexity with low hardware overhead.
Keywords: computer network security; decoding; forward error correction; private key cryptography; radio receivers; software radio; ubiquitous computing; wireless LAN; wireless channels; FEC decoding; RePunc security mechanism; Wi-Fi communication; channel noise; dynamic reconfigurable puncturing; eavesdropper receiver; forward error correction; high security complexity; low hardware overhead; secret information protection; secret key cryptography; secure wireless communication; wireless devices ubiquity; Decoding; Hardware; IEEE 802.11 Standards; Random access memory; Receivers; Security (ID#: 15-7480)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092511&isnumber=7092347

 

Stecklina, O.; Kornemann, S.; Grehl, F.; Jung, R.; Kranz, T.; Leander, G.; Schweer, D.; Mollus, K.; Westhoff, D., “Custom-Fit Security for Efficient and Pollution-Resistant Multicast OTA-Programming with Fountain Codes,” in Innovations for Community Services (I4CS), 2015 15th International Conference on, vol., no., pp. 1–8, 8–10 July 2015. doi:10.1109/I4CS.2015.7294492
Abstract: In this work we describe the implementation details of a protocol suite for secure and reliable over-the-air reprogramming of restricted wireless devices. Although forward error correction codes aiming at robust transmission over a noisy wireless medium have recently been discussed and evaluated extensively, we believe the clear value of the contribution at hand is to share our experience with a meaningful combination and implementation of various multihop (broadcast) transmission protocols and custom-fit security building blocks. For robust and reliable data transmission we make use of fountain codes, a.k.a. rateless erasure codes, and show how to combine such schemes with an underlying medium access control protocol, namely a distributed low duty cycle medium access control (DLDC-MAC). To handle the well-known packet pollution problem of forward error correction approaches, where an attacker bogusly modifies or infiltrates a small number of encoded packets and thus pollutes the whole data stream at the receiver side, we apply homomorphic message authentication codes (HomMAC). We discuss implementation details and the pros and cons of the two currently available HomMAC candidates for our setting. Both require a symmetric block cipher as the core cryptographic primitive, for which, as we argue later, we have opted for the (exchangeable) PRESENT, PRIDE and PRINCE ciphers in our implementation.
Keywords: Ciphers; Decoding; Encoding; Programming; Protocols; Receivers; Wireless sensor networks; OTA programming; homomorphic message authentication; robust fountain codes; wireless sensor networks (ID#: 15-7481)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7294492&isnumber=7294473

 

Hanumanthappa, M.; Rashmi, S.; Reddy, M.V., “Metrics for Evaluating Phonetics Machine Translation in Natural Language Processing Through Modified Edit Distance Algorithm-A Naïve Approach,” in Computer Communication and Informatics (ICCCI), 2015 International Conference on, vol., no., pp. 1–7, 8–10 Jan. 2015. doi:10.1109/ICCCI.2015.7218113
Abstract: Mistakes made while writing are unavoidable. Certain common errors occur during writing, such as missing letters, extra letters, disordered letters, and misspelled letters; these common spelling errors are called phonetic spelling errors and are a major concern when dealing with phonetics. Of the various problems that phoneticians are trying to solve, a major portion concerns varieties of spelling errors. Phonetic structures are greatly emphasized based on effectiveness, appropriateness and accuracy. To keep abreast of the changing and challenging trends of Natural Language Processing (NLP), it is of great importance to resolve the problem of spelling errors. To achieve this goal, numerous realistic and practical approaches can be adopted that make use of spelling correction algorithms such as Edit distance, Habit distance, Soundex and Asoundex. Through the analysis of these algorithms, a new interface is put forward that calculates the Edit distance, thereby showing an overall comparative study of phonetic algorithms against the proposed modified Edit Distance algorithm. The interface computes the Edit distance between two strings in an appropriate and intuitive way, consistent with the comparisons shown in the distance table. The results show that an average recall of 0.937 and precision of 0.947 have been achieved, with an F-measure of 0.9417, making it evident that recall and F-measure are improved in the proposed Edit Distance algorithm. The revised version consistently attains finer-quality results in comparison with the traditional edit distance algorithm.
Keywords: natural language processing; speech processing; Asoundex; F-measures; Soundex; habit distance; modified edit distance algorithm; phonetics machine translation metrics; phonetics spelling errors; spelling correction algorithms; Accuracy; Algorithm design and analysis; Computers; Context; Informatics; Natural language processing; Writing; Ambiguity; Edit distance; Natural Language Processing (NLP); Phonetics (ID#: 15-7482)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218113&isnumber=7218046
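For reference, a minimal dynamic-programming implementation of the classic edit distance on which the proposed modified algorithm builds (the modification itself is not reproduced here):

def edit_distance(a, b):
    """Levenshtein distance: minimum insertions, deletions, substitutions to turn a into b."""
    m, n = len(a), len(b)
    # dp[j] holds the distance between a[:i] and b[:j] for the current row i.
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                       # deletion
                        dp[j - 1] + 1,                   # insertion
                        prev + (a[i - 1] != b[j - 1]))   # substitution (or match)
            prev = cur
    return dp[n]

print(edit_distance("phonetics", "fonetiks"))            # -> 3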

 

Brunina, D.; Porto, S.; Jain, A.; Lai, C.P.; Antony, C.; Pavarelli, N.; Rensing, M.; Talli, G.; Ossieur, P.; O'Brien, P.; Townsend, P.D., “Analysis of Forward Error Correction in the Upstream Channel of 10Gb/S Optically Amplified TDM-PONs,” in Optical Fiber Communications Conference and Exhibition (OFC), 2015, vol., no., pp. 1–3, 22–26 March 2015. doi:10.1364/OFC.2015.Th4H.3
Abstract: We experimentally investigate the performance of forward error correction operated in burst-mode using a burst-mode receiver. We show reduced error correction capability due to transients from the burst-mode receiver at the start of each burst.
Keywords: forward error correction; optical fibre amplifiers; optical receivers; passive optical networks; time division multiplexing; wavelength division multiplexing; bit rate 10 Gbit/s; burst mode receiver; burst mode switching; forward error correction; optically amplified TDM-PON; reduced error correction; upstream channel; Adaptive optics; Bit error rate; Forward error correction; Optical amplifiers; Optical attenuators; Optical filters; Passive optical networks (ID#: 15-7483)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7121778&isnumber=7121453

 

Wenjie Ji; Wei Zhang; Xingru Peng; Zhibin Liang, “16-Channel Two-Parallel Reed-Solomon Based Forward Error Correction Architecture for Optical Communications,” in Digital Signal Processing (DSP), 2015 IEEE International Conference on, vol., no., pp. 239–243, 21–24 July 2015. doi:10.1109/ICDSP.2015.7251867
Abstract: This paper presents a high-efficiency two-parallel Reed-Solomon (RS) decoder based on the compensated simplified reformulated inversionless Berlekamp-Massey (CS-RiBM) algorithm. To achieve high speed and low hardware complexity, the key equation solver (KES) block is designed with pipelining and folding. With a TSMC 90nm process, simulation results reveal that the proposed 16-channel architecture can operate up to 625 MHz and achieve a throughput of 156 Gbps with a total gate count of 269,000. The area of the proposed decoder is at least 35.6% smaller than comparable designs in the same technology, which meets the demands of next-generation short-reach optical systems.
Keywords: Reed-Solomon codes; channel coding; decoding; forward error correction; next generation networks; parallel processing; pipeline processing; telecommunication computing; 16-channel two-parallel reed-solomon based forward error correction architecture; CS-RiBM algorithm; KES block; RS decoder; TSMC process; bit rate 156 Gbit/s; compensated simplified reformulated inversionless Berlekamp-Massey algorithm; folding processing; key equation solver block; next generation short-reach optical system; optical communication; pipelining processing; size 90 nm; Clocks; Computational modeling; Computer architecture; Logic gates; folding; optical communication systems; pipelined (ID#: 15-7484)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7251867&isnumber=7251315

 

Bouras, C.; Kanakis, N., “Online AL-FEC Protection over Mobile Unicast Services,” in Networks and Communications (EuCNC), 2015 European Conference on, vol., no., pp. 229–233, June 29 2015–July 2 2015. doi:10.1109/EuCNC.2015.7194074
Abstract: Forward error correction (FEC) is a method for error control of data transmission adopted in several mobile multicast standards. FEC is a feedback-free error recovery method in which the sender transmits redundant data in advance along with the source data, enabling the recipient to recover from arbitrary packet losses. Recently, the adoption of the FEC error control method has been boosted by the introduction of powerful Application Layer FEC (AL-FEC) codes, i.e., RaptorQ codes. Furthermore, several works have emerged addressing the efficient application of AL-FEC protection by introducing deterministic or randomized online algorithms. The application of AL-FEC as a primary or auxiliary error protection method in mobile multicast environments is a well-investigated field. However, the opportunity of utilizing AL-FEC over mobile unicast services as the only method for error control, replacing common feedback-based methods that are now considered obsolete, has not yet been examined. In this work we analyze the feasibility of AL-FEC protection over unicast delivery, utilizing online algorithms for the application of AL-FEC codes with exceptional recovery performance.
Keywords: forward error correction; mobile radio; multicast communication; telecommunication standards; RaptorQ codes; application layer FEC codes; data transmission; error control; feedback free error recovery; forward error correction; mobile multicast standards; mobile unicast services; online AL-FEC protection; Algorithm design and analysis; Forward error correction; Mobile communication; Mobile computing; Packet loss; Unicast; online algorithms (ID#: 15-7485)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194074&isnumber=7194024

 

Poornima, D.; Vijayashaarathi, S., “Streaming High Definition Video over Heterogeneous Wireless Networks (HWN),” in Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, vol., no., pp. 199–205, 26–27 Feb. 2015. doi:10.1109/ECS.2015.7124892
Abstract: Video transmission over heterogeneous networks faces many challenges due to available bandwidth, link delay, frame loss, throughput, reliability, and network congestion. In video streaming it is important that the video stream reach users within the allocated time and without errors in the video frames, which lead to packet loss. Hence, to avoid packet loss and to enhance the Packet Delivery Ratio (PDR) and throughput of the network, a modified Forward Error Correction mechanism is proposed that considers feedback information (frame count, buffer status, and round trip time (RTT)). Simulation results compare the performance in terms of packet delivery ratio (PDR), throughput and handover delay under various video packet rates and packet intervals.
Keywords: forward error correction; radio networks; video streaming; HWN; PDR; feedback information; heterogeneous wireless networks; high definition modified forward error correction mechanism; packet delivery ratio; video transmission; Bandwidth; Forward error correction; Receivers; Streaming media; Throughput; Wireless networks; FEC (Forward Error Correction); HDVideo; MFEC (Modified Forward Error Correction) (ID#: 15-7486)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124892&isnumber=7124722

 

Muppalla, S.; Vaddempudi, K.R., “A Novel VHDL Implementation of UART with Single Error Correction and Double Error Detection Capability,” in Signal Processing And Communication Engineering Systems (SPACES), 2015 International Conference on, vol., no., pp. 152–156, 2–3 Jan. 2015. doi:10.1109/SPACES.2015.7058236
Abstract: In an industrial environment employing multiprocessor communication over UART, noise is likely to affect the data, and data may be received with errors. Such errors may affect the working of the system, resulting in improper control. Several existing UART designs incorporate error detection logic; if this logic detects errors, it requires retransmission of the corresponding data frames, which takes additional time for the automatic repeat request (ARQ) and the retransmission itself. Linear block codes like the Hamming code provide forward error correction (FEC) as well as error detection capability. This paper presents a novel VLSI implementation of a UART designed to include an (8,4) extended Hamming code, called a SEC-DED code, that can correct up to one error and detect up to two errors. This improves the noise immunity of the system, optimizing error-free reception of data. The whole design is implemented in the Xilinx ISE 12.3 simulator targeting a Xilinx Spartan 6 FPGA.
Keywords: Hamming codes; automatic repeat request; block codes; computer interfaces; error correction; error detection; field programmable gate arrays; forward error correction; hardware description languages; linear codes; telecommunication equipment; FEC; Hamming code; SEC-DED code; UART; VHDL; VLSI; Xilinx ISE 12.3 simulator; Xilinx Spartan 6 FPGA; automatic repeat request; data retransmission; double error detection; error detection logic; extended hamming code; linear block codes; multiprocessor communication; single error correction; Clocks; Decoding; Educational institutions; Error correction; Receivers; Registers; Transmitters; FEC (Forward Error Correction); Hamming Code; Universal Asynchronous Receiver Transmitter (UART); Xilinx ISE (ID#: 15-7487)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058236&isnumber=7058196
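Independent of the paper's VHDL design, the behavior of an (8,4) extended Hamming SEC-DED code can be modeled at the bit level in a few lines of Python; the bit-position layout below is one standard convention, assumed here rather than taken from the paper:

def hamming84_encode(d1, d2, d3, d4):
    """(8,4) extended Hamming: Hamming(7,4) plus an overall parity bit."""
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    code = [p1, p2, d1, p3, d2, d3, d4]          # positions 1..7
    p0 = 0
    for b in code:
        p0 ^= b                                  # overall parity over all seven bits
    return [p0] + code

def hamming84_decode(c):
    rest = list(c[1:])
    s = 0
    for pos, b in enumerate(rest, start=1):
        if b:
            s ^= pos                             # syndrome = XOR of positions of set bits
    overall = c[0]
    for b in rest:
        overall ^= b                             # zero if the codeword has even parity
    if s == 0 and overall == 0:
        status = "no error"
    elif overall == 1:                           # odd parity: assume a single error
        if s:
            rest[s - 1] ^= 1                     # flip the erroneous bit
        status = "single error corrected"
    else:                                        # nonzero syndrome, even parity: two errors
        return None, "double error detected"
    return [rest[2], rest[4], rest[5], rest[6]], status

cw = hamming84_encode(1, 0, 1, 1)
cw[4] ^= 1                                       # inject a single bit error
print(hamming84_decode(cw))                      # -> ([1, 0, 1, 1], 'single error corrected')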

 

Lopacinski, L.; Nolte, J.; Buechner, S.; Brzozowski, M.; Kraemer, R., “Design and Implementation of an Adaptive Algorithm for Hybrid Automatic Repeat Request,” in Design and Diagnostics of Electronic Circuits & Systems (DDECS), 2015 IEEE 18th International Symposium on, vol., no., pp. 263–266, 22–24 April 2015. doi:10.1109/DDECS.2015.32
Abstract: Transmission efficiency is an important topic for data link layer developers. The overhead of protocols and coding should be reduced to a minimum, which maximizes link throughput. This is especially important for high-speed networks, where a small degradation in efficiency can reduce throughput by several Gbps. We describe a redundancy-balancing algorithm for an adaptive hybrid automatic repeat request with Reed-Solomon coding. We introduce the testing environment, the most important technical issues, and results generated on a field programmable gate array. The hybrid automatic repeat request and Reed-Solomon algorithms are explained, and we provide a mathematical description and a block diagram of the adaptation algorithm. All necessary algorithm simplifications are explained in detail. The algorithm can be represented by basic operations in hardware, and in most cases it finds the optimal coding for a predefined bit error rate.
Keywords: Reed-Solomon codes; automatic repeat request; error statistics; field programmable gate arrays; Reed-Solomon coding; adaptation algorithm; adaptive algorithm design; adaptive algorithm implementation; adaptive bit error rate; block diagram; coding overhead; data link layer; field programmable gate array; high-speed networks; hybrid automatic repeat request; link throughput maximization; optimal coding; protocol overhead; redundancy-balancing algorithm; testing environment; transmission efficiency; Bit error rate; Encoding; Field programmable gate arrays; Forward error correction; Redundancy; Throughput; Wireless communication;100Gbps; FPGA; forward error correction; hybrid ARQ; reed-solomon (ID#: 15-7488)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195708&isnumber=7195649

 

Badr, Ahmed; Mahmood, Rafid; Khisti, Ashish, “Embedded MDS Codes for Multicast Streaming,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 2276–2280, 14–19 June 2015. doi:10.1109/ISIT.2015.7282861
Abstract: We study low-delay streaming codes for erasure channels in point-to-point and multicast scenarios. We consider a sliding-window erasure channel which captures the temporal correlation in packet losses observed in real channels. This correlation is often modelled using statistical channels such as the Gilbert-Elliott channel. In the point-to-point case, we provide a new class of codes, Embedded Maximum Distance Separable (EMDS) codes, which recover from channels introducing a mixture of burst and isolated erasures. Moreover, we propose a technique that extends point-to-point codes to the multicast scenario with two receivers that tolerate different delays, T1 and T2. The multicast codes opportunistically decode packets with short delay T1 when the channel is relatively good and with long delay T2 when the channel is worse. Simulations over multicast Gilbert-Elliott channels show that EMDS codes outperform other streaming codes for both users.
Keywords: Decoding; Delays; Encoding; Packet loss; Parity check codes; Receivers; Application Layer Forward Error Correction (AL-FEC); Burst Erasures; Correlated Packet Losses; Low-Delay Streaming Codes; Multicast Channels (ID#: 15-7489)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282861&isnumber=7282397

 

Fiondella, L.; Gokhale, S.S.; Jun-Hong Cui, “Reliability Analysis of Underwater Sensor Network Packet Transmission,” in Reliability and Maintainability Symposium (RAMS), 2015 Annual, vol., no., pp. 1–6, 26–29 Jan. 2015. doi:10.1109/RAMS.2015.7105109
Abstract: Underwater sensor networks pose unique challenges to the design of reliable communication. Due to the high bit error rates experienced in this environment, achieving a compromise between reliability and energy efficiency has become a fundamental problem. In this paper, an objective metric to analyze the reliability of various packet transmission methods available for use in underwater sensor networks is developed. Earlier frameworks comparing competing alternatives have made the simplifying assumption that loss rate is homogeneous across the entire network. Such a simplification contradicts the fact that wireless sensor networks exhibit properties such as link asymmetry and are also influenced by phenomenon like radio irregularity. In light of these realities, it is necessary to relax the existing modeling assumptions to produce a more general framework for assessing the various potential solutions. Drawing on concepts from network reliability theory, measures of network system performance are derived. Application of the framework suggests that hop-by-hop forward error correction performs better than end-to-end forward error correction and single-path forwarding.
Keywords: error statistics; forward error correction; marine communication; telecommunication network reliability; wireless sensor networks; high bit error rates; hop-by-hop forward error correction; network reliability theory; underwater sensor network packet transmission method; Energy consumption; Forward error correction; Measurement; Redundancy; Reliability theory; Routing; normalized energy consumption; packet forwarding; underwater sensor network (ID#: 15-7490)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7105109&isnumber=7105053
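The paper's qualitative conclusion, that hop-by-hop FEC outperforms end-to-end FEC, can be illustrated with a simple erasure-code reliability calculation; the per-link success probability, hop count, and block sizes below are made-up parameters for the sketch:

from math import comb

def block_success(p, k, r):
    """Probability that at least k of k+r packets survive a link with per-packet success p."""
    n = k + r
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p, hops, k, r = 0.9, 4, 8, 2
end_to_end = block_success(p**hops, k, r)        # FEC decoded only at the sink
hop_by_hop = block_success(p, k, r) ** hops      # decode and re-encode at every hop
print(f"end-to-end: {end_to_end:.3f}, hop-by-hop: {hop_by_hop:.3f}")

With these numbers, hop-by-hop recovery succeeds far more often, consistent with the framework's finding above.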

 

Häger, C.; Amat, A.G.i.; Pfister, H.D.; Alvarado, A.; Brannstrom, F.; Agrell, E., “On Parameter Optimization for Staircase Codes,” in Optical Fiber Communications Conference and Exhibition (OFC), 2015, vol., no., pp. 1–3, 22–26 March 2015. doi: (not provided)
Abstract: We discuss the optimization of staircase code parameters based on density evolution. An extension of the original code construction is proposed, leading to codes with steeper waterfall performance.
Keywords: forward error correction; optical fibre networks; optimisation; FEC codes; density evolution; optical transport networks; parameter optimization; staircase code parameters; steeper waterfall performance; Arrays; Bit error rate; Decoding; Forward error correction; Iterative decoding; Optical fibers; Optimization (ID#: 15-7491)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7121704&isnumber=7121453

 

Geisler, D.J.; Chandar, V.; Yarnall, T.M.; Stevens, M.L.; Hamilton, S.A., “Multi-Gigabit Coherent Communications Using Low-Rate FEC to Approach the Shannon Capacity Limit,” in Lasers and Electro-Optics (CLEO), 2015 Conference on, vol., no.,
pp. 1–2, 10–15 May 2015. doi:10.1364/CLEO_SI.2015.SW1M.2
Abstract: Combining a rate-¼ forward error-correcting code, a coherent receiver, and an optical phase-locked loop yields near error-free performance with 2-dB photon-per-bit sensitivity, which is <3 dB from the Shannon limit for a rate-¼, pre-amplified, coherent receiver.
Keywords: forward error correction; optical phase locked loops; optical receivers; Shannon capacity limit; coherent receiver; low-rate FEC; multigigabit coherent communications; near error-free performance; optical phase-locked loop; photon-per-bit sensitivity; rate-¼ forward error-correcting code; Adaptive optics; Forward error correction; Optical receivers; Parity check codes; Sensitivity; Signal to noise ratio (ID#: 15-7492)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7184330&isnumber=7182853

 

Vijayan, A.; Hariharan, B.; Uma, G., “Improving Video Qos in an Error Prone Wireless Network,” in Multimedia and Broadcasting (APMediaCast), 2015 Asia Pacific Conference on, vol., no., pp. 1–6, 23–25 April 2015. doi:10.1109/APMediaCast.2015.7210281
Abstract: Multimedia content over today’s internet has grown tremendously, and real-time streaming of multimedia is becoming more and more important. People rely on online video for entertainment, education and communication; YouTube, video conferencing, virtual classrooms, etc. are some of the most common applications where real-time multimedia over the internet has become relevant. Since end-user devices are mostly mobile and use wireless technology, streaming multimedia content wirelessly has become critical. Wireless, being an open broadcast medium, is prone to interference, noise, physical obstructions, multipath fading, jamming, etc. Therefore the Quality of Service (QoS) is not very high in a wireless medium, and techniques are needed to reduce the QoS issues. This paper proposes a system to solve some of the problems that reduce quality in video transmission. We use packet duplication and error correction techniques to achieve better QoS performance on the wireless network. The system is simulated using Qualnet, and the results show considerable improvements in the QoS parameters.
Keywords: Internet; media streaming; quality of service; QoS issues; YouTube; error prone wireless network; multimedia content; quality of service issue; real time streaming; video conference; virtual classrooms; Forward error correction; Interference; Multimedia communication; Quality of service; Streaming media; Wireless networks; Interference; VOIP; Video streaming; Virtual classroom; jamming; multipath fading (ID#: 15-7493)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210281&isnumber=7210263

 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Fuzzy Logic and Security 2015

 

 
SoS Logo

Fuzzy Logic and Security

2015


Fuzzy logic is being used to develop a number of security systems. The articles cited here include research into fuzzy logic-based security for software-defined networks, industrial controls, intrusion response and recovery, wireless sensor networks, and more. They are relevant to cyber-physical systems, resiliency, and metrics. These works were presented or published in 2015.



Levonevskiy, D.K.; Fatkieva, R.R.; Ryzhkov, S.R., “Network Attacks Detection Using Fuzzy Logic,” in Soft Computing and Measurements (SCM), 2015 XVIII International Conference on, vol., no., pp. 243–244, 19–21 May 2015. doi:10.1109/SCM.2015.7190470
Abstract: The aim of this research is to increase network attack detection accuracy by means of fuzzy logic. This paper considers an approach to intrusion detection using fuzzy logic. The approach is based on network monitoring of variables characteristic of different network anomalies, such as the ratio of incoming to outgoing traffic, packet size, etc. Every type of menace is characterized by a vector of fuzzy values describing the network state when that menace is present; these vectors constitute the fuzzy rule matrix. The article proposes computing an integral indicator of the presence of each menace using the rule matrix.
Keywords: fuzzy logic; fuzzy set theory; matrix algebra; security of data; fuzzy rule matrix; fuzzy values; intrusion detection; network anomalies; network attacks detection; Accuracy; Computer crime; Estimation; Fuzzy logic; Information systems; Telecommunication traffic; computer networks; distributed denial of service; network security (ID#: 15-7338)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7190470&isnumber=7190390
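A minimal sketch of the rule-matrix idea in Python: each menace is described by fuzzy membership functions over observed traffic variables, and the indicator is the conjunction of the membership degrees. The variable names, membership parameters, and menace definitions are invented for illustration:

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy descriptions of two menaces over two observed variables:
# incoming/outgoing traffic ratio and mean packet size (bytes).
menaces = {
    "ddos":  {"ratio": (3, 8, 20),      "pkt_size": (0, 100, 300)},
    "exfil": {"ratio": (0.0, 0.1, 0.5), "pkt_size": (800, 1400, 1600)},
}

def menace_indicator(observation):
    """Integral indicator per menace: minimum of the membership degrees."""
    scores = {}
    for name, rules in menaces.items():
        degrees = [tri(observation[var], *abc) for var, abc in rules.items()]
        scores[name] = min(degrees)              # conjunction of the fuzzy conditions
    return scores

print(menace_indicator({"ratio": 7.0, "pkt_size": 120.0}))   # -> high 'ddos' score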

 

Pinem, A.F.A.; Setiawan, E.B., “Implementation of Classification and Regression Tree (CART) and Fuzzy Logic Algorithm for Intrusion Detection System,” in Information and Communication Technology (ICoICT ), 2015 3rd International Conference on, vol., no., pp. 266–271, 27–29 May 2015. doi:10.1109/ICoICT.2015.7231434
Abstract: An intrusion detection system is a system that can detect the presence of an intrusion or attack in a computer network. There are two types of intrusion detection: misuse/signature detection and anomaly detection. This research uses a combination of Classification and Regression Tree (CART) and fuzzy logic to detect intrusions or attacks. CART is used to build the rules, or model, that are then applied by a fuzzy inference engine. The testing process is performed using fuzzy logic without defuzzification, because the resulting rule is used directly for classification. Training, testing and validation of the model are done using the KDD Cup 1999 dataset after preprocessing and data cleaning, and accuracy is calculated using the confusion matrix. Across the tests performed, the best model was built with a 70% training split, a tree depth of 11, and a 90% minimum leaf-node percentage, achieving an accuracy of 85.68% with an average validation time of 21.92 seconds.
Keywords: fuzzy logic; fuzzy reasoning; matrix algebra; pattern classification; regression analysis; security of data; CART; KDD Cup 1999 dataset; anomaly detection; classification and regression tree; cleaning data process; computer network attack; confusion matrix; defuzzification; fuzzy inference engine; fuzzy logic algorithm; fuzzy logic method; intrusion detection system; misuse detection; presence intrusion; signature detection; testing process; Conferences; Anomaly Detection; Classification and Regression Tree; Fuzzy Logic; Intrusion Detection System; Misuse Detection (ID#: 15-7339)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7231434&isnumber=7231384
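A rough sketch of the CART half of the pipeline with scikit-learn; the fuzzy inference stage and the KDD Cup 1999 preprocessing are omitted, and synthetic data stands in for the real dataset:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

# Synthetic stand-in for preprocessed KDD Cup 1999 features and attack labels.
X, y = make_classification(n_samples=2000, n_features=20, n_classes=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=0)

cart = DecisionTreeClassifier(max_depth=11)      # tree depth from the paper's best model
cart.fit(X_train, y_train)

y_pred = cart.predict(X_test)
print(confusion_matrix(y_test, y_pred))          # validation via the confusion matrix
print("accuracy:", accuracy_score(y_test, y_pred))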

 

Werner, G.A., “Fuzzy Logic Adapted Controller System for Biometrical Identification in Highly-Secured Critical Infrastructures,” in Applied Computational Intelligence and Informatics (SACI), 2015 IEEE 10th Jubilee International Symposium on, vol., no., pp. 335–340, 21–23 May 2015. doi:10.1109/SACI.2015.7208224
Abstract: In this paper I present an algorithm which could be used as a controller technique for access systems in highly secured environments. Highly secured environments like critical infrastructures must account for risks that are less likely but more consequential, potentially with fatal effects on society. More stringent regulations demand more reliable access systems, which mostly use multimodal biometric identification. Soft-computing techniques, especially fuzzy logic, are suitable as controller algorithms in a multimodal biometric identification system. To highlight the advantages of this method, I model a comparison between a statistical mean value calculation and the fuzzy logic adapted controller.
Keywords: biometrics (access control); critical infrastructures; fuzzy logic; statistical analysis; fuzzy logic adapted controller system; highly-secured critical infrastructure; multimodal biometrical identification system; reliable access systems; soft-computing technique; statistical mean value calculation; Biometrics (access control); Control systems; Firing; Fuzzy logic; Fuzzy sets; Risk management; Security; artificial intelligence; multimodal biometric identification; soft-computing (ID#: 15-7340)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208224&isnumber=7208165

 

Tillapart, P.; Thumthawatworn, T.; Viriyaphol, P.; Santiprabhob, P., “Intelligent Handover Decision Based on Fuzzy Logic for Heterogeneous Wireless Networks,” in Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2015 12th International Conference on, vol., no., pp. 1–6, 24–27 June 2015. doi:10.1109/ECTICon.2015.7207076
Abstract: An intelligent handover decision system (HDS) is essential to heterogeneous wireless mobile networks in order to fulfill users’ expectations of universal and seamless services. With emerging real-time services in heterogeneous networking environments, including multiple QoS parameters in the handover decision process seems essential. In this paper, fuzzy logic is applied to enhance the intelligence of the HDS. A new fuzzy-based HDS design is proposed, with the aim of reducing the design complexity of the fuzzy engine without sacrificing handover decision performance. The results show that, compared to non-fuzzy-based (i.e., SAW and AHP) and existing fuzzy-based decision techniques, the network selection performance of the proposed HDS design is significantly better than SAW and AHP and is superior to an existing fuzzy-based technique. The proposed HDS design is then enhanced with an adaptive mechanism, enabling a further improvement in network selection performance.
Keywords: fuzzy logic; mobility management (mobile radio); quality of service; telecommunication computing; Fuzzy Logic; HDS; adaptive mechanism; fuzzy engine design complexity reduction; heterogeneous wireless mobile network; intelligent handover decision system; multiple QoS parameter; network selection performance; Artificial neural networks; Engines; Quality of service; Security; WiMAX; Wireless LAN; Wireless networks (ID#: 15-7341)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7207076&isnumber=7206924

 

Pharande, S.; Pawar, P.; Wani, P.W.; Patki, A.B., “Application of Hurst Parameter and Fuzzy Logic for Denial of Service Attack Detection,” in Advance Computing Conference (IACC), 2015 IEEE International, vol., no., pp. 834–838, 12–13 June 2015. doi:10.1109/IADCC.2015.7154823
Abstract: Normal legitimate network traffic on both LANs and wide-area IP networks exhibits self-similarity, i.e., the scale-invariance property. Superimposition of high-intensity non-self-similar traffic on legitimate traffic degrades the self-similarity of normal traffic. The rescaled range method is used to calculate the Hurst parameter and its deviation from the normal value. A two-input, one-output fuzzy logic block is used to determine the intensity of a Denial of Service (DoS) attack. To detect self-similarity, we use synthetic self-similar data generated by a Fractional Gaussian Noise process, and to identify the existence of Denial of Service, the DARPA IDS evaluation dataset is used. C code for the statistical method is implemented on a TMS320C6713 DSP processor platform.
Keywords: Gaussian noise; IP networks; computer network security; digital signal processing chips; fuzzy logic; C code; DARPA; DSP processor TMS320C6713 platform; DoS; Hurst parameter; IDS evaluation dataset; LAN; denial of service attack detection; fractional Gaussian noise process; high intensity nonself-similar traffic; legitimate network traffic; rescaled range method; scale invariance property; self-similarity feature; statistical method; wide area IP networks; Algorithm design and analysis; Computational modeling; Computer crime; Correlation; Digital signal processing; Fuzzy logic; Telecommunication traffic; Denial of Service; Fuzzy Logic; Networks; Self-similarity (ID#: 15-7342)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154823&isnumber=7154658
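
The rescaled-range (R/S) estimate of the Hurst parameter at the heart of this approach is compact enough to sketch. The Python below is an illustrative re-implementation, not the authors' TMS320C6713 C code; the window sizes are arbitrary choices:

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic for one window of a traffic series."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())        # cumulative mean-adjusted series
    r = z.max() - z.min()              # range of cumulative deviations
    s = x.std()
    return r / s if s > 0 else 0.0

def hurst(x, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate H as the slope of log(mean R/S) vs. log(window size)."""
    x = np.asarray(x, dtype=float)
    rs_means = [np.mean([rescaled_range(x[i:i + w])
                         for i in range(0, len(x) - w + 1, w)])
                for w in window_sizes]
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return slope

# Self-similar traffic typically yields 0.5 < H < 1.0; a drop of H toward
# 0.5 under superimposed attack traffic is what feeds the fuzzy block.
print(hurst(np.random.default_rng(0).normal(size=4096)))  # near 0.5 for white noise
```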

 

Sridhar, M.; Vaidya, S.; Yawalkar, P., “Intrusion Detection Using Keystroke Dynamics & Fuzzy Logic Membership Functions,” in Technologies for Sustainable Development (ICTSD), 2015 International Conference on, vol., no., pp. 1–10, 4–6 Feb. 2015. doi:10.1109/ICTSD.2015.7095873
Abstract: If a password is compromised, either due to it being weak or through someone learning it by other means, the system cannot detect it. To overcome this problem, we propose a system that can detect whether the current user is the authorized user, a substitute user, or an intruder pretending to be a valid user. The system checks the identity of the user by their behaviour pattern, using keystroke dynamics to authenticate the user. A number of samples of login and password attempts by each user are gathered and stored in a database. From the collected samples, keystroke patterns called feature sets are derived, and signatures are formed for each user using fuzzy logic algorithms. Once signatures are formed, users are authenticated by comparing their typing pattern to the respective signatures. We study the performance of such a system using measures such as the False Acceptance Rate (FAR) and False Rejection Rate (FRR), thus evaluating the efficiency of the system.
Keywords: authorisation; fuzzy logic; fuzzy set theory; message authentication; FAR; FRR; false acceptance rate; false rejection rate; feature sets; fuzzy logic membership function; intrusion detection; keystroke dynamics; typing pattern; user authentication; Computers; Fuzzy logic; Intrusion detection; Mathematical model; Standards; Sustainable development; Timing; Keystroke dynamics; biometrics; computer security; continuous authentication system; continuous biometric authentication; feature selection; user typing behaviour; user-independent threshold (ID#: 15-7343)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7095873&isnumber=7095833
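
The FAR and FRR metrics used to evaluate such a keystroke-dynamics system reduce to simple counting once match scores are available. A minimal sketch with hypothetical score lists, assuming higher scores mean a closer match to the stored signature:

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor attempts accepted;
       FRR: fraction of genuine attempts rejected."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Sweeping the threshold trades FAR against FRR; the crossover point
# (the equal error rate) is a common single-number summary of such systems.
genuine = [0.91, 0.85, 0.78, 0.88, 0.95]    # hypothetical match scores
impostor = [0.40, 0.62, 0.55, 0.71, 0.30]
print(far_frr(genuine, impostor, threshold=0.75))
```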

 

Inaba, T.; Elmazi, D.; Yi Liu; Sakamoto, S.; Barolli, L.; Uchida, K., “Integrating Wireless Cellular and Ad-Hoc Networks Using Fuzzy Logic Considering Node Mobility and Security,” in Advanced Information Networking and Applications Workshops (WAINA), 2015 IEEE 29th International Conference on, vol., no., pp. 54–60, 24–27 March 2015. doi:10.1109/WAINA.2015.116
Abstract: Several solutions have been proposed for improving the Quality of Service (QoS) in wireless cellular networks, such as Call Admission Control (CAC) and handover strategies. However, none of them considers the usage of different interfaces for different conditions. In this work, we propose a Fuzzy-Based Multi-Interface System (FBMIS), where each node is equipped with two interfaces: the traditional cellular network interface and the Mobile Ad hoc Network (MANET) interface. The proposed FBMIS system is able to switch from cellular to ad-hoc mode and vice versa. We consider four input parameters: Distance Between Nodes (DBN), Node Mobility (NM), Angle between Node and Base station (ANB), and User Request Security (URS). We evaluated the performance of the proposed system by computer simulations using MATLAB. The simulation results show that our system performs well.
Keywords: cellular radio; fuzzy logic; mobile ad hoc networks; mobility management (mobile radio); quality of service; telecommunication congestion control; telecommunication security; ANB; CAC; DBN; FBMIS system; MANET; Matlab; NM; QoS; URS; angle between node and base station; call admission control; cellular network interface; distance between node; fuzzy-based multiinterface system; handover strategy; mobile ad hoc network; node mobility; user request security; wireless cellular network integration; Conferences; Fuzzy logic; Optical wavelength conversion; Security; Ad-Hoc Networks; Cellular Networks; Fuzzy Logic; Intelligent Systems (ID#: 15-7344)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7096147&isnumber=7096097

 

Karakis, R.; Capraz, I.; Bilir, E.; Güler, I., “A New Method of Fuzzy Logic-Based Steganography for the Security of Medical Images,” in Signal Processing and Communications Applications Conference (SIU), 2015 23rd, vol., no., pp. 272–275, 16–19 May 2015. doi:10.1109/SIU.2015.7129812
Abstract: DICOM (Digital Imaging and Communications in Medicine) files store the personal data of patients in their file headers. This personal data can be obtained illegally while DICOM files are archived and transmitted, so the personal rights of patients can be invaded and even the treatment of the disease can be altered. This study proposes a new fuzzy logic-based steganography method for the security of medical images. It randomly selects the least significant bits (LSBs) of image pixels. The message, which combines the patient's personal data with the doctor's comments, is compressed and encrypted to prevent attacks.
Keywords: cryptography; data compression; fuzzy logic; image coding; medical image processing; steganography; disease treatment; encryption; fuzzy logic-based steganography; image compression; image pixel; least significant bits; medical image security; patient personal data; Cryptography; DICOM; Histograms; Internet; Watermarking; Medical data security; image steganography; least significant bit (ID#: 15-7345)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7129812&isnumber=7129794
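
The LSB embedding step is easy to illustrate. The sketch below uses a seeded pseudo-random generator as a stand-in for the paper's fuzzy-logic-driven selection of pixel positions, and omits the compression and encryption of the payload:

```python
import numpy as np

def embed_lsb(pixels, message_bits, seed):
    """Embed message bits into randomly selected pixel LSBs."""
    stego = pixels.copy().ravel()
    rng = np.random.default_rng(seed)   # stand-in for fuzzy position selection
    positions = rng.choice(stego.size, size=len(message_bits), replace=False)
    for pos, bit in zip(positions, message_bits):
        stego[pos] = (stego[pos] & 0xFE) | bit   # overwrite the LSB
    return stego.reshape(pixels.shape), positions

def extract_lsb(stego, positions):
    """Recover the embedded bits from the known positions."""
    flat = stego.ravel()
    return [int(flat[pos] & 1) for pos in positions]

img = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego, pos = embed_lsb(img, bits, seed=42)
assert extract_lsb(stego, pos) == bits
```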

 

Anikin, I.V., “Information Security Risk Assessment and Management Method in Computer Networks,” in Control and Communications (SIBCON), 2015 International Siberian Conference on, vol., no., pp. 1–5, 21–23 May 2015. doi:10.1109/SIBCON.2015.7146975
Abstract: We suggest a method for quantitative information security risk assessment and management in computer networks. We use questionnaires, expert judgments, fuzzy logic, and the analytic hierarchy process to evaluate impact and possibility values for specific threats. We suggest a fuzzy extension of the Common Vulnerability Scoring System for vulnerability assessment. Fuzzy prediction rules are used to describe experts' knowledge about vulnerabilities.
Keywords: analytic hierarchy process; computer network security; fuzzy logic; risk management; common vulnerability scoring system; computer network; fuzzy prediction; information security risk assessment method; information security risk management method; vulnerability assessment; Analytic hierarchy process; Fuzzy logic; Information security; Measurement; Risk management; Servers; information security risks (ID#: 15-7346)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7146975&isnumber=7146959

 

Srivastava, Shashank; Kumar, Divya; Chandra, Shuchi, “Trust Analysis of Execution Platform for Self Protected Mobile Code,” in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, vol., no.,
pp. 1904–1909, 10–13 Aug. 2015. doi:10.1109/ICACCI.2015.7275896
Abstract: The malicious host problem remains a challenging phenomenon in agent computing environments. In mobile agent computing, the agent platform has full control over the mobile agent in order to execute it. A host can analyze the code while the mobile agent resides on it, can modify the mobile code for its own benefit, and can analyze and modify data collected earlier in the agent's itinerary. Hence, to protect the code from a malicious host, we need to identify such hosts. We therefore calculate the risk associated with code executing on a mobile host using fuzzy logic. When a host attacks the mobile agent, execution takes more time, so risk can be associated with execution time. If the calculated risk is greater than a user-specified maximum value, the agent code is discarded and the host is identified as malicious. In this paper, we propose a fuzzy-based risk evaluation model integrated with a proposed self-protected security protocol to secure mobile code from insecure execution.
Keywords: Encryption; Fuzzy logic; Mobile agents; Mobile communication; Protocols; Mobile Agent; Risk Analysis; Trust Analysis (ID#: 15-7347)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275896&isnumber=7275573

 

Anikin, I.V.; Alnajjar, K., “Fuzzy Stream Cipher System,” in Control and Communications (SIBCON), 2015 International Siberian Conference on, vol., no., pp. 1–4, 21–23 May 2015. doi:10.1109/SIBCON.2015.7146976
Abstract: In this paper a fuzzy synchronous stream cipher system is proposed. We construct the system in the form of a nonlinear feedback shift register to obtain pseudo-random noise very similar to a truly random bit stream. The suggested system is deterministic, so the same parameters of the fuzzy pseudo-random generator will give the same bit sequence. The designed system is simple and suitable for various telecommunication ciphering applications, and we introduce it in a general way. The generated pseudo-random sequence passed 15 randomness tests successfully.
Keywords: cryptography; fuzzy set theory; cryptography; fuzzy pseudorandom generator; fuzzy synchronous stream cipher system; generated pseudorandom sequence; nonlinear feedback shift register; pseudo random noise; random bits sequence; real random bits stream; telecommunication ciphering applications; Ciphers; Fuzzy logic; Generators; NIST; Polynomials; Random sequences; Fuzzy Logic; LFSR; PN (Pseudo-random Noise); Statistical Randomness Tests; Stream cipher (ID#: 15-7348)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7146976&isnumber=7146959
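
The linear register such a design builds on is easy to sketch. The Fibonacci LFSR keystream generator below illustrates the determinism the abstract relies on (the same seed and taps reproduce the same keystream, so XOR-decryption is just re-encryption); the authors' fuzzy nonlinear feedback stage is not shown:

```python
def lfsr_keystream(seed, taps, nbits):
    """Fibonacci LFSR: feedback is the XOR of the tapped bits.
    seed: list of 0/1 bits; taps: state indices XORed for feedback."""
    state = list(seed)
    out = []
    for _ in range(nbits):
        out.append(state[-1])             # output the last bit
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]         # shift, feeding back into position 0
    return out

def stream_xor(bits, seed, taps):
    ks = lfsr_keystream(seed, taps, len(bits))
    return [b ^ k for b, k in zip(bits, ks)]

seed, taps = [1, 0, 1, 1], [0, 3]
plaintext = [1, 1, 0, 0, 1, 0, 1, 0]
ciphertext = stream_xor(plaintext, seed, taps)
assert stream_xor(ciphertext, seed, taps) == plaintext   # decryption = re-encryption
```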

 

Kunjumon, Anu; Madhu, Arun; Kizhakkethottam, Jubilant J, “Comparison of Anomaly Detection Techniques in Networks,” in Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, vol., no., pp. 1–3, 25–27 Feb. 2015. doi:10.1109/ICSNS.2015.7292400
Abstract: Anomaly detection in a network is important for diagnosing attacks or failures that affect the performance and security of the network. Lately, many anomaly detection techniques have been proposed for detecting attacks of a previously unseen nature. A process for extracting useful features is implemented in the anomaly detection framework, and standard metrics are applied for measuring the performance of the anomaly detection algorithms. This study compares different techniques for identifying anomalies, covering a wide spectrum of anomaly types.
Keywords: Data mining; Detectors; Feature extraction; Histograms; Intrusion detection; Monitoring; anomaly; clustering; fuzzy logic; histogram; intrusion detection; worm detection (ID#: 15-7349)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292400&isnumber=7292366

 

Anikin, I.V.; Zinoviev, I.P., “Fuzzy Control Based on New Type of Takagi-Sugeno Fuzzy Inference System,” in Control and Communications (SIBCON), 2015 International Siberian Conference on, vol., no., pp. 1–4, 21–23 May 2015. doi:10.1109/SIBCON.2015.7146977
Abstract: We suggest a new type of fuzzy inference system (FIS) based on the Takagi-Sugeno model, which we call enhanced fuzzy regression (EFR). The new FIS uses fuzzy coefficients in the right-hand (consequent) parts of the fuzzy rules. A fuzzy approximation theorem has been proved and a learning procedure has been suggested for the EFR. We compared EFR with a Mamdani FIS and concluded that EFR can be more effective for fuzzy control.
Keywords: approximation theory; fuzzy control; fuzzy reasoning; learning (artificial intelligence); regression analysis; EFR; FIS; Takagi-Sugeno fuzzy inference system; enhanced fuzzy regression; fuzzy approximation theorem; fuzzy coefficients; fuzzy rules; learning procedure; Approximation methods; Fuzzy control; Knowledge based systems; Mathematical model; Pragmatics; Takagi-Sugeno model; fuzzy logic (ID#: 15-7350)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7146977&isnumber=7146959
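
For readers unfamiliar with the Takagi-Sugeno scheme that EFR extends, the sketch below shows standard first-order TS inference with crisp consequent coefficients. The membership parameters and rules are invented for illustration; the EFR variant described in the abstract would replace the crisp coefficients p and q with fuzzy numbers:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def ts_infer(x, rules):
    """Takagi-Sugeno inference: the crisp output is the firing-strength-
    weighted average of the rule consequents y = p*x + q."""
    weights, outputs = [], []
    for (a, b, c), (p, q) in rules:
        weights.append(tri(x, a, b, c))   # firing strength of the rule
        outputs.append(p * x + q)         # rule consequent
    weights = np.array(weights)
    return float(np.dot(weights, outputs) / weights.sum())

rules = [((0.0, 0.0, 0.5), (1.0, 0.0)),   # "x is LOW"  -> y = x
         ((0.0, 0.5, 1.0), (2.0, 0.5)),   # "x is MED"  -> y = 2x + 0.5
         ((0.5, 1.0, 1.0), (0.5, 1.0))]   # "x is HIGH" -> y = 0.5x + 1
print(ts_infer(0.3, rules))
```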

 

Kim, Seung Wan; Jung, Young Gyo; Shin, Dong Ryeol; Youn, Hee Yong, “Dynamic Queue Management Approach for Data Integrity and Delay Differentiated Service in WSN,” in IT Convergence and Security (ICITCS), 2015 5th International Conference on, vol., no., pp. 1-5, 24–27 Aug. 2015. doi:10.1109/ICITCS.2015.7292975
Abstract: A wireless sensor network (WSN) is formed by hundreds to thousands of nodes that communicate with each other, updating information from time to time by passing data from one node to another. This work aims to simultaneously improve fidelity for high-integrity applications and decrease end-to-end delay for delay-sensitive ones, even when the network is congested. For effective queue management, a queue scheduler allocates network resources by selecting a packet from the classified queue to access a single physical link with fixed capacity. We propose a dynamic queue management approach that uses fuzzy logic to meet the quality of service (QoS) requirements of integrity and delay differentiated routing (IDDR). The simulation results demonstrate that the proposed approach significantly reduces average end-to-end delay and increases packet delivery ratio and throughput compared to the existing routing algorithm.
Keywords: Delays; Fuzzy logic; Quality of service; Real-time systems; Reliability; Routing; Wireless sensor networks (ID#: 15-7351)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292975&isnumber=7292885

 

Yi Liu; Sakamoto, S.; Matsuo, K.; Barolli, L.; Ikeda, M.; Xhafa, F., “A Fuzzy-Based Reliability System for JXTA-Overlay P2P Platform Considering Number of Authentic Files, Local Score, Number of Interactions and Security Parameters,” in Complex, Intelligent, and Software Intensive Systems (CISIS), 2015 Ninth International Conference on, vol., no., pp. 50–56, 8–10 July 2015. doi:10.1109/CISIS.2015.28
Abstract: In this paper, we propose and evaluate a new fuzzy-based reliability system for Peer-to-Peer (P2P) communications on the JXTA-Overlay platform. In our system, we considered four input parameters: Number of Authentic Files (NAF), Local Score (LS), Number of Interactions (NI), and Security (S) to decide the Peer Reliability (PR). We evaluate the proposed system by computer simulations. The simulation results show that the proposed system performs well and can choose reliable peers to connect to on the JXTA-Overlay platform.
Keywords: Java; fuzzy set theory; message authentication; peer-to-peer computing; software reliability; JXTA-overlay P2P platform; NAF; P2P communication; computer simulation; fuzzy-based reliability system; local score; number of authentic files; number of interactions; peer reliability; peer-to-peer communication; security; Fuzzy logic; Nickel; Peer-to-peer computing; Pragmatics; Process control; Reliability; Security; Fuzzy System; JXTA Overlay Platform; Local Score; Number of Authentic Files; Number of Interactions; P2P; Reliability (ID#: 15-7352)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7185165&isnumber=7185122

 

Neelam, Sahil; Sood, S.; Mehmi, S.; Dogra, S., “Artificial Intelligence for Designing User Profiling System for Cloud Computing Security: Experiment,” in Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, vol., no., pp. 51–58, 19–20 March 2015. doi:10.1109/ICACEA.2015.7164645
Abstract: In cloud computing security, the existing mechanisms (antivirus programs, authentication, and firewalls) are not able to withstand the dynamic nature of threats. A User Profiling System, which registers users' activities in order to analyze their behavior, augments the security system to work in both a proactive and a reactive manner and provides enhanced security. This paper focuses on designing a User Profiling System for the cloud environment using artificial intelligence techniques, studies the behavior of the User Profiling System, and proposes a new hybrid approach that will deliver a comprehensive User Profiling System for cloud computing security.
Keywords: artificial intelligence; authorisation; cloud computing; firewalls; antivirus programs; artificial intelligence techniques; authentications; cloud computing security; cloud environment; proactive manner; reactive manner; user activities; user behavior; user profiling system; Artificial intelligence; Cloud computing; Computational modeling; Fuzzy logic; Fuzzy systems; Genetic algorithms; Security; Artificial Intelligence; Artificial Neural Networks; Cloud Computing; Datacenters; Expert Systems; Genetics; Machine Learning; Multi-tenancy; Networking Systems; Pay-as-you-go Model (ID#: 15-7353)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164645&isnumber=7164643

 

FengJi Luo; ZhaoYang Dong; Guo Chen; Yan Xu; Ke Meng; YingYing Chen; KitPo Wong, “Advanced Pattern Discovery-Based Fuzzy Classification Method for Power System Dynamic Security Assessment,” in Industrial Informatics, IEEE Transactions on, vol. 11, no. 2, pp. 416–426, April 2015. doi:10.1109/TII.2015.2399698
Abstract: Dynamic security assessment (DSA) is an important issue in modern power system security analysis. This paper proposes a novel pattern discovery (PD)-based fuzzy classification scheme for the DSA. First, the PD algorithm is improved by integrating the proposed centroid deviation analysis technique and the prior knowledge of the training data set. This improvement can enhance the performance when it is applied to extract the patterns of data from a training data set. Secondly, based on the results of the improved PD algorithm, a fuzzy logic-based classification method is developed to predict the security index of a given power system operating point. In addition, the proposed scheme is tested on the IEEE 50-machine system and is compared with other state-of-the-art classification techniques. The comparison demonstrates that the proposed model is more effective in the DSA of a power system.
Keywords: fuzzy logic; power engineering computing; power system security; DSA; IEEE 50-machine system; PD-based fuzzy classification scheme; centroid deviation analysis technique; pattern discovery-based fuzzy classification method; power system dynamic security assessment; power system security analysis; security index; Algorithm design and analysis; Classification algorithms; Mathematical model; Power system dynamics; Power system stability; Security; Training data; Data Mining; Data mining; Dynamic Security Assessment; Fuzzy Control; Pattern Discovery; dynamic security assessment (DSA); fuzzy control; pattern discovery (PD) (ID#: 15-7354)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7029678&isnumber=7070852

 

Ageev, S.; Kopchak, Y.; Kotenko, I.; Saenko, I., “Abnormal Traffic Detection in Networks of the Internet of Things Based on Fuzzy Logical Inference,” in Soft Computing and Measurements (SCM), 2015 XVIII International Conference on, vol., no., pp. 5–8, 19–21 May 2015. doi:10.1109/SCM.2015.7190394
Abstract: The paper proposes a traffic anomaly detection technique that can be implemented in networks of the Internet of Things. It is based on fuzzy logical inference applied to the stationary Poisson or self-similar traffic peculiar to such networks. Modified stochastic approximation and “sliding window” algorithms, included in the traffic anomaly detection technique, are suggested. Results of an experimental assessment of the technique are discussed.
Keywords: Internet; Internet of Things; fuzzy logic; fuzzy reasoning; stochastic processes; abnormal traffic detection; fuzzy logical inference; self-similar traffic; sliding window; stationary Poisson; stochastic approximation; traffic anomaly detection technique; Approximation algorithms; Approximation methods; Heuristic algorithms; Inference algorithms; Security; Stochastic processes; Telecommunication traffic; Internet of things; anomaly detection (ID#: 15-7355)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7190394&isnumber=7190390
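
The interplay of the two named algorithms can be sketched as follows: a recursive stochastic-approximation update tracks the expected traffic level while a sliding window supplies a deviation scale. The crisp threshold k stands in for the paper's fuzzy inference stage, and all constants are illustrative:

```python
from collections import deque

def detect_anomalies(counts, window=20, alpha=0.1, k=3.0, warmup=5):
    """Flag intervals whose packet count deviates strongly from a
    running mean estimate.  m <- m + alpha*(x - m) is the
    stochastic-approximation step; the deque is the sliding window."""
    recent = deque(maxlen=window)
    m = float(counts[0])
    flags = []
    for x in counts:
        if len(recent) >= warmup:
            mean_w = sum(recent) / len(recent)
            var_w = sum((v - mean_w) ** 2 for v in recent) / len(recent)
            scale = max(var_w ** 0.5, 1.0)
            flags.append(abs(x - m) > k * scale)
        else:
            flags.append(False)              # warm-up: not enough history
        recent.append(x)
        m += alpha * (x - m)                 # stochastic approximation update
    return flags

traffic = [10, 12, 9, 11, 10, 13, 11, 95, 10, 12]   # spike at index 7
print(detect_anomalies(traffic))             # only the spike is flagged
```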

 

Semenova, O.; Semenov, A.; Voznyak, O.; Mostoviy, D.; Dudatyev, I., “The Fuzzy-Controller for WiMax Networks,” in Control and Communications (SIBCON), 2015 International Siberian Conference on, vol., no., pp. 1–4, 21–23 May 2015. doi:10.1109/SIBCON.2015.7147214
Abstract: WiMAX is a broadband wireless last-mile technology providing high speeds over long distances and offering great flexibility. Scheduling has become one of the most important tasks in WiMAX, because it is responsible for distributing available resources among users, and a high level of quality of service and scheduling support is one of the most interesting features of the WiMAX standard. Access control techniques are widely used in modern telecommunication networks because such devices increase the accuracy and reliability of control compared to traditional ones, and fuzzy systems have replaced conventional techniques in many engineering applications, especially in control systems. This article suggests using a fuzzy controller for access control in WiMAX networks, which helps avoid network congestion. The main objective of this work is an implementation of the WiMAX standard using a dynamic fuzzy logic based priority scheduler. We propose a fuzzy controller with three input linguistic variables and one output linguistic variable: the inputs are waiting time, queue length, and packet size, and the output is priority. A block diagram of the fuzzy controller was developed, and linguistic variables, terms, and membership functions for the input and output values have been defined. The waiting time variable has three terms: low, medium, high. The queue length variable has three terms: short, medium, long. The packet size variable has three terms: small, medium, large. A rule base consisting of twenty-seven rules has been developed. The fuzzy controller has been simulated using Matlab 6.5, and the results of the simulation confirm the accuracy and reliability of the fuzzy controller model.
Keywords: WiMax; broadband networks; fuzzy control; fuzzy systems; quality of service; queueing theory; scheduling; telecommunication congestion control; telecommunication network reliability; telecommunication security; Matlab 6.5; WiMAX network; WiMAX standard; access control technique; block diagram; broadband wireless last mile technology; congestion avoidance; control system; conventional technique; dynamic fuzzy logic based priority scheduler; fuzzy controller; fuzzy system; fuzzy-controller model; input linguistic variable; output linguistic variable; output variable; packet size linguistic variable; quality of service; queue length linguistic variable; reliability; scheduling support; telecommunication network; waiting time linguistic variable; Broadband communication; Telecommunication network reliability; WiMAX; fuzzy-controller; simulation (ID#: 15-7356)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7147214&isnumber=7146959
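
The structure of such a three-input, twenty-seven-rule controller is straightforward to sketch. The membership parameters and rule consequents below are invented for illustration (the consequent simply rises with waiting time and queue length and falls with packet size); the paper's actual rule base and Matlab model are not reproduced here:

```python
import itertools

def tri(x, a, b, c):
    """Triangular membership on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Three terms per input on a normalized 0..1 scale; the small epsilons
# widen the end terms so the extremes 0 and 1 get full membership.
TERMS = {"low": (-0.01, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.01)}
LEVEL = {"low": 0, "medium": 1, "high": 2}

def priority(waiting_time, queue_length, packet_size):
    """27-rule fuzzy priority sketch with weighted-average defuzzification."""
    num = den = 0.0
    for wt, ql, ps in itertools.product(TERMS, repeat=3):
        w = (tri(waiting_time, *TERMS[wt]) *
             tri(queue_length, *TERMS[ql]) *
             tri(packet_size, *TERMS[ps]))            # rule firing strength
        consequent = (LEVEL[wt] + LEVEL[ql] + (2 - LEVEL[ps])) / 6.0
        num += w * consequent
        den += w
    return num / den if den else 0.0

print(priority(waiting_time=0.9, queue_length=0.7, packet_size=0.2))
```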

 

Chih-Hung Hsieh; Yu-Siang Shen; Chao-Wen Li; Jain-Shing Wu, “iF2: An Interpretable Fuzzy Rule Filter for Web Log Post-Compromised Malicious Activity Monitoring,” in Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, vol., no., pp. 130–137, 24–26 May 2015. doi:10.1109/AsiaJCIS.2015.19
Abstract: To reduce the burden of tracking web log files by human effort, machine learning methods are now commonly used to analyze log data and identify patterns of malicious activity. Traditional kernel-based techniques, like the neural network and the support vector machine (SVM), can typically deliver higher prediction accuracy; however, the user of a kernel-based technique normally cannot get an overall picture of the distribution of the data set. On the other hand, logic-based techniques, such as the decision tree and rule-based algorithms, have the advantage of presenting a good summary of the distinctive characteristics of different classes of data, making them more suitable for generating interpretable feedback to domain experts. In this study, a real web-access log dataset from a certain organization was collected, and an efficient interpretable fuzzy rule filter (iF2) was proposed to analyze the data and distinguish suspicious internet addresses from normal ones. The historical information of each internet address recorded in the web log file is summarized as multiple statistics, and the design of iF2 is modeled as a parameter optimization problem that simultaneously considers 1) maximizing prediction accuracy, 2) minimizing the number of rules used, and 3) minimizing the number of statistics selected. Experimental results show that the fuzzy rule filter constructed with the proposed approach delivers superior prediction accuracy compared with conventional logic-based classifiers and the expectation-maximization-based kernel algorithm. Although it cannot match the prediction accuracy delivered by the SVM, when facing real web log files, where the ratio of positive to negative cases is extremely unbalanced, the optimization flexibility of iF2 results in a better recall rate and offers one major advantage: it provides the user with an overall picture of the underlying distributions.
Keywords: Internet; data mining; fuzzy set theory; learning (artificial intelligence); neural nets; pattern classification; statistical analysis; support vector machines; Internet address; SVM; Web log file tracking; Web log post-compromised malicious activity monitoring; Web-access log dataset; decision tree; expectation maximization based kernel algorithm; fuzzy rule filter; iF2; interpretable fuzzy rule filter; kernel based techniques; log data analysis; logic based classifiers; logic based techniques; machine learning methods; malicious activities; neural network; parameter optimization problem; recall rate; rule-based algorithm; support vector machine; Accuracy; Internet; Kernel; Monitoring; Optimization; Prediction algorithms; Support vector machines; Fuzzy Rule Based Filter; Machine Learning; Parameter Optimization; Pattern Recognition; Post-Compromised Threat Identification; Web Log Analysis (ID#: 15-7357)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153947&isnumber=7153836

 

Gonzalez-Longatt, F.; Carmona-Delgado, C.; Riquelme, J.; Burgos, M.; Rueda, J.L., “Risk-Based DC Security Assessment for Future DC-Independent System Operator,” in Energy Economics and Environment (ICEEE), 2015 International Conference on, vol., no., pp. 1–8, 27–28 March 2015. doi:10.1109/EnergyEconomics.2015.7235101
Abstract: The use of multi-terminal HVDC to integrate wind power coming from the North Sea opens the door to a new transmission system model, the DC-Independent System Operator (DC-ISO). The DC-ISO will face highly stressed and varying conditions that require new risk assessment tools to ensure security of supply. This paper proposes a novel risk-based static security assessment methodology named risk-based DC security assessment (RB-DCSA). It combines a probabilistic approach to include uncertainties with a fuzzy inference system to quantify the systemic and individual component risk associated with operational scenarios under uncertainty. The proposed methodology is illustrated using a multi-terminal HVDC system where the variability of wind speed at the offshore wind farms is included.
Keywords: HVDC power transmission; fuzzy reasoning; fuzzy set theory; power engineering computing; power system security; probability; wind power plants; DC-ISO; DC-independent system operator; component risk; fuzzy inference system; multiterminal HVDC system; offshore wind; probabilistic approach; risk assessment tools; risk-based DC security assessment; risk-based static security assessment methodology; wind speed variability; HVDC transmission; Indexes; Load flow; Probabilistic logic; Security; Uncertainty; Voltage control; Fuzy; Fuzzy Inference system; HVDC; multiterminal HVDC; randomness; risk; security assessment; uncertainty; wind power (ID#: 15-7358)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7235101&isnumber=7235053 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Hardware Trojan Horse Detection 2015

 

 
SoS Logo

Hardware Trojan Horse Detection

2015


Detection and neutralization of hardware-embedded Trojans is a difficult problem. Current research is attempting to find ways to develop detection methods and processes and to automate the process. This research is relevant to cyber physical systems security, resilience, and composability.  The work presented here addresses path delay, slack removal, reverse engineering, and counterfeit prevention. These papers were presented in 2015.



Flottes, M.-L.; Dupuis, S.; Ba, P.-S.; Rouzeyre, B., “On the Limitations of Logic Testing for Detecting Hardware Trojans Horses,” in Design & Technology of Integrated Systems in Nanoscale Era (DTIS), 2015 10th International Conference on, vol.,
no., pp. 1–5, 21–23 April 2015. doi:10.1109/DTIS.2015.7127362
Abstract: The insertion of malicious alterations into a circuit, referred to as Hardware Trojan Horses (HTHs), is a threat that has been taken more and more seriously in recent years. Several methods have been proposed in the literature to detect the presence of such alterations. Among them, logic testing approaches try to activate potential HTHs and detect erroneous outputs by exploiting manufacturing digital test techniques. Beyond the complexity of this approach, which stems from the intrinsic stealthiness of the potential HTH, we show that a particular HTH targeting the test infrastructure itself may jeopardize the possibility of detecting any other alteration.
Keywords: logic testing; security; HTH; digital test technique; erroneous output detection; hardware Trojan horse detection; logic testing approach; malicious alteration; Automatic test pattern generation; Clocks; Hardware; Payloads; Trojan horses; Hardware Trojan; Logic testing (ID#: 15- )
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7127362&isnumber=7127334

 

Balasch, J.; Gierlichs, B.; Verbauwhede, I., “Electromagnetic Circuit Fingerprints for Hardware Trojan Detection,” in Electromagnetic Compatibility (EMC), 2015 IEEE International Symposium on, vol., no., pp. 246–251, 16–22 Aug. 2015. doi:10.1109/ISEMC.2015.7256167
Abstract: Integrated circuit counterfeits, relabeled parts and maliciously modified integrated circuits (so-called Hardware Trojan horses) are a recognized emerging threat for embedded systems in safety or security critical applications. We propose a Hardware Trojan detection technique based on fingerprinting the electromagnetic emanations of integrated circuits. In contrast to most previous work, we do not evaluate our proposal using simulations but we rather conduct experiments with an FPGA. We investigate the effectiveness of our technique in detecting extremely small Hardware Trojans located at different positions within the FPGA. In addition, we also study its robustness to the often neglected issue of variations in the test environment. The results show that our method is able to detect most of our test Hardware Trojans but also highlight the difficulty of measuring emanations of unrealistically tiny Hardware Trojans. The results also confirm that our method is sensitive to changes in the test environment.
Keywords: copy protection; embedded systems; field programmable gate arrays; integrated logic circuits; invasive software; logic testing; FPGA; electromagnetic circuit fingerprints; electromagnetic emanations; hardware Trojan detection technique; hardware Trojan horses; integrated circuit counterfeits; Field programmable gate arrays; Hardware; Integrated circuit modeling; Payloads; Probes; Trojan horses (ID#: 15-7307)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7256167&isnumber=7256113

 

Karimian, N.; Tehranipoor, F.; Rahman, M.T.; Kelly, S.; Forte, D., “Genetic Algorithm for Hardware Trojan Detection with Ring Oscillator Network (RON),” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no.,
pp. 1–6, 14–16 April 2015. doi:10.1109/THS.2015.7225334
Abstract: Securing integrated circuits against malicious modifications (i.e., hardware Trojans) is of utmost importance, as hardware Trojans may leak information and reduce reliability of electronic systems in critical applications. In this paper, we use ring oscillators (ROs) to gather measurements of ICs that may contain hardware Trojans. To distinguish between Trojan-inserted ICs and Trojan-free ICs, we investigate several classification approaches. Furthermore, we propose a novel feature selection approach based on the Genetic Algorithm (GA) and evaluate its performance compared to several popular alternatives. The proposed method is an improvement over principal component analysis (PCA) in terms of accuracy and equal error rate by 30% and 97% respectively.
Keywords: electronic engineering computing; feature selection; genetic algorithms; integrated circuit measurement; invasive software; oscillators; IC measurements; PCA; Trojan-free IC; Trojan-inserted IC; feature selection approach; genetic algorithm; hardware Trojans; integrated circuits; malicious modifications; principal component analysis; ring oscillator network; Classification algorithms; Genetic algorithms; Genetics; Principal component analysis; Receivers; Support vector machines; Trojan horses; Genetic Algorithm; Hardware Trojan Detection; One class classification; Ring Oscillator Network (ID#: 15-7308)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225334&isnumber=7190491
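
A genetic algorithm for this kind of feature selection can be sketched generically: an individual is a bit mask over ring-oscillator features, and fitness would score a mask by classifier accuracy on Trojan-inserted versus Trojan-free chips. The skeleton below uses common operators (tournament selection, one-point crossover, bit-flip mutation) with a toy fitness function, not the authors' exact configuration:

```python
import random

def ga_select(n_features, fitness, pop_size=20, gens=30, p_mut=0.05):
    """Tiny GA over bit masks; fitness scores a mask (higher is better)."""
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]                        # elitism: keep best two
        while len(next_pop) < pop_size:
            p1, p2 = (max(random.sample(scored, 3), key=fitness)
                      for _ in range(2))             # tournament selection
            cut = random.randrange(1, n_features)
            child = p1[:cut] + p2[cut:]              # one-point crossover
            child = [b ^ (random.random() < p_mut) for b in child]  # mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy fitness: reward selecting features 0 and 3, penalize large masks.
fit = lambda mask: 2 * mask[0] + 2 * mask[3] - 0.5 * sum(mask)
print(ga_select(8, fit))
```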

 

Bao, C.; Forte, D.; Srivastava, A., “On Reverse Engineering-Based Hardware Trojan Detection,” in Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. 35, no. 1, pp. 49–57, Jan. 2016. doi:10.1109/TCAD.2015.2488495
Abstract: Due to design and fabrication outsourcing to foundries, the problem of malicious modifications to integrated circuits (ICs), also known as hardware Trojans (HTs), has attracted attention in academia as well as industry. To reduce the risks associated with Trojans, researchers have proposed different approaches to detect them. Among these approaches, test-time detection approaches have drawn the greatest attention. Many test-time approaches assume the existence of a Trojan-free (TF) chip/model also known as a “golden model.” Prior works suggest using reverse engineering (RE) to identify such TF ICs for the golden model. However, they did not state how to do this efficiently. In fact, RE is a very costly process which consumes a great deal of time and intensive manual effort. It is also very error prone. In this paper, we propose an innovative and robust RE scheme to identify the TF ICs. We reformulate the Trojan-detection problem as a clustering problem. We then adapt a widely used machine learning method, K-means clustering, to solve our problem. Simulation results using state-of-the-art tools on several publicly available circuits show that the proposed approach can detect HTs with a high accuracy rate. A comparison of this approach with our previously proposed approach [1] is also conducted. Both the limitations and application scenarios of the two methods are discussed in detail.
Keywords: integrated circuit modelling; integrated circuit testing; invasive software; learning (artificial intelligence); reverse engineering; Trojan-detection problem; Trojan-free chip; clustering problem; golden model; hardware trojan detection; integrated circuits; machine learning; robust RE scheme; test-time detection; Fabrication; Feature extraction; Hardware; Integrated circuits; Layout; Support vector machines; Trojan horses; K-means clustering; Hardware Trojan (HT) detection; Hardware Trojan detection; K-Means clustering; integrated circuit (IC) security and trust; one-class SVM; one-class support vector machine (SVM); reverse-engineering (RE)-based HT detection; reverse-engineering based hardware Trojan detection (ID#: 15-7309)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7293657&isnumber=6917053
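
The reformulation of Trojan detection as clustering can be made concrete with a plain k-means pass over per-chip feature vectors. The sketch below is an illustrative re-implementation with synthetic "layout feature" data, not the authors' tool:

```python
import numpy as np

def kmeans(points, k=2, iters=50, seed=0):
    """Plain k-means over feature vectors extracted per chip; chips whose
    features fall outside the majority cluster become Trojan suspects."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                  # assign to nearest center
        new = np.array([points[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Toy data: a tight Trojan-free cluster plus two outlier chips.
clean = np.random.default_rng(1).normal(0.0, 0.1, size=(20, 3))
suspect = np.array([[1.5, 1.4, 1.6], [1.6, 1.5, 1.4]])
labels, _ = kmeans(np.vstack([clean, suspect]))
print(labels)       # the two suspects end up in their own cluster
```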

 

Zhou, B.; Zhang, W.; Srikanthan, T.; Teo Kian Jin, J.; Chaturvedi, V.; Luo, T., “Cost-efficient Acceleration of Hardware Trojan Detection through Fan-out Cone Analysis and Weighted Random Pattern Technique,” in Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. PP, no. 99, pp. 1–1, 23 July 2015. doi:10.1109/TCAD.2015.2460551
Abstract: The fabless semiconductor industry and government agencies have raised serious concerns in recent years about tampering through the insertion of Hardware Trojans (HTs) in the integrated circuit supply chain. In this paper, a low-hardware-overhead method for accelerating the detection of HTs, based on the insertion of 2-to-1 MUXs as test points, is proposed. The proposed method exploits the fact that one logic gate has a significant impact on the transition probability of the logic gates in its fan-out cone in order to optimize the number of inserted MUXs. Nets that have a transition probability smaller than a user-specified threshold and minimal logical depth from the primary inputs are selected as candidate nets. For each candidate net, only the input net with the smallest signal probability needs a MUX-based test point inserted. The procedure repeats until the minimal transition probability of the entire circuit is no smaller than the threshold value. To further reduce the number of required insertions and the overhead, the weighted random pattern technique is also applied. Experimental results on ISCAS'89 benchmark circuits show that the proposed method achieves a remarkable improvement in transition probability with, on average, 9.50% power, 2.37% delay, and 10.26% area penalty.
Keywords: Controllability; Delays; Flip-flops; Hardware; Integrated circuits; Logic gates; Trojan horses; Fan-out cone; Hardware Trojan; Transition probability; Weighted random pattern (ID#: 15-7310)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165615&isnumber=6917053
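
The transition probabilities this method manipulates follow from standard first-order signal-probability propagation, sketched below under the usual input-independence assumption. Under random patterns, a net with 1-probability p toggles between two consecutive patterns with probability 2p(1 - p), which is why deep AND trees make attractive stealthy trigger sites and why the method inserts MUX test points to raise controllability:

```python
def signal_prob(gate, input_probs):
    """Propagate 1-probabilities through a gate, assuming independent inputs."""
    if gate == "AND":
        p = 1.0
        for pi in input_probs:
            p *= pi
        return p
    if gate == "OR":
        q = 1.0
        for pi in input_probs:
            q *= (1.0 - pi)
        return 1.0 - q
    if gate == "NOT":
        return 1.0 - input_probs[0]
    raise ValueError(gate)

def transition_prob(p_one):
    """P(0->1) + P(1->0) between two independent random patterns."""
    return 2.0 * p_one * (1.0 - p_one)

p = signal_prob("AND", [0.5] * 8)     # 8-input AND of random inputs
print(p, transition_prob(p))          # 0.0039..., ~0.0078: a rarely toggling net
```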

 

Chongxi Bao; Yang Xie; Srivastava, Ankur, “A Security-Aware Design Scheme for Better Hardware Trojan Detection Sensitivity,” in Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, vol., no., pp. 52–55,
5–7 May 2015. doi:10.1109/HST.2015.7140236
Abstract: Due to the trend of outsourcing designs to foundries overseas, there has been an increasing threat of malicious modifications to the original integrated circuits (ICs), also known as hardware Trojans. Numerous countermeasures have been proposed. However, very little effort has been made to design-time strategies that help to make test-time or run-time detection of Trojans easier. In this paper, we characterize each cell’s sensitivity to malicious modifications and develop an algorithm to select a subset of standard cells for a given circuit such that Trojans are easily detected using [1] when the circuit is synthesized on it. Experiments on 8 publicly available benchmarks show that using our method, we could detect on average 16.87% more Trojans with very small power/area overhead and no timing violations.
Keywords: integrated circuits; invasive software; design-time strategies; hardware Trojan detection sensitivity; security-aware design scheme; Benchmark testing; Hardware; Integrated circuits; Libraries; Standards; Timing; Trojan horses (ID#: 15-7311)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140236&isnumber=7140225

 

Bao Liu; Sandhu, Ravi., “Fingerprint-Based Detection and Diagnosis of Malicious Programs in Hardware,” Reliability, IEEE Transactions on, vol. 64, no. 3, pp.1068–1077, Sept. 2015. doi:10.1109/TR.2015.2430471
Abstract: In today’s Integrated Circuit industry, a foundry, an Intellectual Property provider, a design house, or a Computer Aided Design vendor may install a hardware Trojan on a chip which executes a malicious program such as one providing an information leaking back door. In this paper, we propose a fingerprint-based method to detect any malicious program in hardware. We propose a tamper-evident architecture (TEA) which samples runtime signals in a hardware system during the performance of a computation, and generates a cryptographic hash-based fingerprint that uniquely identifies a sequence of sampled signals. A hardware Trojan cannot tamper with any sampled signal without leaving tamper evidence such as a missing or incorrect fingerprint. We further verify fingerprints off-chip such that a hardware Trojan cannot tamper with the verification process. As a case study, we detect hardware-based code injection attacks in a SPARC V8 architecture LEON2 processor. Based on a lightweight block cipher called PRESENT, a TEA requires only a 4.5% area increase, while avoiding being detected by the TEA increases the area of a code injection hardware Trojan with a 1 KB ROM from 2.5% to 36.1% of a LEON2 processor. Such a low cost further enables more advanced tamper diagnosis techniques based on a concurrent generation of multiple fingerprints.
Keywords: cryptography; industrial property; invasive software; microprocessor chips; read-only storage; signal sampling; PRESENT; ROM; SPARC V8 architecture LEON2 processor; TEA; advanced tamper diagnosis techniques; computer aided design; cryptographic hash-based fingerprint; fingerprint-based detection method; fingerprint-based diagnosis; hardware Trojan; hardware-based code injection attack detection; integrated circuit industry; intellectual property provider; lightweight block cipher; malicious program detection; multiple fingerprint concurrent generation; runtime signal sampling; sampled signal sequence; storage capacity 1 Kbit; tamper-evident architecture; Built-in self-test; Cryptography; Hardware; Integrated circuits; Runtime; Supply chains; Trojan horses; Security; built-in self-test; integrated circuits (ID#: 15-7312)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7108077&isnumber=7229405
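
The fingerprint idea can be sketched by folding a cycle-stamped sequence of sampled signal values into a single cryptographic digest. SHA-256 is used below purely as a stand-in for the paper's lightweight PRESENT-based construction:

```python
import hashlib

def fingerprint(signal_samples):
    """Fold sampled runtime signal values into one digest; tampering with
    any sample (or its position) changes the final fingerprint."""
    h = hashlib.sha256()
    for cycle, value in enumerate(signal_samples):
        h.update(cycle.to_bytes(8, "big"))    # bind each sample to its cycle
        h.update(value.to_bytes(4, "big"))
    return h.hexdigest()

trace = [0x1A2B, 0x0000, 0xFFFF, 0x1234]      # hypothetical sampled bus values
print(fingerprint(trace))
# Off-chip verification recomputes the digest from a golden reference run;
# a mismatch is tamper evidence left behind by a Trojan.
print(fingerprint([0x1A2B, 0x0001, 0xFFFF, 0x1234]) == fingerprint(trace))  # False
```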

 

Chunhua He; Bo Hou; Liwei Wang; Yunfei En; Shaofeng Xie, “A Failure Physics Model for Hardware Trojan Detection Based on Frequency Spectrum Analysis,” in Reliability Physics Symposium (IRPS), 2015 IEEE International, vol., no.,
pp. PR.1.1–PR.1.4, 19–23 April 2015. doi:10.1109/IRPS.2015.7112822
Abstract: Hardware Trojans embedded by adversaries have emerged as a serious security threat, yet there is still no universal method for effective and accurate detection. Since traditional analysis approaches can be helpless when the Trojan area is extremely tiny, this paper focuses on a novel detection method based on frequency spectrum analysis. A failure physics model is presented and depicted in detail. A digital CORDIC IP core is adopted as the golden circuit, while a counter is utilized as the Trojan circuit. The automatic test platform is set up with a Xilinx FPGA, LabVIEW software, and a high-precision oscilloscope. The power trace of the core power supply in the FPGA is monitored and saved for frequency spectrum analysis. Experimental results in the time and frequency domains both accord with the theoretical analysis, which verifies that the proposed failure physics model is accurate. In addition, owing to its immunity to substantial measurement noise, the method operating in the frequency domain is superior to the traditional time-domain method. It readily achieves about 0.1% Trojan detection sensitivity, which indicates that the detection method is effective.
Keywords: field programmable gate arrays; invasive software; multiprocessing systems; FPGA; LabVIEW software; Trojan area; Trojan circuit; Xilinx FPGA; automatic test platform; core power supply; digital CORDIC IP core; failure physics model; frequency spectrum analysis; golden circuit; hardware Trojan detection; novel detection method; security threat; Frequency-domain analysis; Hardware; Noise; Physics; Spectral analysis; Time-domain analysis; Trojan horses; Hardware Trojan; side-channel analysis (ID#: 15-7313)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7112822&isnumber=7112653
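
The frequency-domain comparison at the heart of this method is easy to illustrate: compute the spectrum of the measured power trace and compare it against a golden reference. The synthetic traces, sampling rate, and the simple Euclidean distance criterion below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def spectrum(power_trace, fs):
    """One-sided magnitude spectrum of a measured power trace."""
    trace = np.asarray(power_trace, dtype=float)
    mag = np.abs(np.fft.rfft(trace - trace.mean()))
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    return freqs, mag

def spectral_distance(golden_trace, suspect_trace, fs):
    """Relative distance between golden and suspect spectra; a Trojan's
    extra switching adds or shifts spectral lines even when its
    time-domain contribution is buried in measurement noise."""
    _, g = spectrum(golden_trace, fs)
    _, s = spectrum(suspect_trace, fs)
    return np.linalg.norm(g - s) / np.linalg.norm(g)

fs = 1e6                                      # 1 MS/s capture (assumed)
t = np.arange(4096) / fs
rng = np.random.default_rng(0)
golden = np.sin(2 * np.pi * 50e3 * t) + 0.05 * rng.normal(size=t.size)
trojan = golden + 0.02 * np.sin(2 * np.pi * 120e3 * t)   # tiny extra tone
print(spectral_distance(golden, trojan, fs))   # nonzero: extra spectral line
```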

 

Hong Zhao; Kwiat, Kevin A.; Kamhoua, Charles A.; Rodriguez, Manuel, “Applying Chaos Theory for Runtime Hardware Trojan Detection,” in Computational Intelligence for Security and Defense Applications (CISDA), 2015 IEEE Symposium on, vol., no.,
pp. 1–6, 26–28 May 2015. doi:10.1109/CISDA.2015.7208642
Abstract: Hardware Trojans (HTs) are posing a serious threat to the security of Integrated Circuits (ICs). Detecting HT in an IC is an important but hard problem due to the wide spectrum of HTs and their stealthy nature. In this paper, we propose a runtime Trojan detection approach by applying chaos theory to analyze the nonlinear dynamic characteristics of power consumption of an IC. The observed power dissipation series is embedded into a higher dimensional phase space. Such an embedding transforms the observed data to a new processing space, which provides precise information about the dynamics involved. The feature model is then built in this newly reconstructed phase space. The overhead, which is the main challenge for runtime approaches, is reduced by taking advantage of available thermal sensors in most modern ICs. The proposed model has been tested for publicly-available Trojan benchmarks and simulation results show that the proposed scheme outperforms the state-of-the-art method using temperature tracking in terms of detection rate and computational complexity. More importantly, the proposed model does not make any assumptions about the statistical distribution of power trace and no Trojan-active data is needed, which makes it appropriate for runtime use.
Keywords: integrated circuits; invasive software; statistical analysis; HT; Trojan benchmarks; Trojan-active data; chaos theory; computational complexity; detection rate; integrated circuit security; nonlinear dynamic characteristics; phase space reconstruction; power consumption; power dissipation series; runtime Hardware Trojan detection; runtime Trojan detection approach; statistical distribution; stealthy nature; thermal sensors; Chaos; Integrated circuit modeling; Runtime; Thermal sensors; Trojan horses (ID#: 15-7314)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208642&isnumber=7208613
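
The phase-space reconstruction step is the standard time-delay embedding from chaos theory, sketched below. The nearest-neighbor distance used as an anomaly feature is an illustrative stand-in for the paper's feature model, and the embedding dimension and delay are assumptions:

```python
import numpy as np

def embed(series, dim=3, tau=2):
    """Time-delay (Takens) embedding: map a scalar power series into
    dim-dimensional phase-space vectors [x(t), x(t+tau), ...]."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

def max_off_attractor_distance(test_pts, reference_pts):
    """Largest nearest-neighbor distance from test points to the
    reference attractor built from Trojan-free operation."""
    d = np.linalg.norm(test_pts[:, None, :] - reference_pts[None, :, :], axis=2)
    return d.min(axis=1).max()

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 24 * np.pi, 600)) + 0.02 * rng.normal(size=600)
ref = embed(clean)

spiky = clean.copy()
spiky[300:310] += 1.0          # injected Trojan-like activity
print(max_off_attractor_distance(embed(clean), ref))   # 0: on the attractor
print(max_off_attractor_distance(embed(spiky), ref))   # larger: off-attractor
```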

 

Çakir, B.; Malik, S., “Hardware Trojan Detection for Gate-Level ICs Using Signal Correlation Based Clustering,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015, vol., no., pp. 471–476, 9–13 March 2015. doi: (not provided)
Abstract: Malicious tampering of the internal circuits of ICs can lead to detrimental results. Insertion of Trojan circuits may change system behavior, cause chip failure or send information to a third party. This paper presents an information-theoretic approach for Trojan detection. It estimates the statistical correlation between the signals in a design, and explores how this estimation can be used in a clustering algorithm to detect the Trojan logic. Compared with the other algorithms, our tool does not require extensive logic analysis. We neither need the circuit to be brought to the triggering state, nor the effect of the Trojan payload to be propagated and observed at the output. Instead we leverage already available simulation data in this information-theoretic approach. We conducted experiments on the TrustHub benchmarks to validate the practical efficacy of this approach. The results show that our tool can detect Trojan logic with up to 100% coverage with low false positive rates.
Keywords: information theory; integrated circuit testing; invasive software; logic testing; Trojan circuits; Trojan logic; chip failure; clustering algorithm; hardware Trojan detection; integrated circuit; internal circuits; logic analysis; malicious tampering; signal correlation based clustering; statistical correlation; system behavior; Clustering algorithms; Correlation; Integrated circuit modeling; Logic gates; Payloads; Trojan horses; Wires (ID#: 15-7315)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092435&isnumber=7092347

 

Kim, Lok-Won; Villasenor, John D., “Dynamic Function Verification for System on Chip Security Against Hardware-Based Attacks,” in Reliability, IEEE Transactions on, vol. 64, no. 4, pp. 1229–1242, Dec. 2015. doi:10.1109/TR.2015.2447111
Abstract: As chip designs become increasingly complex, there is a corresponding increased vulnerability to malicious circuitry that could be inserted in the design process. Such hardware Trojans can be designed to avoid pre-deployment detection, and thus to potentially launch attacks that could impede the function of the system or compromise the integrity of the data it contains. Given the near impossibility of exhaustive detection of malicious hardware during pre-deployment verification, techniques that enable post-deployment hardware integrity verification can play a vital role in system security. In this paper, we propose a system architecture for performing online verification in a manner that does not impede normal system hardware function. The proposed approach provides a comprehensive architectural design method aimed at system on chip (SoC) based hardware systems that performs run-time testing, detects run-time attacks by Trojans, mitigates them, quarantines the detected malicious hardware modules, and regenerates the lost system functions with modest cost.
Keywords: Hardware; IP networks; Registers; System-on-chip; Testing; Trojan horses; Hardware Trojan horses; online test; system architecture; system on chip (ID#: 15-7316)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7150432&isnumber=4378406

 

Gunti, N.B.; Lingasubramanian, K., “Efficient Static Power Based Side Channel Analysis for Hardware Trojan Detection Using Controllable Sleep Transistors,” SoutheastCon 2015, vol., no., pp. 1–6, 9–12 April 2015. doi:10.1109/SECON.2015.7132948
Abstract: Modern integrated circuits (ICs) are vulnerable to Hardware Trojans (HTs) due to the globalization of semiconductor design and fabrication processes. An HT is extra circuitry that alters functionality or leaks information, exposing military and financial sectors to security threats. The challenge in detecting HTs lies in their clever design and placement, which makes them stealthy due to rare activation. While HTs can be detected through power side channels, methodologies that rely on dynamic power, which requires activation of the HT, can prove inefficient. On the other hand, static power based methodologies, which do not require activation of HTs, remain efficient even though they suffer from lower detection sensitivity. In this work, we propose a static power based HT detection methodology in which detection sensitivity is improved by compartmentalizing the circuit, utilizing the sleep transistors already used to reduce leakage power. To provide efficient HT detection, the power-based control is overridden in such a way that only a single sleep transistor is turned ON at any given instant. Even if the Trojan is distributed across the circuit to make it stealthier, the proposed method can effectively detect it. Using the proposed method, the detection sensitivity for a 3-bit comparator based HT (0.26% of the total number of gates) increased from 0.7% to 4.43% without process variations, and from 2.03% to 4.32% in the presence of process variations, with just 3 controllable sleep transistors. The proposed method improved the detection sensitivity for a smaller Trojan (only 0.02% of the total number of gates) tenfold with just 15 controllable sleep transistors.
Keywords: invasive software; HT detection methodology; IC; controllable sleep transistors; detection sensitivity; dynamic power; fabrication process; financial sectors; hardware Trojan detection; integrated circuits; military sectors; power based control; semiconductor design; static power based side channel analysis; Delays; Integrated circuit modeling; Logic gates; Sensitivity; Switching circuits; Transistors; Trojan horses; Hardware Security; Hardware Trojan; Power Gating; sleep transistors; static power (ID#: 15-7317)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7132948&isnumber=7132866

 

Ngo, X.-T.; Exurville, I.; Bhasin, S.; Danger, J.-L.; Guilley, S.; Najm, Z.; Rigaud, J.-B.; Robisson, B., “Hardware Trojan Detection by Delay and Electromagnetic Measurements,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015, vol., no., pp. 782–787, 9–13 March 2015. doi: 10.7873/DATE.2015.1103
Abstract: Hardware Trojans (HTs) inserted in integrated circuits have received special attention from researchers. In this paper, we first present a novel HT detection technique based on path delay measurements. A delay model, which considers intra-die process variations, is established for a net. Second, we show how to detect HTs using electromagnetic (EM) measurements. We study the HT detection probability as a function of HT size, taking into account inter-die process variations, on a set of FPGAs. The results show, for instance, a probability greater than 95%, with a false negative rate of 5%, of detecting an HT larger than 1.7% of the original circuit.
Keywords: delays; field programmable gate arrays; integrated circuit design; invasive software; FPGA; delay measurement; delay model; electromagnetic measurement; hardware Trojan detection; integrated circuits; path delays measurement; Clocks; Delays; Field programmable gate arrays; Noise; Registers; Trojan horses (ID#: 15-7318)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092492&isnumber=7092347

 

Rithesh, M.; Bhargav, R.B.V.; Harish, G.; Yellampalli, S., “Detection and Analysis of Hardware Trojan Using Scan Chain Method,” in VLSI Design and Test (VDAT), 2015 19th International Symposium on, vol., no., pp. 1–6, 26–29 June 2015. doi:10.1109/ISVDAT.2015.7208124
Abstract: Due to the globalization of the Integrated Circuit manufacturing industry and wide use of third party IP in the modern SoCs has opened the backdoor for Hardware Trojan insertion. The detection of Hardware Trojan is challenging because of its very rare activation mechanism and unpredictable change in the functionality of the system. This paper proposes a new hardware Trojan detection scheme using power analysis and experiments the insertion and detection of hardware Trojan using existing scan chain efficiently in ISCAS'89 benchmark circuits.
Keywords: benchmark testing; integrated circuit manufacture; invasive software; system-on-chip; IP; ISCAS'89 benchmark circuits; SoC; hardware trojan insertion; integrated circuit manufacturing industry; power analysis; scan chain; Benchmark testing; Clocks; Fabrication; Hardware; Logic gates; Radiation detectors; Trojan horses; Application Specific Integrated Circuit (ASIC); Dummy Scan Flip Flop (DSFF); Graphical Data System II (GDSII); Integrated Circuit (IC); Register Transfer Level (RTL); SoC; Ring Oscillator Network (RON) (ID#: 15-7319)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208124&isnumber=7208044

 

Lesperance, N.; Kulkarni, S.; Kwang-Ting Cheng, “Hardware Trojan Detection Using Exhaustive Testing of k-bit Subspaces,” in Design Automation Conference (ASP-DAC), 2015 20th Asia and South Pacific, vol., no., pp. 755–760, 19–22 Jan. 2015. doi:10.1109/ASPDAC.2015.7059101
Abstract: Post-silicon hardware Trojan detection is challenging because the attacker only needs to implement one of many possible design modifications, while the verification effort must guarantee the absence of all imaginable malicious circuitry. Existing test generation strategies for Trojan detection use controllability and observability metrics to limit the modifications targeted. However, for cryptographic hardware, the n plaintext bits are ideal for an attacker to use in Trojan triggering because the size of n prohibits exhaustive testing, and all n bits have identical controllability, making it impossible to bias testing using existing methods. Our detection method addresses this difficult case by observing that an attacker can realistically only afford to use a small subset, k, of all n possible signals for triggering. By aiming to exhaustively cover all possible k subsets of signals, we guarantee detection of Trojans using less than k plaintext bits in the trigger. We provide suggestions on how to determine k, and validate our approach using an AES design.
Keywords: cryptography; integrated circuit design; security; AES design; controllability metrics; cryptographic hardware; design modifications; exhaustive testing; k-bit subspaces; malicious circuitry; observability metrics; plaintext bits; post-silicon hardware trojan detection; verification effort; Equations; Hardware; Logic gates; Observability; Testing; Trojan horses; Vectors (ID#: 15-7320)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7059101&isnumber=7058915
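The coverage guarantee at the heart of this approach is easy to render concretely. The following Python sketch is an assumption-laden illustration, not the authors' generator (which would compress the set with covering-array techniques): it brute-forces a test set that applies all 2^k patterns to every k-subset of n input bits, so any trigger built from at most k of those bits is guaranteed to be exercised.

from itertools import combinations, product

def k_subspace_tests(n, k, fill=0):
    # Apply every 2^k pattern to each k-subset of the n input bits,
    # holding the remaining bits at a constant fill value.
    tests = set()
    for subset in combinations(range(n), k):
        for pattern in product([0, 1], repeat=k):
            vec = [fill] * n
            for bit, val in zip(subset, pattern):
                vec[bit] = val
            tests.add(tuple(vec))
    return tests

tests = k_subspace_tests(n=8, k=3)
print("%d vectors cover all 3-bit subspaces of an 8-bit input" % len(tests))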

 

Kumar, K.S.; Chanamala, R.; Sahoo, S.R.; Mahapatra, K.K., “An Improved AES Hardware Trojan Benchmark to Validate Trojan Detection Schemes in an ASIC Design Flow,” in VLSI Design and Test (VDAT), 2015 19th International Symposium on, vol., no., pp. 1–6, 26–29 June 2015. doi:10.1109/ISVDAT.2015.7208064
Abstract: The semiconductor design industry has globalized, and it is economical for chip makers to obtain design, manufacturing, and testing services from different geographies. Globalization raises the question of trust in an integrated circuit. It is up to every chip maker to ensure there is no malicious inclusion in the design, referred to as a Hardware Trojan. Malicious inclusion can occur via an in-house adversary design engineer, an Intellectual Property (IP) core supplied by a third-party vendor, or an untrusted manufacturing foundry. Several researchers have proposed hardware Trojan detection schemes in recent years. Trust-Hub provides Trojan benchmark circuits to verify the strength of Trojan detection techniques. In this work, our focus is on Advanced Encryption Standard (AES) Trojan benchmarks; AES is the block cipher most vulnerable to Trojan attacks. All 21 benchmarks available in Trust-Hub are analyzed against standard coverage-driven verification practices, synthesis, DFT insertion, and ATPG simulations. The analysis reveals that 19 AES benchmarks are weak and their Trojan inclusions can be detected using standard procedures of the ASIC design flow. Based on the weaknesses observed, design modifications are proposed to improve the quality of the Trojan benchmarks. The strength of the proposed Trojan benchmarks is better than that of the existing circuits, and their original features are preserved after modification.
Keywords: application specific integrated circuits; cryptography; integrated circuit design; AES hardware Trojan benchmark; ASIC design flow; Trojan detection schemes; advanced encryption standard; intellectual property core; malicious inclusion; Benchmark testing; Discrete Fourier transforms; Hardware; Leakage currents; Logic gates; Shift registers; Trojan horses; AES; ASIC; Hardware Trojan; Security; Trust-Hub (ID#: 15-7321)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208064&isnumber=7208044

 

Reece, T.; Robinson, W.H., “Detection of Hardware Trojans in Third-Party Intellectual Property Using Untrusted Modules,”
in Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. 35, no. 3, pp. 357–366, March 2016. doi:10.1109/TCAD.2015.2459038
Abstract: During the design of an integrated circuit, there are several opportunities for adversaries to make malicious modifications or insertions to a design. These attacks, known as hardware Trojans, can have catastrophic effects on a circuit if left undetected. This paper describes a technique for identifying hardware Trojans with logic-based payloads that are hidden within third-party intellectual property. Through comparison of two similar but untrusted designs, functional differences can be identified for all possible input combinations within a window of time. This technique was tested on multiple Trojan benchmarks and was found to be very effective, both in detectability and in speed of testing. As this technique has very low costs to implement, it represents an easy way for designers to gain a level of trust in previously untrusted designs.
Keywords: Hardware; IP networks; Licenses; Payloads; Testing; Trojan horses; Wrapping (ID#: 15-7322)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163572&isnumber=6917053
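The core comparison is essentially a miter between two independently sourced implementations of the same specification. A minimal Python illustration follows, with toy 4-bit adders standing in for real third-party IP and an invented rare-input payload:

def ip_vendor_a(a, b):
    return (a + b) & 0xF                  # honest 4-bit adder

def ip_vendor_b(a, b):
    if (a, b) == (0xA, 0x5):              # hidden payload on one rare input
        return 0x0
    return (a + b) & 0xF

# Compare the two untrusted implementations over all inputs in the window:
mismatches = [(a, b) for a in range(16) for b in range(16)
              if ip_vendor_a(a, b) != ip_vendor_b(a, b)]
print("functional differences at inputs:", mismatches)

Exhaustive comparison is feasible here because the input space is tiny; the paper's windowed comparison addresses the general case where full enumeration is impractical.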

 

Courbon, F.; Loubet-Moundi, P.; Fournier, J.J.A.; Tria, A., “A High Efficiency Hardware Trojan Detection Technique Based on Fast SEM Imaging,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015, vol., no., pp. 788–793, 9–13 March 2015. doi: (not provided)
Abstract: In the semiconductor market, where more and more companies become fabless, malicious modifications of integrated circuits are seen as possible threats. Such Hardware Trojans can have various effects and can be implemented by different entities with different means. This article presents an almost fully automatic Hardware Trojan detection flow based on visual inspection, implemented within the integrated circuit life cycle. The proposed detection methodology is quite efficient in terms of tools, user experience, and time needed. A single layer of the chip is accessed and then imaged with a Scanning Electron Microscope (SEM). The acquisition of several hundred images at high magnification is automated, as is the image registration. Then, depending on the reference available, one can check whether any supplementary gates have been inserted in the design using either a golden reference or a graphic/text design file. Depending on the reference, either basic image processing is used to compare the extracted chip image with a golden model, or pattern recognition can be used to retrieve the number of occurrences of each standard cell. The methodology aims to detect any gate modification, substitution, removal, or addition, and so far requires an invasive approach and a reference.
Keywords: image registration; inspection; integrated circuit measurement; invasive software; scanning electron microscopy; chip extracted image; fast SEM imaging; graphic-text design file; high efficiency Hardware Trojan detection technique; image acquisition; image processing; image registration; integrated circuit life cycle; pattern recognition; scanning electron microscope; semiconductor market; supplementary gates; visual inspection; Correlation; Hardware; Image processing; Logic gates; Scanning electron microscopy; Standards; Trojan horses (ID#: 15-7323)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092493&isnumber=7092347
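When a golden reference image is available, the comparison step reduces to registered image differencing. A heavily simplified numpy sketch follows; synthetic arrays stand in for registered SEM images, and the acquisition and registration steps the paper automates are omitted.

import numpy as np

rng = np.random.default_rng(0)
golden = rng.random((256, 256))                         # reference layer image
suspect = golden + rng.normal(0.0, 0.01, golden.shape)  # acquisition noise
suspect[100:110, 200:210] += 0.8                        # an inserted "gate"

diff = np.abs(suspect - golden)
mask = diff > 0.3                                       # threshold chosen by inspection
ys, xs = np.nonzero(mask)
if ys.size:
    print("suspicious region near rows %d-%d, cols %d-%d"
          % (ys.min(), ys.max(), xs.min(), xs.max()))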

 

Rajendran, J.; Vedula, V.; Karri, R., “Detecting Malicious Modifications of Data in Third-Party Intellectual Property Cores,”
in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, pp. 1–6, 8–12 June 2015. doi:10.1145/2744769.2744823
Abstract: Globalization of the system-on-chip (SoC) design flow has created opportunities for rogue elements in the intellectual property (IP) vendor companies to insert malicious circuits (a.k.a. hardware Trojans) into their IPs. We propose to formally verify third party IPs (3PIPs) for unauthorized corruption of critical data such as secret key. Our approach develops properties to identify corruption of critical registers. Furthermore, we describe two attacks where computations can be performed on corrupted data without corrupting the critical register. We develop additional properties to detect such attacks. We validate our technique using Trojans in 8051 and RISC processors and AES designs from Trust-Hub.
Keywords: cryptography; industrial property; invasive software; system-on-chip; 3PIP; AES designs; RISC processors; SoC design flow; Trojans; advanced encryption standards; malicious data modification detection; system-on-chip; third party IP; third-party intellectual property cores; Clocks; Radiation detectors; Reduced instruction set computing; Registers; System-on-chip; Trojan horses (ID#: 15-7324)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167297&isnumber=7167177
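The flavor of property being checked can be mimicked at the trace level. The Python sketch below is a software rendering under invented trace conventions, not the authors' formal-verification flow: it asserts that a critical register only ever holds the value loaded from the authorized input.

def check_key_integrity(trace, key_input):
    # trace: list of (load_enable, key_reg_value) pairs, one per cycle.
    expected = None
    for cycle, (load, value) in enumerate(trace):
        if load:
            expected = key_input               # only authorized source of the key
        if expected is not None and value != expected:
            return "violation at cycle %d: key register corrupted" % cycle
    return "property holds over trace"

KEY = 0xDEADBEEF
good = [(1, KEY), (0, KEY), (0, KEY)]
bad = [(1, KEY), (0, KEY), (0, KEY ^ 0x1)]     # a Trojan flips one key bit
print(check_key_integrity(good, KEY))
print(check_key_integrity(bad, KEY))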

 

Graves, R.; Di Natale, G.; Batina, L.; Bhasin, S.; Ege, B.; Fournaris, A.; Mentens, N.; Picek, S.; Regazzoni, F.; Rozic, V.; Sklavos, N.; Bohan Yang, “Challenges in Designing Trustworthy Cryptographic Co-Processors,” in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on, vol., no., pp. 2009–2012, 24–27 May 2015. doi:10.1109/ISCAS.2015.7169070
Abstract: Security is becoming ubiquitous in our society. However, the vulnerability of the electronic devices that implement the needed cryptographic primitives has become a major issue. This paper starts by presenting a comprehensive overview of existing attacks on cryptographic implementations. Thereafter, the state of the art on some of the most critical aspects of designing cryptographic co-processors is presented. This analysis starts by considering the design of asymmetric and symmetric cryptographic primitives, followed by a discussion on the design and online testing of True Random Number Generators. To conclude, techniques for the detection of Hardware Trojans are also discussed.
Keywords: cryptography; invasive software; microprocessor chips; random number generation; asymmetrical cryptographic primitives; cryptographic coprocessors; electronic devices; hardware Trojans; true random number generation; Elliptic curve cryptography; Hardware; Resistance; Testing; Trojan horses (ID#: 15-7325)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169070&isnumber=7168553

 

Exurville, I.; Zussa, L.; Rigaud, J.-B.; Robisson, B., “Resilient Hardware Trojans Detection Based on Path Delay Measurements,” in Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, vol., no.,
pp. 151–156, 5–7 May 2015. doi:10.1109/HST.2015.7140254
Abstract: A Hardware Trojan is a malicious hardware modification of an integrated circuit. It could be inserted at different design steps but also during the process fabrication of the target. Due to the damages that can be caused, detection of these alterations has become a major concern. In this paper, we propose a new resilient method to detect Hardware Trojan based on path delay measurements. First, an accurate path delay model is defined. Then, path delay measurements are compared in a way that theoretically eliminate process and experimental variations effects. Finally, this proposed detection method is experimentally validated using different FPGA boards with substantial process variations. Both small sized sequential and combinatorial Hardware Trojans are implemented and successfully detected.
Keywords: field programmable gate arrays; integrated circuits; invasive software; FPGA boards; combinatorial Hardware Trojans; integrated circuit; malicious hardware modification; path delay measurements; resilient hardware Trojans detection; Delays; Field programmable gate arrays; Hardware; Mathematical model; Synchronization; Trojan horses; Hardware Trojan; delay model; process variation (ID#: 15-7326)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140254&isnumber=7140225
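One way to see why comparing delay measurements can cancel variation: if inter-die process variation scales all paths on a die by roughly a common factor, ratios between path delays stay stable from die to die, while a Trojan's extra load shifts one ratio. The numpy sketch below captures that intuition only; delay values, the variation model, and noise levels are invented, and the paper's delay model is more detailed.

import numpy as np

rng = np.random.default_rng(1)
nominal = np.array([2.0, 3.0, 5.0])        # nominal delays of three paths (ns)

def measured_delays(die_speed, trojan_load=0.0):
    d = nominal * die_speed                # inter-die variation scales all paths
    d[1] += trojan_load                    # a Trojan adds load on path 1 only
    return d + rng.normal(0.0, 0.01, 3)    # measurement noise

def ratio(die_speed, trojan_load=0.0):
    d = measured_delays(die_speed, trojan_load)
    return d[1] / d[0]                     # the die speed factor cancels here

golden = [ratio(s) for s in (0.9, 1.0, 1.1)]   # three Trojan-free dies
suspect = ratio(1.05, trojan_load=0.4)         # infected die
print("golden ratios:", [round(g, 3) for g in golden])
print("suspect ratio: %.3f" % suspect)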

 

Lenox, J.; Tragoudas, S., “Towards Trojan Circuit Detection with Maximum State Transition Exploration,” in On-Line Testing Symposium (IOLTS), 2015 IEEE 21st International, vol., no., pp. 50–52, 6–8 July 2015. doi:10.1109/IOLTS.2015.7229831
Abstract: An approach for Trojan circuit detection in a finite state machine is presented. It is based on a model in which long sequences of inputs applied to the system in functional mode can, with high probability, detect whether Trojan hardware has been triggered. An efficient and scalable input generation algorithm for broadside tests is introduced.
Keywords: finite state machines; integrated circuit testing; invasive software; broadside tests; finite state machine; functional mode; maximum state transition exploration; trojan circuit detection; trojan hardware; Boolean functions; Conferences; Hardware; Integrated circuits; Testing; Trojan horses (ID#: 15-7327)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7229831&isnumber=7229816

 

Francq, J.; Frick, F., “Introduction to Hardware Trojan Detection Methods,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015, vol., no., pp. 770–775, 9–13 March 2015. doi: (not provided)
Abstract: Hardware Trojans (HTs) are identified as an emerging threat for the integrity of Integrated Circuits (ICs) and their applications. Attackers attempt to maliciously manipulate the functionality of ICs by inserting HTs, potentially causing disastrous effects (Denial of Service, sensitive information leakage, etc.). Over the last 10 years, various methods have been proposed in literature to circumvent HTs. This article introduces the general context of HTs and summarizes the recent advances in HT detection from a French funded research project named HOMERE. Some of these results will be detailed in the related special session.
Keywords: integrated circuits; invasive software; HOMERE project; HT detection; hardware Trojan detection methods; Application specific integrated circuits; Field programmable gate arrays; Hardware; Logic testing; Production; Trojan horses (ID#: 15-7328)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092490&isnumber=7092347

 

Hoque, Tamzidul; Mustapa, Muslim; Amsaad, Fathi; Niamat, Mohammed, “Assessment of NAND Based Ring Oscillator for Hardware Trojan Detection,” in Circuits and Systems (MWSCAS), 2015 IEEE 58th International Midwest Symposium on, vol., no., pp. 1–4, 2–5 Aug. 2015. doi:10.1109/MWSCAS.2015.7282110
Abstract: Malicious inclusion inside integrated circuits (ICs) is a recently evolved concept in the semiconductor industry and has become a matter of concern with the increase in outsourcing of semiconductors used in both military and commercial sectors. To facilitate the detection of Trojans using power based analysis, NOT gate based ring oscillator (RO) network models have been suggested in the past. It has been observed that, due to the presence of process variation, environmental variation, and measurement noise, a stealthy Trojan may go undetected. In this paper we study the NAND based RO as a power monitor that is more sensitive to voltage fluctuation. A RO network consisting of 7 ROs is implemented using the ISCAS'85 c2670 benchmark on several Xilinx Spartan-3E FPGAs. The results demonstrate that the impact of Trojans on the frequency of nearby ROs is noticeably larger for the NAND based structure than for the NOT based one, which is helpful in detecting the Trojan.
Keywords: Field programmable gate arrays; Hardware; Logic gates; Ring oscillators; Trojan horses; Hardware Trojan; IC trust; security (ID#: 15-7329)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282110&isnumber=7281994
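The detection signal here is a localized frequency sag: a Trojan's extra switching loads the power grid near a ring oscillator and lowers its frequency. A minimal Python sketch of the comparison step, with invented frequencies and an invented threshold:

# Per-RO comparison against a golden (Trojan-free) chip; a localized sag
# in frequency suggests extra switching activity nearby.
golden_freqs = [201.2, 199.8, 200.5, 200.1, 199.6, 200.9, 200.3]   # MHz, 7 ROs
suspect_freqs = [201.0, 199.9, 196.7, 200.2, 199.5, 200.8, 200.4]  # RO 2 sags

for i, (g, s) in enumerate(zip(golden_freqs, suspect_freqs)):
    sag = (g - s) / g
    if sag > 0.01:                         # >1% relative drop flags the region
        print("RO %d: %.1f -> %.1f MHz (%.1f%% sag): possible Trojan nearby"
              % (i, g, s, 100 * sag))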

 

Chongxi Bao; Forte, D.; Srivastava, A., “Temperature Tracking: Toward Robust Run-Time Detection of Hardware Trojans,” in Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. 34, no. 10, pp. 1577–1585, Oct. 2015. doi:10.1109/TCAD.2015.2424929
Abstract: The hardware Trojan threat has motivated development of Trojan detection schemes at all stages of the integrated circuit (IC) lifecycle. While the majority of existing schemes focus on ICs at test-time, there are many unique advantages offered by post-deployment/run-time Trojan detection. However, run-time approaches have been underutilized, with prior work highlighting the challenges of implementing them with limited hardware resources. In this paper, we propose three innovative low-overhead approaches for run-time Trojan detection which exploit the thermal sensors already available in many modern systems to detect deviations in power/thermal profiles caused by Trojan activation. The first is a local sensor-based approach that uses information from thermal sensors together with hypothesis testing to make a decision. The second is a global approach that exploits correlation between sensors and keeps track of the IC's thermal profile using a Kalman filter (KF). The third approach incorporates leakage power into the system dynamic model and applies an extended KF (EKF) to track the IC's thermal profile. Simulation results using state-of-the-art tools on ten publicly available Trojan benchmarks verify that all three proposed approaches can detect active Trojans quickly and with few false positives. Among the three approaches, the EKF is flawless on the ten benchmarks tested but requires the most overhead.
Keywords: Kalman filters; invasive software; EKF; IC test-time; KF; Trojan activation; Trojan detection schemes; extended Kalman filter (EKF); local sensor-based approach; power profiles; robust run-time detection; temperature tracking; thermal profiles; thermal sensors; Integrated circuit modeling; Power demand; Temperature sensors; Trojan horses; Hardware Trojan; Kalman filter; run-time detection (ID#: 15-7330)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7090988&isnumber=7271134
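The second approach can be sketched in scalar form: predict the sensor temperature, correct with each measurement, and raise an alarm when the innovation (measurement minus prediction) is statistically too large. The Python sketch below assumes a random-walk thermal model with invented noise levels and an invented Trojan heat step; the paper's filters are multi-sensor and richer.

import numpy as np

rng = np.random.default_rng(2)
q, r = 0.01, 0.04            # process and measurement noise variances (invented)
x, p = 45.0, 1.0             # temperature estimate (deg C) and its variance

true_temp = 45.0
for t in range(60):
    true_temp += rng.normal(0.0, q ** 0.5)
    if t == 30:
        true_temp += 2.0     # Trojan activates: extra power shows up as heat
    z = true_temp + rng.normal(0.0, r ** 0.5)   # thermal sensor reading

    p += q                   # predict (temperature modeled as a random walk)
    innovation = z - x
    s = p + r                # innovation variance
    if abs(innovation) / s ** 0.5 > 4.0:
        print("t=%2d: innovation %.2f C, possible Trojan activation" % (t, innovation))
    k = p / s                # Kalman gain
    x += k * innovation      # correct
    p *= 1.0 - k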

 

Doohwang Chang; Bakkaloglu, B.; Ozev, S., “Enabling Unauthorized RF Transmission Below Noise Floor with No Detectable Impact on Primary Communication Performance,” in VLSI Test Symposium (VTS), 2015 IEEE 33rd, vol., no., pp. 1–4, 27–29 April 2015. doi:10.1109/VTS.2015.7116257
Abstract: With the increasing diversity of supply chains from design to delivery, there is an increasing risk of unauthorized changes within an IC. One of the motivations for this type of change is to learn important information (such as encryption keys or spreading codes) from the hardware and pass this information to a malicious party through wireless means. In order to evade detection, such unauthorized communication can be hidden within legitimate bursts of the transmit signal. In this paper, we present a stealth circuit for unauthorized transmissions that can be hidden within the legitimate signal. A CDMA-based spread spectrum with a CDMA encoder is implemented with a handful of transistors. We show that the unauthorized signal does not alter circuit performance while remaining easily detectable by the malicious receiver.
Keywords: code division multiple access; cryptography; encoding; radio receivers; spread spectrum communication; CDMA encoder; CDMA-based spread spectrum; encryption keys; legitimate signal; malicious party; malicious receiver; noise floor; primary communication performance; spreading codes; stealth circuit; unauthorized RF transmission; unauthorized communication; wireless means; Binary phase shift keying; Hardware; Noise; Receivers; Transmitters; Trojan horses (ID#: 15-7331)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116257&isnumber=7116233
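Why a signal well below the noise floor remains detectable to a receiver that knows the spreading code is standard direct-sequence reasoning: despreading yields a processing gain equal to the spreading length. A generic numpy demonstration follows (not the paper's circuit; the amplitude and code length are invented):

import numpy as np

rng = np.random.default_rng(3)
N = 1024                                    # chips per covert bit
pn = rng.choice([-1.0, 1.0], size=N)        # spreading code shared with receiver
bits = [1, -1, 1, 1, -1]

amplitude = 0.15                            # well below the unit-variance noise
tx = np.concatenate([b * amplitude * pn for b in bits])
rx = tx + rng.normal(0.0, 1.0, tx.size)     # channel noise swamps the signal

# Despreading: correlate each chip block with the code (power gain ~ N):
recovered = [int(np.sign(np.dot(rx[i * N:(i + 1) * N], pn)))
             for i in range(len(bits))]
print("sent:     ", bits)
print("recovered:", recovered)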

 

Xiaotong Li; Schafer, Benjamin Carrion, “Temperature-Triggered Behavioral IPs HW Trojan Detection Method with FPGAs,” in Field Programmable Logic and Applications (FPL), 2015 25th International Conference on, vol., no., pp. 1–4, 2–4 Sept. 2015. doi:10.1109/FPL.2015.7294009
Abstract: This work targets the detection of temperature-triggered HW Trojans, in particular for third party behavioral IPs (3PBIPs) given in ANSI-C. One of the biggest advantages of C-based VLSI design is its ability to automatically generate architectures with different trade-offs by only setting different synthesis options. This work uses this property to detect temperature-triggered HW Trojans. A complete design flow is presented, comprising two main phases: (1) In the first phase, a design space explorer automatically generates micro-architectures with different area vs. power trade-offs for the given behavioral IP. (2) The second phase maps three of these micro-architectures, with different power profiles, onto a reconfigurable computing board to create a 3-way redundant system. This system, combined with a majority voter scheme, is used to detect whether a HW Trojan is present in the behavioral IP. Having different power profiles implies that each micro-architecture has a different thermal behavior and thus will trigger the HW Trojan at different time intervals. The outputs of the three designs are compared for discrepancies at regular intervals, allowing the method to pinpoint the exact trigger temperature of the HW Trojan. A case study is presented showing the effectiveness of the method.
Keywords: Field programmable gate arrays; Hardware; Temperature measurement; Temperature sensors; Trojan horses (ID#: 15-7332)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7294009&isnumber=7293744
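The detection mechanism reduces to a redundancy check sampled as the chip heats. A toy Python rendering, with invented trigger temperatures and a stand-in function for the behavioral IP:

TRIGGERS = [62.0, 68.0, 74.0]       # invented trigger temps of the 3 variants

def variant_output(x, temp, trigger):
    correct = (x * 3 + 1) & 0xFF    # the IP's intended function
    # The payload corrupts the output once this variant is hot enough:
    return correct ^ 0xFF if temp >= trigger else correct

for temp in range(55, 80, 5):       # sample outputs as the board heats up
    outs = [variant_output(7, temp, trg) for trg in TRIGGERS]
    if len(set(outs)) > 1:
        print("temp %d C: outputs %s disagree -> trigger crossed near this temp"
              % (temp, outs))

Note that once all three variants are past their triggers the outputs agree again (all corrupted), which is why the comparison must be made at regular intervals while the temperature sweeps through the trigger range.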

 

Rithesh, M.; Harish, G.; Yellampalli, S., “Detection and Analysis of Hardware Trojan Using Dummy Scan Flip-Flop,” in Smart Technologies and Management for Computing, Communication, Controls, Energy and Materials (ICSTM), 2015 International Conference on, vol., no., pp. 439–442, 6–8 May 2015. doi:10.1109/ICSTM.2015.7225457
Abstract: Hardware Trojans are a significant threat to modern integrated circuits. A Hardware Trojan is a modification to a circuit that can alter the functionality of the design. The globalization of the integrated circuit manufacturing industry and the widespread use of third party IP have steadily increased the insertion of Hardware Trojans into circuits. This paper studies and experimentally demonstrates the insertion and detection of hardware Trojans in ISCAS'89 benchmark circuits.
Keywords: flip-flops; integrated logic circuits; invasive software; ISCAS'89 benchmark circuits; dummy scan flip-flop; hardware Trojan analysis; hardware Trojan detection; integrated circuit manufacturing industry; third party IP; Application specific integrated circuits; Hardware; IP networks; Organizations; Radiation detectors; Trojan horses; Application Specific Integrated Circuit (ASIC); Dummy Scan; Flip Flop (DSFF); Graphical Data System II (GDSII); Integrated Circuit (IC); Register Transfer Level (RTL)
(ID#: 15-7333)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225457&isnumber=7225373

 

Kiran, N.R.; Ritesh, M.; Harish, G.; Yellampalli, S., “Hardware Trojan Self-Detector,” in Smart Technologies and Management for Computing, Communication, Controls, Energy and Materials (ICSTM), 2015 International Conference on, vol., no., pp. 428–433,
6–8 May 2015. doi:10.1109/ICSTM.2015.7225455
Abstract: Hardware Trojans are a severe threat to modern integrated circuits, posed by the IP business model and untrusted foundries. In most modern SoCs, several blocks are licensed from third-party IP vendors, where the chance of Trojan insertion is high; Trojans can also be inserted in foundries, which are globalized and generally untrusted. In this paper, a new self-Trojan-detection scheme is proposed that detects a Trojan by comparing the circuit responses against golden responses.
Keywords: integrated circuit testing; system-on-chip; IP business model; SoC; Trojan insertion; circuit response; golden response; hardware Trojan self-detector; integrated circuit; self-Trojan detection; third party IP vendor; untrusted foundry; Authentication; Built-in self-test; Circuit faults; Foundries; Hardware; Trojan horses; Automatic test equipment (ATE); Circuit Under Test (CUT); Linear Feedback Shift Register (LFSR); Multiple Input Signature Register (MISR); Output response analyzer (ORA); Self-Test Using MISR/Parallel Shift Register Sequence Generator (STUMPS); Test pattern generator (TPG) (ID#: 15-7334)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225455&isnumber=7225373
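The self-test machinery the keywords list (LFSR test pattern generator, MISR response compactor, golden signature comparison) can be sketched compactly. The Python below uses a standard 16-bit maximal-length LFSR and an invented toy circuit-under-test; it illustrates the BIST principle, not the authors' design.

def lfsr_patterns(seed, n):
    # 16-bit maximal-length Fibonacci LFSR (x^16 + x^14 + x^13 + x^11 + 1).
    state = seed
    for _ in range(n):
        yield state
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)

def misr_signature(responses):
    # Multiple-input signature register: shift with feedback, fold in response.
    sig = 0
    for resp in responses:
        fb = (sig ^ (sig >> 2) ^ (sig >> 3) ^ (sig >> 5)) & 1
        sig = (((sig << 1) | fb) ^ resp) & 0xFFFF
    return sig

def cut(x, trojan=False):
    y = (x * 0x2F) & 0xFFFF                  # stand-in circuit under test
    if trojan and (x & 0x3F) == 0x25:        # sparsely triggered payload
        y ^= 0x8000
    return y

patterns = list(lfsr_patterns(0xACE1, 4096))
golden = misr_signature(cut(p) for p in patterns)
fielded = misr_signature(cut(p, trojan=True) for p in patterns)
print("golden %04x, fielded %04x -> %s"
      % (golden, fielded, "match" if golden == fielded else "TROJAN DETECTED"))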

 

Wu, Tony F.; Ganesan, Karthik; Hu, Y. Alexander; Wong, H.-S. Philip; Wong, Simon; Mitra, Subhasish, “TPAD: Hardware Trojan Prevention and Detection for Trusted Integrated Circuits,” in Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on , vol. 35, no. 4, pp. 521–534, April 2016. doi:10.1109/TCAD.2015.2474373
Abstract: There are increasing concerns about possible malicious modifications of integrated circuits (ICs) used in critical applications. Such attacks are often referred to as hardware Trojans. While many techniques focus on hardware Trojan detection during IC testing, it is still possible for attacks to go undetected. Using a combination of new design techniques and new memory technologies, we present a new approach that detects a wide variety of hardware Trojans during IC testing and also during system operation in the field. Our approach can also prevent a wide variety of attacks during synthesis, place-and-route, and fabrication of ICs. It can be applied to any digital system, and can be tuned for both traditional and split-manufacturing methods. We demonstrate its applicability for both ASICs and FPGAs. Using fabricated test chips with Trojan emulation capabilities and also using simulations, we demonstrate: 1. the area and power costs of our approach range from 7.4% to 165% and from 7% to 60%, respectively, depending on the design and the attacks targeted; 2. the speed impact can be minimal (close to 0%); 3. our approach can detect 99.998% of Trojans (emulated using test chips) that do not require detailed knowledge of the design being attacked; 4. our approach can prevent 99.98% of specific attacks (simulated) that utilize detailed knowledge of the design being attacked (e.g., through reverse-engineering); 5. our approach never produces false positives, i.e., it does not report attacks when the IC operates correctly.
Keywords: Encoding; Hardware; Integrated circuits; Monitoring; Random access memory; Trojan horses; Wires; 3D Integration; Concurrent Error Detection; Hardware Security; Hardware Trojan; Randomized Codes; Reliable Computing; Resistive RAM; Split-manufacturing (ID#: 15-7335)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7229283&isnumber=6917053

 

Voyiatzis, I.; Sgouropoulou, C.; Estathiou, C., “Detecting Untestable Hardware Trojan with Non-Intrusive Concurrent On Line Testing,” in Design & Technology of Integrated Systems in Nanoscale Era (DTIS), 2015 10th International Conference on, vol., no., pp. 1-2, 21–23 April 2015. doi:10.1109/DTIS.2015.7127369
Abstract: Hardware Trojans are an emerging threat that intrudes in the design and manufacturing cycle of chips and has gained much attention lately due to the severity of the problems it poses to the chip supply chain. Typically, hardware Trojans are not detected during usual manufacturing testing because they are activated by a rare event. A class of published HTs is based on the geometrical characteristics of the circuit and claims to be undetectable, in the sense that their activation cannot be detected. In this work we study the effect of continuously monitoring the inputs of the module under test with respect to the detection of HTs possibly inserted in the module, either at the design or the manufacturing stage.
Keywords: integrated circuit testing; microprocessor chips; security; HT; chip supply chain; circuit geometrical characteristics; manufacturing cycle; manufacturing stage; manufacturing testing; nonintrusive concurrent on line testing; untestable hardware trojan; Built-in self-test; Europe; Hardware; Monitoring; Radiation detectors; Trojan horses (ID#: 15-7336)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7127369&isnumber=7127334

 

Bhasin, S.; Regazzoni, F., “A Survey on Hardware Trojan Detection Techniques,” in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on, vol., no., pp. 2021–2024, 24–27 May 2015. doi:10.1109/ISCAS.2015.7169073
Abstract: Hardware Trojans have recently emerged as a serious issue for computer systems, especially those used in critical applications such as medical or military systems. Trojans proposed so far can affect the reliability of a device in various ways, with proposed effects ranging from the leakage of secret information to the complete malfunctioning of the device. A crucial point for securing the overall operation of a device is to guarantee the absence of hardware Trojans. In this paper, we survey several techniques for detecting malicious modifications of circuits introduced at different phases of the design flow. We also highlight their capabilities and limitations in thwarting hardware Trojans.
Keywords: invasive software; hardware Trojan detection techniques; malicious modification detection; secret information leakage; Hardware; Integrated circuits; Integrated optics; Layout; Testing; Trojan horses (ID#: 15-7337)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169073&isnumber=7168553


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Honey Pots 2015

 

 
SoS Logo

Honey Pots

2015


Honeypots are traps set up to detect, deflect, or, in some manner, counteract attempts at unauthorized use of information systems. This short bibliography cites articles presented in 2015 about honeypot and honeynet research. They are related to the Science of Security topics of privacy, human factors, and governance.



Sadasivam, G.K.; Hota, C., “Scalable Honeypot Architecture for Identifying Malicious Network Activities,” in Emerging Information Technology and Engineering Solutions (EITES), 2015 International Conference on, vol., no., pp. 27–31, 20–21 Feb. 2015. doi:10.1109/EITES.2015.15
Abstract: Server honey pots are computer systems that hide in a network, capturing attack packets. As the name implies, server honey pots are installed on server machines running a set of services. Enterprises and government organisations deploy these honey pots to know the extent of attacks on their network. Since most recent attacks are advanced persistent attacks, much research work is going on in building better peripheral security measures. In this paper, the authors have deployed several honey pots in a virtualized environment to gather traces of malicious activities. The network infrastructure is resilient and provides much information about hackers' activities. It is cost-effective and can be easily deployed in any organisation without specialized hardware.
Keywords: computer crime; computer network security; file servers; virtualisation; advanced persistent attacks; attack packets; government organisations; hacker activities; malicious network activities identification; peripheral security measures; scalable honeypot architecture; server honeypots; server machines; virtualized environment; Computer architecture; Computer hacking; IP networks; Malware; Operating systems; Ports (Computers); Servers; Dionaea; Distributed honeypots; Glastopf; HoneyD; Honeypots; J-Honeypot; Kippo; Server honeypots (ID#: 15-7516)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7083380&isnumber=7082065
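For a sense of scale: the low-interaction end of such a deployment can be a few dozen lines. The Python sketch below is a generic illustration, not the paper's setup (which uses tools such as Dionaea, Kippo, and Glastopf): it listens on an unused port, never offers a real service, and logs every connection attempt with its source and first bytes.

import socket
import datetime

HOST, PORT = "0.0.0.0", 2222        # an otherwise-unused, SSH-ish port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(5)
    print("honeypot listening on %s:%d" % (HOST, PORT))
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.settimeout(3.0)
            try:
                probe = conn.recv(128)   # capture the attacker's first bytes
            except socket.timeout:
                probe = b""
            print("%s connection from %s:%d, first bytes: %r"
                  % (datetime.datetime.now().isoformat(), addr[0], addr[1], probe))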

 

Sokol, Pavol; Husak, Martin; Lipták, Frantisek, “Deploying Honeypots and Honeynets: Issue of Privacy,” in Availability, Reliability and Security (ARES), 2015 10th International Conference on, vol., no., pp. 397–403, 24–27 Aug. 2015. doi:10.1109/ARES.2015.91
Abstract: Honey pots and honey nets are popular tools in the area of network security and network forensics. The deployment and usage of these tools are influenced by a number of technical and legal issues, which need to be carefully considered together. In this paper, we outline privacy issues of honey pots and honey nets with respect to technical aspects. The paper discusses the legal framework of privacy, legal ground to data processing, and data collection. The analysis of legal issues is based on EU law and is supported by discussions on privacy and related issues. This paper is one of the first papers which discuss in detail privacy issues of honey pots and honey nets in accordance with EU law.
Keywords: EU law; data retention; honeynet; honeypot; legal issues; privacy (ID#: 15-7517)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299942&isnumber=7299862

 

Harrison, K.; Rutherford, J.R.; White, G.B., “The Honey Community: Use of Combined Organizational Data for Community Protection,” in System Sciences (HICSS), 2015 48th Hawaii International Conference on, vol., no., pp. 2288–2297, 5–8 Jan. 2015. doi:10.1109/HICSS.2015.274
Abstract: The United States has US CYBERCOM to protect the US Military Infrastructure and DHS to protect the nation’s critical cyber infrastructure. These organizations deal with wide ranging issues at a national level. This leaves local and state governments to largely fend for themselves in the cyber frontier. This paper will focus on how to determine the threat to a community and what indications and warnings can lead us to suspect an attack is underway. To try and help answer these questions we utilized the concepts of Honey pots and Honey nets and extended them to a multi-organization concept within a geographic boundary to form a Honey Community. The initial phase of the research done in support of this paper was to create a fictitious community with various components to entice would-be attackers and determine if the use of multiple sectors in a community would aid in the determination of an attack.
Keywords: critical infrastructures; organizational aspects; security of data; DHS; US CYBERCOM; US military infrastructure; United States; combined organizational data; community protection; critical cyber infrastructure; cyber frontier; fictitious community; geographic boundary; honey community; honeynets; honeypots; multiorganization concept; would-be attackers; Cities and towns; Communities; Government; Monitoring; Ports (Computers); Security; Cyberdefense; Honey Community; Honey Net; Honey Pot (ID#: 15-7518)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070088&isnumber=7069647


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Host-based IDS 2015

 

 
SoS Logo

Host-based IDS

2015


The research presented here on host-based intrusion detection systems addresses semantic approaches, power grid substation protection, an architecture for modular mobile IDS, and a hypervisor-based system. Host-based systems are of relevance to the Science of Security topics of cyber physical systems, privacy, resilience, and human behavior. All works cited were presented in 2015.


Mamalakis, G.; Diou, C.; Symeonidis, A.L., “Analysing Behaviours for Intrusion Detection,” in Communication Workshop (ICCW), 2015 IEEE International Conference on, vol., no., pp. 2645–2651, 8–12 June 2015. doi:10.1109/ICCW.2015.7247578
Abstract: In this work, a Behaviour-based Intrusion Detection Model is suggested. The proposed model can be employed from a single host configuration to a distributed mixture of host-based and network-based Intrusion Detection Systems (IDSs). Unlike most state-of-the-art IDSs that rely on analysing lower-level, raw-data representations, our proposed architecture suggests to use higher-level notions--behaviours--instead; this way, the IDS is able to identify more sophisticated attacks. To assess our premise, a Behaviour-based IDS (BIDS) prototype has been designed and developed that scans file system data to identify attacks. BIDS achieves high detection rates with low corresponding false positive rates, superseding other state-of-the-art file system IDSs.
Keywords: data structures; security of data; BIDS prototype; behaviour-based IDS prototype; detection rates; false positive rates; file system data; host-based IDS; network-based intrusion detection systems; raw-data representations; Clustering algorithms; Computers; Engines; Feature extraction; Generators; Internet of things; Training (ID#: 15-7519)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247578&isnumber=7247062

 

Rout, Ganesh Prasad; Mohanty, Sachi Nandan, “A Hybrid Approach for Network Intrusion Detection,” in Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, vol., no., pp. 614–617, 4–6 April 2015. doi:10.1109/CSNT.2015.76
Abstract: An Intrusion detection system (IDS) monitors network traffic and system activities and reports to an administrator. In some cases the intrusion detection system may also respond to anomalous or malicious traffic by taking action, such as blocking the user or source address from accessing the network. IDSs come in a variety of flavors, but all aim to detect suspicious traffic in different ways. There are network based and host based intrusion detection systems. Signature-based detection looks for specific signatures of known threats, as antivirus software and firewalls do, while anomaly detection compares traffic against a baseline. Detection using fuzzy logic and a genetic algorithm is described briefly in this paper.
Keywords: Biological cells; Computers; Genetic algorithms; Intrusion detection; Sociology; Statistics; Classification rules; Fuzzy logic; Genetic algorithm (ID#: 15-7520)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7279991&isnumber=7279856

 

Vasudeo, S.H.; Patil, P.; Kumar, R.V., “IMMIX-Intrusion Detection and Prevention System,” in Smart Technologies and Management for Computing, Communication, Controls, Energy and Materials (ICSTM), 2015 International Conference on, vol., no., pp. 96–101, 6–8 May 2015. doi:10.1109/ICSTM.2015.7225396
Abstract: Computer security has become a major problem in our society. Specifically, computer network security is concerned with preventing the intrusion of an unauthorized person into a network of computers. An intrusion detection system (IDS) is a tool to monitor network traffic and user activity with the aim of distinguishing between hostile and non-hostile traffic. Most current networks implement misuse detection or anomaly detection techniques for intrusion detection. A misuse based IDS cannot detect unknown intrusions, and an anomaly based IDS has a high false positive rate. To overcome this, the proposed system combines network based and host based IDPSs into a Hybrid Intrusion Detection and Prevention System, which helps in detecting the maximum number of attacks on networks.
Keywords: computer network security; telecommunication traffic; IDS; IMMIX-intrusion detection system; IMMIX-intrusion prevention system; anomaly detection; misuse detection; non-hostile traffic; Classification algorithms; Clustering algorithms; Computers; Intrusion detection; Machine learning algorithms; Monitoring; anomaly based; attacks; classification; intrusion detection; intrusion prevention; misuse based (ID#: 15-7521)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225396&isnumber=7225373

 

Feroz, M.N.; Mengel, S., “Phishing URL Detection Using URL Ranking,” in Big Data (BigData Congress), 2015 IEEE International Congress on, vol., no., pp. 635–638, June 27 2015–July 2 2015. doi:10.1109/BigDataCongress.2015.97
Abstract: The openness of the Web exposes opportunities for criminals to upload malicious content. In fact, despite extensive research, email based spam filtering techniques are unable to protect other web services. Therefore, a countermeasure must be taken that generalizes across web services to protect the user from phishing host URLs. This paper describes an approach that classifies URLs automatically based on their lexical and host-based features. Clustering is performed on the entire dataset and a cluster ID (or label) is derived for each URL, which in turn is used as a predictive feature by the classification system. Online URL reputation services are used to categorize URLs, and the categories returned are used as a supplemental source of information that enables the system to rank URLs. The classifier achieves 93-98% accuracy by detecting a large number of phishing hosts, while maintaining a modest false positive rate. URL clustering, URL classification, and URL categorization mechanisms work in conjunction to give URLs a rank.
Keywords: Web services; Web sites; computer crime; information filtering; pattern classification; pattern clustering; unsolicited e-mail; URL categorization mechanism; URL classification; URL ranking; cluster ID; clustering; email based spam filtering technique; host-based feature; lexical feature; malicious content; online URL reputation service; phishing URL detection; phishing host URL; predictive feature; Accuracy; Classification algorithms; Clustering algorithms; Feature extraction; Security; Servers; Uniform resource locators; Classification; Clustering; Feature Vector; URL Ranking; Web Categorization (ID#: 15-7522)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7207281&isnumber=7207183
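The pipeline the abstract describes, lexical features plus a cluster ID used as an extra predictive feature, can be skeletonized with scikit-learn. The feature set and toy URLs below are invented, and the paper's host-based and reputation features are omitted.

import re
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def lexical_features(url):
    return [
        len(url),                          # long URLs are suspicious
        url.count("."),                    # many subdomains
        url.count("-"),
        sum(c.isdigit() for c in url),
        1 if re.search(r"\d+\.\d+\.\d+\.\d+", url) else 0,  # raw IP host
        1 if "@" in url else 0,
    ]

urls = ["http://paypal.com.secure-login.example-bad.biz/update",
        "http://192.168.13.7/bank/verify.php",
        "https://www.wikipedia.org/wiki/Phishing",
        "https://github.com/torvalds/linux",
        "http://free-prizes-now.win/claim?id=9912",
        "https://docs.python.org/3/library/re.html"]
labels = [1, 1, 0, 0, 1, 0]                # 1 = phishing, 0 = benign

X = np.array([lexical_features(u) for u in urls], dtype=float)
cluster_id = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
X = np.column_stack([X, cluster_id])       # cluster label as an extra feature

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print("training accuracy: %.2f" % clf.score(X, labels))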

 

Zaidi, K.; Milojevic, M.; Rakocevic, V.; Nallanathan, A.; Rajarajan, M., “Host Based Intrusion Detection for VANETs: A Statistical Approach to Rogue Node Detection,” in Vehicular Technology, IEEE Transactions on, vol. PP, no. 99, pp. 1–1, October 2015. doi:10.1109/TVT.2015.2480244
Abstract: In this work, an Intrusion Detection System (IDS) for vehicular ad hoc networks (VANETs) is proposed and evaluated. The IDS is evaluated by simulation in the presence of rogue nodes that can launch different attacks. The proposed IDS is capable of effectively detecting a false information attack using statistical techniques and can also detect other types of attacks. First, the theory and implementation of the VANET model used to train the IDS are discussed. Then an extensive simulation and analysis of our model under different traffic conditions is conducted to identify the effects of these parameters in VANETs. In addition, the extensive data gathered in the simulations is presented using graphical and statistical techniques. Moreover, rogue nodes are introduced in the network and an algorithm is presented to detect these rogue nodes. Finally, we evaluate our system and observe that the proposed application layer IDS, based on a cooperative information exchange mechanism, is better suited for dynamic and fast moving networks such as VANETs than other available techniques.
Keywords: Accidents; Ad hoc networks; Cryptography; Data models; Intrusion detection; Mathematical model; Vehicles; Intrusion Detection; Security; VANETs; cryptography; fault tolerance; rogue nodes; vehicular networks; wireless networks (ID#: 15-7523)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272127&isnumber=4356907
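A statistical consistency check of the kind the abstract alludes to can be as simple as comparing each node's report against the neighborhood consensus. The Python sketch below is only in the spirit of the approach (the authors' actual algorithm differs; values and threshold are invented):

import statistics

reports = {"car1": 62.0, "car2": 58.5, "car3": 60.2,
           "car4": 61.1, "rogue": 12.0}     # rogue claims a sudden jam

speeds = list(reports.values())
mu = statistics.mean(speeds)
sigma = statistics.stdev(speeds)

for node, v in reports.items():
    z = abs(v - mu) / sigma
    if z > 1.5:                             # threshold would be tuned in practice
        print("%s reported %.1f km/h (z=%.1f): flag as possible rogue" % (node, v, z))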


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Resilience Indicators 2015

 

 
SoS Logo

Resilience Indicators

2015


Resilience is an important security element in cyber physical systems. The works cited here address the problem of identifying what constitutes resilience, how it can be identified and quantified, and general methods for addressing the problem. These works were presented in 2014 and 2015.



Nieuborg, A.; Koelle, R.; Gluchshenko, O.; Foerster, P., “Towards Evaluating Air Navigation Performance for Estimation of Its Resilience,” in Integrated Communications, Navigation and Surveillance Conference (ICNS), 2014, vol., no., pp. L4-1–L4-10, 8–10 April 2014. doi:10.1109/ICNSurv.2014.6820000
Abstract: This paper addresses operational performance management within air navigation, considers an initial approach to modelling resilience in air navigation, and applies it to the current performance framework for air navigation services at European airports. Resilience is an emerging paradigm and has gained the attention of political decision makers and operational planners. In particular, the distinction between nominal and non-nominal situations is closely linked with the identification of disruptions and their associated impacts on quality of service. This paper considers resilience from a system-theoretic perspective. From that perspective, operational performance can be modelled as a situation management problem, and resilience aspects can be discussed on the basis of changes of system state within the state space over the considered time horizon. For the purpose of this paper, service levels of air navigation within the airport context are conceptualized on the basis of the performance indicators set forth by the European performance scheme. This exploratory study aims to identify the fit of the existing scheme for identifying and categorizing disrupted services. The model is based on a case study analysis of disruptions at two European airports. The case study approach allows an initial set of parameters to be derived. The results obtained provide evidence for the conceptual feasibility of addressing resilience from a situation management and state-oriented perspective. Initial requirements can be derived for adapting the current set of performance indicators for air navigation within the airport context, informing further refinement of the performance framework.
Keywords: aircraft navigation; decision making; intelligent transportation systems; quality of service; European airports; European performance scheme; ITS resilience estimation; air navigation performance evaluation; air navigation service; associated impact identification; disrupted service categorisation; disruption identification; modelling resilience; operational performance management; operational performance modelling; operational planner; performance indicators; political decision making; quality of service; situation management problem; state space; time horizon; Airports; Atmospheric modeling; Context; Europe; Navigation; Resilience; System performance  (ID#: 15-7258)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6820000&isnumber=6819972

 

Hardy, T.L., “Resilience: A Holistic Safety Approach,” in Reliability and Maintainability Symposium (RAMS), 2014 Annual, vol., no., pp. 1-6, 27-30 Jan. 2014. doi:10.1109/RAMS.2014.6798494
Abstract: Decreasing the potential for catastrophic consequences poses a significant challenge for high-risk industries. Organizations are under many different pressures, and they are continuously trying to adapt to changing conditions and recover from disturbances and stresses that can arise from both normal operations and unexpected events. Reducing risks in complex systems therefore requires that organizations develop and enhance traits that increase resilience. Resilience provides a holistic approach to safety, emphasizing the creation of organizations and systems that are proactive, interactive, reactive, and adaptive. This approach relies on disciplines such as system safety and emergency management, but also requires that organizations develop indicators and ways of knowing when an emergency is imminent. A resilient organization must be adaptive, using hands-on activities and lessons learned efforts to better prepare it to respond to future disruptions. It is evident from the discussions of each of the traits of resilience, including their limitations, that there are no easy answers to reducing safety risks in complex systems. However, efforts to strengthen resilience may help organizations better address the challenges associated with the ever-increasing complexities of their systems.
Keywords: emergency management; large-scale systems; reliability; risk management; safety; system recovery; complex systems; high-risk industries; holistic safety approach; resilience; system risk reduction; system safety; Accidents; Hazards; Organizations; Personnel; Resilience; Systematics (ID#: 15-7259)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798494&isnumber=6798433

 

Schneider, J.; Romanowski, C.; Raj, R.K.; Mishra, S.; Stein, K., “Measurement of Locality Specific Resilience,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no., pp. 1–6, 14–16 April 2015. doi:10.1109/THS.2015.7225332
Abstract: Resilience has been defined at the local, state, and national levels, and subsequent attempts to refine the definition have added clarity. Quantitative measurements, however, are crucial to a shared understanding of resilience. This paper reviews the evolution of resiliency indicators and metrics and suggests extensions to current indicators to measure functional resilience at a jurisdictional or community level. Using a management systems approach, an input/output model may be developed to demonstrate abilities, actions, and activities needed to support a desired outcome. Applying systematic gap analysis and an improvement cycle with defined metrics, the paper proposes a model to evaluate a community’s operational capability to respond to stressors. As each locality is different—with unique risks, strengths, and weaknesses—the model incorporates these characteristics and calculates a relative measure of maturity for that community. Any community can use the resulting model output to plan and improve its resiliency capabilities.
Keywords: emergency management; social sciences; community operational capability; functional resilience measurement; locality specific resilience measurement; quantitative measurement; resiliency capability; resiliency indicators; resiliency metrics; systematic gap analysis; Economics; Emergency services; Hazards; Measurement; Resilience; Standards; Training; AHP; community resilience; operational resilience modeling; resilience capability metrics (ID#: 15-7260)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225332&isnumber=7190491
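A toy rendering of the scoring idea, with invented capabilities, weights, and maturity levels (the paper's input/output model is far richer): rate each capability, weight it by local importance, roll up a relative score, and let the weighted gaps prioritize improvement.

MAX_LEVEL = 5
capabilities = {                 # capability: (maturity level 0-5, local weight)
    "emergency communications": (4, 0.30),
    "mutual-aid agreements": (2, 0.25),
    "critical-service backup": (3, 0.25),
    "public alerting": (5, 0.20),
}

score = sum(level / MAX_LEVEL * weight for level, weight in capabilities.values())
print("community maturity score: %.2f of 1.00" % score)

gaps = sorted(capabilities.items(),
              key=lambda kv: (MAX_LEVEL - kv[1][0]) * kv[1][1], reverse=True)
print("largest weighted gap:", gaps[0][0])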

 

Chiaradonna, S.; Di Giandomenico, F.; Murru, N., “On a Modeling Approach to Analyze Resilience of a Smart Grid Infrastructure,” in Dependable Computing Conference (EDCC), 2014 Tenth European, vol., no., pp.166–177, 13–16 May 2014. doi:10.1109/EDCC.2014.34
Abstract: The evolution of electrical grids, both in terms of enhanced ICT functionalities to improve efficiency, reliability, and economics, and in terms of the increasing penetration of renewable distributed energy resources, results in a more sophisticated electrical infrastructure that poses new challenges from several perspectives, including resilience and quality of service analysis. In addition, the presence of interdependencies, which increasingly characterize critical infrastructures (including the power sector), exacerbates the need for advanced analysis approaches, ideally employed from the early phases of system design, to identify vulnerabilities and appropriate countermeasures. In this paper, we outline an approach to model and analyze smart grids and discuss the major challenges to be addressed in stochastic model-based analysis to account for the peculiarities of the involved system elements. Representing the dynamic and flexible behavior of generators and loads, as well as the complex ICT control functions required to preserve and/or re-establish electrical equilibrium in the presence of changes, must be addressed in order to assess suitable indicators of the resilience and quality of service of the smart grid.
Keywords: critical infrastructures; power system control; power system reliability; smart power grids; stochastic processes; ICT control functions; ICT functionalities; critical infrastructure; electrical equilibrium; power system economics; power system efficiency; quality of service; renewable redistributed energy resource; smart grid infrastructure resilience; stochastic model based analysis; Analytical models; Generators; Load modeling; Low voltage; Smart grids; Substations; Voltage control; Electrical Smart Grid; Interdependencies; Modeling Framework; SAN Formalism; Stochastic Process (ID#: 15-7261)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821102&isnumber=6821069

 

Wong, E., “Resilience in Next Generation Access Networks: Assessment of Survivable TWDM-PONs,” in Communications (ICC), 2014 IEEE International Conference on, vol., no., pp. 4166–4172, 10–14 June 2014. doi:10.1109/ICC.2014.6883974
Abstract: In addressing the requirements of next-generation passive optical networks, the time and wavelength division multiplexed PON (TWDM-PON) has been selected as the next technology solution beyond 10 Gbps PONs. Due to the increased network reach and customer base, many of which are business customers, rapid fault detection and subsequent restoration of services are critical. Fault protection for conventional PONs has previously been extensively explored. Application of these existing schemes is however inappropriate for TWDM-PONs as an increased network reach and customer base necessitate highly sensitive monitoring modules for fiber/device fault detection. The existing use of upstream transmissions as a loss of signal (LOS) indicator at the central office is also unsuitable due to the sleep/doze mode nature of the optical network units. Here, survivable TWDM-PON architectures which combine rapid fault detection and protection switching to provide resilience are proposed. These architectures do not rely on upstream transmissions for LOS activation. Each exploits highly-sensitive monitoring modules with fast-response fault detection and subsequent protection switching and requiring only very low levels of monitoring input power. The maximum achievable network reach and split ratio, and the survivability of all three schemes are analyzed and compared.
Keywords: fault diagnosis; next generation networks; optical fibre subscriber loops; telecommunication network reliability; time division multiplexing; wavelength division multiplexing; fault detection; fault protection; next generation access networks; optical network units; passive optical networks; sleep-doze mode; survivable TWDM-PON architectures; Monitoring; Optical fiber couplers; Optical fiber devices; Optical network units; Propagation losses; Network restoration; protection switching; time and wavelength multiplexed passive optical network (ID#: 15-7262)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883974&isnumber=6883277


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Resilience Metrics 2015

 

 
SoS Logo

Resilience Metrics

2015


Quantitative measurement is key to a shared understanding of resilience in cyber-physical systems. The works cited here, presented in 2015, look at the development of predictive and analytical metrics to help achieve this end.



Alenazi, M.J.F.; Sterbenz, J.P.G., “Comprehensive Comparison and Accuracy of Graph Metrics in Predicting Network Resilience,” in Design of Reliable Communication Networks (DRCN), 2015 11th International Conference on the, vol., no., pp. 157–164, 24–27 March 2015. doi:10.1109/DRCN.2015.7149007
Abstract: Graph robustness metrics have been widely used to study the behavior of communication networks in the presence of targeted attacks and random failures. Several researchers have proposed new graph metrics to better predict network resilience and survivability against such attacks. Most of these metrics have been compared to a few established graph metrics for evaluating the effectiveness of measuring network resilience. In this paper, we perform a comprehensive comparison of the most commonly used graph robustness metrics. First, we show how each metric is determined and calculate its values for baseline graphs. Using several types of random graphs, we study the accuracy of each robustness metric in predicting network resilience against centrality-based attacks. The results support three conclusions. First, our path diversity metric has the highest accuracy in predicting network resilience for structured baseline graphs. Second, the variance of node-betweenness centrality generally has the best accuracy in predicting network resilience for Waxman random graphs. Third, path diversity, network criticality, and effective graph resistance have high accuracy in measuring network resilience for Gabriel graphs.
Keywords: graph theory; telecommunication network reliability; telecommunication security; Gabriel graphs; Waxman random graphs; baseline graphs; centrality-based attacks; communication network behavior; comprehensive comparison; effective graph resistance; graph robustness metrics accuracy; network criticality; network resilience measurement; network resilience prediction; node-betweenness centrality variance; path diversity metric; random failures; survivability prediction; targeted attacks; Accuracy; Communication networks; Joining processes; Measurement; Resilience; Robustness; Connectivity evaluation; Fault tolerance; Graph robustness; Graph spectra; Network design; Network resilience; Network science; Reliability; Survivability (ID#: 15-7263)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7149007&isnumber=7148972
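
Two of the metrics compared in the Alenazi and Sterbenz paper above are straightforward to reproduce for small graphs. The following is a minimal Python sketch, assuming the networkx and numpy libraries are available; the graph family, its parameters, and the seed are illustrative assumptions, not the paper’s experimental setup.

import networkx as nx
import numpy as np

# A Waxman random graph, one of the graph families studied in the paper.
G = nx.waxman_graph(50, beta=0.4, alpha=0.1, seed=1)
if not nx.is_connected(G):
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

# Variance of node-betweenness centrality: low variance suggests traffic
# load is spread evenly, which the paper found predictive for Waxman graphs.
bc_variance = np.var(list(nx.betweenness_centrality(G).values()))

# Effective graph resistance: n times the sum of inverse nonzero Laplacian
# eigenvalues; smaller values indicate a better-connected, more robust graph.
eigvals = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
effective_resistance = len(G) * np.sum(1.0 / eigvals[eigvals > 1e-9])

print(f"betweenness variance: {bc_variance:.6f}")
print(f"effective graph resistance: {effective_resistance:.2f}")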

 

Schneider, J.; Romanowski, C.; Raj, R.K.; Mishra, S.; Stein, K., “Measurement of Locality Specific Resilience,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no., pp. 1–6, 14–16 April 2015. doi:10.1109/THS.2015.7225332
Abstract: Resilience has been defined at the local, state, and national levels, and subsequent attempts to refine the definition have added clarity. Quantitative measurements, however, are crucial to a shared understanding of resilience. This paper reviews the evolution of resiliency indicators and metrics and suggests extensions to current indicators to measure functional resilience at a jurisdictional or community level. Using a management systems approach, an input/output model may be developed to demonstrate abilities, actions, and activities needed to support a desired outcome. Applying systematic gap analysis and an improvement cycle with defined metrics, the paper proposes a model to evaluate a community’s operational capability to respond to stressors. As each locality is different—with unique risks, strengths, and weaknesses—the model incorporates these characteristics and calculates a relative measure of maturity for that community. Any community can use the resulting model output to plan and improve its resiliency capabilities.
Keywords: emergency management; social sciences; community operational capability; functional resilience measurement; locality specific resilience measurement; quantitative measurement; resiliency capability; resiliency indicators; resiliency metrics; systematic gap analysis; Economics; Emergency services; Hazards; Measurement; Resilience; Standards; Training; AHP; community resilience; operational resilience modeling; resilience capability metrics (ID#: 15-7264)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225332&isnumber=7190491

 

Eshghi, K.; Johnson, B.K.; Rieger, C.G., “Power System Protection and Resilient Metrics,” in Resilience Week (RWS), 2015, vol., no., pp. 1–8, 18–20 Aug. 2015. doi:10.1109/RWEEK.2015.7287448
Abstract: During a real-time power system event, a system operator needs to conservatively reduce operating limits while the changing system conditions are analyzed. The time it takes to develop new operating limits could affect millions of transmission system users, especially if this event is classified by NERC as a Category D type event (extreme events resulting in the loss of two or more bulk electric system elements). Controls for the future grid must be able to perform real-time analysis, identify new reliability risks, and set new SOLs (System Operating Limits) for real-time operations. In this paper we develop “Resilience Metrics” requirements that describe how systems operate at an acceptable level of normalcy despite disturbances or threats. We consider the interdependencies inherent in critical infrastructure systems and discuss some distributed resilience metrics that can be used in current supervisory control and data acquisition (SCADA) systems to provide a level of state awareness. This level of awareness provides knowledge that can be used to characterize and reduce the risk of cascading events. A “resilience power system agent” is proposed that provides attributes to measure and apply these metrics.
Keywords: Control systems; Measurement; Power system stability; Resilience; Stability analysis; Transient analysis (ID#: 15-7265)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287448&isnumber=7287407

 

Yabin Ye; Arribas, F.J.; Elmirghani, J.; Idzikowski, F.; Vizcaino, J.L.; Monti, P.; Musumeci, F.; Pattavina, A.; Van Heddeghem, W., “Energy-Efficient Resilient Optical Networks: Challenges and Trade-Offs,” in Communications Magazine, IEEE, vol. 53, no. 2, pp. 144–150, Feb. 2015. doi:10.1109/MCOM.2015.7045403
Abstract: Energy efficiency and resilience are two well established research topics in optical transport networks. However, their overall objectives (i.e., power minimization and resource utilization/availability maximization) conflict. In fact, provisioning schemes optimized for best resilience performance are in most cases not energy-efficient in their operations, and vice versa. However, very few works in the literature consider the interesting issues that may arise when energy efficiency and resilience are combined in the same networking solution. The objective of this article is to identify a number of research challenges and trade-offs for the design of energy-efficient and resilient optical transport networks from the perspective of long-term traffic forecasts, short-term traffic dynamics, and service level agreement requirements. We support the challenges with justifying numbers based on lessons learned from our previous work. The article also discusses suitable metrics for energy efficiency and resilience evaluation, in addition to a number of steps that need to be taken at the standardization level to incorporate energy efficiency into already existing and well established protocols.
Keywords: optical fibre networks; standardisation; telecommunication power management; telecommunication traffic; availability maximization; energy efficiency; energy-efficient resilient optical transport networks; long-term traffic forecasts; power minimization; resilience evaluation; resource utilization; service level agreement requirements; short-term traffic dynamics; standardization level; Energy consumption; Energy efficiency; Optical fiber networks; Optical fibers; Optical transmitters (ID#: 15-7266)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045403&isnumber=7045380

 

Backhaus, S.; Swift, G.W., “DOE DC Microgrid Scoping Study — Opportunities and Challenges,” in DC Microgrids (ICDCM), 2015 IEEE First International Conference on, vol., no., pp. 43–44, 7–10 June 2015. doi:10.1109/ICDCM.2015.7152007
Abstract: For the Department of Energy, several national labs (Los Alamos, Lawrence Berkeley, Oak Ridge, Sandia, Argonne, and Pacific Northwest) collaborated on a scoping study to provide a preliminary examination of the benefits and drawbacks of potential DC microgrid applications relative to their AC counterparts. The performance of notional AC and DC microgrids is estimated and compared using several metrics: safety and protection, reliability, capital cost, energy efficiency, operating cost, engineering costs, environmental impact, power quality, and resilience. The initial comparison is done using several generic microgrid architectures (see Fig. 1) to reveal the importance of the different metrics. Then, these metrics were compared for several specific microgrid applications to draw out possible unique advantages of DC microgrids. In this manuscript, we focus on the comparison using the generic architectures in Fig. 1. The draft report provides recommendations for potential future research and deployment activities.
Keywords: distributed power generation; energy conservation; power generation protection; power generation reliability; power supply quality; DOE DC microgrid reliability; department of energy; energy efficiency; microgrid protection; power quality; Energy efficiency; Measurement; Microgrids; Power electronics; Power system reliability; Reliability (ID#: 15-7267)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7152007&isnumber=7151990

 

Kumar, N.; Misra, S.; Chilamkurti, N.; Lee, J.H.; Rodrigues, J.J.P.C., “Bayesian Coalition Negotiation Game as a Utility for Secure Energy Management in a Vehicles-to-Grid Environment,” in Dependable and Secure Computing, IEEE Transactions on, vol. 13, no. 1, pp. 133–145, Jan.–Feb. 2016. doi:10.1109/TDSC.2015.2415489
Abstract: In recent times, Plug-in Electric Vehicles (PEVs) have emerged as a new alternative to increase the efficiency of smart grids (SGs) in a vehicles-to-grid (V2G) environment. The V2G environment provides a bidirectional power and information flow, so that users can optimize their usage as per their requirements. However, uncontrolled and unmanaged power distribution may lead to an overall performance degradation in the V2G environment. One reason for this uncontrolled and unmanaged flow may be the usage of power by unauthorized users. To address this issue, we propose a Bayesian Coalition Negotiation Game (BCNG) as a utility for secure energy management for PEVs in the V2G environment. We use BCNG along with Learning Automata (LA), wherein LA are stationed on PEVs and are assumed to be the players in the game. To provide resilience against any misuse of electricity consumption, a new Secure Payoff Function (SPF) is proposed. The players take actions and update their action probability vectors using the SPF. Nash Equilibrium (NE) is also achieved in the game using convergence theory. Our proposal is evaluated with various metrics. The proposed scheme also provides mutual authentication and resilience against various attacks during power distribution.
Keywords: Automata; Bayes methods; Equations; Games; Learning automata; Vectors; Vehicles; Bayesian Coalition; Learning Automata; Plug-in Electric Vehicles (ID#: 15-7268)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7064718&isnumber=4358699
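
A hedged sketch of the learning-automata machinery the abstract describes: each automaton maintains an action probability vector and updates it from an environment signal. The update below is the standard linear reward-penalty scheme; the reward probabilities and action semantics are toy stand-ins for the paper’s Secure Payoff Function.

import random

def update_probabilities(p, chosen, reward, a=0.1, b=0.05):
    # Linear reward-penalty (L_RP) update of an action probability vector.
    # On reward, probability shifts toward the chosen action; on penalty,
    # it shifts away. The vector always remains a valid distribution.
    r = len(p)
    if reward:
        return [p[j] + a * (1 - p[j]) if j == chosen else (1 - a) * p[j]
                for j in range(r)]
    return [(1 - b) * p[j] if j == chosen else b / (r - 1) + (1 - b) * p[j]
            for j in range(r)]

# Toy run: two actions (e.g., draw energy now vs. defer); action 0 is
# rewarded 80% of the time, so its probability should drift toward 1.
p = [0.5, 0.5]
for _ in range(200):
    action = 0 if random.random() < p[0] else 1
    rewarded = random.random() < (0.8 if action == 0 else 0.2)
    p = update_probabilities(p, action, rewarded)
print(p)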

 

Klöti, R.; Kotronis, V.; Ager, B.; Dimitropoulos, X., “Policy-Compliant Path Diversity and Bisection Bandwidth,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 675–683, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218436
Abstract: How many links can be cut before a network is bisected? What is the maximal bandwidth that can be pushed between two nodes of a network? These questions are closely related to network resilience, path choice for multipath routing or bisection bandwidth estimations in data centers. The answer is quantified using metrics such as the number of edge-disjoint paths between two network nodes and the cumulative bandwidth that can flow over these paths. In practice though, such calculations are far from simple due to the restrictive effect of network policies on path selection. Policies are set by network administrators to conform to service level agreements, protect valuable resources or optimize network performance. In this work, we introduce a general methodology for estimating lower and upper bounds for the policy-compliant path diversity and bisection bandwidth between two nodes of a network, effectively quantifying the effect of policies on these metrics. Exact values can be obtained if certain conditions hold. The approach is based on regular languages and can be applied in a variety of use cases.
Keywords: channel estimation; computer network reliability; telecommunication network routing; bisection bandwidth estimations; data center; edge disjoint paths; multipath routing; network policies; network resiliency; policy compliant path diversity; Approximation methods; Automata; Bandwidth; Internet; Routing; Tensile stress; Transforms (ID#: 15-7269)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218436&isnumber=7218353
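
Without policy constraints, the two questions that open this abstract reduce to classical flow computations, which is the baseline the paper’s policy-compliant bounds refine. A minimal Python sketch, assuming networkx and an illustrative five-link topology:

import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from(
    [("a", "b", 10), ("a", "c", 5), ("b", "d", 4), ("c", "d", 8), ("b", "c", 3)],
    weight="capacity",
)

# Path diversity: the number of edge-disjoint paths between two nodes,
# i.e., how many links must be cut before the pair is disconnected.
disjoint = len(list(nx.edge_disjoint_paths(G, "a", "d")))

# Cumulative bandwidth: the maximum flow that can be pushed between them.
max_bw = nx.maximum_flow_value(G, "a", "d", capacity="capacity")

print(disjoint, max_bw)  # 2 edge-disjoint paths; max flow of 12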

 

Poslad, S.; Middleton, S.E.; Chaves, F.; Ran Tao; Necmioglu, O.; Bügel, U., “A Semantic IoT Early Warning System for Natural Environment Crisis Management,” in Emerging Topics in Computing, IEEE Transactions on, vol. 3, no. 2, pp. 246–257, June 2015. doi:10.1109/TETC.2015.2432742
Abstract: An early warning system (EWS) is a core type of data driven Internet of Things (IoTs) system used for environment disaster risk and effect management. The potential benefits of using a semantic-type EWS include easier sensor and data source plug-and-play, simpler, richer, and more dynamic metadata-driven data analysis and easier service interoperability and orchestration. The challenges faced during practical deployments of semantic EWSs are the need for scalable time-sensitive data exchange and processing (especially involving heterogeneous data sources) and the need for resilience to changing ICT resource constraints in crisis zones. We present a novel IoT EWS system framework that addresses these challenges, based upon a multisemantic representation model. We use lightweight semantics for metadata to enhance rich sensor data acquisition. We use heavyweight semantics for top level W3C Web Ontology Language ontology models describing multileveled knowledge-bases and semantically driven decision support and workflow orchestration. This approach is validated through determining both system related metrics and a case study involving an advanced prototype system of the semantic EWS, integrated with a deployed EWS infrastructure.
Keywords: Internet of Things; emergency management; ontologies (artificial intelligence); semantic Web; ICT resource constraints; W3C Web ontology language; data exchange; data processing; data source plug-and-play; environment disaster risk and effect management; information and communication technology; meta data-driven data analysis; multisemantic representation model; natural environment crisis management; ontology models; semantic IoT early warning system; semantic-type EWS; semantically driven decision support; sensor plug-and-play; service interoperability; service orchestration; workflow orchestration; Data models; Data processing; Hazards; Method of moments; Ontologies; Semantics; Tsunami; Crisis Management; Early Warning System; Early warning system; Resilience; Time-critical; crisis management; resilience; scalable; semantic Web; time-critical (ID#: 15-7270)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7109842&isnumber=7118282

 

Bulbul, R.; Sapkota, P.; C.-W. Ten; L. Wang; Ginter, A., “Intrusion Evaluation of Communication Network Architectures for Power Substations,” in Power Delivery, IEEE Transactions on, vol. 30, no. 3, pp. 1372–1382, June 2015. doi:10.1109/TPWRD.2015.2409887
Abstract: Electronic elements of a substation control system have been recognized as critical cyberassets due to the increased complexity of the automation system that is further integrated with physical facilities. Since these systems can be accessed by unauthorized users, the security investment in cybersystems remains one of the most important factors for substation planning and maintenance. As a result of these integrated systems, intrusion attacks can impact operations. This work systematically investigates the intrusion resilience of the ten architectures between a substation network and others. In this paper, two network architectures comparing computer-based boundary protection and firewall-dedicated virtual local-area networks are detailed, that is, architectures one and ten. A comparison of the remaining eight architecture models was performed. Mean time to compromise is used to determine the system operational period. Simulation cases have been set up with the metrics based on different levels of attackers’ strength. These results as well as sensitivity analysis show that implementing certain architectures would enhance substation network security.
Keywords: firewalls; investment; local area networks; maintenance engineering; power system planning; safety systems; substation automation; substation protection; automation system; communication network architectures; computer-based boundary protection; cybersystems; electronic elements; firewall-dedicated virtual local-area networks; intrusion attacks; intrusion evaluation; intrusion resilience; power substations; security investment; sensitivity analysis; substation control system; substation maintenance; substation network security; substation planning; unauthorized users; Computer architecture; Modems; Protocols; Security; Servers; Substations; Tin; Cyberinfrastructure; electronic intrusion; network security planning; power substation (ID#: 15-7271)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7054545&isnumber=7110680

 

Guner, S.; Selvi, H.; Gür, G.; Alagöz, F., “Controller Placement in Software-Defined Mobile Networks,” in Signal Processing and Communications Applications Conference (SIU), 2015 23rd, vol., no., pp. 2619–2622, 16–19 May 2015. doi:10.1109/SIU.2015.7130425
Abstract: In this paper, important aspects of the controller placement problem (CPP) in Software Defined Mobile Networks (SDMN) are discussed. To find an efficient and optimal controller placement, we must clarify how many controllers are needed, where to place them in the topology, and how they interact with each other. We take reliability, latency, resilience, and scalability metrics into consideration to answer these questions.
Keywords: controllers; mobile communication; software defined networking; controller placement; software-defined mobile networks; Conferences; IEEE standards; Mobile communication; Mobile computing; Network topology; Reliability; Software; number of controllers; placement algorithms (ID#: 15-7272)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7130425&isnumber=7129794

 

Mittal, S.; Vetter, J.S., “A Survey of Techniques for Modeling and Improving Reliability of Computing Systems,” in Parallel and Distributed Systems, IEEE Transactions on, vol. 27, no. 4, pp. 1226–1238, April 2016. doi:10.1109/TPDS.2015.2426179
Abstract: Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made ‘reliability’ a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. This paper provides a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, caches, and main memory. In addition, we discuss techniques proposed for non-volatile memory (NVM), GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. We believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.
Keywords: Circuit faults; Computational modeling; Integrated circuit reliability; Measurement; Nonvolatile memory; Registers; Review; architectural techniques; architectural vulnerability factor; classification; fault-tolerance; reliability; resilience; soft/transient error; vulnerability (ID#: 15-7273)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7094277&isnumber=4359390

 

Chung, Chun-Jen; Xing, Tianyi; Huang, Dijiang; Medhi, Deep; Trivedi, Kishor, “SeReNe: On Establishing Secure and Resilient Networking Services for an SDN-based Multi-tenant Datacenter Environment,” in Dependable Systems and Networks Workshops (DSN-W), 2015 IEEE International Conference on, vol., no., pp. 4–11, 22–25 June 2015. doi:10.1109/DSN-W.2015.25
Abstract: In the current enterprise data center networking environment, a major hurdle in the development of network security is the lack of an orchestrated and resilient defensive mechanism that uses well-established quantifiable metrics, models, and evaluation methods. In this position paper, we describe an emerging Secure and Resilient Networking (SeReNe) service model to establish a programmable and dynamic defensive mechanism that can adjust the system’s networking resources, such as topology, bandwidth allocation, and traffic/flow forwarding policies, according to the network security situation. We posit that this requires addressing two interdependent technical areas: (a) a Moving Target Defense (MTD) framework both at networking and software levels, and (b) an Adaptive Security-enabled Traffic Engineering (ASeTE) approach to select optimal countermeasures by considering the effectiveness of countermeasures and network bandwidth allocations while minimizing the intrusiveness to the applications and the cost of deploying the countermeasures. We believe that our position can greatly benefit virtual networking systems established in data center or enterprise environments that have adopted the latest OpenFlow technologies.
Keywords: Bridges; Cloud computing; Computational modeling; Computer bugs; Home appliances; Security; multi-tenant datacenter; security and resilience (ID#: 15-7274)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272544&isnumber=7272533

 

Amarù, L.; Gaillardon, P.-E.; De Micheli, G., “Boolean Logic Optimization in Majority-Inverter Graphs,” in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, vol., no., pp. 1–6, 7–11 June 2015. doi:10.1145/2744769.2744806
Abstract: We present a Boolean logic optimization framework based on Majority-Inverter Graph (MIG). An MIG is a directed acyclic graph consisting of three-input majority nodes and regular/complemented edges. Current MIG optimization is supported by a consistent algebraic framework. However, when algebraic methods cannot improve a result quality, stronger Boolean methods are needed to attain further optimization. For this purpose, we propose MIG Boolean methods exploiting the error masking property of majority operators. Our MIG Boolean methods insert logic errors that strongly simplify an MIG while being successively masked by the voting nature of majority nodes. Thanks to the data-structure/methodology fitness, our MIG Boolean methods run in principle as fast as algebraic counterparts. Experiments show that our Boolean methodology combined with state-of-the-art MIG algebraic techniques enables superior optimization quality. For example, when targeting depth reduction, our MIG optimizer transforms a ripple carry adder into a carry look-ahead one. Considering the set of IWLS’05 (arithmetic intensive) benchmarks, our MIG optimizer reduces by 17.98% (26.69%) the logic network depth while also enhancing size and power activity metrics, with respect to the ABC academic optimizer. Without MIG Boolean methods, i.e., using MIG algebraic optimization alone, the previous gains are halved. Employed as a front-end to a delay-critical 22-nm ASIC flow (logic synthesis + physical design) our MIG optimizer reduces the average delay/area/power by (15.07%, 4.93%, 1.93%), over 27 academic and industrial benchmarks, as compared to a leading commercial ASIC flow.
Keywords: Boolean functions; directed graphs; optimisation; Boolean logic optimization framework; MIG Boolean methods; MIG optimization; consistent algebraic framework; data-structure; directed acyclic graph; majority-inverter graphs; methodology fitness; three-input majority nodes; Adders; Application specific integrated circuits; Benchmark testing; Hardware design languages; Measurement; Optimization; Resilience; Boolean Optimization; Logic Synthesis; Majority Logic (ID#: 15-7275)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167312&isnumber=7167177

 

Soule, N.; Simidchieva, B.; Yaman, F.; Watro, R.; Loyall, J.; Atighetchi, M.; Carvalho, M.; Last, D.; Myers, D.; Flatley, B., “Quantifying & Minimizing Attack Surfaces Containing Moving Target Defenses,” in Resilience Week (RWS), 2015, vol., no., pp. 1–6, 18–20 Aug. 2015. doi:10.1109/RWEEK.2015.7287449
Abstract: The cyber security exposure of resilient systems is frequently described as an attack surface. A larger surface area indicates increased exposure to threats and a higher risk of compromise. Ad-hoc addition of dynamic proactive defenses to distributed systems may inadvertently increase the attack surface. This can lead to cyber friendly fire, a condition in which adding superfluous or incorrectly configured cyber defenses unintentionally reduces security and harms mission effectiveness. Examples of cyber friendly fire include defenses which themselves expose vulnerabilities (e.g., through an unsecured admin tool), unknown interaction effects between existing and new defenses causing brittleness or unavailability, and new defenses which may provide security benefits, but cause a significant performance impact leading to mission failure through timeliness violations. This paper describes a prototype service capability for creating semantic models of attack surfaces and using those models to (1) automatically quantify and compare cost and security metrics across multiple surfaces, covering both system and defense aspects, and (2) automatically identify opportunities for minimizing attack surfaces, e.g., by removing interactions that are not required for successful mission execution.
Keywords: Analytical models; Computational modeling; IP networks; Measurement; Minimization; Security; Surface treatment; cyber security analysis; modeling; threat assessment (ID#: 15-7276)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287449&isnumber=7287407

 

Pinnaka, S.; Yarlagadda, R.; Çetinkaya, E.K., “Modelling Robustness of Critical Infrastructure Networks,” in Design of Reliable Communication Networks (DRCN), 2015 11th International Conference on the, vol., no., pp. 95–98, 24–27 March 2015. doi:10.1109/DRCN.2015.7148995
Abstract: Critical infrastructure networks are becoming increasingly interdependent. An attack or disaster in a network or on a single node in a network will affect the other networks dependent on it. Therefore, it is important to assess and understand the vulnerability of interdependent networks in the presence of natural disasters and malicious attacks that lead to cascading failures. We develop a framework to analyse the robustness of interdependent networks. Nodes and links in the interdependent networks are attacked based on the graph centrality metrics. We apply our framework on critical infrastructure network data. Our results indicate that the importance of critical infrastructure varies depending on the attack strategy.
Keywords: critical infrastructures; graph theory; critical infrastructure networks; graph centrality metrics; interdependent networks; malicious attacks; natural disasters; Measurement; Power system faults; Power system protection; Reliability engineering; Robustness; Transportation; cascading failures; centrality; critical infrastructure; directed graph; resilience; robustness (ID#: 15-7277)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148995&isnumber=7148972

 

Lange, S.; Gebert, S.; Zinner, T.; Tran-Gia, P.; Hock, D.; Jarschel, M.; Hoffmann, M., “Heuristic Approaches to the Controller Placement Problem in Large Scale SDN Networks,” in Network and Service Management, IEEE Transactions on, vol. 12, no. 1, pp. 4–17, March 2015. doi:10.1109/TNSM.2015.2402432
Abstract: Software Defined Networking (SDN) marks a paradigm shift towards an externalized and logically centralized network control plane. A particularly important task in SDN architectures is that of controller placement, i.e., the positioning of a limited number of resources within a network to meet various requirements. These requirements range from latency constraints to failure tolerance and load balancing. In most scenarios, at least some of these objectives are competing, thus no single best placement is available and decision makers need to find a balanced trade-off. This work presents POCO, a framework for Pareto-based Optimal COntroller placement that provides operators with Pareto optimal placements with respect to different performance metrics. In its default configuration, POCO performs an exhaustive evaluation of all possible placements. While this is practically feasible for small and medium sized networks, realistic time and resource constraints call for an alternative in the context of large scale networks or dynamic networks whose properties change over time. For these scenarios, the POCO toolset is extended by a heuristic approach that is less accurate, but yields faster computation times. An evaluation of this heuristic is performed on a collection of real world network topologies from the Internet Topology Zoo. Utilizing a measure for quantifying the error introduced by the heuristic approach allows an analysis of the resulting trade-off between time and accuracy. Additionally, the proposed methods can be extended to solve similar virtual functions placement problems which appear in the context of Network Functions Virtualization (NFV).
Keywords: Internet; Pareto optimisation; optimal control; software defined networking; telecommunication network topology; Internet topology zoo; NFV; SDN architectures; centralized network control plane; controller placement problem; decision makers; failure tolerance; large scale SDN networks; load balancing; network functions virtualization; pareto based optimal controller placement; small and medium sized networks; software defined networking; Context; Equations; Graphical user interfaces; Mathematical model; Measurement; Optimization; Resilience; Controller Placement; OpenFlow; POCO; SDN; controller placement; failure tolerance; latency; multiobjective optimization; resilience; simulated annealing (ID#: 15-7278)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7038177&isnumber=7061568
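
The exhaustive mode of POCO rests on Pareto filtering: a placement survives if no other placement is at least as good in every metric. Below is a minimal Python sketch with hypothetical placement names and two illustrative lower-is-better metrics; POCO itself evaluates latency, imbalance, and resilience objectives over all placements.

def pareto_front(placements):
    # placements maps a placement name to a tuple of metric values
    # (lower is better); return the names of non-dominated placements.
    front = []
    for name, m in placements.items():
        dominated = any(
            all(o[i] <= m[i] for i in range(len(m))) and o != m
            for o in placements.values()
        )
        if not dominated:
            front.append(name)
    return front

candidates = {
    "p1": (10.0, 0.30),  # (max node-to-controller latency, load imbalance)
    "p2": (12.0, 0.10),
    "p3": (14.0, 0.25),  # dominated by p2 in both metrics
}
print(pareto_front(candidates))  # ['p1', 'p2']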

 

Chanda, Sayonsom; Srivastava, Anurag K., “Quantifying Resiliency of Smart Power Distribution Systems with Distributed Energy Resources,” in Industrial Electronics (ISIE), 2015 IEEE 24th International Symposium on, vol., no., pp. 766–771, 3–5 June 2015. doi:10.1109/ISIE.2015.7281565
Abstract: The purpose of smart grid projects worldwide is to revitalize the aging power system infrastructure and make it more reliable, more resilient, and more sustainable. Technological advances have led to a diversity of power sources and less dependence on fossil fuels; however, they have also increased the complexity of controlling the network, which may have a counter-effect on its resiliency and reliability. Weather-induced power disruptions and targeted attacks on critical power system infrastructure have also increased in number. Thus there is a need for formal metrics to quantify the resiliency of different distribution systems, or of different configurations of the same network. This paper presents definitions of resiliency for power distribution systems and an approach toward resilient design of future power distribution systems with distributed energy resources. These are eventually used to identify parameters for the quantification of resiliency. Simulation results for several test cases are presented to validate the developed resiliency metrics.
Keywords: Measurement; Meteorology; Microgrids; Power system reliability; Reliability; Resilience (ID#: 15-7279)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7281565&isnumber=7281431

 

Lew, R.; Boring, R.L.; Ulrich, T.A., “A Tool for Assessing the Text Legibility of Digital Human Machine Interfaces,” in Resilience Week (RWS), 2015, vol., no., pp. 1–5, 18–20 Aug. 2015. doi:10.1109/RWEEK.2015.7287437
Abstract: A tool intended to aid qualified professionals in the assessment of the legibility of text presented on a digital display is described. The assessment of legibility is primarily for the purposes of designing and analyzing human machine interfaces in accordance with NUREG-0700 and MIL-STD 1472G. The tool addresses shortcomings of existing guidelines by providing more accurate metrics of text legibility with greater sensitivity to design alternatives.
Keywords: Ergonomics; Guidelines; Sociology; Standards; Statistics; Testing; Workstations; Human Factors; Human Machine Interface; Text Legibility (ID#: 15-7280)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287437&isnumber=7287407

 

Lei Sun; Wenye Wang; Zhuo Lu, “On Topology and Resilience of Large-Scale Cognitive Radio Networks Under Generic Failures,” in Wireless Communications, IEEE Transactions on, vol. 14, no. 6, pp. 3390–3401, June 2015. doi:10.1109/TWC.2015.2404919
Abstract: It has been demonstrated that in wireless networks, blackholes, which are typically generated by isolated node failures, and augmented by failure correlations, can easily result in devastating impact on network performance. In order to address this issue, we focus on the topology of Cognitive Radio Networks (CRNs) because of their phenomenal benefits in improving spectrum efficiency through opportunistic communications. Particularly, we first define two metrics, namely the failure occurrence probability p and failure connection function g(·), to characterize node failures and their spreading properties, respectively. Then we prove that each blackhole is exponentially bounded based on percolation theory. By mapping failure spreading using a branching process, we further derive an upper bound on the expected size of blackholes. With the observations from our analysis, we are able to find a sufficient condition for a resilient CRN in the presence of blackholes through analysis and simulations.
Keywords: cognitive radio; telecommunication network topology; blackholes; failure connection function; failure correlations; failure occurrence probability; generic failures; large-scale cognitive radio networks resilience; large-scale cognitive radio networks topology; network performance; node failures; opportunistic communications; percolation theory; wireless networks; Interference; Network topology; Routing; Routing protocols; Topology; Wireless networks; Resilience; cognitive radio networks; topology (ID#: 15-7281)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7046409&isnumber=7119638
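
The branching-process step in this abstract has a compact standard form. As a hedged LaTeX sketch, assume (for illustration) that each failed node independently triggers new failures with mean offspring number \mu, a quantity determined by p and g(·) in the paper; then the expected total size of a blackhole B obeys

\[
  \mathbb{E}\bigl[\,\lvert B\rvert\,\bigr] \;\le\; \sum_{k=0}^{\infty} \mu^{k}
  \;=\; \frac{1}{1-\mu}, \qquad \mu < 1,
\]

since the generations of the branching process have expected sizes 1, \mu, \mu^{2}, \ldots. Subcriticality (\mu < 1) is what keeps the expected blackhole size finite and its tail exponentially bounded; the paper derives its own bound from p and g(·) directly.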


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


SQL Injections 2015

 

 
SoS Logo

SQL Injections

2015


SQL injection is used to attack data-driven applications. Malicious SQL statements are inserted into an entry field for execution to dump the database contents to the attacker. One of the most common hacker techniques, SQL injection is used to exploit a security vulnerability in an application's software. It is mostly used against websites but can be used to attack any type of SQL database. Because of its prevalence and ease of use from the hacker perspective, it is an important area for research. The articles cited here focus on prevention, detection, and testing. These works were presented in 2015.
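
The mechanism is small enough to demonstrate end to end. Below is a minimal Python sketch using the standard sqlite3 module; the table, column names, and payload are illustrative, not drawn from any of the cited papers.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the WHERE
# clause, so the query matches every row in the table.
vulnerable = "SELECT * FROM users WHERE name = '%s'" % user_input
print(conn.execute(vulnerable).fetchall())  # returns alice's row

# Safe: a parameterized query treats the payload as data, not SQL syntax.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns []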



Li Qian; Zhenyuan Zhu; Jun Hu; Shuying Liu, “Research of SQL Injection Attack and Prevention Technology,” in Estimation, Detection and Information Fusion (ICEDIF), 2015 International Conference on, vol., no., pp. 303–306, 10–11 Jan. 2015. doi:10.1109/ICEDIF.2015.7280212
Abstract: SQL injection attack is one of the most serious security vulnerabilities in Web application systems; most of these vulnerabilities are caused by a lack of input validation and of type-safe SQL parameter use. Typical SQL injection attack and prevention technologies are introduced in the paper. The detection methods not only validate user input, but also use type-safe SQL parameters. A SQL injection defense model is established according to the detection processes, which is effective against SQL injection vulnerabilities.
Keywords: SQL injection; defence model; input validation; prevention technology (ID#: 15-7282)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280212&isnumber=7280146

 

Nagpal, B.; Singh, N.; Chauhan, N.; Panesar, A., “Tool Based Implementation of SQL Injection for Penetration Testing,” in Computing, Communication & Automation (ICCCA), 2015 International Conference on, vol., no., pp. 746–749, 15–16 May 2015. doi:10.1109/CCAA.2015.7148509
Abstract: Web applications are a fundamental pillar of today’s world. Society depends on them for business and day-to-day tasks. Because of their extensive use, Web applications are under constant attack by hackers that exploit their vulnerabilities to disrupt business and access confidential information. SQL Injection and Remote File Inclusion are the two most frequently used exploits, and hackers prefer easier rather than complicated attack techniques. As the number of Internet users grows every day, attacking a vulnerable system becomes ever easier. SQL Injection is one of the most common attack methods in use today. Havij, one of the tools used to implement SQL Injection, is discussed in this paper. Our research objective is to analyse the use of Havij in penetration testing in the IT industry and to compare various SQL Injection tools available in the market.
Keywords: SQL; program testing; Havij tools; IT industry; Internet users; SQL injection; Web applications; attack method; confidential information access; penetration testing; remote file inclusion; system vulnerabilities; tool based implementation; Automation; Computer Hacking; Databases; Industries; Servers; Testing; Havij; Implementation of SQL Injection; Penetration Testing; SQL Injection; Tools for SQL Injection (ID#: 15-7283)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148509&isnumber=7148334

 

Hanmanthu, B.; Ram, B. Raghu; Niranjan, P., “SQL Injection Attack Prevention Based on Decision Tree Classification,” in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, vol., no., pp. 1–5, 9–10 Jan. 2015. doi:10.1109/ISCO.2015.7282227
Abstract: As dependence on World Wide Web applications increases day by day in the real world, they have become vulnerable to security attacks. Of all the different attacks, SQL Injection Attacks are the most common. In this paper we propose SQL injection vulnerability prevention based on a decision tree classification technique. The proposed model makes use of the well-known decision tree classification model to prevent SQL injection attacks: it filters each incoming HTTP request using decision-tree-classification-based attack signatures. We test our proposed model on synthetic data, which gives satisfactory results.
Keywords: Decision trees; Information filters; Random access memory; Robustness; Uniform resource locators; Data Mining; Decision Tree; SQL Injection Attack; Web Security (ID#: 15-7284)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282227&isnumber=7282219
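
The classification idea in the Hanmanthu et al. abstract can be sketched with a few request-level features. A hedged Python example assuming scikit-learn; the features, training strings, and labels are toy stand-ins for the paper’s attack signatures.

from sklearn.tree import DecisionTreeClassifier

SQL_TOKENS = ("select", "union", "or", "--", ";", "'")

def features(param):
    # Simple lexical features of one HTTP request parameter.
    lowered = param.lower()
    return [
        lowered.count("'"),                         # quote count
        sum(tok in lowered for tok in SQL_TOKENS),  # SQL token hits
        len(param),                                 # parameter length
    ]

train = ["alice", "bob42", "' OR '1'='1", "1; DROP TABLE users--", "carol"]
labels = [0, 0, 1, 1, 0]  # 1 = injection

clf = DecisionTreeClassifier(max_depth=3)
clf.fit([features(p) for p in train], labels)
print(clf.predict([features("' UNION SELECT password FROM users--")]))  # [1]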

 

Sonewar, P.A.; Mhetre, N.A., “A Novel Approach for Detection of SQL Injection and Cross Site Scripting Attacks,” in Pervasive Computing (ICPC), 2015 International Conference on, vol., no., pp. 1–4, 8–10 Jan. 2015. doi:10.1109/PERVASIVE.2015.7087131
Abstract: Web applications provide a vast range of functionality and usefulness. As more and more sensitive data becomes available over the internet, hackers are becoming more interested in revealing such data, which can cause massive damage. SQL injection is one such attack. It can be used to infiltrate the database of any web application, potentially altering the database or disclosing important information. Cross site scripting is another attack, in which the attacker obfuscates the input given to the web application, possibly changing the rendered view of the web page. Three-tier web applications can be categorized statically and dynamically for detecting and preventing these types of attacks. A mapping model, in which requests are mapped onto queries, can be used effectively to detect such attacks, and prevention logic can then be applied.
Keywords: Internet; SQL; Web sites; security of data; SQL injection detection; Web applications; Web page; cross site scripting attack; database infiltration; mapping model; prevention logic; Blogs; Computers; Conferences; Databases; Intrusion detection; Uniform resource locators; Cross Site Scripting (XSS); Intrusion Detection System (IDS); SQL injection attack; Tier Web Application; Web Security Vulnerability (ID#: 15-7285)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087131&isnumber=7086957

 

Appelt, D.; Nguyen, C.D.; Briand, L., “Behind an Application Firewall, Are We Safe from SQL Injection Attacks?,” in Software Testing, Verification and Validation (ICST), 2015 IEEE 8th International Conference on, vol., no., pp. 1–10, 13–17 April 2015. doi:10.1109/ICST.2015.7102581
Abstract: Web application firewalls are an indispensable layer to protect online systems from attacks. However, the fast pace at which new kinds of attacks appear and their sophistication require that firewalls be updated and tested regularly, as otherwise they will be circumvented. In this paper, we focus our research on web application firewalls and SQL injection attacks. We present a machine learning-based testing approach to detect holes in firewalls that let SQL injection attacks bypass. The approach first automatically generates diverse attack payloads, which can be seeded into the inputs of web-based applications, and submits them to a system that is protected by a firewall. Incrementally learning from the tests that are blocked or passed by the firewall, our approach can then select tests that exhibit characteristics associated with bypassing the firewall and mutate them to efficiently generate new bypassing attacks. In the race against cyber attacks, time is vital. Being able to learn and anticipate more attacks that can circumvent a firewall in a timely manner is very important in order to quickly fix or fine-tune the firewall. We developed a tool that implements the approach and evaluated it on ModSecurity, a widely used application firewall. The results we obtained suggest good performance and efficiency in detecting holes in the firewall that could let SQLi attacks go undetected.
Keywords: Internet; SQL; firewalls; learning (artificial intelligence); ModSecurity; SQL injection attacks; SQLi attacks; Web application firewalls; bypassing attacks; cyber attacks; machine learning-based testing approach; online system protection; Databases; Grammar; Radio access networks; Security; Servers; Syntactics; Testing (ID#: 15-7286)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7102581&isnumber=7102573
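
The generate-test-mutate loop the Appelt et al. abstract describes can be outlined in a few lines. In this deliberately simplified Python sketch, the firewall predicate is a stub standing in for ModSecurity, and the mutation operators and random selection are crude placeholders for the paper’s machine-learned test selection.

import random

MUTATIONS = [
    lambda p: p.replace(" ", "/**/"),  # hide whitespace in SQL comments
    lambda p: p.replace("OR", "||"),   # substitute an equivalent operator
    lambda p: "".join(random.choice([c.upper(), c.lower()]) for c in p),
]

def firewall_blocks(payload):
    # Stub for the web application firewall under test: here it blocks any
    # payload containing both "or" and a literal space.
    return "or" in payload.lower() and " " in payload

def generate_bypasses(seed, rounds=50):
    pool, bypasses = [seed], []
    for _ in range(rounds):
        candidate = random.choice(MUTATIONS)(random.choice(pool))
        if not firewall_blocks(candidate):
            bypasses.append(candidate)  # a hole worth reporting
        pool.append(candidate)          # mutate survivors further
    return bypasses

print(generate_bypasses("' OR '1'='1")[:3])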

 

Naderi-Afooshteh, Abbas; Nguyen-Tuong, Anh; Bagheri-Marzijarani, Mandana; Hiser, Jason D.; Davidson, Jack W., “Joza: Hybrid Taint Inference for Defeating Web Application SQL Injection Attacks,” in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, vol., no., pp. 172–183, 22–25 June 2015. doi:10.1109/DSN.2015.13
Abstract: Despite years of research on taint-tracking techniques to detect SQL injection attacks, taint tracking is rarely used in practice because it suffers from high performance overhead, intrusive instrumentation, and other deployment issues. Taint inference techniques address these shortcomings by obviating the need to track the flow of data during program execution by inferring markings based on either the program’s input (negative taint inference), or the program itself (positive taint inference). We show that existing taint inference techniques are insecure by developing new attacks that exploit inherent weaknesses of the inferencing process. To address these exposed weaknesses, we developed Joza, a novel hybrid taint inference approach that exploits the complementary nature of negative and positive taint inference to mitigate their respective weaknesses. Our evaluation shows that Joza prevents real-world SQL injection attacks, exhibits no false positives, incurs low performance overhead (4%), and is easy to deploy.
Keywords: Approximation algorithms; Databases; Encoding; Inference algorithms; Optimization; Payloads; Security; SQL injection; Taint inference; Taint tracking; Web application security (ID#: 15-7287)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266848&isnumber=7266818
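
Negative taint inference, as characterized in the Joza abstract, can be approximated by substring matching between the request inputs and the outgoing query. The Python sketch below is deliberately naive; Joza’s inference, matching, and policy checks are far more robust.

import re

# SQL-structural fragments that user data should never contribute.
STRUCTURAL = re.compile(r"('|--|;|\bor\b|\bunion\b)", re.IGNORECASE)

def tainted_structure(query, user_inputs):
    # Infer taint: an input that appears verbatim in the query is assumed
    # to have flowed into it; flag the query if that span carries SQL syntax.
    for inp in user_inputs:
        if len(inp) >= 3 and inp in query and STRUCTURAL.search(inp):
            return True
    return False

q1 = "SELECT * FROM users WHERE name = 'alice'"
q2 = "SELECT * FROM users WHERE name = '' OR '1'='1'"
print(tainted_structure(q1, ["alice"]))        # False
print(tainted_structure(q2, ["' OR '1'='1"]))  # True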

 

Pramod, A.; Ghosh, A.; Mohan, A.; Shrivastava, M.; Shettar, R., “SQLI Detection System for a Safer Web Application,” in Advance Computing Conference (IACC), 2015 IEEE International on, vol., no., pp. 237–240, 12–13 June 2015. doi:10.1109/IADCC.2015.7154705
Abstract: SQL Injection (SQLI) is a quotidian phenomenon in the field of network security. It is a potent and effective way of intruding into secured databases thereby jeopardizing the confidentiality, integrity and availability of information in them. SQL Injection works by inserting malicious queries into legal queries thereby rendering it increasingly arduous for most detection systems to be able to discern its occurrence. Hence, the need of the hour is to build a coherent and a smart SQL Injection detection system to make web applications safer and thus, more reliable. Unlike a great majority of current detection tools and systems that are deployed at a region between the web server and the database server, the proposed system is deployed between client and the web server, thereby shielding the web server from the inimical impacts of the attack. This approach is nascent and efficient in terms of detection, ranking and notification of the attack designed using pattern matching algorithm based on the concept of hashing.
Keywords: Internet; SQL; computer network security; cryptography; file organisation; file servers; pattern matching; SQL Injection; SQLI detection system; Web application; Web server; database security; database server; hashing function; network security; pattern matching algorithm; Algorithm design and analysis; Databases; Inspection; Security; Time factors; Web servers; Deep Packet Inspection; Hardware Network Analyzer; SQL injection attack (ID#: 15-7288)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154705&isnumber=7154658

 

Bulusu, P.; Shahriar, H.; Haddad, H.M., “Classification of Lightweight Directory Access Protocol Query Injection Attacks and Mitigation Techniques,” in Collaboration Technologies and Systems (CTS), 2015 International Conference on, vol., no., pp. 337–344, 1–5 June 2015. doi:10.1109/CTS.2015.7210446
Abstract: The Lightweight Directory Access Protocol (LDAP) is used in a large number of web applications, and therefore, different types of LDAP injection attacks are becoming common. These injection attacks take advantage of an application not validating inputs before being used as part of LDAP queries. An attacker can provide inputs that may result in the alteration of intended LDAP query structure. The attacks can lead to various types of security breaches including Login Bypassing, Information Disclosure, Privilege Escalation, and Information Alteration. Despite many research efforts to prevent LDAP injection attacks, many web applications remain vulnerable to such attacks. In particular, there has been little attention given to implement and test secure web applications that can mitigate LDAP query injection attacks. More attention has been given to prevent Structured Query Language (SQL) injection attacks but these mitigation techniques cannot be directly applied in order to prevent LDAP injection attacks. This work provides analysis and classification of various types of LDAP injection attacks and mitigation techniques used to prevent them, and it highlights the differences between SQL and LDAP injection attacks.
Keywords: SQL; cryptographic protocols; pattern classification; query processing; LDAP injection attacks; LDAP query injection attacks; LDAP query structure; SQL injection attacks; information alteration; information disclosure; lightweight directory access protocol mitigation techniques; lightweight directory access protocol query injection attack classification; login bypassing; privilege escalation; security breach; structured query language injection attacks; DVD; Decision support systems; LDAP injection; SQL injection; mitigation technique (ID#: 15-7289)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210446&isnumber=7210375

 

Palma Salas, M.I.; Martins, E., “A Black-Box Approach to Detect Vulnerabilities in Web Services Using Penetration Testing,” in Latin America Transactions, IEEE (Revista IEEE America Latina), vol.13, no.3, pp. 707–712, March 2015. doi:10.1109/TLA.2015.7069095
Abstract: Web services work over dynamic connections among distributed systems. This technology was specifically designed to easily pass SOAP messages through firewalls using open ports. These benefits involve a number of security challenges, such as injection attacks, phishing, Denial-of-Service (DoS) attacks, and so on. The difficulty of detecting vulnerabilities before they are exploited encourages developers to use security testing like penetration testing to reduce the potential attacks. Taking a black-box approach, this research uses penetration testing to emulate a series of attacks, such as Cross-site Scripting (XSS), Fuzzing Scan, Invalid Types, Malformed XML, SQL Injection, XPath Injection and XML Bomb. The soapUI vulnerability scanner was used to emulate these attacks and insert malicious scripts into the requests of the web services tested. Furthermore, a set of rules was developed to analyze the responses in order to reduce false positives and negatives. The results suggest that 97.1% of web services have at least one vulnerability to these attacks. We also determined a ranking of these attacks against web services.
Keywords: Web services; XML; firewalls; program testing; DoS attacks; SOAP message; SQL injection attack; Web service testing; XML bomb attack; XPath injection attack; XSS attack; black-box approach; cross-site scripting attack; denial-of-services attacks; distributed systems; dynamic connections; firewalls; fuzzing scan attack; injection attacks; invalid type attack; malformed XML attack; malicious scripts; penetration testing; phishing; security testing; soapUI vulnerability scanner; vulnerability detection; Security; Servers; Simple object access protocol; Testing; Weapons; Cross-site Scripting; Fuzzing Scan; Invalid Types; Malformed XML; SQL Injection; XML Bomb; XPath Injection; XSS; penetration testing; web services (ID#: 15-7290)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069095&isnumber=7069073

 

Zibordi de Paiva, O.; Ruggiero, W.V., “A Survey on Information Flow Control Mechanisms in Web Applications,” in High Performance Computing & Simulation (HPCS), 2015 International Conference on, vol., no., pp. 211–220, 20–24 July 2015. doi:10.1109/HPCSim.2015.7237042
Abstract: Web applications are nowadays ubiquitous channels that provide access to valuable information. However, web application security remains problematic, with Information Leakage, Cross-Site Scripting and SQL-Injection vulnerabilities — which all present threats to information — standing among the most common ones. On the other hand, Information Flow Control is a mature and well-studied area, providing techniques to ensure the confidentiality and integrity of information. Thus, numerous works have proposed the use of these techniques to improve web application security. This paper provides a survey on some of these works that propose server-side only mechanisms, which operate in association with standard browsers. It also provides a brief overview of the information flow control techniques themselves. At the end, we draw a comparative scenario between the surveyed works, highlighting the environments for which they were designed and the security guarantees they provide, also suggesting directions in which they may evolve.
Keywords: Internet; SQL; security of data; SQL-injection vulnerability; Web application security; cross-site scripting; information confidentiality; information flow control mechanisms; information integrity; information leakage; server-side only mechanisms; standard browsers; ubiquitous channels; Browsers; Computer architecture; Context; Security; Standards; Web servers; Cross-Site Scripting; Information Flow Control; Information Leakage; SQL Injection; Web Application Security (ID#: 15-7291)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237042&isnumber=7237005

 

Gillman, D.; Yin Lin; Maggs, B.; Sitaraman, R.K., “Protecting Websites from Attack with Secure Delivery Networks,” in Computer, vol. 48, no. 4, pp. 26–34, April 2015. doi:10.1109/MC.2015.116
Abstract: Secure delivery networks can help prevent or mitigate the most common attacks against mission-critical websites. A case study from a leading provider of content delivery services illustrates one such network’s operation and effectiveness. The Web extra at https://youtu.be/4FRRI0aJLQM is an overview of the evolving threat landscape with Akamai Director of Web Security Solutions Product Marketing, Dan Shugrue. Dan also shares how Akamai’s Kona Site Defender service handles the increasing frequency, volume and sophistication of Web attacks with a unique architecture that is always on and doesn’t degrade performance.
Keywords: Web sites; security of data; Web attacks; Website protection; content delivery services; mission-critical Websites; secure delivery networks; Computer crime; Computer security; Firewalls (computing); IP networks; Internet; Protocols; Akamai Technologies; DDoS attacks; DNS; Domain Name System; Internet/Web technologies; Operation Ababil; SQL injection; WAF; Web Application Firewall; XSS; cache busting; cross-site scripting; cybercrime; distributed denial-of-service attacks; distributed systems; floods; hackers; security (ID#: 15-7292)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7085639&isnumber=7085638

 

Antunes, N.; Vieira, M., “Assessing and Comparing Vulnerability Detection Tools for Web Services: Benchmarking Approach and Examples,” in Services Computing, IEEE Transactions on, vol. 8, no. 2, pp. 269–283, March–April 2015. doi:10.1109/TSC.2014.2310221
Abstract: Selecting a vulnerability detection tool is a key problem that is frequently faced by developers of security-critical web services. Research and practice shows that state-of-the-art tools present low effectiveness both in terms of vulnerability coverage and false positive rates. The main problem is that such tools are typically limited in the detection approaches implemented, and are designed for being applied in very concrete scenarios. Thus, using the wrong tool may lead to the deployment of services with undetected vulnerabilities. This paper proposes a benchmarking approach to assess and compare the effectiveness of vulnerability detection tools in web services environments. This approach was used to define two concrete benchmarks for SQL Injection vulnerability detection tools. The first is based on a predefined set of web services, and the second allows the benchmark user to specify the workload that best portrays the specific characteristics of his environment. The two benchmarks are used to assess and compare several widely used tools, including four penetration testers, three static code analyzers, and one anomaly detector. Results show that the benchmarks accurately portray the effectiveness of vulnerability detection tools (in a relative manner) and suggest that the proposed benchmarking approach can be applied in the field.
Keywords: Web services; program diagnostics; security of data; SQL injection vulnerability detection tools; anomaly detector; benchmarking approach; false positive rates; penetration testers; security-critical Web services; static code analyzers; vulnerability coverage; Benchmark testing; Computer bugs; Measurement; Security; Benchmarking; and runtime anomaly detection; penetration testing; static analysis; vulnerability detection (ID#: 15-7293)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6763052&isnumber=7080963

 

Hermerschmidt, L.; Kugelmann, S.; Rumpe, B., “Towards More Security in Data Exchange: Defining Unparsers with Context-Sensitive Encoders for Context-Free Grammars,” in Security and Privacy Workshops (SPW), 2015 IEEE, vol., no., pp. 134–141, 21–22 May 2015. doi:10.1109/SPW.2015.29
Abstract: To exchange complex data structures in distributed systems, documents written in context-free languages are exchanged among communicating parties. Unparsing these documents correctly is as important as parsing them correctly because errors during unparsing result in injection vulnerabilities such as cross-site scripting (XSS) and SQL injection. Injection attacks are not limited to the web world. Every program that uses input to produce documents in a context-free language may be vulnerable to this class of attack. Even for widely used languages such as HTML and JavaScript, there are few approaches that prevent injection attacks by context-sensitive encoding, and those approaches are tied to the language. Therefore, the aim of this paper is to derive context-sensitive encoders from context-free grammars to provide correct unparsing of maliciously crafted input data for all context-free languages. The presented solution integrates encoder definition into context-free grammars and provides a generator for context-sensitive encoders and decoders that are used during (un)parsing. This unparsing process results in documents where the input data neither influences the structure of the document nor changes its intended semantics. By defining encoding during language definition, developers who use the language are provided with a clean interface for writing and reading documents written in that language, without the need to care about security-relevant encoding.
Keywords: Internet; context-free grammars; context-free languages; context-sensitive grammars; data structures; electronic data interchange; security of data; HTML; JavaScript; SQL injection; XSS; complex data structures; context-sensitive decoders; context-sensitive encoders; cross-site scripting; data exchange security; distributed systems; injection attack prevention; security-relevant encoding; unparsing process; Context; Decoding; Encoding; Grammar; Libraries; Security; context-sensitive encoder; encoding table; injection vulnerability; unparser (ID#: 15-7294)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163217&isnumber=7163193
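
As a rough illustration of the paper's core idea, choosing an encoder from the grammatical context into which untrusted data is unparsed, consider the following minimal Python sketch. The context names and encoding rules here are our own assumptions for illustration, not the generated encoders the paper derives from grammars.

    # Toy context-sensitive unparser: pick an encoding rule based on the
    # grammar context the untrusted data is emitted into.
    import html, json

    ENCODERS = {
        "html_text": lambda s: html.escape(s, quote=False),  # element content
        "html_attr": lambda s: html.escape(s, quote=True),   # attribute value
        "js_string": lambda s: json.dumps(s)[1:-1],          # JS string body
    }

    def unparse_fragment(context, data):
        """Emit untrusted data safely for the given grammar context."""
        try:
            return ENCODERS[context](data)
        except KeyError:
            raise ValueError("no encoder defined for context: " + context)

    payload = '"><script>alert(1)</script>'
    print(unparse_fragment("html_text", payload))
    print(unparse_fragment("js_string", payload))

The point of the paper is that such context-to-encoder mappings need not be hand-written per language: they can be generated from the context-free grammar itself.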

 

Zhong, Yang; Asakura, Hiroshi; Takakura, Hiroki; Oshima, Yoshihito, “Detecting Malicious Inputs of Web Application Parameters Using Character Class Sequences,” in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual on, vol. 2, no., pp. 525–532, 1–5 July 2015. doi:10.1109/COMPSAC.2015.73
Abstract: Web attacks that exploit vulnerabilities of web applications are still major problems. The number of attacks that maliciously manipulate parameters of web applications, such as SQL injections and command injections, is increasing nowadays. Anomaly detection is effective for detecting these attacks, particularly in the case of unknown attacks. However, existing anomaly detection methods often raise false alarms with normal requests whose parameters differ slightly from those of the learning data, because they perform strict feature matching between characters appearing as parameter values and those of normal profiles. We propose a novel anomaly detection method using the abstract structure of parameter values as features of normal profiles in this paper. The results of experiments show that our approach reduces the false positive rate more than existing methods while maintaining a comparable detection rate.
Keywords: Accuracy; Electronic mail; Feature extraction; Payloads; Servers; Training; Training data; Anomaly detection; Attack detection; HTTP; Web application (ID#: 15-7295)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273662&isnumber=7273573
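
The character-class abstraction described in the abstract can be illustrated with a small sketch. The class alphabet and the profile-learning step below are simplifying assumptions; the paper's actual feature model is richer.

    # Map each character of a parameter value to a class symbol, collapse
    # runs, and flag values whose class sequence was never seen in training.
    def char_class_sequence(value):
        classes = []
        for ch in value:
            if ch.isalpha():   c = "A"   # letter
            elif ch.isdigit(): c = "D"   # digit
            elif ch.isspace(): c = "S"   # whitespace
            else:              c = "X"   # symbol
            if not classes or classes[-1] != c:
                classes.append(c)        # collapse repeated classes
        return "".join(classes)

    normal_profile = {char_class_sequence(v) for v in ["42", "1234", "alice", "bob99"]}

    def is_anomalous(value):
        return char_class_sequence(value) not in normal_profile

    print(is_anomalous("1 OR 1=1--"))   # True: class sequence unseen in training
    print(is_anomalous("777"))          # False: matches the digit-only profile

Because "777" and "42" map to the same abstract sequence, slight variations of normal values no longer trigger alarms, which is how the method reduces false positives.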

 

Wang, Yaohui; Wang, Dan; Zhao, Wenbing; Liu, Yuan, “Detecting SQL Vulnerability Attack Based on the Dynamic and Static Analysis Technology,” in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 3, no., pp. 604–607, 1–5 July 2015. doi:10.1109/COMPSAC.2015.277
Abstract: Targeting PHP programs, this paper proposes an SQL vulnerability detection method based on injection analysis technology. The method analyzes one-time injection in detail with respect to data flow and program behavior, combining dynamic and static analysis techniques. It then implements an SQL vulnerability determination algorithm based on lexical feature comparison. Finally, the paper combines alias analysis technology, a behavior model, and the lexical-feature-based SQL comparison to design and build a prototype system for SQL vulnerability detection. Experiments show that our system has a strong SQL vulnerability detection capability at very low time cost.
Keywords: Algorithm design and analysis; Analytical models; Arrays; Computer bugs; Feature extraction; Prototypes; Testing; SQL vulnerabilities; combination of static and dynamic technique; alias analysis; behavior model (ID#: 15-7296)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273432&isnumber=7273299
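
The lexical-feature-comparison step can be sketched as follows: tokenize the SQL statement built with benign input and with the actual input, and flag a change in the token-type sequence as injection. The toy tokenizer and keyword list are assumptions for illustration.

    import re

    TOKEN = re.compile(r"\s*(?:(\d+)|('(?:[^']|'')*')|(\w+)|(.))")
    KEYWORDS = {"SELECT", "FROM", "WHERE", "OR", "AND", "UNION"}

    def token_types(sql):
        """Reduce a SQL string to its sequence of lexical token types."""
        types = []
        for num, string, word, sym in TOKEN.findall(sql):
            if num:      types.append("NUM")
            elif string: types.append("STR")
            elif word:   types.append("KW" if word.upper() in KEYWORDS else "ID")
            else:        types.append(sym)
        return types

    template = "SELECT name FROM users WHERE id = {}"
    benign = token_types(template.format("1"))
    actual = token_types(template.format("1 OR 1=1"))
    print(benign == actual)   # False: the injected input changed the lexical shape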

 

Trancoso, P., “Getting Ready for Approximate Computing: Trading Parallelism for Accuracy for DSS Workloads,” in Parallel and Distributed Computing (ISPDC), 2015 14th International Symposium on, vol., no., pp. 3–3, June 29 2015–July 2 2015. doi:10.1109/ISPDC.2015.39
Abstract: Summary form only given. Processors have evolved dramatically in the last years and current multicore systems deliver very high performance. We are observing a rapid increase in the number of cores per processor, resulting in denser and more powerful systems. Nevertheless, this evolution will meet several challenges such as power consumption and reliability. It is expected that, in order to improve efficiency, future processors will contain units that are able to operate at a very low power consumption with the drawback of not guaranteeing the correctness of the produced results. This model is known as Approximate Computing. One interesting approach to exploit Approximate Computing is to make applications aware of the errors and react accordingly. For this work we focus on Decision Support System workloads and in particular the standard TPC-H set of queries. We first define a metric that quantifies the correctness of a query result — Quality of Result (QoR). Using this metric we analyse the impact of relaxing correctness in the DBMS on the accuracy of the query results. In order to improve the accuracy of the results we propose a dynamic adaptive technique that is implemented as a tool above the DBMS. Using heuristics, this tool spawns a number of replica query executions on different cores and combines the results so as to improve the accuracy. We evaluated our technique using real TPC-H queries and data on PostgreSQL with simple fault-injection to emulate the Approximate Computing model. The results show that for the selected scenarios, the proposed technique is able to increase the QoR with a cost in parallel resources smaller than any alternative static approach. The results are very encouraging since the QoR is within 7% of the best possible.
Keywords: SQL; database management systems; decision support systems; multiprocessing systems; parallel processing; query processing; DBMS; DSS workloads; PostgreSQL; QoR; approximate computing; decision support system; future processors; multicore systems; parallelism; power consumption; quality of result; standard TPC-H query set; static approach; Accuracy; Computational modeling; Computer architecture; Computer science; Decision support systems; Parallel processing; Program processors (ID#: 15-7297)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165123&isnumber=7165113
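
The replica-execution idea is simple enough to sketch: run the same aggregate query several times under an error model and combine the replica answers to raise the quality of the result. The error model, replica count, and median combination below are assumptions for illustration, not the paper's heuristics.

    import random, statistics

    TRUE_ANSWER = 10_000.0

    def faulty_query(p_fault=0.3):
        """Emulate an approximate core: occasionally return a perturbed result."""
        result = TRUE_ANSWER
        if random.random() < p_fault:
            result *= random.uniform(0.5, 1.5)   # silent data corruption
        return result

    def quality_of_result(answer):
        """QoR as relative closeness to the exact answer (1.0 = exact)."""
        return 1.0 - min(1.0, abs(answer - TRUE_ANSWER) / TRUE_ANSWER)

    single = faulty_query()
    combined = statistics.median(faulty_query() for _ in range(5))
    print("QoR single  :", round(quality_of_result(single), 3))
    print("QoR combined:", round(quality_of_result(combined), 3))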


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Security by Default 2015

 

 
SoS Logo

Security by Default

2015


One of the broad goals of the Science of Security project is to understand more fully the scientific underpinnings of cybersecurity. With this knowledge, systems built to follow these scientific principles could be presumed secure. In the meantime, security by default remains a topic of interest and some research. The articles cited here were presented in 2015.



Mustafa, M.A.; Ning Zhang; Kalogridis, G.; Zhong Fan, “MUSP: Multi-service, User Self-controllable and Privacy-preserving System for Smart Metering,” in Communications (ICC), 2015 IEEE International Conference on, vol., no., pp. 788–794, 8–12 June 2015. doi:10.1109/ICC.2015.7248418
Abstract: This paper proposes a Multi-service, User Self-controllable and Privacy-preserving (MUSP) system for secure smart metering. This system has a number of novel properties. Firstly, it can report users’ fine-grained consumption data to grid operators and suppliers securely and with user privacy preservation capability. These are achieved by using a homomorphic encryption technique in conjunction with selective data aggregation and distribution methods, so that only the aggregated data are delivered to the authorised data recipients on a need-to-know basis. Secondly, it allows suppliers to access their customers’ attributable meter readings regularly. To protect users’ privacy, suppliers, by default, can access new data only at a low frequency (e.g. once a month). However, MUSP allows users (1) to adjust (control) this frequency and (2) to release new data on demand (e.g. when a change of tariff occurs), thus putting users’ privacy preservation in their own hands. Thirdly, it is equipped with an easy and user-friendly supplier switching facility to allow users to switch providers easily and conveniently. Security analysis and performance evaluation demonstrate that the MUSP system can protect users’ privacy while providing these services in an efficient and scalable manner.
Keywords: cryptography; data acquisition; smart meters; MUSP system; distribution methods; homomorphic encryption technique; meter reading; multiservice user self controllable and privacy preserving system; need-to-know basis; secure smart metering; selective data aggregation; Cryptography; Data privacy; Meter reading; Protocols; Registers; Smart grids; Switches (ID#: 15-7253)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7248418&isnumber=7248285
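
The homomorphic-aggregation idea at the heart of MUSP can be sketched with a toy Paillier cryptosystem: ciphertexts of individual meter readings are multiplied together, which adds the underlying plaintexts, so the recipient learns only the aggregate. The tiny parameters below are for illustration only (real deployments use keys of 2048 bits or more), the paper's concrete scheme may differ, and the sketch assumes Python 3.9+ for math.lcm and modular inverse via pow.

    import math, random

    p, q = 293, 433                      # toy primes; wildly insecure
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                 # valid because we take g = n + 1

    def encrypt(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n * mu) % n

    readings = [17, 42, 8]               # fine-grained consumption values
    aggregate = 1
    for m in readings:
        aggregate = (aggregate * encrypt(m)) % n2   # homomorphic addition
    print(decrypt(aggregate))            # 67 = 17 + 42 + 8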

 

Trang Nguyen, “Using Unrestricted Mobile Sensors to Infer Tapped and Traced User Inputs,” in Information Technology: New Generations (ITNG), 2015 12th International Conference on, vol., no., pp. 151–156, 13–15 April 2015. doi:10.1109/ITNG.2015.29
Abstract: As of January 2014, 58 percent of Americans over the age of 18 own a smart phone. Of these smart phones, Android devices provide some security by requiring that third party application developers declare to users which components and features their applications will access. However, the real-time environmental sensors on devices that are supported by the Android API are exempt from this requirement. We evaluate the possibility of exploiting the freedom to discreetly use these sensors and expand on previous work by developing an application that can use the gyroscope and accelerometer to interpret what the user has written, even if trace input is used. Trace input is a feature available on Samsung’s default keyboard as well as in many popular third-party keyboard applications. The inclusion of trace input in a key logger application increases the amount of personal information that can be captured, since users may choose to use the time-saving trace-based input as opposed to the traditional tap-based input. In this work, we demonstrate that it is indeed possible to recover both tap and trace inputted text using only motion sensor data.
Keywords: accelerometers; application program interfaces; gyroscopes; invasive software; smart phones; Android API; Android device; accelerometer; key logger application; keyboard application; mobile security; motion sensor data; personal information; real-time environmental sensor; smart phone; tapped user input; traced user input; unrestricted mobile sensor; Accelerometers; Accuracy; Feature extraction; Gyroscopes; Keyboards; Sensors; Support vector machines; key logger; mobile malware; mobile security; motion sensors; spyware (ID#: 15-7254)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113464&isnumber=7113432

 

Basavaraj, V.; Noyes, D.; Fiondella, L.; Lownes, N., “Mitigating the Impact of Transportation Network Disruptions on Evacuation,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no., pp. 1–7, 14–16 April 2015. doi:10.1109/THS.2015.7225308
Abstract: Homeland Security Presidential Directive-8 establishes a framework for national preparedness, including a vision, specific scenarios of concern, as well as a task list and target capabilities to be developed. The Department of Homeland Security Science and Technology Directorate has fostered enhanced resilience through sponsorship of tools to simulate the impact of the various disaster scenarios identified. However, the default simulations implemented for each of these scenarios implicitly assume availability of public transportation networks for tasks such as evacuation and response, yet such availability cannot be guaranteed without explicit consideration of the triggering events on transportation networks. Transportation is especially important as a majority of the scenarios indicate that over half of the affected population will need to be evacuated or self-evacuate, and this population may be on the order of hundreds of thousands of people. Given the volume of traffic such scenarios may generate, the automobile transportation network will need to carry the majority of this flow of evacuees. Thus, methods to assess and mitigate the negative impact of transportation network disruptions on all aspects of disaster management will be essential to reduce communal risk. This paper examines the criticality of public transportation in the context of the planning scenarios, suggesting methods to explicitly incorporate the impact of transportation network disruption. Methods based on dynamic traffic assignment are explored and applied to a small hypothetical scenario inspired by the 2010 Times Square car bombing attempt.
Keywords: automobiles; emergency management; national security; planning; public transport; risk analysis; road traffic; Department of Homeland Security Science and Technology Directorate; Homeland Security Presidential Directive-8; automobile transportation network; communal risk reduction; default simulations; disaster management; dynamic traffic assignment; evacuation; public transportation network; transportation network disruption impact mitigation; triggering events; Automobiles; Hospitals; Planning; Sociology; Statistics; Vehicle dynamics (ID#: 15-7255)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225308&isnumber=7190491

 

Shirey, R.G.; Hopkinson, K.M.; Stewart, K.E.; Hodson, D.D.; Borghetti, B.J., “Analysis of Implementations to Secure Git for Use as an Encrypted Distributed Version Control System,” in System Sciences (HICSS), 2015 48th Hawaii International Conference on, vol., no., pp. 5310–5319, 5–8 Jan. 2015. doi:10.1109/HICSS.2015.625
Abstract: This paper analyzes two existing methods for securing Git repositories, Git-encrypt and Git-crypt, by comparing their performance relative to the default Git implementation. Securing a Git repository is necessary when the repository contains sensitive or restricted data. This allows the repository to be stored on any third-party cloud provider with assurance that even if the repository data is leaked, it will remain secure. The analysis of current Git encryption methods is done through a series of tests that examines the performance trade-offs made for added security. This performance is analyzed in terms of size, time, and functionality using three different Git repositories of varying size. The three experiments include initializing and populating a repository, compressing a repository through garbage collection, and modifying then committing files to the repository. The results show that Git maintains functionality with each of these two encryption implementations at the cost of time and repository size. The time increase is found to be a factor ranging from 14 to 38 times the original time. The size increase over multiple commits of edited files is found to increase linearly proportional to the working set of files.
Keywords: cryptography; Git repositories; Git-crypt; Git-encrypt; encrypted distributed version control system; restricted data; sensitive data; Computers; Control systems; Encryption; Kernel; Linux; Vectors; Cloud; Cryptography; Distributed; Git; Open Source; Repository; Secure; Software; Version Control (ID#: 15-7256)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070454&isnumber=7069647

 

Biryukov, A.; Pustogarov, I., “Bitcoin over Tor Isn't a Good Idea,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 122–134, 17–21 May 2015. doi:10.1109/SP.2015.15
Abstract: Bitcoin is a decentralized P2P digital currency in which coins are generated by a distributed set of miners and transactions are broadcast via a peer-to-peer network. While Bitcoin provides some level of anonymity (or rather pseudonymity) by encouraging the users to have any number of random-looking Bitcoin addresses, recent research shows that this level of anonymity is rather low. This encourages users to connect to the Bitcoin network through anonymizers like Tor and motivates development of default Tor functionality for popular mobile SPV clients. In this paper we show that combining Tor and Bitcoin creates a new attack vector. A low-resource attacker can gain full control of information flows between all users who chose to use Bitcoin over Tor. In particular the attacker can link together a user’s transactions regardless of pseudonyms used, control which Bitcoin blocks and transactions are relayed to the user, and can delay or discard the user’s transactions and blocks. Moreover, we show how an attacker can fingerprint users and then recognize them and learn their IP addresses when they decide to connect to the Bitcoin network directly.
Keywords: IP networks; peer-to-peer computing; security of data; Bitcoin network; Bitcoin; IP address; decentralized P2P digital currency; default Tor functionality; information flow; low-resource attacker; peer-to-peer network; popular mobile SPV client; pseudonymity; random-looking Bitcoin address; user transactions; Bandwidth; Databases; Online banking; Peer-to-peer computing; Relays; Servers; Anonymity; Bitcoin; P2P; Security; Tor; cryptocurrency (ID#: 15-7257)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163022&isnumber=7163005


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Theoretical Cryptography 2015

 

 
SoS Logo

Theoretical Cryptography

2015


Cryptography can only exist if there is a mathematical hardness so constructed as to maintain a desired functionality, even under malicious attempts to change or destroy the prescribed functionality. Hence, the foundations of theoretical cryptography are the paradigms, approaches, and techniques used to conceptualize, define, and provide solutions to natural security concerns mathematically, using probability-based definitions, various constructions, complexity-theoretic primitives, and proofs of security. Research into theoretical cryptography addresses the question of how to get from X to Z without allowing the adversary to go backwards from Z to X. The work presented here covers a range of approaches and methods, including block ciphers, random grids, obfuscation, and provable security. The work cited here was presented in 2015.



Martins, P.; Sousa, L.; Eynard, J.; Bajard, J.-C., “Programmable RNS Lattice-Based Parallel Cryptographic Decryption,” in Application-specific Systems, Architectures and Processors (ASAP), 2015 IEEE 26th International Conference on, vol., no., pp. 149–153, 27–29 July 2015. doi:10.1109/ASAP.2015.7245723
Abstract: Should quantum computing become viable, current public-key cryptographic schemes will no longer be valid. Since cryptosystems take many years to mature, research on post-quantum cryptography is now more important than ever. Herein, we focus on lattice-based cryptography as an alternative post-quantum cryptosystem, with the aim of improving its efficiency. We put together several theoretical developments so as to produce an efficient implementation that solves the Closest Vector Problem (CVP) on Goldreich-Goldwasser-Halevi (GGH)-like cryptosystems based on the Residue Number System (RNS). We were able to produce speed-ups of up to 5.9 and 11.2 on the GTX 780 Ti and i7 4770K devices, respectively, when compared to a single-core optimized implementation. Finally, we show that the proposed implementation is a competitive alternative to Rivest-Shamir-Adleman (RSA).
Keywords: lattice theory; public key cryptography; quantum computing; quantum cryptography; residue number systems; vectors; CVP; GGH-like cryptosystems; GTX 780 Ti devices; Goldreich-Goldwasser-Halevi-like cryptosystems; closest vector problem; i7 4770K devices; lattice-based cryptography; post-quantum cryptography; programmable RNS lattice-based parallel cryptographic decryption; public-key cryptographic schemes; residue number system; Graphics processing units; Lattices; Parallel processing; Public key cryptography; Random access memory; Zinc (ID#: 15-7227)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7245723&isnumber=7245687

 

Koziol, Fryderyk; Borowik, Grzegorz; Wozniak, Marcin; Chaczko, Zenon, “Toward Dynamic Signal Coding for Safe Communication Technology,” in Computer Aided System Engineering (APCASE), 2015 Asia-Pacific Conference on, vol., no., pp. 246–251, 14–16 July 2015. doi:10.1109/APCASE.2015.50
Abstract: This paper gives a theoretical background to the dynamic generation of primitive polynomials and their usage in many fields, including cryptography for mobile communication systems. The presented polynomials and their generation over a Galois field are discussed. Additionally, the basic properties and arithmetic methods over finite fields of characteristic 3 are presented. The main objective of this paper is to outline the mathematical background for the design and implementation of dynamic coding in mobile communication technology, widely applied in telephones, computers, and any device communicating over the TCP/IP protocol.
Keywords: Ciphers; Encryption; Generators; Polynomials; Registers; dynamic coding; encryption; stream ciphers (ID#: 15-7228)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287027&isnumber=7286975
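
The practical significance of primitive polynomials for stream ciphers can be shown in a few lines: an LFSR whose feedback polynomial is primitive over GF(2) cycles through all 2^n - 1 nonzero states before repeating, which is the maximal-period property keystream generators rely on. The sketch below checks this for the degree-4 polynomial x^4 + x^3 + 1, realized as taps on bits 3 and 0; the tap convention is our assumption.

    def lfsr_period(taps, n):
        """Return the state-cycle length of an n-bit Fibonacci LFSR."""
        state = start = 1                 # any nonzero start state
        period = 0
        while True:
            bit = 0
            for t in taps:                # feedback = XOR of tapped bits
                bit ^= (state >> t) & 1
            state = ((state << 1) | bit) & ((1 << n) - 1)
            period += 1
            if state == start:
                return period

    print(lfsr_period([3, 0], 4))         # 15 = 2^4 - 1: the polynomial is primitive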

 

Chaohui Du; Guoqiang Bai, “Towards Efficient Discrete Gaussian Sampling for Lattice-Based Cryptography,” in Field Programmable Logic and Applications (FPL), 2015 25th International Conference on, vol., no., pp. 1–6, 2–4 Sept. 2015. doi:10.1109/FPL.2015.7293949
Abstract: Modern lattice-based public key cryptosystems usually require sampling from discrete Gaussian distributions. In this paper, we propose a novel implementation of a cumulative distribution function (CDF) inversion sampler with high precision and large tail bound. It has a maximum statistical distance of 2^-90 to a theoretical discrete Gaussian distribution. Our CDF inversion sampler exploits piecewise comparison to save more than 90% of the random bits and to reduce the required large comparators to two small comparators. We speed up the sampler by using a small lookup table, and the hit rate of the lookup table is as high as 94%. With these optimizations, our sampler takes on average 9.44 random bits and 2.28 clock cycles to generate a sample. It consumes 1 block RAM and 17 slices on a Spartan-6 FPGA. With an additional 13 slices, our sampler is able to generate n samples within around 1.14n clock cycles.
Keywords: FPGA; Ring-LWE; discrete Gaussian sampler; inverse CDF; lattice-based cryptography; learning with errors
(ID#: 15-7229)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7293949&isnumber=7293744
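
CDF-inversion sampling, the technique the paper implements in hardware, is easy to prototype in software: precompute the cumulative distribution of the Gaussian magnitudes over a bounded tail, then map a uniform random number to a sample by table search. The parameters below (sigma, tail cut) are assumptions for illustration and are far looser than the paper's 2^-90 precision.

    import math, random, bisect

    sigma, tail = 3.2, 40                  # tail cut far beyond 10 sigma

    weights = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(tail + 1)]
    total = weights[0] + 2 * sum(weights[1:])
    cdf, acc = [], 0.0
    for x in range(tail + 1):              # CDF of |X|; the sign is drawn separately
        acc += (weights[x] if x == 0 else 2 * weights[x]) / total
        cdf.append(acc)
    cdf[-1] = 1.0                          # guard against floating-point shortfall

    def sample():
        mag = bisect.bisect_left(cdf, random.random())
        return mag if mag == 0 or random.random() < 0.5 else -mag

    print([sample() for _ in range(10)])

The hardware tricks the paper describes (piecewise comparison and a small lookup table) accelerate exactly this table search.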

 

Chen, Dajiang; Jiang, Shaoquan; Qin, Zhiguang, “Message Authentication Code over a Wiretap Channel,” in Information Theory (ISIT), 2015 IEEE International Symposium on, vol., no., pp. 2301–2305, 14–19 June 2015. doi:10.1109/ISIT.2015.7282866
Abstract: A Message Authentication Code (MAC) is a keyed function f_K such that when Alice, who shares the secret K with Bob, sends f_K(M) to the latter, Bob will be assured of the integrity and authenticity of M. Traditionally, it is assumed that the channel is noiseless. Unfortunately, Maurer showed that in this case an attacker can succeed with probability 2^(-H(K)/(l+1)) after authenticating l messages, where H(K) is the entropy of K. In this paper, we consider the setting where the channel is noisy. Specifically, Alice and Bob are connected by a discrete memoryless channel (DMC) W1 and a noiseless but insecure channel. In addition, there is a DMC W2 between Alice and attacker Oscar. We regard the noisy channel as an expensive resource and define the authentication rate ρ_auth as the ratio of message length to the number n of channel W1 uses. The security of this model depends on the channel coding for f_K(M). A natural coding scheme is to use the secrecy capacity achieving code of Csiszár and Körner. Intuitively, this is also the optimal strategy. However, we propose a coding scheme that achieves a higher ρ_auth. Our crucial point is that under a secrecy capacity code, Bob can fully recover f_K(M), while in our model this is not necessary as we only need to detect the existence of a modification. How to detect the malicious modification without recovering f_K(M) is the main contribution of this work. We achieve this through random coding techniques.
Keywords: Authentication; Channel coding; Computational modeling; Cryptography; Message authentication; Noise measurement; information theoretical security; wiretap channel (ID#: 15-7230)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282866&isnumber=7282397

 

Shu-Di Bao; Yang Lu; Yan-Kai Yang; Chun-Yan Wang; Meng Chen; Guang-Zhong Yang, “A Data Partitioning and Scrambling Method to Secure Cloud Storage with Healthcare Applications,” in Communications (ICC), 2015 IEEE International Conference on, vol., no., pp. 478–482, 8–12 June 2015. doi:10.1109/ICC.2015.7248367
Abstract: With increasing use of cloud storage for healthcare applications, potential security risks and the need for enhanced security solutions are becoming a pressing issue for clinical adoption. In this paper, a data partitioning and scrambling method at the application layer is proposed for healthcare data, where a tiny part of the original data is used to scramble the remaining data without any cryptographic key; the former is kept locally while the latter, under extra protection, is sent to cloud platforms. Theoretical and experimental analyses have been carried out to demonstrate the security performance of the proposed method, which can be easily deployed in any existing communication system as an add-on for security.
Keywords: cloud computing; cryptography; health care; security of data; cloud storage; data partitioning and scrambling method; healthcare; Cloud computing; Databases; Encryption; Medical services; Wireless communication; Wireless sensor networks; Healthcare; cloud; data partitioning; data scrambling (ID#: 15-7231)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7248367&isnumber=7248285

 

Liwei Zhang; Ding, A.A.; Yunsi Fei; Pei Luo, “Efficient 2nd-Order Power Analysis on Masked Devices Utilizing Multiple Leakage,” in Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, vol., no., pp. 118–123, 5–7 May 2015. doi:10.1109/HST.2015.7140249
Abstract: A common algorithm-level effective countermeasure against side-channel attacks is random masking. However, a second-order attack can break first-order masked devices by utilizing power values at two time points. Normally 2nd-order attacks require the exact temporal locations of the two leakage points. Without profiling, the attacker may only have an educated guessing window of size n_w for each potential leakage point. An attack with exhaustive search over combinations of the two leakage points will lead to computational complexity of O(n_w^2). Waddle and Wagner introduced an FFT-based attack with a complexity of O(n_w log(n_w)) in CHES 2004 [1]. Recently Belgarric et al. proposed five preprocessing techniques using time-frequency conversion tools based on FFT in [2]. We propose a novel efficient 2nd-order power analysis attack, which pre-processes power traces with FFT to find multiple candidate leakage point pairs and then combines the attacks at multiple candidate pairs into one single attack. We derive the theoretical conditions for two different combination methods to be successful. The resulting attacks retain computational complexity of O(n_w log(n_w)) and are applied on two data sets, one set of power measurements of an FPGA implementation of a masked AES scheme and the other set of measurements from DPA Contest V4 for a software implementation of masked AES. Our attacks improve over the previous FFT-based attacks, particularly when the window size n_w is large. Each of the two attacks works better respectively on different data sets, confirming the theoretical conditions.
Keywords: computational complexity; cryptography; fast Fourier transforms; AES scheme; DPA contest V4; FFT-based attack; FPGA implementation; O(n_w^2); computational complexity; exhaustive search; first-order masked devices; novel efficient 2nd-order power analysis attack; random masking; second-order attack; side-channel attacks; time-frequency conversion tools; Computational complexity; Correlation; Hardware; Noise; Power measurement; Security; Software; Maximum attack; majority vote attack; statistical model (ID#: 15-7232)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140249&isnumber=7140225

 

Tena-Sánchez, E.; Acosta, A.J., “DPA Vulnerability Analysis on Trivium Stream Cipher Using an Optimized Power Model,” in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on, vol., no., pp. 1846–1849, 24–27 May 2015. doi:10.1109/ISCAS.2015.7169016
Abstract: In this paper, a Differential Power Analysis (DPA) vulnerability analysis of the Trivium stream cipher is presented. Compared to the two previously presented DPA attacks on Trivium, we retrieve the whole key without making any hypothesis during the attack. An optimized power model is proposed, allowing power trace acquisition without any algorithmic-noise removal and thus simplifying the attack strategy considerably. The theoretical vulnerability analysis is presented and then checked by developing a simulation-based DPA attack on a standard CMOS Trivium implementation in a 90nm TSMC technology. The results show that our attack is successful for random keys, saving computer resources and time with respect to previously reported attacks. The attack is independent of the technology used for the implementation of Trivium and can be used to measure the security of novel Trivium implementations.
Keywords: CMOS integrated circuits; cryptography; 90nm TSMC technology; DPA vulnerability analysis; Trivium stream cipher; differential power analysis vulnerability analysis; optimized power model; power trace acquisition; random keys; simulation-based DPA attack; standard CMOS Trivium implementation; Algorithm design and analysis; Ciphers; Logic gates; Mathematical model; Power demand; Power measurement (ID#: 15-7233)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169016&isnumber=7168553

 

Hassan, Firas; Limbird, Michael; Ullagaddi, Vishwanath; Devabhaktuni, Vijay, “A New Image Stream Encryption Technique,” in Circuits and Systems (MWSCAS), 2015 IEEE 58th International Midwest Symposium on, vol., no., pp. 1–4, 2–5 Aug. 2015. doi:10.1109/MWSCAS.2015.7282064
Abstract: This paper introduces a self-synchronous stream cipher with a considerable key space. In the proposed technique, the parity bit plane of a public image is used to encrypt the message. Simulation results showed that the histogram of the ciphered image is uniform, representing an almost equivalent probability of occurrence of each intensity level. The entropy of the ciphered image is almost equal to the theoretical value of 8, indicating that the information leakage in the proposed encryption process is negligible. The correlation coefficients of the ciphered image prove that there exists almost zero correlation between its pixels. The simulation results also showed that the proposed technique is highly sensitive to the four different parts of its password. Also, the plain-text attack analysis gave NPCR and UACI values that are close to ideal. The proposed technique is computationally simpler than other image encryption techniques in the literature. Finally, a high PSMSE of 83 dB in the decoded image was achieved at a ciphered bit error rate of 10^-4. Therefore, the proposed stream cipher can be efficiently used in real-time multimedia and wireless applications.
Keywords: Adaptive optics; Correlation coefficient; Cryptography; Histograms; Optical imaging; Optical sensors; Real-time systems (ID#: 15-7234)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282064&isnumber=7281994
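
The keying idea, using the parity bit plane of a public image as keystream material, can be illustrated with a toy XOR cipher. A real self-synchronous stream cipher also feeds ciphertext back into its state; this sketch, with made-up stand-in data, only shows the parity-plane extraction and the XOR step.

    import random

    random.seed(42)
    public_image = [random.randrange(256) for _ in range(64)]   # stand-in pixels
    keystream = [pixel & 1 for pixel in public_image]           # parity bit plane

    def xor_with_parity_bits(message, ks):
        """XOR each message bit with one parity bit, cycling over the plane."""
        out = bytearray()
        for i, byte in enumerate(message):
            k = 0
            for b in range(8):
                k = (k << 1) | ks[(8 * i + b) % len(ks)]
            out.append(byte ^ k)
        return bytes(out)

    ct = xor_with_parity_bits(b"attack at dawn", keystream)
    print(xor_with_parity_bits(ct, keystream))   # b'attack at dawn'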

 

Hui Peng; Xiaoying Zhang; Hong Chen; Yao Wu; Juru Zeng; Deying Li, “Enable Privacy Preservation for k-NN Query in Two-Tiered Wireless Sensor Networks,” in Communications (ICC), 2015 IEEE International Conference on, vol., no., pp. 6289–6294, 8–12 June 2015. doi:10.1109/ICC.2015.7249326
Abstract: Wireless sensor networks are an important part of the Internet of Things. Preservation of privacy and integrity in wireless sensor networks is extremely urgent and challenging. To address this problem, we propose PPKN, an efficient and privacy-preserving k-NN query protocol in two-tiered wireless sensor networks. Our proposal prevents adversaries from gaining sensitive information about both queries issued by users and data collected by sensor nodes, while allowing the sink to verify whether results are valid. It offers confidentiality of queries and data by constructing a special code, and provides integrity verification through the correlation among data. Moreover, by the implementation of the KNQ query framework, PPKN also achieves high efficiency in query response and energy consumption. Finally, theoretical analysis and experiment results show the high performance of PPKN in terms of privacy preservation and query efficiency.
Keywords: data integrity; data privacy; protocols; wireless sensor networks; Internet of Things; KNQ query framework; PPKN; data integrity; energy consumption; privacy-preserving k-NN query protocol; two-tiered wireless sensor network; Cryptography; Data privacy; Energy consumption; Privacy; Protocols; Storage management; Wireless sensor networks (ID#: 15-7235)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249326&isnumber=7248285

 

Zhicong Huang; Ayday, E.; Fellay, J.; Hubaux, J.-P.; Juels, A., “GenoGuard: Protecting Genomic Data against Brute-Force Attacks,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 447–462, 17–21 May 2015. doi:10.1109/SP.2015.34
Abstract: Secure storage of genomic data is of great and increasing importance. The scientific community’s improving ability to interpret individuals’ genetic materials and the growing size of genetic database populations have been aggravating the potential consequences of data breaches. The prevalent use of passwords to generate encryption keys thus poses an especially serious problem when applied to genetic data. Weak passwords can jeopardize genetic data in the short term, but given the multi-decade lifespan of genetic data, even the use of strong passwords with conventional encryption can lead to compromise. We present a tool, called GenoGuard, for providing strong protection for genomic data both today and in the long term. GenoGuard incorporates a new theoretical framework for encryption called honey encryption (HE): it can provide information-theoretic confidentiality guarantees for encrypted data. Previously proposed HE schemes, however, can be applied only to messages from a very restricted set of probability distributions. Therefore, GenoGuard addresses the open problem of applying HE techniques to the highly non-uniform probability distributions that characterize sequences of genetic data. In GenoGuard, a potential adversary can attempt exhaustively to guess keys or passwords and decrypt via a brute-force attack. We prove that decryption under any key will yield a plausible genome sequence, and that GenoGuard offers an information-theoretic security guarantee against message-recovery attacks. We also explore attacks that use side information. Finally, we present an efficient and parallelized software implementation of GenoGuard.
Keywords: biology computing; cryptography; data privacy; genetics; statistical distributions; storage management; GenoGuard; HE; brute-force attacks; data breaches; encryption keys; genetic database populations; genetic materials; genomic data protection; honey encryption; information-theoretic confidentiality; parallelized software implementation; passwords; probability distributions; storage security; Bioinformatics; Encoding; Encryption; Genomics; brute-force attack; distribution-transforming encoder; genomic privacy (ID#: 15-7236)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163041&isnumber=7163005

 

dos Santos, C.E.; Kijak, E.; Gravier, G.; Schwartz, W.R., “Learning to Hash Faces Using Large Feature Vectors,” in Content-Based Multimedia Indexing (CBMI), 2015 13th International Workshop on, vol., no., pp. 1–6, 10–12 June 2015. doi:10.1109/CBMI.2015.7153611
Abstract: Face recognition has been largely studied in past years. However, most of the related work focuses on increasing accuracy and/or speed to test a single probe-subject pair. In this work, we present a novel method inspired by the success of locality-sensitive hashing (LSH) applied to large general purpose datasets and by the robustness provided by partial least squares (PLS) analysis when applied to large sets of feature vectors for face recognition. The result is a robust hashing method compatible with feature combination for fast computation of a short list of candidates in a large gallery of subjects. We provide theoretical support and practical principles for the proposed method that may be reused in further development of hash functions applied to face galleries. The proposed method is evaluated on the FERET and FRGCv1 datasets and compared to other methods in the literature. Experimental results show that the proposed approach achieves a speedup of 16 times compared to scanning all subjects in the face gallery.
Keywords: cryptography; face recognition; least squares approximations; LSH; PLS analysis; face gallery; feature vector; hash function; locality sensing hashing; partial least squares analysis; robust hashing method; Accuracy; Face recognition; Feature extraction; Image retrieval; Probes; Robustness; Training (ID#: 15-7237)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153611&isnumber=7153597
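
The hashing side of the method can be approximated by classic random-projection LSH: each hash bit is the sign of a dot product with a random hyperplane, so nearby feature vectors agree in most bits. The dimensions and data below are made up, and the paper's PLS-based construction is more sophisticated than this sketch.

    import random

    random.seed(0)
    DIM, BITS = 16, 8
    planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

    def lsh(vec):
        """One hash bit per random hyperplane: the sign of the projection."""
        return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                     for plane in planes)

    probe = [random.gauss(0, 1) for _ in range(DIM)]
    similar = [v + random.gauss(0, 0.05) for v in probe]    # near-duplicate face
    other = [random.gauss(0, 1) for _ in range(DIM)]

    agree_sim = sum(a == b for a, b in zip(lsh(probe), lsh(similar)))
    agree_oth = sum(a == b for a, b in zip(lsh(probe), lsh(other)))
    print(agree_sim, agree_oth)   # the near-duplicate typically agrees in more bits

Candidate lists are then built from gallery items falling in the same (or nearby) hash buckets, avoiding a scan over all subjects.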

 

Weize Yu; Uzun, O.A.; Köse, S., “Leveraging On-Chip Voltage Regulators as a Countermeasure Against Side-Channel Attacks,” in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, vol., no., pp. 1–6, 8–12 June 2015. doi:10.1145/2744769.2744866
Abstract: Side-channel attacks have become a significant threat to the integrated circuit security. Circuit level techniques are proposed in this paper as a countermeasure against side-channel attacks. A distributed on-chip power delivery system consisting of multi-level switched capacitor (SC) voltage converters is proposed where the individual interleaved stages are turned on and turned off either based on the workload information or pseudo-randomly to scramble the power consumption profile. In the case that the changes in the workload demand do not trigger the power delivery system to turn on or off individual stages, the active stages are reshuffled with so called converter-reshuffling to insert random spikes in the power consumption profile. An entropy based metric is developed to evaluate the security-performance of the proposed converter-reshuffling technique as compared to three other existing on-chip power delivery schemes. The increase in the power trace entropy with CoRe scheme is also demonstrated with simulation results to further verify the theoretical analysis.
Keywords: convertors; cryptography; integrated circuit interconnections; power consumption; switched capacitor networks; voltage regulators; circuit level techniques; converter-reshuffling technique; countermeasure against side-channel attacks; integrated circuit security; leveraging on-chip voltage regulators; multi-level switched capacitor voltage converters; on-chip power delivery system; power consumption; power trace entropy; Entropy; Monitoring; Power demand; Regulators; Security; System-on-chip; Voltage control; Side-channel attacks; on-chip voltage regulation; power efficiency (ID#: 15-7238)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167300&isnumber=7167177

 

Sachdeva, E.; Mishra, S.P., “Improving Method of Correcting AES Keys Obtained from Coldboot Attack,” in Electrical, Computer and Communication Technologies (ICECCT), 2015 IEEE International Conference on, vol., no., pp. 1–8, 5–7 March 2015. doi:10.1109/ICECCT.2015.7226024
Abstract: Extraction of cryptographic keys, passwords and other sensitive information from memory has been made possible by the data remanence property of DRAM. According to it, DRAM can retain its data for several seconds to minutes without power [1]. The cold boot attack proposed in [1] tries to exploit the memory remanence property for extracting probable cryptographic secrets from DRAM. However, the extracted information is degraded and needs to be corrected before being used for decrypting encrypted files. Various methods for correcting this distorted data for different cryptosystems have been proposed [1, 6]. However, little has been reported in the literature regarding the efficacy of these methods. This paper contains results and observations of extensive experiments carried out for correcting AES keys by varying the timing of cold-rebooting the PC, which varies the percentage of distorted data. These observations suggest that the proposed methods are theoretical in nature and not practically effective (for the cold boot attack), as they could only correct keys corresponding to up to 2% of erroneous round key schedules of AES-128 and AES-256. In this paper, an improved algorithm is proposed for correcting up to 15% of errors in cold-boot-attack-generated as well as randomly generated distorted round key schedules. The proposed algorithm has been successfully implemented to mount volumes encrypted by the popular disk encryption system ‘TrueCrypt’.
Keywords: DRAM chips; cryptography; AES keys; AES-128; AES-256; DRAM; TrueCrypt; coldboot attack; cryptographic keys; data remanence property; disk encryption system; encrypted files; passwords; sensitive information; Capacitors; Cryptography; Distortion; Lead; Random access memory; AES; Cold boot attack; Data remanence; TrueCrypt (ID#: 15-7239)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7226024&isnumber=7225915

 

Jinglong Zuo; Delong Cui; Yunfeng Gong; Mei Liu, “A Novel Image Encryption Algorithm Based on Lifting-Based Wavelet Transform,” in Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, vol., no., pp. 33–36, 24–26 April 2015. doi:10.1109/ICISCE.2015.16
Abstract: In order to trade off between the computational effects and computational cost of present image encryption algorithms, a novel image encryption algorithm based on the lifting wavelet transform is proposed in this paper. The image encryption process includes three steps: first, the original image is divided into blocks, which are transformed by the lifting-based wavelet; second, the wavelet-domain coefficients are encrypted by a random mask generated from the user key; and finally, Arnold scrambling is employed to encrypt the coefficients. The security of the proposed scheme depends on the levels of the wavelet transform, the user key, and the number of Arnold scrambling iterations. Theoretical analysis and experimental results demonstrate that the algorithm is favourable.
Keywords: cryptography; image processing; random processes; wavelet transforms; Arnold scrambling; computational cost; computational effects; image encryption algorithm; lifting-based wavelet transform; random mask; user key; wavelet domain coefficients; Correlation; Encryption; Entropy; Filter banks; Wavelet transforms; block-based transformation; fractional Fourier transform; image encryption; information security; random phase mask (ID#: 15-7240)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120556&isnumber=7120439
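
The Arnold (cat map) scrambling step mentioned in the abstract is a position permutation of an N x N block: each coordinate pair (x, y) is mapped to (x + y, x + 2y) mod N for a key-dependent number of rounds, and the inverse iterations undo it. A minimal sketch:

    N = 4
    block = [[10 * y + x for x in range(N)] for y in range(N)]

    def arnold(b, rounds):
        for _ in range(rounds):
            nb = [[0] * N for _ in range(N)]
            for y in range(N):
                for x in range(N):
                    nb[(x + 2 * y) % N][(x + y) % N] = b[y][x]
            b = nb
        return b

    def arnold_inverse(b, rounds):
        for _ in range(rounds):
            nb = [[0] * N for _ in range(N)]
            for y in range(N):
                for x in range(N):
                    nb[y][x] = b[(x + 2 * y) % N][(x + y) % N]
            b = nb
        return b

    scrambled = arnold(block, rounds=3)
    print(arnold_inverse(scrambled, rounds=3) == block)   # True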

 

Fei Chen; Tao Xiang; Yuanyuan Yang; Cong Wang; Shengyu Zhang, “Secure Cloud Storage Hits Distributed String Equality Checking: More Efficient, Conceptually Simpler, and Provably Secure,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 2389–2397, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218627
Abstract: Cloud storage has gained a remarkable success in recent years with an increasing number of consumers and enterprises outsourcing their data to the cloud. To assure the availability and integrity of the outsourced data, several protocols have been proposed to audit cloud storage. Despite the formally guaranteed security, the constructions employ heavy cryptographic operations as well as advanced concepts (e.g., bilinear maps over elliptic curves and digital signatures), and thus are too inefficient to admit wide applicability in practice. In this paper, we design a novel secure cloud storage protocol, which is conceptually and technically simpler and significantly more efficient than previous constructions. Inspired by a classic string equality checking protocol in distributed computing, our protocol uses only basic integer arithmetic (without advanced techniques and concepts). As simple as the protocol is, it supports both randomized and deterministic auditing to fit different applications. We further extend the proposed protocol to support data dynamics, i.e., adding, deleting and modifying data, using a novel technique. As a further contribution, we find a systematic way to design secure cloud storage protocols based on verifiable computation protocols. Theoretical and experimental analyses validate the efficacy of our protocol.
Keywords: cloud computing; cryptography; data integrity; digital signatures; distributed processing; protocols; cloud storage protocol; cloud storage security; computation protocol; cryptographic operation; digital signature; distributed computing; string equality checking protocol; Cloud computing; Computational modeling; Computers; Conferences; Protocols; Secure storage; Security (ID#: 15-7241)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218627&isnumber=7218353
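
The classic randomized string-equality check that inspired the protocol is worth sketching: the verifier keeps a short polynomial fingerprint of the outsourced string, evaluated at a secret random point, and later compares it against the fingerprint the server computes. Two different strings collide at a random evaluation point only with negligible probability. The parameters below are toy assumptions, not the paper's protocol.

    import random

    P = (1 << 61) - 1                      # a Mersenne prime as the field modulus

    def fingerprint(data, r):
        """Evaluate the data's polynomial at the random point r, mod P."""
        acc = 0
        for byte in data:
            acc = (acc * r + byte) % P
        return acc

    outsourced = b"patient records v1"
    r = random.randrange(2, P)             # secret challenge kept by the client
    expected = fingerprint(outsourced, r)

    # ... later, audit: the server evaluates its copy at r
    server_copy = b"patient records v1"
    print(fingerprint(server_copy, r) == expected)   # True iff copies match (w.h.p.)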

 

Young-Jin Kang; Hyun Ho Kim; Ndibanje Bruce; Younggoo Park; HoonJae Lee, “Correlation Power Analysis Attack on the Ping Pong-128 Key Stream Generator,” in Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, vol., no., pp. 506–509, 24–27 March 2015. doi:10.1109/AINA.2015.228
Abstract: Power analysis attacks on cryptographic hardware devices aim to study the power consumed while operations are performed using secret keys. Power analysis is a form of Side Channel Attack (SCA) which allows an attacker to recover the key of an encryption algorithm using Simple Power Analysis (SPA), Differential Power Analysis (DPA) or Correlation Power Analysis (CPA). Theoretical weaknesses in algorithms or information leaked from the physical implementation of a cryptosystem are usually used to break the system. This paper demonstrates, through a correlation power analysis attack, the weakness of the PingPong-128 key stream generator, which increases the non-linearity of its output key stream by adding a mutual clock-control structure to a previous summation generator.
Keywords: cryptography; CPA; DPA; Ping Pong-128 key stream generator; SCA; SPA; correlation power analysis; correlation power analysis attack; cryptographic hardware device; cryptosystem; differential power analysis; key encryption; mutual clock control structure; power consumption; secrets keys; side channel attack; simple power analysis; Algorithm design and analysis; Correlation; Correlation coefficient; Cryptography; Generators; Mathematical model; Power measurement; Correlation Power Analysis; Crypto; PingPong-128 Key Stream Generator; Secrete intermediate Key; Side Channel Attacks (ID#: 15-7242)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098013&isnumber=7097928
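
The mechanics of a correlation power analysis can be shown on synthetic data: guess a key byte, predict the Hamming weight of an intermediate value for each trace, and correlate the prediction with the measured power; the correct guess yields the peak correlation. The XOR-only leakage model and synthetic traces below are simplifying assumptions; real attacks target, for example, S-box outputs and use measured traces.

    import random, statistics

    random.seed(1)
    SECRET = 0x3C
    hw = lambda v: bin(v).count("1")
    inputs = [random.randrange(256) for _ in range(500)]
    # synthetic traces: power ~ Hamming weight of (input XOR key) plus noise
    traces = [hw(x ^ SECRET) + random.gauss(0, 1.0) for x in inputs]

    def corr(xs, ys):
        """Pearson correlation coefficient."""
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        den = (sum((a - mx) ** 2 for a in xs)
               * sum((b - my) ** 2 for b in ys)) ** 0.5
        return num / den

    best = max(range(256), key=lambda k: corr([hw(x ^ k) for x in inputs], traces))
    print(hex(best))   # with high probability recovers 0x3c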

 

Jianhua Yang; Yongzhong Zhang, “RTT-Based Random Walk Approach to Detect Stepping-Stone Intrusion,” in Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, vol., no., pp. 558–563, 24–27 March 2015. doi:10.1109/AINA.2015.236
Abstract: Detecting stepping-stone intrusion, and especially resisting intruders’ evasion, has been widely and deeply studied and explored since 1995. In this paper, we propose a method that counts matched TCP/IP packets to detect stepping-stone intrusion. Our study shows that this approach not only detects stepping-stone intrusion with improved performance, but also resists intruders’ evasion techniques such as time-jittering and chaff-perturbation. We model stepping-stone intrusion detection as a one-dimensional random-walk process. Theoretical analysis shows that, in order to obtain the same false positive rate, this approach needs to monitor fewer packets than Blum’s approach, which is considered the state-of-the-art method. The simulation results show that this approach can resist intruders’ chaff-perturbation of up to 50%.
Keywords: IP networks; computer network security; security of data; RTT based random walk; chaff perturbation; counting matched TCP/IP packet; intruders evasion; one dimensional random walk process; stepping stone intrusion detection; time jittering; Computers; Cryptography; IP networks; Intrusion detection; Monitoring; Resists; intrusion detection; packet matching; random-walk; round-trip time; stepping-stone intrusion; time-jittering (ID#: 15-7243)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098021&isnumber=7097928
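
The random-walk model itself is compact: move the walk up for each matched send/echo packet pair and down for an unmatched one; a relayed (stepping-stone) connection drifts past the upper threshold while independent traffic drifts down. The match probabilities and thresholds below are assumptions for illustration, not the paper's parameters.

    import random

    def random_walk_detect(p_match, threshold=20, max_packets=2000):
        pos = 0
        for n in range(1, max_packets + 1):
            pos += 1 if random.random() < p_match else -1
            if pos >= threshold:
                return "stepping-stone", n    # upward drift: relayed connection
            if pos <= -threshold:
                return "normal", n            # downward drift: independent traffic
        return "undecided", max_packets

    random.seed(7)
    print(random_walk_detect(p_match=0.75))   # relayed: most packets match
    print(random_walk_detect(p_match=0.35))   # independent connection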

 

Capotă, M.; Pouwelse, J.; Epema, D., “Decentralized Credit Mining in P2P Systems,” in IFIP Networking Conference (IFIP Networking), 2015, pp. 1–9, 20–22 May 2015. doi:10.1109/IFIPNetworking.2015.7145334
Abstract: Accounting mechanisms based on credit are used in peer-to-peer systems to track the contribution of peers to the community for the purpose of deterring freeriding and rewarding good behavior. Most often, peers earn credit for uploading files, but other activities might be rewarded in the future as well, such as making useful comments or reporting spam. Credit earned can be used for accessing new content, or for receiving preferential treatment in case of network congestion. We define credit mining as the activity performed by peers for the purpose of earning credit. In this paper, we design, implement, and evaluate a system for decentralized credit mining that maximizes the contribution of idle peers to the community by automatically uploading popular files. Building on previous theoretical insights into the economics of communities, we select autonomous algorithms for bandwidth investment as the basis of our credit mining system. Additionally, we describe our experience with important challenges arising from Internet deployment, that are frequently neglected in emulation, including duplicate content avoidance, spam prevention, and the cost of keeping peer information updated. Furthermore, we implement an archival mode of operation, which prevents the disappearance of old content from the community. We show the feasibility and usefulness of our credit mining system through measurements from our implementation on top of Tribler, an Internet-deployed peer-to-peer system.
Keywords: bandwidth allocation; data mining; investment; peer-to-peer computing; unsolicited e-mail; Internet deployment; Internet-deployed peer-to-peer system; P2P systems; Tribler; accounting mechanisms; archival mode; autonomous algorithms; bandwidth investment; decentralized credit mining; duplicate content avoidance; network congestion; peer-to-peer systems; popular file uploading; preferential treatment; spam prevention; Bandwidth; Collaboration; Communities; Cryptography; Feeds; Internet; Peer-to-peer computing (ID#: 15-7244)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7145334&isnumber=7145285

 

Jingjing Huang; Ting Jiang, “Dynamic Secret Key Generation Exploiting Ultra-Wideband Wireless Channel Characteristics,” in Wireless Communications and Networking Conference (WCNC), 2015 IEEE, vol., no., pp. 1701–1706, 9–12 March 2015. doi:10.1109/WCNC.2015.7127724
Abstract: To guarantee secure wireless communication between any two transceivers, a shared secret key is a requirement. Communication parties need to encrypt the message with the key to impede an adversary’s eavesdropping. By exploiting Ultra-wideband (UWB) wireless fading channels as a source of common randomness, two transceivers with correlated observations can generate secret keys with information-theoretical security. The state of the art in existing works, however, has neglected to consider efficient channel probing. In this paper, we define a channel probing factor to adaptively change the probing interval for obtaining more correlated measurements at the endpoints of a wireless communicating link. We use the multipath relative delay of the UWB channel to generate secret keys. Simulation results show that this dynamic secret key generation mechanism extracting multipath relative delay can achieve good performance in terms of secret key generation rate, key randomness and, especially, key-mismatch probability, compared with the received signal strength (RSS)-based method and our prior approach. Furthermore, a security analysis is also provided to validate the feasibility of the scheme.
Keywords: cryptography; fading channels; probability; radio transceivers; telecommunication security; ultra wideband communication; RSS-based method; UWB wireless fading channels; adversary eavesdropping; channel probing factor; dynamic secret key generation mechanism; information-theoretical security; key randomness; key-mismatch probability; message encryption; multipath relative delay; received signal strength-based method; secret key generation rate; shared secret key; transceivers; ultrawideband wireless channel characteristics; wireless communicating link; wireless communication security; Coherence; Delays; Signal to noise ratio; Training; Transceivers; Wireless networks; UWB; dynamic channel probing; reciprocity; secret key generation (ID#: 15-7245)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7127724&isnumber=7127309
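
A minimal Python sketch of the bit-extraction step that such channel-reciprocity schemes share: two endpoints quantize correlated channel observations into bits and compare the mismatch rate. The Gaussian measurement model and single-bit median quantizer are illustrative assumptions, not the authors’ multipath relative-delay extractor.

    import numpy as np

    rng = np.random.default_rng(7)

    # Correlated observations at the two endpoints (toy stand-in for UWB
    # multipath relative-delay measurements); independent noise at each end.
    channel = rng.normal(0.0, 1.0, 128)
    alice = channel + rng.normal(0.0, 0.05, 128)
    bob = channel + rng.normal(0.0, 0.05, 128)

    def quantize(samples):
        # 1-bit quantizer around the median, a common baseline extractor
        return (samples > np.median(samples)).astype(int)

    key_a, key_b = quantize(alice), quantize(bob)
    print("key-mismatch probability:", float(np.mean(key_a != key_b)))

An eavesdropper observing an independent channel realization obtains bits uncorrelated with the extracted key, which is what makes channel randomness usable as a key source.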

 

Abozaid, G.; El-Mahdy, A., “Design Space Exploration for a Co-designed Accelerator Supporting Homomorphic Encryption,” in Control Systems and Computer Science (CSCS), 2015 20th International Conference on, vol., no., pp. 431–438, 27–29 May 2015. doi:10.1109/CSCS.2015.14
Abstract: Although fully homomorphic encryption is a sound theoretical approach to cloud security, it is not yet practical due to the tremendous computation required to multiply very large, million-bit operands. In this paper, we explore the design space of a software/hardware (SW/HW) co-designed accelerator that integrates fast software multiplication algorithms with a configurable hardware multiplier. The multiplier is based on a modified serial-parallel multiplier design, of which School-Book is a special case. The paper conducts an analytic performance study, exploring key design space parameters and comparing with other design approaches in the literature. Based on an actual FPGA implementation, we estimate a power consumption of 10 Watts and an area-time-power product of 20.20 billion transistor-sec-Watt, potentially allowing for promising scalability.
Keywords: cloud computing; cryptography; field programmable gate arrays; hardware-software codesign; logic design; multiplying circuits; reconfigurable architectures; FPGA; SW-HW codesign; School-Book; cloud security; codesigned accelerator supporting homomorphic encryption; configurable hardware multiplier; design space exploration; design space parameter; fast software multiplication algorithm; modified serial-parallel multiplier design; power consumption; software-hardware codesign; Algorithm design and analysis; Hardware; Mathematical model; Parallel processing; Software; Software algorithms; Space exploration; FHE; SW/HW; co-design; large numbers; low-power; multiplication (ID#: 15-7246)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168465&isnumber=7168393
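
For context, the School-Book special case mentioned above is the quadratic-time, limb-by-limb multiplication method. A pure-Python sketch over 32-bit limbs (a hardware multiplier would process much wider digits in parallel):

    def schoolbook_mul(a, b, base=2**32):
        # Multiply two magnitude arrays of base-2**32 limbs (least significant first).
        out = [0] * (len(a) + len(b))
        for i, ai in enumerate(a):
            carry = 0
            for j, bj in enumerate(b):
                t = out[i + j] + ai * bj + carry
                out[i + j] = t % base
                carry = t // base
            out[i + len(b)] += carry
        return out

    def to_limbs(n, base=2**32):
        limbs = []
        while n:
            limbs.append(n % base)
            n //= base
        return limbs or [0]

    def from_limbs(limbs, base=2**32):
        return sum(d * base**i for i, d in enumerate(limbs))

    x, y = 2**1000 + 12345, 2**999 + 67890
    assert from_limbs(schoolbook_mul(to_limbs(x), to_limbs(y))) == x * y

The quadratic cost of this method on million-bit operands is precisely why the paper pairs it with faster software multiplication algorithms and hardware support.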

 

Yuehua Zhang; Jian Zhang; Guannan Liu, “Design and Implementation of AES Based on ARM920T Processor,” in Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, vol., no., pp. 189–193, 24–26 April 2015. doi:10.1109/ICISCE.2015.49
Abstract: This paper presents an optimization of the Rijndael algorithm that speeds up execution on the ARM920T microprocessor. Rijndael was selected as the Advanced Encryption Standard (AES) by the National Institute of Standards and Technology (NIST). We first present a theoretical analysis of the Rijndael algorithm and the code optimization, and then present simulation results of the optimized algorithm on the ARM920T. The decryption key schedule requires more cycles and more memory than the encryption key schedule, and decryption (including its key schedule) is slower than encryption (including its key schedule). The experiments show that the algorithm can be executed efficiently on the ARM920T microprocessor.
Keywords: cryptography; microprocessor chips; AES; ARM920T microprocessor; NIST; Rijndael algorithm; advanced encryption standard; code optimization; decryption; key schedule cycle; national institute of standards and technology; Algorithm design and analysis; Ciphers; Encryption; Niobium; Optimization; Schedules; Standards; ARM microprocessor; advanced encryption standard (AES); key schedule; optimization; rijndael (ID#: 15-7247)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120589&isnumber=7120439
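
The encryption/decryption asymmetry reported above can be observed with any software AES implementation; below is a hedged timing sketch using the PyCryptodome library as a stand-in for the paper’s optimized ARM code (the library choice, buffer size, and iteration count are assumptions for illustration, and the measured gap will vary by platform):

    import time
    from Crypto.Cipher import AES   # pip install pycryptodome

    key = bytes(range(16))
    data = bytes(16 * 4096)         # 64 KiB of zero blocks
    enc = AES.new(key, AES.MODE_ECB)
    dec = AES.new(key, AES.MODE_ECB)

    def bench(fn, buf, iters=100):
        # Average wall-clock time per call over `iters` runs.
        start = time.perf_counter()
        for _ in range(iters):
            fn(buf)
        return (time.perf_counter() - start) / iters

    ct = enc.encrypt(data)
    print("encrypt: %.3f ms" % (bench(enc.encrypt, data) * 1e3))
    print("decrypt: %.3f ms" % (bench(dec.decrypt, ct) * 1e3))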

 

Amutha, A.; Angel, D., “Structural Analysis of Invertible Graphs,” in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, vol., no., pp. 1–4, 9–10 Jan. 2015. doi:10.1109/ISCO.2015.7282329
Abstract: Reliability is an important issue in systems architecture. This paper focuses on a new class of invertible networks which are more reliable in the sense that if there is a failure in the physical components of the system then there always exists an alternate set of nodes to carry out the job in the complement. A graph G is said to be invertible if there exists an inverse vertex cover in G. The contribution of this paper is a new algorithm for recognizing invertible graphs. Our algorithm runs in linear time and is computationally very simple. We present a characterization for invertible graphs in terms of the breadth first search tree and thereby study their theoretical properties.
Keywords: Computers; Cryptography; Edge cover; Invertible graphs; Vertex cover (ID#: 15-7248)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282329&isnumber=7282219

 

Chan, V.W.S., “Classical Optical Cryptography,” in Transparent Optical Networks (ICTON), 2015 17th International Conference on, vol., no., pp. 1–4, 5–9 July 2015. doi:10.1109/ICTON.2015.7193389
Abstract: This paper describes a cryptographic technique based on coherent optical communication for fiber or free-space networks. A pseudo-random cipher-stream is used to band-spread an optical carrier with coded data. The legitimate receiver uses the agreed-upon key to modulate its local oscillator, and the resulting beat signal uncovers the band-spread signal. An eavesdropper who does not have the key will find the spread signal’s signal-to-noise ratio too low to perform any useful determination of the message sequence. Theoretical bounds based on Shannon’s theory of secrecy are used to show the strength of the encoding scheme, which is expected to be superior.
Keywords: cryptography; information theory; optical fibre networks; telecommunication security; Shannon theory; band spread signal; classical optical cryptography; coherent optical communication; cryptographic technique; fiber space networks; free space networks; message sequence; optical carrier; signal-to-noise ratio; spread signal; Cryptography; Noise; Optical amplifiers; Optical mixing; Optical receivers; Optical transmitters; optical network security (ID#: 15-7249)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193389&isnumber=7193279
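
The band-spreading idea translates directly into a baseband simulation: a keyed pseudo-random chip sequence spreads each data symbol, and only a receiver holding the key can despread the signal out of the noise. This numpy sketch is an electrical-domain toy under assumed parameters (spreading gain, noise level), not a model of the coherent optical front end:

    import numpy as np

    def keyed_chips(key, n):
        rng = np.random.default_rng(key)       # shared key seeds the cipher-stream
        return rng.choice([-1.0, 1.0], size=n)

    bits = np.array([1, 0, 1, 1, 0])
    symbols = 2.0 * bits - 1.0                 # antipodal mapping
    G = 64                                     # spreading gain (chips per bit)
    chips = keyed_chips(1234, bits.size * G)
    tx = np.repeat(symbols, G) * chips         # band-spread signal

    noise = np.random.default_rng(0).normal(0.0, 4.0, tx.size)
    rx = tx + noise                            # signal buried well below the noise

    # Legitimate receiver despreads with the same keyed chip sequence.
    despread = (rx * keyed_chips(1234, rx.size)).reshape(-1, G).mean(axis=1)
    print((despread > 0).astype(int))          # recovers [1 0 1 1 0] w.h.p.

Without the key, an eavesdropper faces a per-chip SNR far too low to recover the message, mirroring the secrecy argument: despreading concentrates the signal energy by the spreading gain G only for the key holder.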

 

Hammami, S.; Djemaï, M.; Busawon, K., “On the Use of the Unified Chaotic System in the Field of Secure Communication,” in Control, Engineering & Information Technology (CEIT), 2015 3rd International Conference on, vol., no., pp. 1–6, 25–27 May 2015. doi:10.1109/CEIT.2015.7233114
Abstract: The synchronization of a unified chaotic system, used to encrypt different types of information signals, is investigated in this paper. At the outset, it is proven that such a system possesses three different types of chaos characterization depending on a system parameter, which guarantees a high degree of communication security. Then, the asymptotic convergence of the errors between the states of the master system and those of the slave system is deduced by means of Lyapunov theory. Finally, computer simulations verify the feasibility and efficiency of the proposed theoretical approaches.
Keywords: Lyapunov methods; chaotic communication; convergence; cryptography; image coding; synchronisation; telecommunication security; Lyapunov theory; chaos characterizations; communication security; information signals; unified chaotic system; Chaotic communication; Convergence; Cryptography; Receivers; Synchronization; Transmitters; secure transmission (ID#: 15-7250)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7233114&isnumber=7232976
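
A short sketch of master-slave synchronization for the unified chaotic system, whose standard form interpolates between the Lorenz, Lü, and Chen systems as the parameter alpha sweeps [0, 1]. The simple full-state linear coupling and the gain k below are illustrative assumptions, not the paper’s Lyapunov-derived controller:

    import numpy as np

    def unified(state, alpha):
        # Unified chaotic system: alpha in [0, 1] sweeps Lorenz -> Lu -> Chen.
        x, y, z = state
        return np.array([(25*alpha + 10) * (y - x),
                         (28 - 35*alpha) * x - x*z + (29*alpha - 1) * y,
                         x*y - (alpha + 8) / 3 * z])

    alpha, dt, k = 0.5, 1e-3, 20.0    # k: coupling gain (assumed, not from the paper)
    m = np.array([1.0, 1.0, 1.0])     # master
    s = np.array([5.0, -3.0, 8.0])    # slave, different initial condition

    for _ in range(20000):
        m = m + dt * unified(m, alpha)
        # slave is driven toward the master via linear feedback on all states
        s = s + dt * (unified(s, alpha) + k * (m - s))

    print("sync error:", np.linalg.norm(m - s))   # ~0 after the transient

Once synchronized, the receiver can subtract the regenerated chaotic carrier to recover a message masked by the transmitter’s chaotic signal.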

 

Huijie Zhang; Mani Soma; Shulin Tian, “Power Attacks on DAC Architectures: A Susceptibility Comparison,” in Signals, Circuits and Systems (ISSCS), 2015 International Symposium on, vol., no., pp. 1–4, 9–10 July 2015. doi:10.1109/ISSCS.2015.7203935
Abstract: Hardware security is a significant issue today and has received much attention regarding side-channel attacks, including power attacks. We present a theoretical comparison of power-attack susceptibility of two common DAC architectures using metrics derived from information theory. Simulation results demonstrate the computation method.
Keywords: cryptography; digital-analogue conversion; information theory; security; DAC architecture; digital-to-analog converter; hardware security; information theory; power attack susceptibility; side-channel attack; susceptibility comparison; Capacitors; Computer architecture; Correlation; Hardware; Mutual information; Power demand; Security; differential power attack; mixed-signal circuit; mutual information (ID#: 15-7251)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7203935&isnumber=7203913
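
Mutual information between a converter’s digital input and its power draw is a natural susceptibility metric; the sketch below estimates it with a 2-D histogram. The two leakage models (Hamming-weight versus MSB-dominated) are assumed proxies for different DAC architectures, not the paper’s circuit models:

    import numpy as np

    rng = np.random.default_rng(3)

    codes = rng.integers(0, 256, 20000)
    hw = np.array([bin(c).count("1") for c in codes])
    leak_hw = hw + rng.normal(0, 0.5, codes.size)            # HW-proportional draw
    leak_msb = (codes >> 7) + rng.normal(0, 0.5, codes.size) # MSB-dominated draw

    def mutual_info(x, y, bins=32):
        # Plug-in MI estimate (in bits) from a joint histogram.
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

    print("I(code; power), HW model :", mutual_info(codes, leak_hw))
    print("I(code; power), MSB model:", mutual_info(codes, leak_msb))

A higher estimated mutual information indicates that power traces reveal more about the processed data, i.e., greater susceptibility to a power attack.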

 

Emmart, N.; Weems, C., “Pushing the Performance Envelope of Modular Exponentiation Across Multiple Generations of GPUs,” in Parallel and Distributed Processing Symposium (IPDPS), 2015 IEEE International, vol., no., pp. 166–176, 25–29 May 2015. doi:10.1109/IPDPS.2015.69
Abstract: Multiprecision modular exponentiation is a key operation in popular encryption schemes such as RSA, but is computationally expensive. Contexts such as handling many secure web connections in a server can demand higher rates of exponent operations than a traditional multicore can support. Graphics processors offer an opportunity to accelerate batches of exponent calculations both by executing them in parallel as well as through parallelizing the operations within the multiprecision arithmetic itself. However, obtaining performance close to the theoretical peak can be extremely challenging. Furthermore, each new generation of GPU architecture can require a substantially different approach to achieve maximum performance. In this paper we show how we improve modular exponentiation performance over prior results by factors ranging from 2.6 to 24, across generations of NVIDIA GPU, from compute capability 1.1 onward. Of particular interest is the parameter space that must be searched to find the optimal configuration of memory layout, launch geometry, and algorithm for each architecture at different problem sizes. Our efforts have resulted in a set of tools for generating library functions in the PTX assembly language and searching to find these optima. From our experience it can be argued that a new programming paradigm is needed to achieve full performance potential on core library components as GPUs evolve through multiple generations.
Keywords: assembly language; graphics processing units; software libraries; GPU architecture; NVIDIA GPU; PTX assembly language; RSA; compute capability; core library components; encryption schemes; exponent operations; graphics processing unit; graphics processors; launch geometry; library functions; memory layout; multiprecision modular exponentiation performance; multiprocessing arithmetic; optimal configuration; secure Web connections; Computational modeling; Computer architecture; Generators; Graphics processing units; Load modeling; Message systems; Registers; GPU accelerated modular exponentiation; SSL acceleration with GPUs; asymmetric cryptography on GPUs; modular exponentiation (ID#: 15-7252)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161506&isnumber=7161257
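
The core operation being accelerated is square-and-multiply modular exponentiation; the GPU work batches many such exponentiations and parallelizes the underlying multiprecision multiplications. A scalar Python reference for the control flow:

    def mod_exp(base, exp, mod):
        # Left-to-right square-and-multiply. GPU versions run thousands of
        # these in parallel and parallelize each multiprecision multiply.
        result = 1
        base %= mod
        for bit in bin(exp)[2:]:
            result = (result * result) % mod       # square for every exponent bit
            if bit == "1":
                result = (result * base) % mod     # multiply on set bits
        return result

    assert mod_exp(2, 977, 997) == pow(2, 977, 997)

For RSA-sized operands each multiply is itself a large multiprecision operation, which is where the memory layout and launch geometry choices discussed in the paper dominate performance.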


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Visible Light Communications Security 2015

 

 
SoS Logo

Visible Light Communications Security

2015


Visible light communication (VLC) offers an unregulated and free light spectrum. Potentially, it could be a solution for overcoming the overcrowded radio spectrum, especially for wireless communication systems, and doing so securely. In the articles cited here, security issues are addressed relating to secure barcodes for smartphones, reducing the impact of ambient light (optical “noise”), physical-layer security for indoor visible light, and using xenon flashlights for mobile payments. Additional works covering a broad range of visible light communication topics are also cited. This work is relevant to resilience. The articles cited here appeared in 2015.



Aarthi, H.; James, K., “A Novel Protocol Design in Hybrid Networks of Visible Light Communication and OFDMA System,” in Electrical, Computer and Communication Technologies (ICECCT), 2015 IEEE International Conference on, vol., no., pp. 1–5, 5–7 March 2015. doi:10.1109/ICECCT.2015.7226162
Abstract: In recent years, Visible Light Communication (VLC) has emerged as a complementary technique to overcome the limitations of the crowded radio frequency (RF) spectrum. Its superior characteristics include unlicensed wide bandwidth, high security, and a dual-use nature, and it can transmit at 4 Mb/s over short distances. Although VLC using an illumination source is naturally suited to broadcast applications, it can also be used for full-duplex communication; however, this destabilizes the system, which cannot operate without optical wireless localization technology. Hence, a hybrid network has been designed in which OFDMA is used for uplink transmission and VLC for downlink transmission. To demonstrate this, a protocol has been proposed that combines horizontal and vertical handover mechanisms for mobile terminals to resolve user mobility among different VLC hotspots and orthogonal frequency division multiple access (OFDMA) cells.
Keywords: OFDM modulation; mobility management (mobile radio); optical communication; protocols; OFDMA system; downlink transmission; horizontal handover mechanisms; hybrid networks; mobile terminal; orthogonal frequency division multiple access; protocol design; uplink transmission; user mobility; vertical handover mechanisms; visible light communication; Light emitting diodes; Logic gates; Mobile communication; Mobile computing; WiMAX; heterogeneous network; horizontal handover; hybrid visible light communication(VLC) and orthogonal frequency division multiple access (OFDMA) system; vertical handover (ID#: 15-7403)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7226162&isnumber=7225915

 

Chen, Y.A.; Chang, Y.T.; Tseng, Y.C.; Chen, W.T., “A Framework for Simultaneous Message Broadcasting Using CDMA-Based Visible Light Communications,” in Sensors Journal, IEEE, vol. 15, no. 12, pp. 6819–6827, Dec. 2015. doi:10.1109/JSEN.2015.2463684
Abstract: Internet of Things applications have grown fast recently. One component with great potential is lighting equipment, since it is widely used in daily life. The technology of visible light communication (VLC) has been widely discussed in recent years. VLC has several advantages, such as freedom from licensing, line-of-sight security, and fewer health concerns compared with radio-based systems. In addition, the rapid progress of light emitting diode (LED) technology driven by solid-state lighting allows VLC to be easily deployed and integrated with existing lighting infrastructure at low cost. However, VLC integrated with lighting infrastructure is usually one-way communication and is highly sensitive to external interfering light. Thus, transmitting or broadcasting multiple messages simultaneously over a visible light channel without any preprocessing may result in serious collisions. In this paper, we propose a framework that tackles these problems with optical code division multiple access (CDMA) for VLC. With our approach, a VLC receiver can enter an environment without any prior configuration and can be built with simple hardware; even a mobile device with a high-resolution photodiode sensor can serve as a receiver. We demonstrate an indoor positioning application that queries a location service provider on the Internet with the IDs decoded from the received light signals. The prototyping results reveal some communication properties of CDMA-based VLC and its potential for indoor positioning applications.
Keywords: Brightness; Broadcasting; Lighting; Multiaccess communication; Optical sensors; Optical transmitters; Receivers; Broadcast; CDMA; IEEE 802.15.7; broadcast; localization; pervasive computing; visible light communication (ID#: 15-7404)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7174962&isnumber=7286895
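
The collision-free simultaneous broadcast rests on code orthogonality: each light source spreads its bits with its own code, and a receiver correlates against any code ID without prior coordination. A numpy sketch with Walsh-Hadamard codes; the code length and modulation details are simplified assumptions:

    import numpy as np
    from scipy.linalg import hadamard    # Walsh-Hadamard spreading codes

    H = hadamard(8).astype(float)        # 8 orthogonal codes of length 8
    msgs = {1: [1, 0, 1], 2: [0, 1, 1]}  # two light sources, two bit streams

    def spread(bits, code):
        # Antipodal mapping of each bit, then spreading with the source's code.
        return np.concatenate([(2*b - 1) * code for b in bits])

    # The photodiode sees the sum of both transmissions.
    channel = spread(msgs[1], H[1]) + spread(msgs[2], H[2])

    # Correlating against either code ID separates the overlapping streams.
    for uid in (1, 2):
        corr = channel.reshape(-1, 8) @ H[uid] / 8.0
        print(f"code {uid}:", (corr > 0).astype(int))

Because the codes are mutually orthogonal, each correlation cancels the other source’s contribution exactly, which is what lets unconfigured receivers decode any broadcast in range.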

 

Mostafa, A.; Lampe, L., “Physical-Layer Security for MISO Visible Light Communication Channels,” in Selected Areas in Communications, IEEE Journal on, vol. 33, no. 9, pp. 1806–1818, Sept. 2015. doi:10.1109/JSAC.2015.2432513
Abstract: This paper considers improving the confidentiality of visible light communication (VLC) links within the framework of physical-layer security. We study a VLC scenario with one transmitter, one legitimate receiver, and one eavesdropper. The transmitter has multiple light sources, while the legitimate and unauthorized receivers have a single photodetector, each. We characterize secrecy rates achievable via transmit beamforming over the multiple-input, single-output (MISO) VLC wiretap channel. For VLC systems, intensity modulation (IM) via light-emitting diodes (LEDs) is the most practical transmission scheme. Because of the limited dynamic range of typical LEDs, the modulating signal must satisfy certain amplitude constraints. Hence, we begin with deriving lower and upper bounds on the secrecy capacity of the scalar Gaussian wiretap channel subject to amplitude constraints. Then, we utilize beamforming to obtain a closed-form secrecy rate expression for the MISO wiretap channel. Finally, we propose a robust beamforming scheme to consider the scenario wherein information about the eavesdropper’s channel is imperfect due to location uncertainty. A typical application of the proposed scheme is to secure the communication link when the eavesdropper is expected to exist within a specified area. The performance is measured in terms of the worst-case secrecy rate guaranteed under all admissible realizations of the eavesdropper’s channel.
Keywords: intensity modulation; light emitting diodes; optical communication; optical receivers; optical transmitters; photodetectors; LED; MISO visible light communication channels; amplitude constraints; eavesdropper channel; intensity modulation; light emitting diodes; multiple-input single-output VLC wiretap channel; one legitimate receiver; one transmitter; physical layer security; robust beamforming scheme; scalar Gaussian wiretap channel; secrecy capacity; single photodetector; transmit beamforming; Array signal processing; Light emitting diodes; Lighting; Optical distortion; Receivers; Security; Upper bound; MISO wiretap VLC channel; Visible light communication; amplitude constraint; robust beamforming; secrecy capacity bounds; worst-case secrecy rate (ID#: 15-7405)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106457&isnumber=7206775
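
As a baseline for the beamforming idea, the sketch below zero-forces the eavesdropper: the transmit weights are Bob’s channel projected onto the null space of Eve’s, then scaled to respect a per-LED amplitude constraint. Perfect knowledge of Eve’s channel is assumed here; the paper’s robust scheme handles the uncertain-location case:

    import numpy as np

    rng = np.random.default_rng(1)
    n_led = 4
    h_bob = rng.uniform(0.2, 1.0, n_led)   # legitimate channel gains (nonnegative IM channel)
    h_eve = rng.uniform(0.2, 1.0, n_led)   # eavesdropper channel (assumed known here)

    # Project Bob's channel onto the null space of Eve's channel.
    P = np.eye(n_led) - np.outer(h_eve, h_eve) / (h_eve @ h_eve)
    w = P @ h_bob
    w /= np.abs(w).max()                   # per-LED amplitude constraint |w_i| <= 1

    print("gain at Bob:", h_bob @ w)       # positive, useful signal
    print("gain at Eve:", h_eve @ w)       # ~0: eavesdropper receives nothing

The amplitude normalization stands in for the LED dynamic-range constraint the paper analyzes; it is the reason the secrecy capacity bounds are derived under amplitude rather than power constraints.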

 

Krichene, D.; Sliti, M.; Abdallah, W.; Boudriga, N., “An Aeronautical Visible Light Communication System to Enable In-Flight Connectivity,” in Transparent Optical Networks (ICTON), 2015 17th International Conference on, vol., no., pp. 1–6, 5–9 July 2015. doi:10.1109/ICTON.2015.7193336
Abstract: This paper proposes an aeronautical network architecture based on visible light communication (VLC) technology which targets the distribution of in-flight entertainment services. To this purpose, we investigate the deployment of LEDs within an aircraft cabin using two different wavelength assignment methods in the VLC cells. The first method combines both WDM and Direct sequence OCDMA techniques to reduce intra-cell and inter-cell interferences. In the second one, a two-dimensional OCDMA scheme is used to enable efficient sharing of resources between users. Moreover, an FSO-based inter-VLC-cells communication scheme is described to enable connectivity distribution among LEDs and inter-cells handover. This scheme is based on all-optical switching using code-words that uniquely identify the cells. Finally, a simulation work is conducted to evaluate the bit error rate of the proposed access control schemes for different configurations of the VLC system.
Keywords: aircraft communication; cellular radio; code division multiple access; error statistics; interference suppression; light emitting diodes; mobility management (mobile radio); spread spectrum communication; wavelength assignment; wavelength division multiplexing; FSO-based interVLC cell communication scheme; LED; VLC technology aeronautical network architecture; WDM technique; access control scheme; aeronautical visible light communication system; aircraft cabin; all-optical switching; bit error rate; direct sequence OCDMA technique; in-flight connectivity distribution; in-flight entertainment service distribution; intercell handover; intercell interference reduction; intracell interference reduction; resource sharing; two-dimensional OCDMA scheme; wavelength assignment method; Adaptive optics; Aircraft; Bit error rate; Integrated optics; Interference; Light emitting diodes; Optical switches; FSO; OCDMA; VLC; in-flight connectivity (ID#: 15-7406)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193336&isnumber=7193279

 

Zhang, B.; Ren, K.; Xing, G.; Fu, X.; Wang, C., “SBVLC: Secure Barcode-Based Visible Light Communication for Smartphones,” in Mobile Computing, IEEE Transactions on, vol. 15, no. 2, pp. 432-446, Feb. 2016. doi:10.1109/TMC.2015.2413791
Abstract: 2D barcodes have enjoyed a significant penetration rate in mobile applications. This is largely due to the extremely low barrier to adoption – almost every camera-enabled smartphone can scan 2D barcodes. As an alternative to NFC technology, 2D barcodes have been increasingly used for security-sensitive mobile applications including mobile payments and personal identification. However, the security of barcode-based communication in mobile applications has not been systematically studied. Due to the visual nature, 2D barcodes are subject to eavesdropping when they are displayed on the smartphone screens. On the other hand, the fundamental design principles of 2D barcodes make it difficult to add security features. In this paper, we propose SBVLC - a secure system for barcode-based visible light communication (VLC) between smartphones. We formally analyze the security of SBVLC based on geometric models and propose physical security enhancement mechanisms for barcode communication by manipulating screen view angles and leveraging user-induced motions. We then develop three secure data exchange schemes that encode information in barcode streams. These schemes are useful in many security-sensitive mobile applications including private information sharing, secure device pairing, and contactless payment. SBVLC is evaluated through extensive experiments on both Android and iOS smartphones.
Keywords: Cameras; Mobile communication; Receivers; Security; Smart phones; Solid modeling; Three-dimensional displays; 2D barcode streaming; QR codes; Short-range smartphone communication; key exchange; secure VLC (ID#: 15-7407)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7061506&isnumber=4358975

 

Qian Wang; Man Zhou; Kui Ren; Tao Lei; Jikun Li; Zhibo Wang, “Rain Bar: Robust Application-Driven Visual Communication Using Color Barcodes,” in Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, vol., no., pp. 537–546, June 29 2015–July 2 2015. doi:10.1109/ICDCS.2015.61
Abstract: Color barcode-based visible light communication (VLC) over screen-camera links has attracted great research interest in recent years due to its many desirable properties, including free of charge, free of interference, free of complex network configuration and well-controlled communication security. To achieve high-throughput barcode streaming, previous systems separately address design challenges such as image blur, imperfect frame synchronization and error correction etc., without being investigated as an interrelated whole. This does not fully exploit the capacity of color barcode streaming, and these solutions all have their own limitations from a practical perspective. This paper proposes RainBar, a new and improved color barcode-based visual communication system, which features a carefully-designed high-capacity barcode layout design to allow flexible frame synchronization and accurate code extraction. A progressive code locator detection and localization scheme and a robust color recognition scheme are proposed to enhance system robustness and hence the decoding rate under various working conditions. An extensive experimental study is presented to demonstrate the effectiveness and flexibility of RainBar. Results on Android smartphones show that our system achieves higher average throughput than previous systems, under various working environments.
Keywords: bar codes; cameras; decoding; image colour analysis; optical communication; smart phones; synchronisation; telecommunication security; visual communication; Android smartphone; RainBar; color barcode-based visible light communication; flexible frame synchronization; high-throughput barcode streaming; progressive code locator detection; robust color recognition scheme; screen-camera link; Bars; Image color analysis; Receivers; Robustness; Smart phones; Streaming media; Synchronization; Visible light communication; color barcode; robustness; screen-camera link; smartphones (ID#: 15-7408)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164939&isnumber=7164877

 

Mostafa, A.; Lampe, L., “Enhancing the Security of VLC Links: Physical-Layer Approaches,” in Summer Topicals Meeting Series (SUM), 2015, vol., no., pp. 39–40, 13–15 July 2015. doi:10.1109/PHOSST.2015.7248182
Abstract: Visible light communication (VLC) channels are often perceived as eavesdropping-proof. However, that might not be the case in public areas or multiple-user scenarios. To overcome this limitation, we consider physical-layer security for VLC links.
Keywords: optical communication; telecommunication security; VLC link security; eavesdropping proof; multiple user scenarios; physical layer approach; physical layer security; visible light communication channels; Array signal processing; Jamming; Light emitting diodes; Receivers; Robustness; Security; Uncertainty; Visible light communication; amplitude constraint; friendly jamming; massive LED arrays; physical-layer security; robust beamforming; secrecy capacity bounds (ID#: 15-7409)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7248182&isnumber=7248161

 

Jianwei Niu; Fei Gu; Ruogu Zhou; Guoliang Xing; Wei Xiang, “VINCE: Exploiting Visible Light Sensing for Smartphone-Based NFC Systems,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 2722–2730, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218664
Abstract: This paper presents VINCE—a novel visible light sensing design for smartphone-based Near Field Communication (NFC) systems. VINCE encodes information as different brightness levels of smartphone screens, while receivers capture the light signal via light sensors. In contrast to RF technologies, the direction and distance of such a Visible Light Communication (VLC) link can be easily controlled, preserving communication privacy and security. As a result, VINCE can be used in a wide range of NFC applications such as contactless payments and device pairing. We experimentally profile the impact of screen brightness levels and refresh rates of smartphones, and then use the results to guide the design of light intensity encoding scheme of VINCE. We adopt several signal processing techniques and empirically derive a model to deal with the significant variation of received light intensity caused by noises and low screen refresh rates. To improve the communication reliability, VINCE adopts a feedback-based retransmission scheme, and dynamically adjusts the number of encoding brightness levels based on the current light channel condition. We also derive an analytical model that characterizes the relation among the distance, SNR (Signal to Noise Ratio), and BER (Bit Error Rate) of VINCE. Our design and theoretical model are validated via extensive evaluations using a hardware implementation of VINCE on Android smartphones and the Arduino platform.
Keywords: near-field communication; optical communication; smart phones; Android smartphones; Arduino platform; VINCE; near field communication systems; signal processing techniques; smartphone-based NFC systems; visible light communication; visible light sensing; Brightness; Decoding; Encoding; Receivers; Sensors; Signal to noise ratio (ID#: 15-7410)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218664&isnumber=7218353

 

Boubakri, W.; Abdallah, W.; Boudriga, N., “A Light-Based Communication Architecture for Smart City Applications,” in Transparent Optical Networks (ICTON), 2015 17th International Conference on, vol., no., pp. 1–6, 5–9 July 2015. doi:10.1109/ICTON.2015.7193409
Abstract: The main objective of this paper is to build a communication architecture to enable seamless integration of the light-based technology in the smart cities infrastructure. Specifically, we propose the deployment of optical nodes in locations of interest for smart city applications (such as road intersections, lighting systems, and signalling equipment) in order to enable the integration of sensing, tracking, and communication services. The proposed network architecture, built on these nodes, is structured into three layers. The first layer is based on the visible light communication (VLC) technology that will allow optical access to users and sensing of specific events and parameters. The second layer provides communications between different VLC LEDs (VLC Light-emitting Diodes) and specific sub-gateway. The third layer enables communication between different sub-gateways and the service gateway using free space optical (FSO) transmission. Furthermore, issues related to VLC cell dimensioning, wavelengths management, access control and technology integration are discussed and some alternatives are proposed. Finally, several smart cities applications such as intelligent communication, event surveillance, and object tracking are demonstrated.
Keywords: optical communication; FSO transmission; VLC LED; VLC light emitting diodes; VLC technology; access control; free space optical transmission; light based communication architecture; light based technology; lighting systems; optical nodes; road intersections; service gateway; signalling equipment; smart city applications; technology integration; visible light communication; wavelengths management; Computer architecture; Light emitting diodes; Lighting; Logic gates; Optical sensors; Roads; Smart cities; FSO; light-based communication (VLC); light-based sensing; smart city (ID#: 15-7411)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193409&isnumber=7193279

 

Mengjun Xie; Yanyan Li; Yoshigoe, Kenji; Seker, Remzi; Jiang Bian, “CamAuth: Securing Web Authentication with Camera,” in High Assurance Systems Engineering (HASE), 2015 IEEE 16th International Symposium on, vol., no., pp. 232–239, 8–10 Jan. 2015. doi:10.1109/HASE.2015.41
Abstract: Frequent outbreaks of password database leaks and server breaches in recent years manifest the aggravated security problems of web authentication using only passwords. Two-factor authentication, despite being more secure and strongly promoted, has not been widely applied to web authentication. Leveraging the unprecedented popularity of both personal mobile devices (e.g., smartphones) and barcode scans through the camera, we explore a new horizon in the design space of two-factor authentication. In this paper, we present CamAuth, a web authentication scheme that exploits pervasive mobile devices and digital cameras to counter various password attacks including man-in-the-middle and phishing attacks. In CamAuth, a mobile device is used as the second authentication factor to vouch for the identity of a user who is performing a web login from a PC. The device communicates directly with the PC through secure visible light communication channels, which incurs no cellular cost and is immune to radio frequency attacks. CamAuth employs public-key cryptography to ensure the security of the authentication process. We implemented a prototype system of CamAuth that consists of an Android application, a Chrome browser extension, and a Java-based web server. Our evaluation results indicate that CamAuth is a viable scheme for enhancing the security of web authentication.
Keywords: Internet; authorisation; cameras; computer crime; message authentication; mobile computing; public key cryptography; smart phones; Android application; CamAuth; Chrome browser extension; Java-based Web server; Web authentication security; Web login; authentication process; barcode scans; database leaks; design space; digital cameras; man-in-the-middle attacks; password attacks; password outbreak; personal mobile devices; pervasive mobile devices; phishing attacks; public-key cryptography; radio frequency attacks; secure visible light communication channels; security problems; server breaches; smartphones; two-factor authentication; user identity; Authentication; Browsers; DH-HEMTs; Servers; Smart phones (ID#: 15-7412)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7027436&isnumber=7027398
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Weaknesses 2015

 

 
SoS Logo

Weaknesses

2015


Attackers need only find one or a few exploitable vulnerabilities to mount a successful attack, while defenders must shore up as many weaknesses as practicable. The research presented here covers a range of weaknesses and approaches for identifying and securing against attacks. Many articles focus on key systems, both public and private. Hard problems addressed include human behavior, policy-based governance, resilience, and metrics. The work cited here was presented in 2015.



Schneider, J.; Romanowski, C.; Raj, R.K.; Mishra, S.; Stein, K., “Measurement of Locality Specific Resilience,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no., pp. 1–6, 14–16 April 2015. doi:10.1109/THS.2015.7225332
Abstract: Resilience has been defined at the local, state, and national levels, and subsequent attempts to refine the definition have added clarity. Quantitative measurements, however, are crucial to a shared understanding of resilience. This paper reviews the evolution of resiliency indicators and metrics and suggests extensions to current indicators to measure functional resilience at a jurisdictional or community level. Using a management systems approach, an input/output model may be developed to demonstrate abilities, actions, and activities needed to support a desired outcome. Applying systematic gap analysis and an improvement cycle with defined metrics, the paper proposes a model to evaluate a community’s operational capability to respond to stressors. As each locality is different, with unique risks, strengths, and weaknesses, the model incorporates these characteristics and calculates a relative measure of maturity for that community. Any community can use the resulting model output to plan and improve its resiliency capabilities.
Keywords: emergency management; social sciences; community operational capability; functional resilience measurement; locality specific resilience measurement; quantitative measurement; resiliency capability; resiliency indicators; resiliency metrics; systematic gap analysis; Economics; Emergency services; Hazards; Measurement; Resilience; Standards; Training; AHP; community resilience; operational resilience modeling; resilience capability metrics (ID#: 15-7200)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225332&isnumber=7190491

 

Bowu Zhang; Jinho Hwang; Liran Ma.; Wood, Timothy, “Towards Security-Aware Virtual Server Migration Optimization to the Cloud,” in Autonomic Computing (ICAC), 2015 IEEE International Conference on, vol., no., pp. 71–80, 7–10 July 2015. doi:10.1109/ICAC.2015.45
Abstract: Cloud computing, featuring shared servers and location-independent services, has been widely adopted by various businesses to increase computing efficiency and reduce operational costs. Despite significant benefits and interest, enterprises have a hard time deciding whether or not to migrate thousands of servers into the cloud for various reasons, such as the lack of holistic migration (planning) tools, concerns about data security, and cloud vendor lock-in. In particular, cloud security has become the major concern for decision makers due to an inherent weakness of virtualization: the fact that the cloud allows multiple users to share resources through Internet-facing interfaces can easily be taken advantage of by hackers. Therefore, setting up a secure environment for resource migration becomes the top priority for both enterprises and cloud providers. To achieve this goal, security policies such as firewalls and access control have been widely adopted, leading to significant cost as additional resources must be employed. In this paper, we address the challenge of security-aware virtual server migration and propose a migration strategy that minimizes the migration cost while satisfying the security needs of enterprises. We prove that the proposed security-aware cost minimization problem is NP-hard and that our solution achieves an approximation factor of 2. We perform an extensive simulation study to evaluate the performance of the proposed solution under various settings. Our simulation results demonstrate that our approach can save 53% of the migration cost in the single-enterprise case and 66% in the multiple-enterprise case compared to a random migration strategy.
Keywords: cloud computing; cost reduction; resource allocation; security of data; virtualisation; Internet-facing interfaces; NP hard problem; cloud security; cloud vendor lock-in; data security; moving cost savings; resource migration; resource sharing; security policy; security-aware cost minimization problem; security-aware virtual server migration optimization; Approximation algorithms; Approximation methods; Cloud computing; Clustering algorithms; Home appliances; Security; Servers; Cloud Computing; Cloud Migration; Cloud Security; Cost Minimization (ID#: 15-7201)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266936&isnumber=7266915

 

Suh-Lee, Candace; Juyeon Jo, “Quantifying Security Risk by Measuring Network Risk Conditions,” in Computer and Information Science (ICIS), 2015 IEEE/ACIS 14th International Conference on, vol., no., pp. 9–14, June 28 2015–July 1 2015. doi:10.1109/ICIS.2015.7166562
Abstract: Software vulnerabilities are weaknesses in software that inadvertently allow dangerous operations. If the vulnerability is in a network service, it poses serious security threats because a cyber-attacker can exploit it to gain unauthorized access to the system. Hence, rapid discovery and remediation of network vulnerabilities are critical issues in network security. In today’s dynamic IT environment, it is common practice for an organization to prioritize the mitigation of discovered vulnerabilities according to their risk levels. Currently available technologies, however, associate each vulnerability with a static risk level that does not take the unique characteristics of the target network into account. This often leads to inaccurate risk prioritization and less-than-optimal resource allocation. In this research, we introduce a novel way of quantifying the risk of a network vulnerability by augmenting the static risk level with conditions specific to the target network. The method calculates the risk value of each vulnerability by measuring proximity to the untrusted network and the risk of neighboring hosts. The resulting risk value, RCR, is a composite index of individual risk, network location, and neighborhood risk conditions. Thus, it can be used effectively for prioritization, comparison, and trending. We tested the methodology through network intrusion simulation. The results show an average correlation of 88.9% between RCR and the number of successful attacks on each vulnerability.
Keywords: computer network security; resource allocation; risk management; RCR; cyber-attacker; dynamic IT environment; less-than-optimal resource allocation; network intrusion simulation; network location; network risk condition measurement; network security; network service; network vulnerability; risk prioritization; security risk quantification; security threats; software vulnerability; Internet; Organizations; Reliability; Security; Servers; Standards organizations; Workstations; quantitative risk analysis; useable security; vulnerability management (ID#: 15-7202)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166562&isnumber=7166553
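
A toy rendering of the RCR idea: combine a host’s own risk, its graph distance from the untrusted network, and the average risk of its neighbors. The weights, scores, and topology below are illustrative assumptions, not the paper’s calibrated formula:

    import networkx as nx

    # Toy network: 'untrusted' represents the Internet boundary.
    G = nx.Graph([("untrusted", "dmz"), ("dmz", "web"), ("web", "db"), ("dmz", "mail")])
    base_risk = {"dmz": 5.0, "web": 7.5, "db": 9.0, "mail": 4.0}  # e.g. CVSS-like scores

    def rcr(host, w_prox=0.4, w_nbr=0.3, w_self=0.3):
        # Composite of individual risk, proximity to the untrusted network,
        # and neighborhood risk; the weights here are illustrative only.
        dist = nx.shortest_path_length(G, "untrusted", host)
        proximity = 1.0 / dist                        # closer to the boundary => riskier
        nbrs = [base_risk[n] for n in G[host] if n in base_risk]
        nbr_risk = sum(nbrs) / len(nbrs) if nbrs else 0.0
        return w_self * base_risk[host] + 10 * w_prox * proximity + w_nbr * nbr_risk

    for h in sorted(base_risk, key=rcr, reverse=True):
        print(f"{h:>5}: RCR = {rcr(h):.2f}")

Ranking hosts by such a composite score, rather than by static severity alone, is what lets the same vulnerability be prioritized differently on an exposed DMZ host than on a deeply interior one.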

 

Xiao Chen; Liang Pang; Yuhuan Tang; Hongpeng Yang; Zhi Xue, “Security in MIMO Wireless Hybrid Channel with Artificial Noise,” in Cyber Security of Smart Cities, Industrial Control System and Communications (SSIC), 2015 International Conference on, vol., no., pp. 1–4, 5–7 Aug. 2015. doi:10.1109/SSIC.2015.7245676
Abstract: Security is an important issue in the field of wireless channels. In this paper, the security problem of the Gaussian MIMO wireless hybrid channel is considered, where a transmitter with multiple antennas sends information to an intended receiver with one antenna in the presence of an eavesdropper with multiple antennas. By devoting some of its power to producing “artificial noise,” the transmitter can degrade only the eavesdropper’s channel, ensuring the security of the communication. This scheme, however, has an inherent weakness. A Hybrid Blind Space Elimination (HBSE) scheme is therefore proposed and proved to fix the design flaw and strengthen the original scheme.
Keywords: Gaussian channels; MIMO communication; wireless channels; Gaussian MIMO wireless hybrid channel; HBSE scheme; artificial noise; hybrid blind space elimination scheme; security problem; Communication system security; Noise; Receiving antennas; Security; Transmitting antennas; Wireless communication; HBSE; MIMO-WHC; secrecy capacity; wireless hybrid channel (ID#: 15-7203)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7245676&isnumber=7245317
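
The artificial-noise construction places jamming energy in the null space of the legitimate channel, so the intended receiver is untouched while every other direction is degraded. A numpy sketch of the standard construction (the channel values are random placeholders):

    import numpy as np

    rng = np.random.default_rng(2)
    nt = 4                                                # transmit antennas
    h = rng.normal(size=nt) + 1j * rng.normal(size=nt)    # intended receiver's channel
    g = rng.normal(size=nt) + 1j * rng.normal(size=nt)    # eavesdropper's channel

    # Null space of the 1 x nt legitimate channel via SVD.
    _, _, vh = np.linalg.svd(h[None, :])
    null = vh[1:].conj().T                # nt x (nt-1) basis orthogonal to h

    symbol = 1.0 + 0.0j
    art_noise = null @ rng.normal(size=nt - 1)            # lies in h's null space
    x = (h.conj() / np.linalg.norm(h)) * symbol + art_noise

    print("intended receiver:", abs(h @ x))   # clean beamformed signal
    print("eavesdropper     :", abs(g @ x))   # signal corrupted by artificial noise

Because the eavesdropper’s channel is generically not orthogonal to the null-space basis, she cannot avoid the injected noise; the inherent weakness the paper targets arises in the hybrid-channel setting, which the HBSE scheme is designed to repair.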

 

Lychev, R.; Jero, S.; Boldyreva, A.; Nita-Rotaru, C., “How Secure and Quick is QUIC? Provable Security and Performance Analyses,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 214–231, 17–21 May 2015. doi:10.1109/SP.2015.21
Abstract: QUIC is a secure transport protocol developed by Google and implemented in Chrome in 2013, currently representing one of the most promising solutions to decreasing latency while intending to provide security properties similar to TLS. In this work we shed some light on QUIC’s strengths and weaknesses in terms of its provable security and performance guarantees in the presence of attackers. We first introduce a security model for analyzing performance-driven protocols like QUIC and prove that QUIC satisfies our definition under reasonable assumptions on the protocol’s building blocks. However, we find that QUIC does not satisfy the traditional notion of forward secrecy that is provided by some modes of TLS, e.g., TLS-DHE. Our analyses also reveal that with simple bit-flipping and replay attacks on some public parameters exchanged during the handshake, an adversary could easily prevent QUIC from achieving minimal latency advantages either by having it fall back to TCP or by causing the client and server to have an inconsistent view of their handshake, leading to a failure to complete the connection. We have implemented these attacks and demonstrated that they are practical. Our results suggest that QUIC’s security weaknesses are introduced by the very mechanisms used to reduce latency, which highlights the seemingly inherent trade-off between minimizing latency and providing “good” security guarantees.
Keywords: client-server systems; computer network security; transport protocols; Chrome; Google; QUIC; TLS-DHE; bit-flipping; performance analysis; performance guarantee; provable security; secure transport protocol; Encryption; IP networks; Protocols; Public key; Servers (ID#: 15-7204)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163028&isnumber=7163005

 

Almubark, A.; Hatanaka, N.; Uchida, O.; Ikeda, Y., “Identifying the Organizational Factors of Information Security Incidents,” in Computing Technology and Information Management (ICCTIM), 2015 Second International Conference on, vol., no., pp. 7–12, 21–23 April 2015. doi:10.1109/ICCTIM.2015.7224585
Abstract: Leaks of secret information have increasingly become a social problem. Information leaks typically target specific organizations or persons, and the magnitude of the risk they pose makes information security a core part of business activity. This paper aims to identify the causes of information leaks by applying organization theory and statistical analysis to reveal the mechanism of information security incidents. Furthermore, the relationship between organizational objectives and social values is discussed in order to propose solutions that resolve organizational weaknesses.
Keywords: organisational aspects; security of data; social aspects of automation; statistical analysis; business activity; information leaks; information security incidents; organizational factors; organizational objectives; secret information; social values; statistical analysis method; Decision support systems; Information security; Organizations; Corporate Culture; Information Security Incidents (ID#: 15-7205)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7224585&isnumber=7224583

 

Procházková, L.Ď.; Hromada, M., “The Security Risks Associated with Attacks on Soft Targets of State,” in Military Technologies (ICMT), 2015 International Conference on, vol., no., pp. 1–4, 19–21 May 2015. doi:10.1109/MILTECHS.2015.7153731
Abstract: The article discusses the issue of attacks on the soft targets of a state. The theoretical part establishes background knowledge and defines the primary situations in which the weaknesses exploited by attacks emerge. The document analyzes situations closely linked to attempted attacks on targets, including the causes that led to the attacks. Analysis of these causes should identify security vulnerabilities while pointing to the possibility of introducing changes. The state of affairs is also analyzed from a global perspective spanning several decades. The analysis of attempted attacks on soft targets points to a possible way of addressing proposed changes to the process.
Keywords: national security; risk analysis; post primary analysis; security risks; security vulnerability; soft target of state attack; Globalization; Proposals; Sociology; Statistics; Terrorism; Weapons; attack; attacker; reason; soft targets (ID#: 15-7206)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153731&isnumber=7153638

 

Gawron, Marian; Cheng, Feng; Meinel, Christoph, “Automatic Detection of Vulnerabilities for Advanced Security Analytics,” in Network Operations and Management Symposium (APNOMS), 2015 17th Asia-Pacific, vol., no., pp. 471–474, 19–21 Aug. 2015. doi:10.1109/APNOMS.2015.7275369
Abstract: The detection of vulnerabilities in computer systems and computer networks, as well as weakness analysis, are crucial problems. The presented method tackles them with automated detection. To identify vulnerabilities, the approach uses a logical representation of the preconditions and postconditions of vulnerabilities; this conditional structure models the requirements and impacts of each vulnerability. An automated analytical function can thus detect security leaks on a target system based on this logical format. With this method it is possible to scan a system without much expertise, since the automated or computer-aided vulnerability detection requires no special knowledge about the target system. The gathered information is used to provide security advisories and enhanced diagnostics that can also detect attacks exploiting multiple vulnerabilities of the system.
Keywords: Browsers; Complexity theory; Data models; Databases; Operating systems; Security (ID#: 15-7207)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275369&isnumber=7275336

 

Kassicieh, Sul; Lipinski, Valerie; Seazzu, Alessandro F., “Human Centric Cyber Security: What Are the New Trends in Data Protection?,” in Management of Engineering and Technology (PICMET), 2015 Portland International Conference on, vol., no.,
pp. 1321–1338, 2–6 Aug. 2015. doi:10.1109/PICMET.2015.7273084
Abstract: The debate about the use of automated security measures versus training and awareness of people with access to data (such as employees) to protect sensitive and/or private information has been going on for some time. In this paper, we outline the thinking behind security, what hackers are trying to accomplish and the best ways of combating these efforts using the latest techniques that combine multiple lines of defense. Different major categories of automated security measures as well as major training and awareness techniques are discussed outlining strengths and weaknesses of each method.
Keywords: Companies; Computer crime; Media; Training (ID#: 15-7208)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273084&isnumber=7272950

 

Akgün, Mete; Çağlayan, M.Ufuk, “Weaknesses of Two RFID Protocols Regarding De-Synchronization Attacks,” in Wireless Communications and Mobile Computing Conference (IWCMC), 2015 International, vol., no., pp. 828–833, 24–28 Aug. 2015. doi:10.1109/IWCMC.2015.7289190
Abstract: Radio Frequency Identification (RFID) protocols should have a secret-updating phase in order to protect the privacy of RFID tags against tag-tracing attacks. In the literature, there are many lightweight RFID authentication protocols that try to provide key updating with lightweight cryptographic primitives. In this paper, we analyze the security of two recently proposed lightweight RFID authentication protocols against desynchronization attacks. We show that secret values shared between the back-end server and any given tag can easily be desynchronized. This weakness stems from deficiencies in the design of these protocols.
Keywords: Authentication; Privacy; Protocols; RFID tags; Servers; RFID; authentication; de-synchronization (ID#: 15-7209)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289190&isnumber=7288920

 

Arden, O.; Liu, J.; Myers, A.C., “Flow-Limited Authorization,” in Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, vol., no., pp. 569–583, 13–17 July 2015. doi:10.1109/CSF.2015.42
Abstract: Because information flow control mechanisms often rely on an underlying authorization mechanism, their security guarantees can be subverted by weaknesses in authorization. Conversely, the security of authorization can be subverted by information flows that leak information or that influence how authority is delegated between principals. We argue that interactions between information flow and authorization create security vulnerabilities that have not been fully identified or addressed in prior work. We explore how the security of decentralized information flow control (DIFC) is affected by three aspects of its underlying authorization mechanism: first, delegation of authority between principals; second, revocation of previously delegated authority; and third, information flows created by the authorization mechanisms themselves. It is no surprise that revocation poses challenges, but we show that even delegation is problematic because it enables unauthorized downgrading. Our solution is a new security model, the Flow-Limited Authorization Model (FLAM), which offers a new, integrated approach to authorization and information flow control. FLAM ensures robust authorization, a novel security condition for authorization queries that ensures attackers cannot influence authorization decisions or learn confidential trust relationships. We discuss our prototype implementation and its algorithm for proof search.
Keywords: authorisation; FLAM; authorization queries; confidential trust relationships; decentralized information flow control; flow-limited authorization model; proof search; robust authorization; security condition; security model; security vulnerabilities; Authorization; Buildings; Cognition; Fabrics; Lattices Robustness; access control; authorization logic; distributed systems; dynamic policies; information flow control; language-based security; security; trust management (ID#: 15-7210)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243755&isnumber=7243713

 

Kapur, P.K.; Yadavali, V.S.S.; Shrivastava, A.K., “A Comparative Study of Vulnerability Discovery Modeling and Software Reliability Growth Modeling,” in Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE), 2015 International Conference on, vol., no., pp. 246–251, 25–27 Feb. 2015. doi:10.1109/ABLAZE.2015.7155000
Abstract: Technological advancements are reaching greater heights with each passing day, and information technology is developing at an especially rapid pace. It has evolved in such a way that we are all interconnected through some medium, such as the Internet or telecommunications, and technical advancement now affects everyone’s day-to-day life. With this increasing dependency on software systems, security is a major challenge. The security problem is becoming critical due to the presence of malicious actors and has attracted many researchers to identifying the major attributes of security. The security attribute considered in this paper is software vulnerability. A software security vulnerability is a weakness in a software product that could allow an attacker to compromise the integrity, availability, or confidentiality of that product. In the past, vulnerabilities have been reported in various operating systems, and both developers and users must devote significant resources to mitigating the associated risk. Recently, a few researchers have investigated the potential number of vulnerabilities in software using quantitative approaches. In this paper we analytically describe existing models and compare them with our proposed models, evaluating all of them on actual data for various software systems. Our proposed models capture the discovery process better than the existing discovery models. It is also shown that some existing SRGMs can be used to predict security vulnerabilities in software.
Keywords: program verification; risk management; security of data; software reliability; Internet; SRGM; information technology; model evaluation; product availability; product confidentiality; product integrity; quantitative approach; risk mitigation; security attributes; security problem; software product; software reliability growth modeling; software security vulnerability prediction; software system dependency; technical advancement; technological advancement; telecommunication; vulnerability discovery modeling; Analytical models; Computational modeling; Mathematical model; Security; Software reliability; Software systems; Non Homogeneous Poisson Process (NHPP); Software Reliability Growth Model (SRGM); Software Security; Vulnerability; Vulnerability Discovery Model (VDM) (ID#: 15-7211)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155000&isnumber=7154914
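
One widely cited VDM of the kind compared in such studies is the Alhazmi-Malaiya logistic model, which predicts cumulative discovered vulnerabilities as an S-curve. A sketch fitting it with scipy; the data here are synthetic stand-ins, since the paper’s datasets are not reproduced:

    import numpy as np
    from scipy.optimize import curve_fit

    def aml(t, A, B, C):
        # Alhazmi-Malaiya logistic vulnerability discovery model:
        # cumulative vulnerabilities Omega(t) = B / (B*C*exp(-A*B*t) + 1)
        return B / (B * C * np.exp(-A * B * t) + 1.0)

    # Synthetic cumulative-discovery data standing in for a real
    # vulnerability-database feed (illustration only).
    t = np.arange(1, 61, dtype=float)                # months since release
    truth = aml(t, 1.4e-3, 120.0, 0.8)
    obs = truth + np.random.default_rng(5).normal(0, 2, t.size)

    popt, _ = curve_fit(aml, t, obs, p0=[1e-3, 100.0, 1.0],
                        bounds=(0, np.inf), maxfev=20000)
    print("fitted A, B, C:", popt)
    print("predicted eventual total vulnerabilities:", popt[1])

The fitted saturation parameter B is the model’s estimate of the total vulnerabilities that will eventually be discovered, which is the quantity such comparisons evaluate against SRGM-based predictions.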

 

D’Lima, N.; Mittal, J., “Password Authentication Using Keystroke Biometrics,” in Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, vol., no., pp. 1–6, 15–17 Jan. 2015. doi:10.1109/ICCICT.2015.7045681
Abstract: The majority of applications prompt for a username and password. Passwords are recommended to be unique, long, complex, alphanumeric, and non-repetitive. The very properties that make passwords secure may prove to be their point of weakness: a complex password is a challenge to remember, so a user may choose to record it, compromising the password’s security and negating its advantage. An alternative security method is keystroke biometrics, which uses a user’s natural typing pattern for authentication. This paper proposes a new method for reducing error rates and creating a robust technique. The new method makes use of multiple sensors to obtain information about a user. An artificial neural network is used to model a user’s behavior as well as to retrain the system. An alternate user verification mechanism is used in case a user is unable to match their typing pattern.
Keywords: authorisation; biometrics (access control); neural nets; pattern matching; artificial neural network; error rates; keystroke biometrics; password authentication; password security; robust security technique; typing pattern matching; user behavior; user natural typing pattern; user verification mechanism; Classification algorithms; Error analysis; Europe; Hardware; Monitoring; Support vector machines; Text recognition; Artificial Neural Networks; Authentication; Keystroke Biometrics; Password; Security (ID#: 15-7212)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045681&isnumber=7045627
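The abstract does not spell out which timing features a keystroke-biometric verifier consumes; the sketch below shows the conventional dwell-time and flight-time extraction such systems typically use, with a toy distance check standing in for the paper's neural network. The event format and acceptance threshold are illustrative assumptions, not the paper's.

# Minimal sketch: keystroke timing features from (key, press_ms, release_ms) events.
def timing_features(events):
    dwell = [r - p for _, p, r in events]              # how long each key is held
    flight = [events[i + 1][1] - events[i][2]          # release-to-next-press gaps
              for i in range(len(events) - 1)]
    return dwell + flight

enrolled = timing_features([("s", 0, 95), ("o", 180, 260), ("s", 340, 430)])
attempt  = timing_features([("s", 0, 90), ("o", 200, 285), ("s", 371, 468)])

# Toy mean-absolute-difference rule; a trained ANN would replace this.
dist = sum(abs(a - b) for a, b in zip(enrolled, attempt)) / len(enrolled)
print("accepted" if dist < 25 else "fall back to alternate verification")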

 

Maler, Eve, “Extending the Power of Consent with User-Managed Access: A Standard Architecture for Asynchronous, Centralizable, Internet-Scalable Consent,” in Security and Privacy Workshops (SPW), 2015 IEEE, vol., no., pp. 175–179, 21–22 May 2015. doi:10.1109/SPW.2015.34
Abstract: The inherent weaknesses of existing notice-and-consent paradigms of data privacy are becoming clear, not just to privacy practitioners but to ordinary online users as well. The corporate privacy function is a maturing discipline, but greater maturity often equates just to greater regulatory compliance. At a time when many users are disturbed by the status quo, new trends in web security and data sharing are demonstrating useful new consent paradigms. Benefiting from these trends, the emerging standard User-Managed Access (UMA) allows apps to extend the power of consent. UMA corrects a power imbalance that favors companies over individuals, enabling privacy solutions that move beyond compliance.
Keywords: Internet; authorisation; data privacy; Internet-scalable consent; UMA; Web security; asynchronous consent; centralizable consent; corporate privacy function; data sharing; notice-and-consent paradigms; user-managed access; Authorization; Automation; Data privacy; Market research; Privacy; Servers; Standards; privacy; consent; authorization; permission; access control; security; personal data; digital identity; Internet of Things (ID#: 15-7213)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163222&isnumber=7163193

 

Zeb, K.; Baig, O.; Asif, M.K., “DDoS Attacks and Countermeasures in Cyberspace,” in Web Applications and Networking (WSWAN), 2015 2nd World Symposium on, vol., no., pp. 1–6, 21–23 March 2015. doi:10.1109/WSWAN.2015.7210322
Abstract: In cyberspace, availability of resources is a key component of cyber security, along with confidentiality and integrity. The Distributed Denial of Service (DDoS) attack has become one of the major threats to the availability of resources in computer networks and remains a challenging problem on the Internet. In this paper, we present a detailed study of DDoS attacks on the Internet, specifically attacks exploiting protocol vulnerabilities in the TCP/IP model, their countermeasures, and various DDoS attack mechanisms. We thoroughly review DDoS defenses and analyze the strengths and weaknesses of the different proposed mechanisms.
Keywords: Internet; computer network security; transport protocols; DDoS attack mechanisms; Internet; TCP-IP model; computer networks; cyber security; cyberspace; distributed denial of service attacks; Computer crime; Filtering; Floods; IP networks; Internet; Protocols; Servers; Cyber security; Cyber-attack; Cyberspace; DDoS Defense; DDoS attack; Mitigation; Vulnerability (ID#: 15-7214)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210322&isnumber=7209078

 

Lounis, O.; Malika, B., “A New Vision for Intrusion Detection System in Information Systems,” in Science and Information Conference (SAI), 2015, vol., no., pp. 1352–1356, 28–30 July 2015. doi:10.1109/SAI.2015.7237318
Abstract: In recent years, information systems have seen a dramatic increase in attacks, and intrusion detection systems have become the mainstream of information assurance. While firewalls and the two basic systems of cryptography (symmetric and asymmetric) do provide some protection, they do not provide complete protection and still need to be supplemented by an intrusion detection system. Most work on IDSs is based on two approaches, the anomaly approach and the misuse approach, and each of them, whether implemented in a HIDS or a NIDS, has weaknesses. To address these limitations, we propose a new way of looking at intrusion detection systems. This vision can be described as follows: “Instead of capturing and analyzing each attack separately (keeping several signatures for each type of attack, given that there are various attacks and several variants of each), or analyzing the log files of the system, why not look at the consequences of these attacks and try to ensure that the security properties affected by these attacks cannot be compromised?” To do so, we will use the language developed by Jonathan Rouzaud-Cornabas to model the system’s entities to be protected. This paper presents only the idea on which we will base the design of an effective IDS for the operating system running in user space.
Keywords: cryptography; firewalls; information systems; operating systems (computers); IDS; anomaly approach; information assurance; intrusion detection system; misuse approach; operating system; security properties; Access control; Computational modeling; Computers; Databases; Intrusion detection; Operating systems; realtime system; security (ID#: 15-7215)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237318&isnumber=7237120

 

Ahmadi, Mohammad; Chizari, Milad; Eslami, Mohammad; Golkar, Mohammad Javad; Vali, Mostafa, “Access Control and User Authentication Concerns in Cloud Computing Environments,” in Telematics and Future Generation Networks (TAFGEN), 2015 1st International Conference on, vol., no., pp. 39–43, 26–28 May 2015. doi:10.1109/TAFGEN.2015.7289572
Abstract: Cloud computing is a new kind of service that has seen rapid growth in the IT industry in recent years. Despite the several advantages of this technology, there are issues such as security and privacy that affect the reliability of cloud computing models. Access control and user authentication are the most important security issues in cloud computing. This survey therefore provides overall information about these security concerns, along with specific details about the issues identified in access control and user authentication research. The benefits and disadvantages of cloud computing are explained in the first part. The second part reviews several access control and user authentication algorithms, identifying the benefits and weaknesses of each. The main aim of this survey is to consider the limitations and problems of previous research in the area in order to identify the most challenging issues in access control and user authentication algorithms.
Keywords: Access control; Authentication; Cloud computing; Computational modeling; Encryption; Servers; Access Control; Cloud Computing; Privacy; Security; User Authentication (ID#: 15-7216)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7289572&isnumber=7289553

 

Chakhchoukh, Y.; Ishii, H., “Cyber Attacks Scenarios on the Measurement Function of Power State Estimation,” in American Control Conference (ACC), 2015, pp. 3676–3681, 1–3 July 2015. doi:10.1109/ACC.2015.7171901
Abstract: Cyber security provided by robust power system state estimation (SE) methods is evaluated. Tools and estimators developed in robust statistics theory are considered, and the least trimmed squares (LTS) based diagnostic is studied. The impact of cyber attacks on the Jacobian matrix or observation function, which generate coordinated outliers known as leverage points, is assessed. Two attack scenarios are proposed: the first generates a masked attack resulting in a contaminated state uncontrolled by the intruder, while the second leads to a stealthy attack with an estimated targeted state fixed by the same intruder. Intervals for the necessary number of attacks for each scenario, and their positions, are shown. Theoretical derivations based on a projection framework highlight the conditions that minimize detection by robust SE approaches developed from the regression model assumption; more specifically, affine equivariant robust estimators present a weakness toward such intrusions. Simulations on IEEE power system test beds illustrate the behavior of the robust LTS with decomposition, and of the popular detection methods that analyze the weighted least squares (WLS) residuals, when subject to both scenarios’ attacks.
Keywords: least mean squares methods; power system state estimation; regression analysis; IEEE power system test beds; Jacobian matrix; affine equivariant robust estimators; coordinated outliers; cyber attacks scenarios; cyber security; least trimmed squares based diagnostic; leverage points; observation function; regression model assumption; robust power systems state estimation; robust statistics theory; stealthy attack; weighted least squares residuals; Electric breakdown; Jacobian matrices; Least squares approximations; Power systems; Redundancy; Robustness; Topology (ID#: 15-7217)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7171901&isnumber=7170700

 

Ergün, Salih, “Cryptanalysis of a Double Scroll Based ‘True’ Random Bit Generator,” in Circuits and Systems (MWSCAS), 2015 IEEE 58th International Midwest Symposium on, vol., no., pp. 1–4, 2–5 Aug. 2015. doi:10.1109/MWSCAS.2015.7282066
Abstract: An algebraic cryptanalysis of a “true” random bit generator (RBG) based on a double-scroll attractor is provided. An attack system is proposed to analyze the security weaknesses of the RBG. Convergence of the attack system is proved using synchronization of chaotic systems with unknown parameters, known as auto-synchronization. All secret parameters of the RBG are recovered from a scalar time series via auto-synchronization, where the only other information available is the structure of the RBG and the output bit sequence it produces. Simulation and numerical results verifying the feasibility of the attack system are given. The RBG does not pass the NIST 800-22 statistical test suite, its next bit can be predicted, and its output bit stream can be reproduced.
Keywords: Chaotic communication; Generators; Oscillators; Random number generation; Synchronization (ID#: 15-7218)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282066&isnumber=7281994

 

Naderi-Afooshteh, Abbas; Nguyen-Tuong, Anh; Bagheri-Marzijarani, Mandana; Hiser, Jason D.; Davidson, Jack W., “Joza: Hybrid Taint Inference for Defeating Web Application SQL Injection Attacks,” in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, vol., no., pp. 172–183, 22–25 June 2015. doi:10.1109/DSN.2015.13
Abstract: Despite years of research on taint-tracking techniques to detect SQL injection attacks, taint tracking is rarely used in practice because it suffers from high performance overhead, intrusive instrumentation, and other deployment issues. Taint inference techniques address these shortcomings by obviating the need to track the flow of data during program execution by inferring markings based on either the program’s input (negative taint inference), or the program itself (positive taint inference). We show that existing taint inference techniques are insecure by developing new attacks that exploit inherent weaknesses of the inferencing process. To address these exposed weaknesses, we developed Joza, a novel hybrid taint inference approach that exploits the complementary nature of negative and positive taint inference to mitigate their respective weaknesses. Our evaluation shows that Joza prevents real-world SQL injection attacks, exhibits no false positives, incurs low performance overhead (4%), and is easy to deploy.
Keywords: Approximation algorithms; Databases; Encoding; Inference algorithms; Optimization; Payloads; Security; SQL injection; Taint inference; Taint tracking; Web application security (ID#: 15-7219)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266848&isnumber=7266818
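As a rough illustration of the negative half of the hybrid approach, the sketch below infers taint by matching request substrings inside the outgoing SQL string and blocks queries whose tainted bytes form SQL structure rather than plain literal data. The tokenization is deliberately crude and the patterns are assumptions for illustration; Joza's actual algorithms are not reproduced here.

# Minimal sketch of negative taint inference, under the assumptions above.
import re

def tainted_spans(query, user_input, min_len=4):
    # Negative inference: substrings of the input found in the query are
    # presumed attacker-controlled (an approximation, hence "inference").
    if len(user_input) < min_len:
        return []
    return [(m.start(), m.end()) for m in re.finditer(re.escape(user_input), query)]

def is_attack(query, user_input):
    structure = re.compile(r"('|--|;|\bOR\b|\bUNION\b|\bSELECT\b)", re.I)
    return any(structure.search(query[s:e]) for s, e in tainted_spans(query, user_input))

for inp in ("name", "x' OR '1'='1"):
    q = f"SELECT * FROM users WHERE name = '{inp}'"
    print(repr(inp), "->", "blocked" if is_attack(q, inp) else "allowed")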

 

Aditya, S.; Mittal, V., “Multi-Layered Crypto Cloud Integration of oPass,” in Computer Communication and Informatics (ICCCI), 2015 International Conference on, vol., no., pp. 1–7, 8–10 Jan. 2015. doi:10.1109/ICCCI.2015.7218114
Abstract: Text passwords are one of the most popular forms of user authentication, owing to their convenience and simplicity. Still, passwords are susceptible to being stolen and compromised under various threats and weaknesses. In order to overcome these problems, a protocol called oPass was proposed. We performed a cryptanalysis of it and found four kinds of attacks that could be mounted against it: abuse of the SMS service, attacks on the oPass communication links, unauthorized intruder access using the master password, and network attacks on an untrusted web browser. One of these is impersonation of the user. To overcome these problems in the cloud environment, a protocol based on oPass is proposed to implement multi-layered crypto-cloud integration that can handle this kind of attack.
Keywords: cloud computing; cryptography; SMS service; Short Messaging Service; cloud environment; cryptanalysis; master password; multilayered crypto cloud integration; oPass communication links; oPass protocol; text password; user authentication; user impersonation; Authentication; Cloud computing; Encryption; Protocols; Servers; Cloud; Digital Signature; Impersonation; Network Security; RSA; SMS; oPass (ID#: 15-7220)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218114&isnumber=7218046

 

You, I.; Leu, F., “Comments on ‘SPAM: A Secure Password Authentication Mechanism for Seamless Handover in Proxy Mobile IPv6 Networks’,” in Systems Journal, IEEE, vol. PP, no. 99, pp. 1–4. doi:10.1109/JSYST.2015.2477415
Abstract: Recently, Chuang et al. introduced a secure password authentication mechanism for seamless handover in Proxy Mobile IPv6 (SPAM). SPAM aims to provide high-security properties while optimizing handover latency and computation overhead. However, it is still vulnerable to replay and malicious insider attacks, as well as to the compromise of a single node. This paper formally and precisely analyzes SPAM using the Burrows–Abadi–Needham logic and then presents its weaknesses and the related attacks.
Keywords: Authentication; Handover; Manganese; Mobile communication; Unsolicited electronic mail; Burrows–Abadi–Needham (BAN) logic; Proxy Mobile IPv6 (PMIPv6); fast handover security; formal security analysis (ID#: 15-7221)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7286744&isnumber=4357939

 

Eldib, H.; Chao Wang; Taha, M.; Schaumont, P., “Quantitative Masking Strength: Quantifying the Power Side-Channel Resistance of Software Code,” in Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. 34, no. 10, pp. 1558–1568, Oct. 2015. doi:10.1109/TCAD.2015.2424951
Abstract: Many commercial systems in the embedded space have shown weakness against power analysis-based side-channel attacks in recent years. Random masking is a commonly used technique for removing the statistical dependency between the sensitive data and the side-channel information. However, the process of designing masking countermeasures is both labor intensive and error prone. Furthermore, there is a lack of formal methods for quantifying the actual strength of a countermeasure implementation. Security design errors may therefore go undetected until the side-channel leakage is physically measured and evaluated. We show a better solution based on static analysis of C source code. We introduce the new notion of quantitative masking strength (QMS) to estimate the amount of information leakage from software through side channels. Once the user has identified the sensitive variables, the QMS can be automatically computed from the source code of a countermeasure implementation. Our experiments, based on measurement on real devices, show that the QMS accurately reflects the side-channel resistance of the software implementation.
Keywords: safety-critical software; security of data; source code (software); statistical analysis; C source code static analysis; QMS; formal methods; power analysis-based side-channel attacks; quantitative masking strength; random masking; security design errors; side-channel information; side-channel leakage; software code power side-channel resistance; statistical dependency; Algorithm design and analysis; Analytical models; Cryptography; Random variables; Resistance; Software; Software algorithms; Countermeasure; Verification; countermeasure; differential power analysis; differential power analysis (DPA); quantitative masking strength; quantitative masking strength (QMS); satisfiability modulo theory (SMT) solver; security; verification (ID#: 15-7222)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7090986&isnumber=7271134
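The paper computes QMS statically from C source code with solver-based analysis; the Monte Carlo check below is only a conceptual stand-in for the underlying idea, namely that a well-masked intermediate's distribution should not depend on the secret. All values and the leakage model are illustrative assumptions.

# Conceptual sketch only; not the paper's QMS algorithm.
import random
from collections import Counter

def leak_distribution(secret_bit, masked, trials=100_000):
    counts = Counter()
    for _ in range(trials):
        m = random.getrandbits(1)                        # fresh random mask
        inter = (secret_bit ^ m) if masked else secret_bit
        counts[inter] += 1
    return [counts[v] / trials for v in (0, 1)]

for masked in (True, False):
    d0, d1 = leak_distribution(0, masked), leak_distribution(1, masked)
    gap = max(abs(a - b) for a, b in zip(d0, d1))        # 0 = perfectly masked
    print(f"masked={masked}: distribution gap across secrets = {gap:.3f}")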

 

Douziech, P.-E.; Curtis, B., “Cross-Technology, Cross-Layer Defect Detection in IT Systems — Challenges and Achievements,” in Complex Faults and Failures in Large Software Systems (COUFLESS), 2015 IEEE/ACM 1st International Workshop on, vol., no., pp. 21–26, 23–23 May 2015. doi:10.1109/COUFLESS.2015.11
Abstract: Although critical for delivering resilient, secure, efficient, and easily changed IT systems, cross-technology, cross-layer quality defect detection in IT systems still faces hurdles. Two hurdles involve the absence of an absolute target architecture and the difficulty of apprehending multi-component anti-patterns. However, static analysis and measurement technologies are now able both to consume contextual input and to detect system-level anti-patterns. This paper provides several examples of the information required to detect system-level anti-patterns, drawing on the Common Weakness Enumeration repository maintained by MITRE Corp.
Keywords: program diagnostics; program testing; software architecture; software quality; IT systems; MITRE Corp; common weakness enumeration repository; cross-layer quality defect detection; cross-technology defect detection; measurement technologies; multicomponent antipatterns; static analysis; system-level antipattern detection; Computer architecture; Java; Organizations; Reliability; Security; Software; Software measurement; CWE; IT systems; software anti-patterns; software pattern detection; software quality measures; structural quality (ID#: 15-7223)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7181478&isnumber=7181467

 

Monica Catherine S; George, Geogen, “S-Compiler: A Code Vulnerability Detection Method,” in Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on, vol., no., pp. 1–4, 24–25 Jan. 2015. doi:10.1109/EESCO.2015.7254018
Abstract: Nowadays, security breaches are increasing greatly in number. They are one of the major threats faced by most organisations and usually lead to massive losses. The major cause of these breaches is often vulnerabilities in software products. There are many tools available to detect such vulnerabilities, but detecting and correcting vulnerabilities during the development phase is more beneficial. Though there are many standard secure coding practices to follow in the development phase, software developers fail to utilize them, which leads to an unsecured end product. The difficulty of manually analysing vulnerabilities in source code is what drives the evolution of automated analysis tools. Static and dynamic analysis are the two complementary methods used to detect vulnerabilities in the development phase. Static analysis scans the source code, which eliminates the need to execute the code, but it produces many false positives and false negatives. Dynamic analysis, on the other hand, tests the code by running it against test cases. The proposed approach integrates static and dynamic analysis. This eliminates the false-positive and false-negative problems of existing practices and helps developers correct their code in the most efficient way. It deals with common buffer overflow vulnerabilities and vulnerabilities from the Common Weakness Enumeration (CWE). The whole scenario is implemented as a web interface.
Keywords: source coding; telecommunication security; S-compiler; automated analysis tools; code vulnerability detection method; common weakness enumeration; false negatives; false positives; source code; Buffer overflows; Buffer storage; Encoding; Forensics; Information security; Software; Buffer overflow; Dynamic analysis; Secure coding; Static analysis (ID#: 15-7224)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7254018&isnumber=7253613
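The static half of such a tool can be pictured as a scan of C source for calls tied to classic buffer-overflow CWEs. The sketch below is purely illustrative (the pattern table and sample snippet are assumptions), and the paper's tool additionally runs dynamic tests to prune the false positives such a scan produces.

# Minimal sketch of a CWE-oriented static scan, assuming a toy pattern table.
import re

DANGEROUS = {"strcpy": "CWE-120", "gets": "CWE-242", "sprintf": "CWE-120"}

def scan_c_source(src):
    findings = []
    for lineno, line in enumerate(src.splitlines(), 1):
        for func, cwe in DANGEROUS.items():
            if re.search(rf"\b{func}\s*\(", line):      # crude call-site heuristic
                findings.append((lineno, func, cwe))
    return findings

sample = 'void f(char *in) {\n  char buf[8];\n  strcpy(buf, in);\n}\n'
for lineno, func, cwe in scan_c_source(sample):
    print(f"line {lineno}: {func}() call, potential {cwe}")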

 

Kaynar, K.; Sivrikaya, F., “Distributed Attack Graph Generation,” in Dependable and Secure Computing, IEEE Transactions on, vol. PP, no. 99, Apr. 2015, pp. 1-1. doi:10.1109/TDSC.2015.2423682
Abstract: Attack graphs show possible paths that an attacker can use to intrude into a target network and gain privileges through series of vulnerability exploits. The computation of attack graphs suffers from the state explosion problem occurring most notably when the number of vulnerabilities in the target network grows large. Parallel computation of attack graphs can be utilized to attenuate this problem. When employed in online network security evaluation, the computation of attack graphs can be triggered with the correlated intrusion alerts received from sensors scattered throughout the target network. In such cases, distributed computation of attack graphs becomes valuable. This article introduces a parallel and distributed memory-based algorithm that builds vulnerability-based attack graphs on a distributed multi-agent platform. A virtual shared memory abstraction is proposed to be used over such a platform, whose memory pages are initialized by partitioning the network reachability information. We demonstrate the feasibility of parallel distributed computation of attack graphs and show that even a small degree of parallelism can effectively speed up the generation process as the problem size grows. We also introduce a rich attack template and network model in order to form chains of vulnerability exploits in attack graphs more precisely.
Keywords: Buildings; Computational modeling; Databases; Explosions; Search problems; Security; Software; attack graph; distributed computing; exploit; reachability; vulnerability; weakness (ID#: 15-7225)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087377&isnumber=4358699
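In outline, monotonic attack-graph generation is a fixed-point search: starting from the attacker's initial privileges, repeatedly apply any exploit whose preconditions are satisfied and record the resulting edges. The sketch below shows only that core loop on a hypothetical exploit table; the paper's distributed partitioning of reachability information across virtual shared-memory pages is omitted.

# Minimal single-machine sketch, assuming a hypothetical exploit table.
# exploit: (name, preconditions, postcondition)
exploits = [
    ("ftp_rhosts", {"reach(web)"},             "trust(web)"),
    ("rsh_login",  {"trust(web)"},             "user(web)"),
    ("local_bof",  {"user(web)"},              "root(web)"),
    ("db_exploit", {"root(web)", "reach(db)"}, "root(db)"),
]

facts = {"reach(web)", "reach(db)"}   # attacker's starting privileges
edges = []
changed = True
while changed:                        # fixed-point iteration
    changed = False
    for name, pre, post in exploits:
        if pre <= facts and post not in facts:
            facts.add(post)
            edges.extend((p, name, post) for p in pre)
            changed = True

for pre, name, post in edges:
    print(f"{pre} --[{name}]--> {post}")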

 

Kumar, K.S.; Chanamala, R.; Sahoo, S.R.; Mahapatra, K.K., “An Improved AES Hardware Trojan Benchmark to Validate Trojan Detection Schemes in an ASIC Design Flow,” in VLSI Design and Test (VDAT), 2015 19th International Symposium on, vol., no., pp. 1–6, 26–29 June 2015. doi:10.1109/ISVDAT.2015.7208064
Abstract: The semiconductor design industry has globalized, and it is economical for chip makers to obtain design, manufacturing, and testing services from different geographies. Globalization raises the question of trust in an integrated circuit. Every chip maker must ensure there is no malicious inclusion in the design, referred to as a hardware Trojan. A malicious inclusion can be introduced by an in-house adversarial design engineer, through an Intellectual Property (IP) core supplied by a third-party vendor, or at an untrusted manufacturing foundry. Several researchers have proposed hardware Trojan detection schemes in recent years. Trust-Hub provides Trojan benchmark circuits to verify the strength of Trojan detection techniques. In this work, our focus is on the Advanced Encryption Standard (AES) Trojan benchmarks, AES being a block cipher highly vulnerable to Trojan attacks. All 21 benchmarks available in Trust-Hub are analyzed against standard coverage-driven verification practices, synthesis, DFT insertion, and ATPG simulations. The analysis reveals that 19 AES benchmarks are weak and that their Trojan inclusions can be detected using standard procedures in the ASIC design flow. Based on the weaknesses observed, design modifications are proposed to improve the quality of the Trojan benchmarks. The strength of the proposed Trojan benchmarks is better than that of the existing circuits, and their original features are preserved after modification.
Keywords: application specific integrated circuits; cryptography; integrated circuit design; AES hardware Trojan benchmark; ASIC design flow; Trojan detection schemes; advanced encryption standard; intellectual property core; malicious inclusion; Benchmark testing; Discrete Fourier transforms; Hardware; Leakage currents; Logic gates; Shift registers; Trojan horses; AES; ASIC; Hardware Trojan; Security; Trust-Hub (ID#: 15-7226)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7208064&isnumber=7208044


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Work Factor Metrics 2015

 

 
SoS Logo

Work Factor Metrics

2015


It is difficult to measure the relative strengths and weaknesses of modern information systems when the safety, security, and reliability of those systems must be protected. Developers often apply security mechanisms to systems without the ability to evaluate the impact of those mechanisms on the overall system, and few efforts are directed at actually measuring the quantifiable impact of information assurance technology on the potential adversary. The research cited here describes analytic tools, methods, and processes for measuring and evaluating software, networks, and authentication; the work was presented in 2015.
 



Murphy, David; Darabi, Hooman; Hao Wu, “25.3 A VCO with Implicit Common-Mode Resonance,” in Solid-State Circuits Conference (ISSCC), 2015 IEEE International, vol., no., pp. 1–3, 22–26 Feb. 2015. doi:10.1109/ISSCC.2015.7063116
Abstract: CMOS VCO performance metrics have not improved significantly over the last decade. Indeed, the best VCO Figure of Merit (FOM) currently reported was published by Hegazi back in 2001 [1]. That topology, shown in Fig. 25.3.1(a), employs a second resonant tank at the source terminals of the differential pair that is tuned to twice the LO frequency (FLO). The additional tank provides a high common-mode impedance at 2×FLO, which prevents the differential pair transistors from conducting in triode and thus prevents the degradation of the oscillator’s quality factor (Q). As a consequence, the topology can achieve an oscillator noise factor (F), defined as the ratio of the total oscillator noise to the noise contributed by the tank, of just below 2, which is equal to the fundamental limit of a cross-coupled LC CMOS oscillator [2]. There are, however, a few drawbacks to Hegazi’s VCO: (1) the additional area required for the tail inductor; (2) the routing complexity demanded of the tail inductor, which can degrade its Q and limit its effectiveness; and (3) for oscillators with wide tuning ranges, the need to independently tune the second inductor, which again can degrade its Q. Moreover, it can be shown that the common-mode impedance of the main tank at 2×FLO also has a significant effect on the oscillator’s performance, which if not properly modeled can lead to disagreement between simulation and measurement, particularly in terms of the flicker noise corner. To mitigate these issues, this work introduces a new oscillator topology that resonates the common-mode of the circuit at 2×FLO but does not require an additional inductor.
Keywords: CMOS integrated circuits; network topology; voltage-controlled oscillators; CMOS VCO; differential pair transistors; figure of merit; implicit common-mode resonance; oscillator noise factor; oscillator topology; tail inductor; 1/f noise; Inductors; Phase noise; Voltage-controlled oscillators (ID#: 15-7457)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7063116&isnumber=7062838

 

Jie Li; Veeraraghavan, M.; Emmerson, S.; Russell, R.D., “File Multicast Transport Protocol (FMTP),” in Cluster, Cloud and Grid Computing (CCGrid), 2015 15th IEEE/ACM International Symposium on, vol., no., pp. 1037–1046, 4–7 May 2015. doi:10.1109/CCGrid.2015.121
Abstract: This paper describes a new reliable transport protocol designed to run on top of a multicast network service for delivery of continuously generated files. The motivation for this work is to support scientific computing Grid applications that require file transfers between geographically distributed datacenters. For example, atmospheric research scientists at various universities subscribe to real-time meteorology data that is distributed by the University Corporation for Atmospheric Research (UCAR). UCAR delivers 30 different feed types, such as radar data and satellite imagery, to over 240 institutions. The current solution uses an application-layer (AL) multicast tree with unicast TCP connections between the AL servers. Recently, Internet2 and other research-and-education networks have deployed a Layer-2 service using OpenFlow/Software Defined Network (SDN) technologies. Our new transport protocol, FMTP, is designed to run on top of a multipoint Layer-2 topology. A key design aspect of FMTP is the tradeoff between the file-delivery throughput of fast receivers and the robustness (a measure of successful reception) of slow receivers. A configurable parameter, called the retransmission timeout factor, is used to trade off these two metrics. In a multicast setting, it is difficult to achieve full reliability without sacrificing throughput under moderate-to-high loads, and throughput is important in scientific computing grids. A backup solution allows receivers to use unicast TCP connections to request files that were not received completely via multicast. For a given load and a multicast group of 30 receivers, robustness increased significantly from 81.4 to 97.5% when the retransmission timeout factor was increased from 10 to 50, with a small drop in average throughput from 85 to 82.8 Mbps.
Keywords: geophysics computing; grid computing; multicast protocols; software defined networking; telecommunication network topology; transport protocols; AL multicast tree; AL servers; FMTP; Internet2; OpenFlow technology; SDN technology; UCAR; University Corporation for Atmospheric Research; application-layer multicast tree; atmospheric research scientists; configurable parameter; continuously generated file delivery; fast receivers; file multicast transport protocol; file request; file-delivery throughput; geographically distributed datacenters; moderate-to-high loads; multicast network service; multipoint Layer-2 topology; radar data; real-time meteorology data; research-and-education networks; retransmission timeout factor; satellite imagery; scientific computing grid applications; slow receivers; software defined network technology; unicast TCP connections; Multicast communication; Network topology; Receivers; Reliability; Throughput; Transport protocols; Unicast; Data distribution in scientific grids; interdatacenter file movement; reliable multicast; transport protocols (ID#: 15-7458)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7152590&isnumber=7152455

 

Thuseethan, S.; Vasanthapriyan, S., “Spider-Web Topology: A Novel Topology for Parallel and Distributed Computing,” in Information and Communication Technology (ICoICT), 2015 3rd International Conference on, vol., no., pp. 34–38, 27–29 May 2015. doi:10.1109/ICoICT.2015.7231392
Abstract: This paper is mainly concerned with static interconnection networks and their topological properties and metrics, particularly for existing topologies and the proposed one. The interconnection network topology is a key factor in determining the characteristics of parallel computers; a suitable topology increases efficiency in performing tasks. Numerous topologies are available, with various characteristics that need improvement. In this research we analyzed existing static interconnection topologies and developed a novel topology by minimizing some degradation factors of the topological properties. The novel Spider-web topology is proposed and shows a considerable advantage over the existing topologies. Further, one of the major aims of this work is a comparative study of the existing static interconnection networks against this novel topology through analysis of their properties and metrics. Both the theoretical and the experimental comparisons conducted here show that the proposed topology performs better than the existing topologies.
Keywords: interconnected systems; parallel processing; topology; distributed computing; experimental-based comparison; interconnection network topology; parallel computing; spider-web topology; static interconnection network; theoretical-based comparison; Computers; Multiprocessor interconnection; Network topology; Parallel processing; Routing; Sorting; Topology; Interconnection Network; Parallel Computing; Topology (ID#: 15-7459)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7231392&isnumber=7231384

 

Tsai, T.J.; Friedland, G.; Anguera, X., “An Information-Theoretic Metric of Fingerprint Effectiveness,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, vol., no., pp. 340–344, 19–24 April 2015. doi:10.1109/ICASSP.2015.7177987
Abstract: Audio fingerprinting refers to the process of extracting a robust, compact representation of audio which can be used to uniquely identify an audio segment. Works in the audio fingerprinting literature generally report results using system-level metrics. Because these systems are usually very complex, the overall system-level performance depends on many different factors. So, while these metrics are useful in understanding how well the entire system performs, they are not very useful in knowing how good or bad the fingerprint design is. In this work, we propose a metric of fingerprint effectiveness that decouples the effect of other system components such as the search mechanism or the nature of the database. The metric is simple, easy to compute, and has a clear interpretation from an information theory perspective. We demonstrate that the metric correlates directly with system-level metrics in assessing fingerprint effectiveness, and we show how it can be used in practice to diagnose the weaknesses in a fingerprint design.
Keywords: audio coding; audio signal processing; copy protection; signal representation; audio fingerprinting literature; audio representation extraction; audio segment; fingerprint effectiveness; information theoretic metric; search mechanism; system level metrics; system level performance; Accuracy; Databases; Entropy; Information rates; Noise measurement; Signal to noise ratio; audio fingerprint; copy detection (ID#: 15-7460)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177987&isnumber=7177909
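One simple information-theoretic reading of fingerprint effectiveness is the empirical entropy of fingerprint chunks over a corpus: low entropy means many audio segments map to similar fingerprints and cannot be uniquely identified. The sketch below is a generic illustration of that flavor of metric, not the paper's exact definition, and the bitstrings are hypothetical.

# Illustrative sketch only, under the assumptions stated above.
import math
from collections import Counter

def empirical_entropy_bits(fingerprints, k=8):
    # Entropy (in bits) of k-bit chunks pooled across all fingerprints.
    counts = Counter()
    for fp in fingerprints:
        for i in range(0, len(fp) - k + 1, k):
            counts[fp[i:i + k]] += 1
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

varied     = ["1011001001101100", "0010110111010010"]   # diverse chunks
degenerate = ["1111111111111111", "1111111111111110"]   # near-constant chunks
print("varied fingerprints:", empirical_entropy_bits(varied), "bits")
print("degenerate fingerprints:", empirical_entropy_bits(degenerate), "bits")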

 

Cheng Liu; So, H.K.-H., “Automatic Soft CGRA Overlay Customization for High-Productivity Nested Loop Acceleration on FPGAs,” in Field-Programmable Custom Computing Machines (FCCM), 2015 IEEE 23rd Annual International Symposium on, vol., no., pp. 101–101, 2–6 May 2015. doi:10.1109/FCCM.2015.57
Abstract: Compiling high-level compute-intensive kernels to FPGAs via an abstract overlay architecture has been demonstrated to be an effective way to improve designers’ productivity. However, achieving the desired performance and overhead constraints requires exploration of a complex design space involving multiple architectural parameters, which counteracts the benefit of utilizing an overlay as a productivity enhancer. In this work, a soft CGRA (SCGRA), which provides a unique opportunity to improve the power-performance of the resulting accelerators, is used as an FPGA overlay. Observing that the loop unrolling factor and SCGRA size typically have a monotonic impact on loop compute time, and that the loop performance benefit degrades as these two design parameters increase, we use a marginal performance revenue metric to prune the design space to a small feasible design space (FDS) and then perform an intensive customization on the FDS using analytical models of various design metrics such as power and overhead.
Keywords: field programmable gate arrays; logic design; FDS; FPGA; SCGRA size; abstract overlay architecture; accelerator power-performance; designer productivity; feasible design space; field programmable gate array; high-productivity nested loop acceleration; loop compute time; loop performance benefit; loop unrolling factor; productivity enhancer; soft CGRA overlay customization; Acceleration; Computer architecture; Field programmable gate arrays; Finite impulse response filters; Kernel; Measurement; Productivity; Design Productivity; FPGA Acceleration; Nested Loop Acceleration; Soft CGRA (ID#: 15-7461)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160051&isnumber=7160017

 

Syer, M.D.; Nagappan, M.; Adams, B.; Hassan, A.E., “Replicating and Re-Evaluating the Theory of Relative Defect-Proneness,” in Software Engineering, IEEE Transactions on, vol. 41, no. 2, pp. 176–197, Feb. 1 2015. doi:10.1109/TSE.2014.2361131
Abstract: A good understanding of the factors impacting defects in software systems is essential for software practitioners, because it helps them prioritize quality improvement efforts (e.g., testing and code reviews). Defect prediction models are typically built using classification or regression analysis on product and/or process metrics collected at a single point in time (e.g., a release date). However, current defect prediction models only predict if a defect will occur, but not when, which makes the prioritization of software quality improvements efforts difficult. To address this problem, Koru et al. applied survival analysis techniques to a large number of software systems to study how size (i.e., lines of code) influences the probability that a source code module (e.g., class or file) will experience a defect at any given time. Given that 1) the work of Koru et al. has been instrumental to our understanding of the size-defect relationship, 2) the use of survival analysis in the context of defect modelling has not been well studied and 3) replication studies are an important component of balanced scholarly debate, we present a replication study of the work by Koru et al. In particular, we present the details necessary to use survival analysis in the context of defect modelling (such details were missing from the original paper by Koru et al.). We also explore how differences between the traditional domains of survival analysis (i.e., medicine and epidemiology) and defect modelling impact our understanding of the size-defect relationship. Practitioners and researchers considering the use of survival analysis should be aware of the implications of our findings.
Keywords: program diagnostics; software quality; software reliability; defect modelling; relative defect-proneness theory; size-defect relationship; software system defects; source code module; survival analysis techniques; Analytical models; Data models; Hazards; Mathematical model; Measurement; Predictive models; Software; Cox Models; Cox models; Defect Modelling; Survival Analysis; Survival analysis; defect modeling (ID#: 15-7462)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6914599&isnumber=7038242
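For readers unfamiliar with the setup being replicated, the sketch below shows a Cox proportional-hazards fit relating module size to defect hazard, using the third-party lifelines library. The column names and values are hypothetical; the replication itself works with recurrent-event data from real systems and discusses further modelling details.

# Minimal sketch with hypothetical module data, not the paper's dataset.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "loc":      [120, 450, 80, 900, 300, 60, 700, 150],   # module size (lines of code)
    "duration": [400, 120, 500, 60, 200, 520, 90, 350],   # days under observation
    "defect":   [0, 1, 0, 1, 1, 0, 1, 0],                 # 1 = defect observed
})
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="defect")
cph.print_summary()    # the hazard ratio for 'loc' captures the size-defect link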

 

Chuanqi Tao; Gao, Jerry; Bixin Li, “A Model-Based Framework to Support Complexity Analysis Service for Regression Testing of Component-Based Software,” in Service-Oriented System Engineering (SOSE), 2015 IEEE Symposium on, vol., no., pp. 326–331, March 30 2015–April 3 2015. doi:10.1109/SOSE.2015.42
Abstract: Today, software components are widely used in software construction to reduce project cost and speed up the software development cycle. During software maintenance, various software change approaches can be used to realize specific change requirements of software components. Different change approaches lead to different regression testing complexity. Such complexity is one of the key contributors to the cost and effectiveness of software maintenance. However, there is a lack of research addressing regression testing complexity analysis services for software components. This paper proposes a framework to measure and analyze regression testing complexity based on a set of change and impact complexity models and metrics. The framework can provide services for complexity modeling, complexity factor classification, and regression testing complexity measurement. Initial study results indicate the proposed framework is feasible and effective in measuring the complexity of regression testing for component-based software.
Keywords: object-oriented programming; program testing; software maintenance; software metrics; complexity factor classification; complexity modeling; component-based software; model-based framework; project cost reduction; regression testing complexity analysis service; regression testing complexity measurements; software change approach; software components; software construction; software development cycle; Analytical models; Complexity theory; Computational modeling; Measurement; Software maintenance; Testing; component-based software regression testing; regression testing complexity; testing service (ID#: 15-7463)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133549&isnumber=7133490

 

Chun-Hung Liu; Beiyu Rong; Shuguang Cui, “Optimal Discrete Power Control in Poisson-Clustered Ad Hoc Networks,” in Wireless Communications, IEEE Transactions on, vol. 14, no. 1, pp. 138–151, Jan. 2015. doi:10.1109/TWC.2014.2334330
Abstract: Power control in a digital handset is practically implemented in a discrete fashion, and usually such a discrete power control (DPC) scheme is suboptimal. In this paper, we first show that in a Poisson-distributed ad hoc network, if DPC is properly designed with a certain condition satisfied, it can strictly outperform no power control (i.e., all users using the same constant power) in terms of average signal-to-interference ratio, outage probability, and spatial reuse. This motivates us to propose an N-layer DPC scheme in a wireless clustered ad hoc network, where transmitters and their intended receivers in circular clusters are characterized by a Poisson cluster process on the plane ℝ². The cluster of each transmitter is tessellated into N-layer annuli, with transmit power Pi adopted if the intended receiver is located in the ith layer. Two performance metrics, transmission capacity (TC) and outage-free spatial reuse factor, are redefined based on the N-layer DPC. The outage probability of each layer in a cluster is characterized and used to derive the optimal power scaling law Pi ∈ Θ(ηi^(−α/2)), with ηi the probability of selecting power Pi and α the path-loss exponent. Moreover, specific design approaches to optimize Pi and N based on ηi are also discussed. Simulation results indicate that the proposed optimal N-layer DPC significantly outperforms other existing power control schemes in terms of TC and spatial reuse.
Keywords: ad hoc networks; probability; stochastic processes; N-layer DPC scheme; Poisson-clustered ad hoc networks; TC; optimal discrete power control; outage probability; outage-free spatial reuse factor; transmission capacity; wireless clustered ad hoc network; Ad hoc networks; Fading; Interference; Power control; Receivers; Transmitters; Wireless communication; Discrete power control; Poisson cluster process; stochastic geometry (ID#: 15-7464)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847161&isnumber=7004094
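Taking the abstract's scaling law at face value, the sketch below assigns N-layer power levels proportional to ηi^(−α/2) and normalizes them to an average power budget. The layer-selection probabilities and the normalization rule are assumptions made for illustration, not values from the paper.

# Minimal sketch of the N-layer DPC power assignment, under the
# illustrative assumptions stated above.
alpha = 4.0                      # path-loss exponent
eta = [0.1, 0.2, 0.3, 0.4]       # probability the receiver lies in layer i
p_avg = 1.0                      # average transmit-power budget

raw = [e ** (-alpha / 2) for e in eta]                  # P_i proportional to eta_i^(-alpha/2)
scale = p_avg / sum(e * r for e, r in zip(eta, raw))    # enforce E[P] = p_avg
for i, (e, r) in enumerate(zip(eta, raw), 1):
    print(f"layer {i}: eta = {e:.1f}, P = {scale * r:.2f}")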

 

Bulbul, R.; Sapkota, P.; C.-W. Ten; L. Wang; Ginter, A., “Intrusion Evaluation of Communication Network Architectures for Power Substations,” in Power Delivery, IEEE Transactions on, vol. 30, no. 3, pp. 1372–1382, June 2015. doi:10.1109/TPWRD.2015.2409887
Abstract: Electronic elements of a substation control system have been recognized as critical cyberassets due to the increased complexity of the automation system, which is further integrated with physical facilities. Since intrusions can be executed by unauthorized users, the security investment in cybersystems remains one of the most important factors in substation planning and maintenance. As a result of these integrated systems, intrusion attacks can impact operations. This work systematically investigates the intrusion resilience of ten architectures connecting a substation network to other networks. Two of the network architectures, one and ten, are detailed, comparing computer-based boundary protection with firewall-dedicated virtual local-area networks; a comparison of the remaining eight architecture models was also performed. Mean time to compromise is used to determine the system operational period. Simulation cases have been set up with metrics based on different levels of attacker strength. These results, together with a sensitivity analysis, show that implementing certain architectures would enhance substation network security.
Keywords: firewalls; investment; local area networks; maintenance engineering; power system planning; safety systems; substation automation; substation protection; automation system; communication network architectures; computer-based boundary protection; cybersystems; electronic elements; firewall-dedicated virtual local-area networks; intrusion attacks; intrusion evaluation; intrusion resilience; power substations; security investment; sensitivity analysis; substation control system; substation maintenance; substation network security; substation planning; unauthorized users; Computer architecture; Modems; Protocols; Security; Servers; Substations; Tin; Cyberinfrastructure; electronic intrusion; network security planning; power substation (ID#: 15-7465)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7054545&isnumber=7110680

 

Shengchuan Zhang; Xinbo Gao; Nannan Wang; Jie Li; Mingjin Zhang, “Face Sketch Synthesis via Sparse Representation-Based Greedy Search,” in Image Processing, IEEE Transactions on, vol. 24, no. 8, pp. 2466–2477, Aug. 2015. doi:10.1109/TIP.2015.2422578
Abstract: Face sketch synthesis has wide applications in digital entertainment and law enforcement. Although there is much research on face sketch synthesis, most existing algorithms cannot handle some nonfacial factors, such as hair style, hairpins, and glasses, if these factors are excluded from the training set. In addition, previous methods only work under well-controlled conditions and fail on images whose backgrounds and sizes differ from the training set. To this end, this paper presents a novel method that combines both the similarity between different image patches and prior knowledge to synthesize face sketches. Given training photo-sketch pairs, the proposed method learns a photo-patch feature dictionary from the training photo patches and replaces the photo patches with their sparse coefficients during the searching process. For a test photo patch, we first obtain its sparse coefficient via the learnt dictionary and then search its nearest neighbors (candidate patches) in the whole set of training photo patches with sparse coefficients. After purifying the nearest neighbors with prior knowledge, the final sketch corresponding to the test photo can be obtained by Bayesian inference. The contributions of this paper are as follows: 1) we relax the nearest-neighbor search area from a local region to the whole image without consuming too much time, and 2) our method can produce nonfacial factors that are not contained in the training set, is robust against image backgrounds, and can even ignore the alignment and image size of test photos. Our experimental results show that the proposed method outperforms several state-of-the-art methods in terms of perceptual and objective metrics.
Keywords: face recognition; feature extraction; greedy algorithms; image representation; Bayesian inference; digital entertainment; face sketch synthesis; hair style; hairpins; image backgrounds; image patches; law enforcement; learnt dictionary; nonfacial factors; objective metric; perceptual metric; photo patch feature dictionary; searching process; sparse coefficients; sparse representation-based greedy search; test photo alignment aspect; test photo image size aspect; training photo patches; training photo-sketch pairs; training set; Bayes methods; Dictionaries; Face; Glass; Hidden Markov models; Image coding; Training; Face sketch synthesis; dictionary learning; fast index; greedy search (ID#: 15-7466)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7084655&isnumber=7086144

 

Abo-Zahhad, M.; Ahmed, S.M.; Sabor, N.; Sasaki, S., “Utilisation of Multi-Objective Immune Deployment Algorithm for Coverage Area Maximisation with Limit Mobility in Wireless Sensors Networks,” in Wireless Sensor Systems, IET, vol. 5, no. 5, pp. 250–261, Oct. 2015. doi:10.1049/iet-wss.2014.0085
Abstract: Coverage is one of the most important performance metrics for a wireless sensor network (WSN), since it reflects how well a sensor field is monitored. The coverage issue in WSNs depends on many factors, such as the network topology and the sensor sensing model; the most important one is the deployment strategy. Random deployment of the sensor nodes can cause the formation of coverage holes, and this placement problem is NP-hard. In this study, a new centralised deployment algorithm based on an immune optimisation algorithm is therefore proposed to relocate the mobile nodes after the initial configuration and maximise the coverage area. Moreover, the proposed algorithm limits the moving distance of the mobile nodes to reduce the energy dissipated in mobility and to ensure connectivity among the sensor nodes. The performance of the proposed algorithm is compared with previous algorithms using Matlab simulation. Simulation results show that the proposed algorithm, based on binary and probabilistic sensing models, improves the network coverage and the redundantly covered area with minimum energy consumed in moving. Furthermore, the simulation results show that the proposed algorithm also works when obstacles appear in the sensing field.
Keywords: computational complexity; mobility management (mobile radio); optimisation; probability; telecommunication network topology; wireless sensor networks; WSN; binary sensing model; centralised deployment algorithm; coverage area maximisation; coverage holes formulation; dissipation energy reduction; immune optimisation algorithm; limit mobility; mobile node relocation; multiobjective immune deployment algorithm; network coverage improvement; network topology; nondeterministic polynomial-time hard problem; performance metrics; probabilistic sensing model; random sensor node deployment; sensor sensing model (ID#: 15-7467)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7277322&isnumber=7277314

 

Yu-Lin Chien; Lin, K.C.-J.; Ming-Syan Chen, “Machine Learning Based Rate Adaptation with Elastic Feature Selection for HTTP-Based Streaming,” in Multimedia and Expo (ICME), 2015 IEEE International Conference on, vol., no., pp. 1–6, June 29 2015–July 3 2015. doi:10.1109/ICME.2015.7177418
Abstract: Dynamic Adaptive Streaming over HTTP (DASH) has become an emerging application nowadays. Video rate adaptation is key to determining the video quality of HTTP-based media streaming. Recent works have proposed several algorithms that allow a DASH client to adapt its video encoding rate to network dynamics. While network conditions are typically affected by many different factors, these algorithms usually consider only a few representative pieces of information, e.g., the predicted available bandwidth or the fullness of the playback buffer. In addition, errors in bandwidth estimation can significantly degrade their performance. Therefore, this paper presents Machine Learning-based Adaptive Streaming over HTTP (MLASH), an elastic framework that exploits a wide range of useful network-related features to train a rate classification model. The distinct properties of MLASH are that its machine learning-based framework can be incorporated with any existing adaptation algorithm and that it utilizes big data characteristics to improve prediction accuracy. We show via trace-based simulations that machine learning-based adaptation can achieve better performance than traditional adaptation algorithms in terms of their target quality of experience (QoE) metrics.
Keywords: feature selection; hypermedia; learning (artificial intelligence); media streaming; pattern classification; quality of experience; video coding; DASH client; HTTP-based media streaming; HTTP-based streaming; MLASH; QoE metrics; adaptation algorithm; bandwidth estimation; big data characteristics; dynamic adaptive streaming over HTTP; elastic feature selection; machine learning based rate adaptation; machine learning-based adaptation; machine learning-based adaptive streaming over HTTP; machine learning-based framework; network condition; network dynamics; network-related feature; playback buffer; prediction accuracy; rate classification model; representative information; target quality of experience metrics; trace-based simulation; video encoding rate; video quality; video rate adaptation; Bandwidth; Lead; Servers; Streaming media; Training; HTTP Streaming; Machine Learning; Rate Adaptation (ID#: 15-7468)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177418&isnumber=7177375
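In outline, the learning-based adaptation the abstract describes maps network features to a target encoding rate. The sketch below trains a generic scikit-learn classifier on hypothetical (throughput, buffer, RTT) samples; MLASH's elastic feature selection and training pipeline are not reproduced here.

# Minimal sketch with hypothetical training samples and rate labels.
from sklearn.ensemble import RandomForestClassifier

# features: [throughput_mbps, buffer_seconds, rtt_ms]; label: encoding rate (kbps)
X = [[5.0, 12, 40], [1.2, 3, 120], [8.5, 20, 30], [0.8, 2, 200], [4.0, 9, 60]]
y = [2500, 600, 4000, 300, 1500]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("rate for next chunk:", model.predict([[3.5, 8, 70]])[0], "kbps")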

 

Kaur, P.P.; Singh, H.; Singh, M., “Evaluation of Architecture of Component Based System,” in Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, vol., no., pp. 852–857, 26–27 Feb. 2015. doi:10.1109/ICCUBEA.2015.170
Abstract: Component-based engineering, widely used in industry and organizations, is a systems technology that is highly successful in delivering products with high-quality functionality at low cost. This paper introduces beginners to the field of component-based systems and to the evaluation of their architecture against certain parameters. Component-based engineering can help show how accurately, reliably, or securely a system responds, qualities commonly known as the system’s non-functional properties. In this paper, the architecture of a component-based system is evaluated against the non-functional property of performance, an attribute that ensures smooth and efficient operation of the software system. The architecture is evaluated at an early design stage, which is useful for determining whether it can meet the desired performance specifications, thus saving cost. First, the system over a component-based architecture is proposed within an iterative SDLC (system development life cycle); next, the architecture is modeled for performance using the MPFQN (multichain PFQN) performance model, which works on component-based systems. The system was studied under a set of assumptions and scheduling disciplines assigned to the various architectural elements. Varying the scheduling disciplines in the model gave varying results for the standard performance parameters of response time, throughput, and resource utilization, observed using the SHARPE tool. The work in this paper was built on assumptions and simulations and can be extended to a real case study of an organization’s system.
Keywords: object-oriented programming; software architecture; SDLC; SHARPE tool; component based engineering; component based system architecture; iterative SDLC; multichain PFQN performance model; performance attribute; system development life cycle; Computational modeling; Computer architecture; Mathematical model; Software; Throughput; Time factors; Unified modeling language; model based evaluation; multichain pfqn; performance metrics; queueing network; system architecture (ID#: 15-7469)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155968&isnumber=7155781
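
The paper's MPFQN model is considerably richer than anything reproducible here, but a toy series of M/M/1 stations conveys how early architecture-level performance evaluation works: given per-component service rates and an offered load, utilization and response time fall out of closed-form queueing formulas. All component names and numbers below are hypothetical.

```python
# Illustrative open queueing-network approximation for early architecture
# evaluation; a simplified stand-in for the paper's MPFQN model, treating
# each component as an independent M/M/1 station in series.

def mm1_metrics(arrival_rate, service_rate):
    """Utilization and mean response time for one M/M/1 station."""
    rho = arrival_rate / service_rate
    assert rho < 1, "station is unstable"
    resp = 1.0 / (service_rate - arrival_rate)   # W = 1 / (mu - lambda)
    return rho, resp

# Hypothetical component service rates (requests/sec) in a 3-stage pipeline.
service_rates = {"ui": 120.0, "logic": 80.0, "db": 60.0}
arrival_rate = 50.0   # offered load, requests/sec

total_resp = 0.0
for name, mu in service_rates.items():
    rho, resp = mm1_metrics(arrival_rate, mu)
    total_resp += resp
    print(f"{name}: utilization = {rho:.2f}, response = {resp*1000:.1f} ms")

print(f"end-to-end response time: {total_resp*1000:.1f} ms, "
      f"throughput: {arrival_rate} req/s")
```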

 

Tzoumas, V.; Rahimian, M.A.; Pappas, G.J.; Jadbabaie, A., “Minimal Actuator Placement with Bounds on Control Effort,” in Control of Network Systems, IEEE Transactions on, vol. 3, no. 1, pp. 67–78, March 2016. doi:10.1109/TCNS.2015.2444031
Abstract: We address the problem of minimal actuator placement in a linear system subject to an average control energy bound. First, following the recent work of Olshevsky, we prove that this is NP-hard. Then, we provide an efficient algorithm which, for a given range of problem parameters, approximates up to a multiplicative factor of O(log n), n being the network size, any optimal actuator set that meets the same energy criteria; this is the best approximation factor one can achieve in polynomial time, in the worst case. Moreover, the algorithm uses a perturbed version of the involved control energy metric, which we prove to be supermodular. Next, we focus on the related problem of cardinality-constrained actuator placement for minimum control effort, where the optimal actuator set is selected so that an average input energy metric is minimized. While this is also an NP-hard problem, we use our proposed algorithm to efficiently approximate its solutions as well. Finally, we run our algorithms over large random networks to illustrate their efficiency.
Keywords: Actuators; Aerospace electronics; Approximation algorithms; Approximation methods; Controllability; Measurement; Controllability Energy Metrics; Input Placement; Leader Selection; Minimal Network Controllability; Multi-agent Networked Systems (ID#: 15-7470)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7122316&isnumber=6730648
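
A rough numerical sketch of the greedy approach follows: actuators are added one at a time so as to minimize an epsilon-perturbed average control energy metric, tr((W + εI)^-1), where W is the controllability Gramian. The dynamics matrix, actuator budget, and ε value are illustrative assumptions, not the authors' experimental setup.

```python
# Greedy actuator selection using an epsilon-perturbed average control energy
# metric, tr((W + eps*I)^-1). Matrix sizes, budget, and eps are illustrative
# assumptions, not the paper's experimental setup.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n = 10                                   # network size (hypothetical)
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # make A stable

def energy_metric(actuators, eps=1e-3):
    """Perturbed average control energy proxy; lower means cheaper control."""
    B = np.eye(n)[:, sorted(actuators)]
    W = solve_continuous_lyapunov(A, -B @ B.T)   # controllability Gramian
    return np.trace(np.linalg.inv(W + eps * np.eye(n)))

chosen, k = set(), 4                     # k = actuator budget (hypothetical)
for _ in range(k):
    best = min(set(range(n)) - chosen,
               key=lambda j: energy_metric(chosen | {j}))
    chosen.add(best)
    print(f"picked node {best}, metric = {energy_metric(chosen):.3f}")
```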

 

Hale, M.L.; Gamble, R.; Hale, J.; Haney, M.; Lin, J.; Walter, C., “Measuring the Potential for Victimization in Malicious Content,” in Web Services (ICWS), 2015 IEEE International Conference on, vol., no., pp. 305–312, June 27 2015–July 2 2015. doi:10.1109/ICWS.2015.49
Abstract: Sending malicious content to users to obtain personal, financial, or intellectual property has become a multi-billion dollar criminal enterprise. This content is primarily presented in the form of emails, social media posts, and phishing websites. User training initiatives seek to minimize the impact of malicious content through improved vigilance. Training works best when tailored to specific user deficiencies. However, tailoring training requires understanding how malicious content victimizes users. In this paper, we link a set of malicious content design factors, in the form of degradations and sophistications, to their potential to form a victimization prediction metric. The design factors examined were developed from an analysis of over 100 pieces of content from email, social media, and websites. We conducted an experiment using a sample of the content and a game-based simulation platform to evaluate the efficacy of our victimization prediction metric. The experimental results and their analysis are presented as part of the evaluation.
Keywords: Internet; computer crime; social networking (online); trusted computing; unsolicited e-mail; e-mails; game-based simulation platform; malicious content; multibillion dollar criminal enterprise; phishing Web sites; social media posts; victimization prediction metric; Degradation; Electronic mail; Games; Measurement; Media; Taxonomy; Training; content assessment; maliciousness; metrics; phishing; trust; trust factors; user training; victimization (ID#: 15-7471)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195583&isnumber=7195533
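
As a toy illustration of how degradations and sophistications might combine into a victimization score, consider the sketch below. The factor names and weights are entirely invented for illustration; the paper derives its actual design factors from an analysis of over 100 real content samples.

```python
# Hypothetical scoring sketch: degradations lower, and sophistications raise,
# the believability of malicious content. Factor names and weights are
# invented; they are not the paper's design factors.

DEGRADATIONS = {"spelling_errors": 0.30, "generic_greeting": 0.20,
                "mismatched_url": 0.40}
SOPHISTICATIONS = {"brand_logo": 0.35, "personalized_name": 0.45,
                   "plausible_sender": 0.30}

def victimization_score(present_factors):
    """Higher score -> content more likely to victimize an untrained user."""
    score = 0.5   # neutral baseline
    for f in present_factors:
        score -= DEGRADATIONS.get(f, 0.0)
        score += SOPHISTICATIONS.get(f, 0.0)
    return min(max(score, 0.0), 1.0)

email = ["brand_logo", "personalized_name", "spelling_errors"]
print(f"victimization potential: {victimization_score(email):.2f}")
```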

 

Khabbaz, M.; Assi, C., “Modelling and Analysis of a Novel Deadline-Aware Scheduling Scheme for Cloud Computing Data Centers,” in Cloud Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1–1, October 2015. doi:10.1109/TCC.2015.2481429
Abstract: User Request (UR) service scheduling is a process that significantly impacts the performance of a cloud data center. This is especially true since essential Quality-of-Service (QoS) performance metrics, such as the UR blocking probability and the data center's response time, are tightly coupled to this process. This paper proposes a novel Deadline-Aware UR Scheduling Scheme (DASS) with the objective of improving the data center's QoS performance in terms of the above-mentioned metrics. Only a minority of existing work in the literature targets the formulation of mathematical models for characterizing a cloud data center's performance. As a contribution to covering this gap, this paper presents an analytical model developed to capture the system's dynamics and evaluate its performance when operating under DASS. The model's results and their accuracy are verified through simulations. In addition, the performance of the data center under DASS is compared to its counterpart under the more generic First-In-First-Out (FIFO) scheme. The reported results indicate that DASS outperforms FIFO by 11% to 58% in terms of the blocking probability and by 82% to 89% in terms of the system's response time.
Keywords: Analytical models; Bandwidth; Cloud computing; Data models; Mathematical model; Quality of service; Time factors; Analysis; Cloud; Data Center; Modelling; Performance (ID#: 15-7472)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7274716&isnumber=6562694
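
The paper develops an analytical model, but the intuition behind deadline-aware scheduling is easy to show in a toy single-server simulation: serving the pending request closest to its deadline, rather than the earliest arrival, reduces deadline misses. The workload parameters below are invented, and the sketch is not the authors' DASS model.

```python
# Toy single-server comparison of FIFO vs. deadline-aware service order.
# Workload parameters are invented; this is not the paper's analytical model.
import random

random.seed(7)
reqs = sorted((random.uniform(0, 100),      # arrival time (s)
               random.uniform(0.5, 2.0),    # service demand (s)
               random.uniform(2.0, 8.0))    # relative deadline (s)
              for _ in range(200))

def miss_rate(deadline_aware):
    pending = [(a, s, a + d) for a, s, d in reqs]  # (arrival, service, abs deadline)
    clock, missed = 0.0, 0
    while pending:
        # Pick among requests that have arrived; if none, jump to the next one.
        ready = [r for r in pending if r[0] <= clock] or [min(pending)]
        job = min(ready, key=(lambda r: r[2]) if deadline_aware else (lambda r: r[0]))
        pending.remove(job)
        clock = max(clock, job[0]) + job[1]        # serve job to completion
        missed += clock > job[2]                   # finished past its deadline?
    return missed / len(reqs)

print(f"FIFO miss rate:           {miss_rate(False):.1%}")
print(f"deadline-aware miss rate: {miss_rate(True):.1%}")
```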

 

Goderie, J.; Georgsson, B.M.; van Graafeiland, B.; Bacchelli, A., “ETA: Estimated Time of Answer Predicting Response Time in Stack Overflow,” in Mining Software Repositories (MSR), 2015 IEEE/ACM 12th Working Conference on, vol., no., pp. 414–417, 16–17 May 2015. doi:10.1109/MSR.2015.52
Abstract: Question and Answer (Q&A) sites help developers deal with the increasing complexity of software systems and third-party components by providing a platform for exchanging knowledge about programming topics. A shortcoming of Q&A sites is that they provide no indication of when an answer is to be expected. Such an indication would help, for example, the developers who posed the questions in managing their time. We try to fill this gap by investigating whether and how the answering time for a question posed on Stack Overflow, a prominent example of a Q&A website, can be predicted from its tags. To this aim, we first determine the types of answers to be considered valid answers to the question; the answering time is then predicted based on the similarity of the sets of tags. Our results show that the classification is correct in 30%–35% of the cases.
Keywords: question answering (information retrieval); software metrics; Stack Overflow; question and answer sites; software system complexity; third-party components; Communities; Correlation; Prediction algorithms; Time factors; Time measurement; Training; response time; stack overflow (ID#: 15-7473)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7180106&isnumber=7180053
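
A minimal sketch of tag-based answer-time prediction in this spirit: score a new question against historical questions by the Jaccard similarity of their tag sets, then predict the median answering time of the closest matches. The history data and the k-nearest rule are illustrative assumptions, not the authors' exact method.

```python
# Tag-similarity sketch of answer-time prediction; data is fabricated.
from statistics import median

history = [   # (tags, minutes until a valid answer) -- hypothetical records
    ({"python", "pandas"}, 12), ({"python", "regex"}, 25),
    ({"haskell", "monads"}, 240), ({"python", "pandas", "csv"}, 9),
    ({"c++", "templates"}, 95),
]

def jaccard(a, b):
    """Similarity of two tag sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def eta(tags, k=3):
    """Median answering time of the k most tag-similar past questions."""
    ranked = sorted(history, key=lambda h: jaccard(tags, h[0]), reverse=True)
    return median(t for _, t in ranked[:k])

print(f"predicted time to answer: {eta({'python', 'pandas'})} min")
```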

 

Yong Jin; Xin Yang; Kula, Raula Gaikovina; Eunjong Choi; Inoue, Katsuro; Iida, Hajimu, “Quick Trigger on Stack Overflow: A Study of Gamification-Influenced Member Tendencies,” in Mining Software Repositories (MSR), 2015 IEEE/ACM 12th Working Conference on , vol., no., pp. 434–437, 16–17 May 2015. doi:10.1109/MSR.2015.57
Abstract: In recent times, gamification has become a popular technique to help online communities stimulate active member participation. Gamification promotes a reward-driven approach, usually measured by response time. A possible concern with gamification is a trade-off favoring speedy responses over quality responses. A bias toward selecting easier questions for maximum reward may also exist. In this study, we analyze the distribution of gamification-influenced tendencies on the Stack Overflow Q&A online community. In addition, we define some gamification-influenced metrics related to the response time to a question post. We carried out experiments over a four-month period, analyzing 101,291 member posts. Over this period, we determined a Rapid Response time of 327 seconds (5.45 minutes). Key findings suggest that around 92% of Stack Overflow members have fewer rapid responses than non-rapid responses. Accepted answers have no clear relationship with rapid responses. However, we did find that rapid responses often contain tags that did not follow the members' usual tagging tendencies.
Keywords: computer games; question answering (information retrieval); social networking (online); software metrics; Q&A Stack Overflow online community; SO members; active member participation; distribution gamification-influenced tendencies; gamification-influenced member tendencies; gamification-influenced metrics; rapid response time; reward-driven approach; Communities; Context; Data mining; Measurement; Software; Tagging; Time factors; Gamification; Mining Software Repositories; Online Community tendencies (ID#: 15-7474)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7180111&isnumber=7180053
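
The per-member tendency analysis can be illustrated with a short script that classifies each answer as rapid or not against the study's reported 327-second threshold and tallies each member's rapid-response share. The post data here is fabricated for illustration.

```python
# Per-member rapid-response tally using the study's 327 s threshold.
# The answer records below are invented for illustration.
from collections import defaultdict

RAPID_THRESHOLD_S = 327   # "Rapid Response" cut-off reported in the study

# (member_id, seconds between question post and this member's answer)
answers = [("alice", 95), ("alice", 400), ("bob", 3600),
           ("alice", 120), ("bob", 200), ("carol", 5000)]

counts = defaultdict(lambda: [0, 0])      # member -> [rapid, total]
for member, latency in answers:
    counts[member][0] += latency <= RAPID_THRESHOLD_S
    counts[member][1] += 1

for member, (rapid, total) in counts.items():
    print(f"{member}: {rapid}/{total} rapid ({rapid/total:.0%})")
```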

 

Medina, V.; Lafon-Pham, D.; Paljic, A.; Diaz, E., “Physically Based Image Synthesis of Materials: A Methodology Towards the Visual Comparison of Physical vs. Virtual Samples,” in Colour and Visual Computing Symposium (CVCS), 2015, pp. 1–6, 25–26 Aug. 2015. doi:10.1109/CVCS.2015.7274878
Abstract: The assessment of images of complex materials on an absolute scale is difficult for a human observer. Comparing physical and virtual samples side by side simplifies the task by introducing a reference. The goal of this article is to study the influence of image exposure on the perception of realism in images of paint materials containing sparkling metallic flakes. We use a radiometrically calibrated DSLR camera to acquire high-resolution raw photographs of our physical samples, which provide us with radiometric information about the samples. This is combined with data obtained from the calibration of a stereoscopic display and shutter glasses to transform the raw photographs into images that can be shown by the display, controlling the colorimetric output signal. This ensures that we can transform our data back and forth between a radiometric and a colorimetric representation, minimizing the loss of information throughout the chain of acquisition and visualization. In this article we propose a paired-comparison scenario that improves on the results of our previous work, focusing on three main aspects: stereoscopy, exposure time, and dynamic range. Our results show that observers consider stereoscopy the most important of the three factors for judging the similarity of these images to the reference, followed by exposure time and dynamic range, which supports our claims from previous research.
Keywords: Image color analysis; Lighting; Observers; Paints; Radiometry; Stereo image processing; Visualization; Human visual system; Paired comparison; Perceptual quality metrics; Physically-based rendering; Texture perception (ID#: 15-7475)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7274878&isnumber=7274875

 

Erfanian, Aida; Yaoping Hu, “Conflict Resolution Models on Usefulness Within Multi-User Collaborative Virtual Environments,” in 3D User Interfaces (3DUI), 2015 IEEE Symposium on, vol., no., pp. 147–148, 23–24 March 2015. doi:10.1109/3DUI.2015.7131743
Abstract: Conflict resolution models play key roles in coordinating simultaneous interactions in multi-user collaborative virtual environments (VEs). Current conflict resolution models are first-come-first-serve (FCFS) and dynamic priority (DP). Known to be unfair, the FCFS model grants all interaction opportunities to the agilest user. The DP model, in contrast, permits all users the perception of equality in interaction. Nevertheless, it remains unclear whether the perception of equality in interaction impacts the usefulness of multi-user collaborative VEs. The present work therefore compared the FCFS and DP models with respect to the usefulness of multi-user collaborative VEs. This comparison was undertaken based on a metric of usefulness (i.e., task focus, decision time, and consensus), which we defined according to the ISO/IEC 25010:2011 standard. This definition remedies current metrics of usefulness, which actually measure the effectiveness and efficiency of target technologies rather than their usefulness. In our multi-user collaborative VE, we observed that the DP model yielded significantly lower decision time and higher consensus than the FCFS model. There was, however, no significant difference in task focus between the two models. These observations imply a potential to improve multi-user collaborative VEs.
Keywords: IEC standards; ISO standards; human computer interaction; human factors; virtual reality; DP model; FCFS model; ISO/IEC 25010:2011 standard; conflict resolution models; consensus; decision time; dynamic priority model; first-come-first-serve model; multiuser collaborative VE; multiuser collaborative virtual environments; simultaneous interaction coordination; task focus; usefulness metrics; user equality perception; Analysis of variance; Collaboration; Computational modeling; Measurement; Standards; Testing; Virtual environments; Conflict resolution models; multi-user collaborative virtual environments; usefulness (ID#: 15-7476)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131743&isnumber=7131667
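
To illustrate the contrast between the two models, the sketch below implements a toy arbiter: FCFS always grants the fastest requester, while a dynamic-priority rule lets waiting time accumulate so grants rotate among users. The specific priority rule is our assumption; the paper does not publish its DP implementation.

```python
# Toy arbiter contrasting FCFS with a dynamic-priority rule in which a user's
# priority grows with waiting time and resets after a grant. The rule's
# details are assumptions, not the authors' exact DP model.

def fcfs(requests):
    """Grant strictly by request timestamp: the agilest user always wins."""
    return min(requests, key=lambda r: r[1])[0]

def dynamic_priority(requests, waits):
    """Grant to the requesting user who has waited longest since last served."""
    return max(requests, key=lambda r: waits[r[0]])[0]

waits = {"u1": 0, "u2": 0, "u3": 0}
# Three users request the shared object at nearly the same instant, u1 first.
requests = [("u1", 0.001), ("u2", 0.002), ("u3", 0.003)]

for step in range(3):
    winner = dynamic_priority(requests, waits)
    for u in waits:                       # reset winner, age the others
        waits[u] = 0 if u == winner else waits[u] + 1
    print(f"round {step}: DP grants {winner}, FCFS would grant {fcfs(requests)}")
```

Running the loop shows the DP rule rotating grants across u1, u2, and u3, while FCFS would award every round to u1, matching the fairness contrast the abstract describes.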


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.