Science of Security (SoS) Newsletter (2014 - Issue 3)



Each issue of the SoS Newsletter highlights achievements in current research, as conducted by various global members of the Science of Security (SoS) community. All presented materials are openly available, and may link to the original work or web page for the respective program. The SoS Newsletter aims to showcase the great deal of exciting work going on in the security community, and hopes to serve as a portal between colleagues, research projects, and opportunities.

Please feel free to click on any section of the Newsletter below to be taken to its corresponding subsection:

General Topics of Interest

General Topics of Interest reflects today's most popularly discussed challenges and issues in the Cybersecurity space. GToI includes news items related to Cybersecurity, updated information regarding academic SoS research, interdisciplinary SoS research, profiles on leading researchers in the field of SoS, and global research being conducted on related topics.

Publications

The Publications of Interest provides available abstracts and links for suggested academic and industry literature discussing specific topics and research problems in the field of SoS. Please check back regularly for new information, or sign up for the CPSVO-SoS Mailing List.

Table of Contents (Issue 3)

(ID#:14-2259)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


In the News (2014 - Issue 3)



  • The Limits of Packet Capture, Information Security Buzz, 21 July 2014. Packet capture data can be invaluable in digital forensics, but it is important to remember its shortcomings. In many scenarios, packet capture does not provide complete data or falls short of what it is intended to accomplish. (ID#: 14-50032) See http://www.informationsecuritybuzz.com/limits-packet-capture/
  • Sestus Warns Users to Be Careful of Business Centre Networks, Information Security Buzz, 22 July 2014. With the summer travel season at its peak, the U.S. Secret Service warned the hospitality industry of the cyber threat posed by public PCs and networks. Cyber criminals can introduce malware onto these systems that can sniff for passwords and other personal, sensitive information. (ID#: 14-50033) See http://www.informationsecuritybuzz.com/sestus-warns-users-careful-business-centre-networks/
  • We Are Better Protected When We Work Together, Information Security Buzz, 21 July 2014. The cyber threats of today require advanced technologies and sophisticated methods to thwart, but at the end of the day, threat sharing is one of the greatest tools against such threats. Working together can help identify patterns in cyber attacks and disrupt the activities of cyber criminals. (ID#: 14-50034) See http://www.informationsecuritybuzz.com/better-protected-work-together/
  • What is memory safety?, Programming Languages Enthusiast, 21 July 2014 (Blog post). An attempt to define and examine memory safety, specifically in the C programming language. Buffer overflows, dynamic memory errors, out-of-memory errors, and misuse of pointers can cause instabilities and vulnerabilities in programs. (ID#: 14-50039) See http://www.pl-enthusiast.net/2014/07/21/memory-safety/
  • Organizations Slow at Patching Heartbleed in VMware Deployments: Report, SecurityWeek, 25 July 2014. Despite VMware's release of patches to address the Heartbleed vulnerability, a substantial number of organizations remain vulnerable to Heartbleed attacks. The number of vulnerable systems decreased dramatically in the weeks following the disclosure of the infamous bug, but it might take years before all of them have been patched. (ID#: 14-50040) See http://www.securityweek.com/organizations-slow-patching-heartbleed-vmware-deployments-report
  • Fake Googlebots Increasingly Serve as Tools for DDoS, SecurityWeek, 24 July 2014. Bots are used by Google to help index the web and provide search results, but fake Googlebots are being used for malicious purposes. These impostors can be used for acquiring marketing data, hacking, spamming, and even executing layer 7 DDoS attacks, in an increasing trend. (ID#: 14-50041) See http://www.securityweek.com/fake-googlebots-increasingly-serve-tools-ddos-incapsula
  • UK Travel Company Fined After Card Data Hack, SecurityWeek, 25 July 2014. Think W3 Ltd., a UK-based travel company, was fined £150,000 after a lapse in security compromised the payment card details of over 1.1 million customers. The hackers used SQL injection on a login page for an internal system, enabling them to acquire administrative access and thus obtain data held on the server. (ID#: 14-50042) See http://www.securityweek.com/uk-travel-company-fined-after-card-data-hack
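Several items above involve SQL injection, including the Think W3 breach. As a minimal illustration of the vulnerability class and its standard defense, parameterized queries, here is a Python sketch; the table, column names, and payload are hypothetical and not taken from the reported incident:

```python
import sqlite3

# Hypothetical users table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x1')")

def login_vulnerable(name, pw_hash):
    # DANGEROUS: string interpolation lets attacker-controlled input
    # rewrite the query; "' OR '1'='1" makes the WHERE clause true
    # for every row, bypassing the credential check.
    q = f"SELECT * FROM users WHERE name = '{name}' AND pw_hash = '{pw_hash}'"
    return conn.execute(q).fetchall()

def login_safe(name, pw_hash):
    # Placeholders keep input as data; it can never become SQL syntax.
    q = "SELECT * FROM users WHERE name = ? AND pw_hash = ?"
    return conn.execute(q, (name, pw_hash)).fetchall()

payload = "' OR '1'='1"
# The vulnerable query returns rows despite the bogus password;
# the parameterized query returns none.
```

The same placeholder discipline applies to any DB-API-style driver, not just sqlite3.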



Publications of Interest (2014 - Issue 3)



The Publications of Interest section contains bibliographic citations, abstracts (when available), and links on specific topics and research problems of interest to the Science of Security community.

How recent are these publications?

These bibliographies include recent scholarly research that has been presented or published within the past year. Some entries update work presented in previous years; others cover new topics.

How are topics selected?

The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness for current researchers.

How can I submit or suggest a publication?

Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.

Submissions and suggestions may be sent to: research (at) SecureDataBank.net

(ID#:14-2260)



Data Sanitization



For security researchers, privacy protection during data mining is a major concern. Sharing information over the Internet or holding it in a database requires methods of sanitizing data so that personal information cannot be obtained. The topics addressed in the articles listed here include provenance workflows, itemset hiding, differential privacy, SQL injection, and a framework for mathematical privacy definitions.
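Several of the articles below build on differential privacy. As a rough illustration of its core mechanism, the following Python sketch adds Laplace noise to a count query; the function name and parameters are illustrative and not drawn from any of the cited papers:

```python
import random

def dp_count(values, predicate, epsilon, rng=random):
    """Differentially private count: true count plus Laplace noise.

    A count query has sensitivity 1 (adding or removing one record
    changes the result by at most 1), so noise drawn from
    Laplace(scale = 1/epsilon) yields epsilon-differential privacy.
    Illustrative sketch only.
    """
    true_count = sum(1 for v in values if predicate(v))
    scale = 1.0 / epsilon
    # The difference of two Exponential(1) draws, scaled by b,
    # is Laplace(0, b)-distributed.
    noise = scale * (rng.expovariate(1.0) - rng.expovariate(1.0))
    return true_count + noise
```

Each released answer is randomized, so repeated queries average back toward the true count; a real deployment must therefore also track a privacy budget across queries.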

  • Mihai Maruseac, Gabriel Ghinita, Razvan Rughinis, "Privacy-preserving Publication Of Provenance Workflows," CODASPY '14 Proceedings of the 4th ACM conference on Data and Application Security And Privacy , March 2014, (Pages 159-162). (ID#:14-1558) Available at: http://dl.acm.org/citation.cfm?id=2557547.2557586&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 Provenance workflows capture the data movement and the operations changing the data in complex applications such as scientific computations, document management in large organizations, content generation in social media, etc. Provenance is essential to understand the processes and operations that data undergo, and many research efforts focused on modeling, capturing and analyzing provenance information. Sharing provenance brings numerous benefits, but may also disclose sensitive information, such as secret processes of synthesizing chemical substances, confidential business practices and private details about social media participants' lives. In this paper, we study privacy-preserving provenance workflow publication using differential privacy. We adapt techniques designed for sanitization of multi-dimensional spatial data to the problem of provenance workflows. Experimental results show that such an approach is feasible to protect provenance workflows, while at the same time retaining a significant amount of utility for queries. In addition, we identify influential factors and trade-offs that emerge when sanitizing provenance workflows. Keywords: privacy, provenance
  • Vasileios Kagklis, Vassilios S. Verykios, Giannis Tzimas, Athanasios K. Tsakalidis, "Knowledge Sanitization on the Web," WIMS '14 Proceedings of the 4th International Conference on Web Intelligence, Mining and Semantics (WIMS14), June 2014, Article No. 4. (ID#:14-1559) Available at: http://dl.acm.org/citation.cfm?id=2611040.2611044&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 The widespread use of the Internet caused the rapid growth of data on the Web. But as data on the Web grew larger in numbers, so did the perils due to the applications of data mining. Privacy preserving data mining (PPDM) is the field that investigates techniques to preserve the privacy of data and patterns. Knowledge Hiding, a subfield of PPDM, aims at preserving the sensitive patterns included in the data, which are going to be published. A wide variety of techniques fall under the umbrella of Knowledge Hiding, such as frequent pattern hiding, sequence hiding, classification rule hiding and so on. In this tutorial we create a taxonomy for the frequent itemset hiding techniques. We also provide as examples for each category representative works that appeared recently and fall into each one of these categories. Then, we focus on the detailed overview of a specific category, the so called linear programming-based techniques. Finally, we make a quantitative and qualitative comparison among some of the existing techniques that are classified into this category. Keywords: Frequent Itemset Hiding, Knowledge Hiding, LP-Based Hiding Approaches, Privacy Preserving Data Mining
  • Madhushri Banerjee, Zhiyuan Chen, Aryya Gangopadhyay, "A Generic and Distributed Privacy Preserving Classification Method with A Worst-Case Privacy Guarantee," Distributed and Parallel Databases,Volume 32 Issue 1, March 2014, (Pages 5-35). (ID#:14-1560) Available at: http://dl.acm.org/citation.cfm?id=2589730.2589736&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 This discusses the development of privacy preserving distributed data mining in response to the security risks involved with data mining. Current methods of data mining can either only handle a single mining task, or experience increased overhead when attempting multiple tasks. This paper takes these challenges into consideration, and explores a generic approach to efficient privacy preserving classification. Keywords: Classification, Data mining, Privacy preserving data mining
  • Christos Kalloniatis, Haralambos Mouratidis, Manousakis Vassilis, Shareeful Islam, Stefanos Gritzalis, Evangelia Kavakli, "Towards the Design Of Secure And Privacy-Oriented Information Systems In The Cloud: Identifying The Major Concepts," Computer Standards & Interfaces, Volume 36 Issue 4, June, 2014, (Pages 759-775). (ID#:14-1561) Available at: http://dl.acm.org/citation.cfm?id=2588915.2589310&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 This paper emphasizes the different security challenges between cloud architecture and traditional distributed systems. The authors stress the imperative nature of thoroughly understanding security in the cloud environment in order to design secure cloud systems. Keywords: Cloud computing, Concepts, Privacy, Requirements, Security, Security and Privacy Issues
  • Daniel Kifer, Ashwin Machanavajjhala, "Pufferfish: A Framework for Mathematical Privacy Definitions," ACM Transactions on Database Systems (TODS) Volume 39 Issue 1, January 2014, Article No. 3. (ID#:14-1562) Available at: http://dl.acm.org/citation.cfm?id=2576988.2514689&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 In this article, we introduce a new and general privacy framework called Pufferfish. The Pufferfish framework can be used to create new privacy definitions that are customized to the needs of a given application. The goal of Pufferfish is to allow experts in an application domain, who frequently do not have expertise in privacy, to develop rigorous privacy definitions for their data sharing needs. In addition to this, the Pufferfish framework can also be used to study existing privacy definitions. We illustrate the benefits with several applications of this privacy framework: we use it to analyze differential privacy and formalize a connection to attackers who believe that the data records are independent; we use it to create a privacy definition called hedging privacy, which can be used to rule out attackers whose prior beliefs are inconsistent with the data; we use the framework to define and study the notion of composition in a broader context than before; we show how to apply the framework to protect unbounded continuous attributes and aggregate information; and we show how to use the framework to rigorously account for prior data releases. Keywords: Privacy, differential privacy
  • Younsung Choi, Donghoon Lee, Woongryul Jeon, Dongho Won, "Password-based Single-File Encryption and Secure Data Deletion for Solid-State Drive," ICUIMC '14 Proceedings of the 8th International Conference on Ubiquitous Information Management and Communication, January 2014,Article No. 5. (ID#:14-1563) Available at: http://dl.acm.org/citation.cfm?id=2557977.2558072&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 Recently, SSD sales are on the steady rise. The reason of sales is that SSD is faster and smaller than HDD. Therefore SSD serves as a typical alternative to HDD. In fact, SSD considerably emulates the technology of HDD such as the communication protocol and hardware interfaces. So, the technology of HDD can quickly be adapted to SSD. However, SSD slightly differ from HDD in the way of storing, managing and accessing the data. Because of the difference between SSD and HDD, it is possible that technology and command of HDD are not accurately operated on SSD. So various problems on the technology of SSD have occurred gradually including encryption and deletion. To solve this problem, we have to analyze the method of data encryption and secure data deletion suitable for SSD. In this paper, we research significant technology and security problem relevant to SSD. On the basis of analysis about SSD problems, we propose the password-based single-file encryption and secure data deletion for SSD and compare the previous researches with proposed method. Keywords: encryption, secure deletion, solid state drive
  • Xiaowei Li, Yuan Xue, "A Survey On Server-Side Approaches To Securing Web Applications," ACM Computing Surveys (CSUR) Surveys, Volume 46 Issue 4, April 2014, Article No. 54. (ID#:14-1564) Available at: http://dl.acm.org/citation.cfm?id=2597757.2541315&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 Web applications are one of the most prevalent platforms for information and service delivery over the Internet today. As they are increasingly used for critical services, web applications have become a popular and valuable target for security attacks. Although a large body of techniques have been developed to fortify web applications and mitigate attacks launched against them, there has been little effort devoted to drawing connections among these techniques and building the big picture of web application security research. This article surveys the area of securing web applications from the server side, with the aim of systematizing the existing techniques into a big picture that promotes future research. We first present the unique aspects of the web application development that cause inherent challenges in building secure web applications. We then discuss three commonly seen security vulnerabilities within web applications: input validation vulnerabilities, session management vulnerabilities, and application logic vulnerabilities, along with attacks that exploit these vulnerabilities. We organize the existing techniques along two dimensions: (1) the security vulnerabilities and attacks that they address and (2) the design objective and the phases of a web application during which they can be carried out. These phases are secure construction of new web applications, security analysis/testing of legacy web applications, and runtime protection of legacy web applications. Finally, we summarize the lessons learned and discuss future research opportunities in this area. 
Keywords: Web application security, application logic vulnerability, input validation vulnerability, session management vulnerability
  • Prithvi Bisht, Timothy Hinrichs, Nazari Skrupsky, V. N. Venkatakrishnan, "Automated Detection Of Parameter Tampering Opportunities And Vulnerabilities In Web Applications," Journal of Computer Security, Volume 22 Issue 3, May 2014, (Pages 415-465). (ID#:14-1565) Available at: http://dl.acm.org/citation.cfm?id=2597910.2597913&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 This paper reviews the definition of parameter tampering vulnerabilities and presents an overall approach to parameter tampering detection. The challenges of this approach are explored in both blackbox and whitebox settings for a detection solution. Testing results are explained, along with a survey of current defense methods and their strengths. Keywords: Dynamic Monitoring, Parameter Tampering Attacks, Symbolic Evaluation
  • Julian Thome, Alessandra Gorla, Andreas Zeller, "Search-based Security Testing of Web Applications," SBST 2014 Proceedings of the 7th International Workshop on Search-Based Software Testing, June 2014, (Pages 5-14). (ID#:14-1566) Available at: http://dl.acm.org/citation.cfm?id=2593833.2593835&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 SQL injections are still the most exploited web application vulnerabilities. We present a technique to automatically detect such vulnerabilities through targeted test generation. Our approach uses search-based testing to systematically evolve inputs to maximize their potential to expose vulnerabilities. Starting from an entry URL, our BIOFUZZ prototype systematically crawls a web application and generates inputs whose effects on the SQL interaction are assessed at the interface between Web server and database. By evolving those inputs whose resulting SQL interactions show best potential, BIOFUZZ exposes vulnerabilities on real-world Web applications within minutes. As a black-box approach, BIOFUZZ requires neither analysis nor instrumentation of server code; however, it even outperforms state-of-the-art white-box vulnerability scanners. Keywords: SQL injections, Search-based testing, Security testing
  • Riboni, D.; Villani, A.; Vitali, D.; Bettini, C.; Mancini, L.V., "Obfuscation of Sensitive Data for Incremental Release of Network Flows," Networking, IEEE/ACM Transactions on, vol.PP, no.99, pp.1,1, March 2014. (ID#:14-1567) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6774971&isnumber=4359146 Large datasets of real network flows acquired from the Internet are an invaluable resource for the research community. Applications include network modeling and simulation, identification of security attacks, and validation of research results. Unfortunately, network flows carry extremely sensitive information, and this discourages the publication of those datasets. Indeed, existing techniques for network flow sanitization are vulnerable to different kinds of attacks, and solutions proposed for microdata anonymity cannot be directly applied to network traces. In our previous research, we proposed an obfuscation technique for network flows, providing formal confidentiality guarantees under realistic assumptions about the adversary's knowledge. In this paper, we identify the threats posed by the incremental release of network flows, we propose a novel defense algorithm, and we formally prove the achieved confidentiality guarantees. An extensive experimental evaluation of the algorithm for incremental obfuscation, carried out with billions of real Internet flows, shows that our obfuscation technique preserves the utility of flows for network traffic analysis. Keywords: Data privacy; Encryption; P networks; Knowledge engineering; Privacy; Uncertainty; Data sharing; network flow analysis; privacy; security
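Riboni et al.'s flow obfuscation above is far more sophisticated, but the basic idea behind many trace-sanitization schemes — replacing identifiers with stable, key-dependent pseudonyms — can be sketched simply. A Python illustration (the key and addresses are hypothetical; unlike prefix-preserving schemes, this mapping does not preserve subnet structure):

```python
import hashlib
import hmac
import ipaddress

# Hypothetical key; a real deployment would use a securely stored
# random key, since anyone holding it can re-map known addresses.
SECRET_KEY = b"hypothetical-key"

def pseudonymize_ip(ip: str) -> str:
    """Map an IP address to a stable pseudonym in 10.0.0.0/8.

    The mapping is consistent (same input -> same output, so flow
    structure such as talker pairs survives) but cannot be reversed
    without the key.
    """
    digest = hmac.new(SECRET_KEY, ip.encode(), hashlib.sha256).digest()
    # Fold the first 3 MAC bytes into the 10.0.0.0/8 private range.
    host = int.from_bytes(digest[:3], "big")
    return str(ipaddress.IPv4Address((10 << 24) | host))
```

Because pseudonyms are consistent, the incremental-release threat the paper analyzes still applies: each new trace released under the same key gives an adversary more linkage material.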



Embedded System Security



Embedded Systems Security aims for a comprehensive view of security across hardware, platform software (including operating systems and hypervisors), software development processes, data protection protocols (both networking and storage), and cryptography. Critics say embedded device manufacturers often lack maturity when it comes to designing secure embedded systems. They say vendors in the embedded device and critical infrastructure market are starting to conduct classic threat modeling and risk analysis on their equipment, but they have not matured to the point of developing formal secure development standards. Research is beginning to bridge the gap between promise and performance, as the articles cited here suggest.

  • Dejun Mu; Wei Hu; Baolei Mao; Bo Ma, "A Bottom-Up Approach To Verifiable Embedded System Information Flow Security," Information Security, IET , vol.8, no.1, pp.12,17, Jan. 2014. (ID#:14-1662) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6687153&isnumber=6687150 With the wide deployment of embedded systems and constant increase in their inter-connections, embedded systems tend to be confronted with attacks through security holes that are hard to predict using typical security measures such as access control or data encryption. To eliminate these security holes, embedded security should be accounted for during the design phase from all abstraction levels with effective measures taken to prevent unintended interference between different system components caused by harmful flows of information. This study proposes a bottom-up approach to designing verifiably information flow secure embedded systems. The proposed method enables tight information flow controls by monitoring all flows of information from the level of Boolean gates. It lays a solid foundation to information flow security in the underlying hardware and exposes the ability to prove security properties to all abstraction levels in the entire system stack. With substantial amounts of modifications made to the instruction set architecture, operating system, programming language and input/output architecture, the target system can be designed to be verifiably information flow secure. Keywords: embedded systems; formal verification; instruction sets; operating systems (computers);security of data; access control; bottom up approach; data encryption; information flow controls; input-output architecture; instruction set architecture; operating system; programming language; security holes; verifiable embedded system information flow security
  • Apostolos P. Fournaris, Nicolas Sklavos, "Secure Embedded System Hardware Design - A Flexible Security And Trust Enhanced Approach," Computers and Electrical Engineering, Volume 40 Issue 1, January 2014, (Pages 121-133). (ID#:14-1663) Available at: http://dl.acm.org/citation.cfm?id=2577586.2577712&coll=DL&dl=GUIDE&CFID=485004180&CFTOKEN=38695484 This paper explores the vulnerabilities and risks associated with embedded systems and data collection, particularly those stemming from the advent of new smart devices (mobile phones, cars, household technology). From an ES hardware perspective, the authors analyze various physical attacks and explore countermeasures in terms of reconfigurable logic flexibility, adaptability, and scalability. The paper applies the aforementioned criteria to a proposed FPGA-based embedded system hardware architecture, offering realistic options for embedded system security enhancement. Keywords: Embedded system security, Hardware design, Physical attacks, Reconfigurable logic, Trusted computing
  • Hatzivasilis, George; Papaefstathiou, Ioannis; Manifavas, Charalampos; Papadakis, Nikos, "A Reasoning System for Composition Verification and Security Validation," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, vol., no., pp.1,4, March 30 2014-April 2 2014. (ID#:14-1664) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814001&isnumber=6813963 The procedure to prove that a system-of-systems is composable and secure is a very difficult task. Formal methods are mathematically-based techniques used for the specification, development and verification of software and hardware systems. This paper presents a model-based framework for dynamic embedded system composition and security evaluation. Event Calculus is applied for modeling the security behavior of a dynamic system and calculating its security level with the progress in time. The framework includes two main functionalities: composition validation and derivation of security and performance metrics and properties. Starting from an initial system state and given a series of further composition events, the framework derives the final system state as well as its security and performance metrics and properties. We implement the proposed framework in an epistemic reasoner, the rule engine JESS with an extension of DECKT for the reasoning process and the JAVA programming language. Keywords: (not provided)
  • Al-Jarrah, Omar; Arafat, Ahmad, "Network Intrusion Detection System using attack behavior classification," Information and Communication Systems (ICICS), 2014 5th International Conference on , vol., no., pp.1,6, 1-3 April 2014. (ID#:14-1665) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841978&isnumber=6841931 This paper discusses probe, or reconnaissance, attacks, which attempt to collect pertinent information about a network. The authors propose embedding temporal attack behavior into a TDNN neural network system in order to recognize network attacks more efficiently and quantitatively. The proposed system features modules for packet capture, preprocessing, pattern recognition, classification, monitoring, and alerting. Keywords: IP networks; Intrusion detection; Neural networks; Pattern recognition; Ports (Computers); Probes; Protocols; Host sweep; Intrusion Detection Systems; Network probe attack; Port scan; TDNN neural network
  • Pierre Schnarz, Joachim Wietzke, Ingo Stengel, "Towards Attacks On Restricted Memory Areas Through Co-Processors In Embedded Multi-OS Environments Via Malicious Firmware Injection," CS2 '14 Proceedings of the First Workshop on Cryptography and Security in Computing Systems, January 2014, (Pages 25-30). (ID#:14-1666) Available at: http://dl.acm.org/citation.cfm?id=2556315.2556318&coll=DL&dl=GUIDE&CFID=485004180&CFTOKEN=38695484 Multi-operating systems have been introduced to manage the manifold requirements of embedded systems. Especially in safety critical environments like the automotive domain the system's security must be guaranteed. Despite the state-of-the-art virtualization mechanisms, the idea of asymmetric-multi-processing can be used to split a system's hardware resources, which makes the virtualization of hardware obsolete. However, this special technique to implement a multi-operating system might add special demands to security objectives like isolation. In this paper an attack vector is shown, which utilizes a co-processor to break through the isolation of an operating system domain. Using a multi-operating system environment, we inject a malicious firmware into the co-processor in order to circumvent isolation mechanisms on behalf of an attacking operating system. Our attack vector demonstrates weaknesses in CPU centric isolation mechanisms, which will be further presented in the remainder of the document. Keywords: (not provided)
  • Subramanian, N.; Zalewski, J., "Quantitative Assessment of Safety and Security of System Architectures for Cyberphysical Systems Using the NFR Approach," Systems Journal, IEEE, vol. PP, no.99, pp.1,13, January 2014. (ID#:14-1667) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6705608&isnumber=4357939 Cyberphysical systems (CPSs) are an integral part of modern societies since most critical infrastructures are controlled by these systems. CPSs incorporate computer-based and network-based technologies for the monitoring and control of physical processes. Two critically important properties of CPSs are safety and security. It is widely accepted that properties such as safety and security should be considered at the system design phase itself, particularly at the architectural level wherein such properties are embedded in the final system. However, safety and security are interrelated, and there seems to be a lack of techniques that consider both of them together. The nonfunctional requirement (NFR) approach is a technique that allows the simultaneous evaluation of both safety and security at the architectural level. In this paper, we apply the NFR approach to quantitatively evaluate the safety and security properties of an example CPS, i.e., an oil pipeline control system. We conclude that the NFR approach provides practical results that can be used by designers and developers to create safe and secure CPSs. Keywords: Cyberphysical systems (CPSs); nonfunctional requirement (NFR) approach; safety; security; system architecture assessment
  • Strobel, D.; Oswald, D.; Richter, B.; Schellenberg, F.; Paar, C., "Microcontrollers as (In)Security Devices for Pervasive Computing Applications," Proceedings of the IEEE , vol.PP, no.99, pp.1,17, June 2014. (ID#:14-1668) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6826474&isnumber=4357935 Often overlooked, microcontrollers are the central component in embedded systems which drive the evolution toward the Internet of Things (IoT). They are small, easy to handle, low cost, and with myriads of pervasive applications. An increasing number of microcontroller-equipped systems are security and safety critical. In this tutorial, we take a critical look at the security aspects of today's microcontrollers. We demonstrate why the implementation of sensitive applications on a standard microcontroller can lead to severe security problems. To this end, we summarize various threats to microcontroller-based systems, including side-channel analysis and different methods for extracting embedded code. In two case studies, we demonstrate the relevance of these techniques in real-world applications: Both analyzed systems, a widely used digital locking system and the YubiKey 2 onetime password generator, turned out to be susceptible to attacks against the actual implementations, allowing an adversary to extract the cryptographic keys which, in turn, leads to a total collapse of the system security. Keywords: Algorithm design and analysis; Clocks; Cryptography; Field programmable gate arrays; Microcontrollers; Registers; Code extraction; microcontroller; real-world attacks; reverse engineering; side-channel analysis
  • Turkoglu, Cagin; Cagdas, Serhat; Celebi, Anil; Erturk, Sarp, "Hardware Design of An Embedded Real-Time Acoustic Source Location Detector," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on , vol., no., pp.1,4, March 30 2014-April 2 2014. (ID#:14-1669) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814022&isnumber=6813963 This paper presents an embedded system that detects the 3-dimensional location of an acoustic source using a multiple-microphone constellation. The system consists of a field programmable gate array (FPGA) that is used as the main processing unit, together with the necessary peripherals. The sound signals are captured using multiple microphones that are connected to the embedded system using XLR connectors. The analog sound signals are first amplified using programmable gain amplifiers (PGAs) and then digitized before they are provided to the FPGA. The FPGA carries out the computations necessary for the algorithms to detect the acoustic source location in real-time. The system can be used for consumer electronics applications as well as security and defense applications. Keywords: (not provided)
  • Brunel, Jeremie; Pacalet, Renaud; Ouaarab, Salaheddine; Duc, Guillaume, "SecBus, a Software/Hardware Architecture for Securing External Memories," Mobile Cloud Computing, Services, and Engineering (MobileCloud), 2014 2nd IEEE International Conference on , vol., no., pp.277,282, 8-11 April 2014. (ID#:14-1670) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6834976&isnumber=6823830 Embedded systems are ubiquitous nowadays. In many cases, they manipulate sensitive applications or data and may be the target of logical or physical attacks. On systems that contain a System-on-Chip connected to an external memory, which is the case of numerous medium to large-size embedded systems, the content of this memory is relatively easy to retrieve or modify. This attack can be performed by probing the memory bus, dumping the content of the memory (cold boot attack) or by exploiting flaws in DMA-capable devices. Thus, if the embedded system manipulates sensitive applications or data, the confidentiality and the integrity of data in memory shall be protected. SecBus is a combined hardware/software architecture that guarantees these two security properties. This paper describes the different software components that are in charge of the management of the SecBus platform, from the early initialization to their use by the sensitive applications. Keywords: (not provided)
  • Zonghua Gu; Chao Wang; Ming Zhang; Zhaohui Wu, "WCET-Aware Partial Control-Flow Checking for Resource-Constrained Real-Time Embedded Systems," Industrial Electronics, IEEE Transactions on , vol.61, no.10, pp.5652,5661, Oct. 2014. (ID#:14-1671) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6718082&isnumber=6809870 Real-time embedded systems in diverse application domains, such as industrial control, automotive, and aerospace, are often safety-critical systems with stringent timing constraints that place strong demands on reliability and fault tolerance. Since fault-tolerance mechanisms inevitably add performance and/or resource overheads, it is important to guarantee a system's real-time constraints despite these overheads. Control-flow checking (CFC) is an effective technique for improving embedded systems' reliability and security by online monitoring and checking of software control flow to detect runtime deviations from the control-flow graph (CFG). Software-based CFC has high runtime overhead, and it is generally not applicable to resource-constrained embedded systems with stringent timing constraints. We present techniques for partial CFC (PCFC), which aims to achieve a tradeoff between runtime overhead, measured in terms of increases in worst-case execution time (WCET), and fault-detection coverage by selectively instrumenting a subset of basic blocks. Experimental results indicate that PCFC enables significant reductions of the program WCET compared to full CFC at the cost of a reduced fault-detection ratio, thus providing a tunable fault-tolerance technique that can be adapted by the designer to suit the needs of different applications. 
Keywords: embedded systems; fault diagnosis; flow graphs; software fault tolerance; system monitoring; CFG; PCFC; WCET-aware partial control-flow checking; control-flow graph; embedded systems reliability; fault-detection coverage; fault-detection ratio; fault-tolerance mechanisms; partial CFC; resource-constrained real-time embedded systems; runtime deviations; software control-flow checking; worst-case execution time; Embedded systems; Fault detection; Fault tolerance; Fault tolerant systems; Instruments; Optimization; Real-time systems; Control-flow checking (CFC); fault tolerance; real-time embedded systems
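The core idea behind control-flow checking can be sketched in a few lines. The toy below is a hedged illustration only, not the paper's method: a runtime monitor validates transitions between instrumented checkpoints against a reduced control-flow graph, and partial CFC (PCFC) leaves some blocks uninstrumented to trade fault coverage for lower WCET overhead. The graph and block names are hypothetical.

```python
# Toy control-flow checking: validate checkpoint transitions against
# a reduced CFG. An edge in the reduced CFG means some legal path
# (possibly through uninstrumented blocks) connects two checkpoints.
REDUCED_CFG = {
    "entry": {"check"},
    "check": {"check", "exit"},
    "exit":  set(),
}

class ControlFlowError(Exception):
    pass

def make_monitor(reduced_cfg):
    prev = [None]                       # last checkpoint seen
    def checkpoint(block):
        if prev[0] is not None and block not in reduced_cfg[prev[0]]:
            raise ControlFlowError(f"illegal transition {prev[0]} -> {block}")
        prev[0] = block
    return checkpoint

cp = make_monitor(REDUCED_CFG)
for b in ["entry", "check", "check", "exit"]:   # a legal trace
    cp(b)
print("trace accepted")

cp2 = make_monitor(REDUCED_CFG)
cp2("entry")
try:
    cp2("exit")      # runtime deviation: entry -> exit is not a CFG edge
except ControlFlowError as e:
    print("detected:", e)
```

In a real PCFC deployment the checkpoints would be compiler-inserted instructions and the monitor a hardware or software watchdog; the selection of which blocks to instrument is exactly the WCET/coverage tradeoff the paper optimizes.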
  • Helfmeier, C.; Boit, C.; Nedospasov, D.; Tajik, S.; Seifert, J.-P., "Physical vulnerabilities of Physically Unclonable Functions," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014 , vol., no., pp.1,4, 24-28 March 2014. (ID#:14-1672) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800564&isnumber=6800201 In recent years one of the most popular areas of research in hardware security has been Physically Unclonable Functions (PUF). PUFs provide primitives for implementing tamper detection, encryption and device fingerprinting. One particularly common application is replacing Non-volatile Memory (NVM) as key storage in embedded devices like smart cards and secure microcontrollers. Though a wide array of PUFs have been demonstrated in the academic literature, vendors have only begun to roll out PUFs in their end-user products. Moreover, the improvement to overall system security provided by PUFs is still the subject of much debate. This work reviews the state of the art of PUFs in general, and as a replacement for key storage in particular. We also review techniques and methodologies which make the physical response characterization and physical/digital cloning of PUFs possible. Keywords: SRAM chips; NVM; PUF; device fingerprinting; digital cloning; encryption; nonvolatile memory; physical cloning; physical response characterization; physical vulnerabilities; physically unclonable functions; secure microcontrollers; smart cards; tamper detection; Encryption; Hardware; Integrated circuits; Inverters; SRAM cells
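To see why physical characterization of a PUF matters, consider a minimal model of a weak PUF (e.g., SRAM power-up state) used for authentication. This sketch is an assumption-laden illustration, not from the paper: the device returns a noisy readout of an intrinsic fingerprint, and the verifier accepts responses within a Hamming-distance threshold, which is exactly the property a physical clone must reproduce.

```python
# Toy weak-PUF model: noisy readout of an intrinsic bit fingerprint,
# verified by Hamming distance. Parameters are illustrative.
import random

N_BITS = 128
THRESHOLD = 15          # tolerated noisy bits per readout
random.seed(1)

# Enrolled reference response (the device's intrinsic fingerprint).
fingerprint = [random.randint(0, 1) for _ in range(N_BITS)]

def puf_response(noise_prob=0.05):
    """Simulate one power-up readout: fingerprint with per-bit noise."""
    return [b ^ (random.random() < noise_prob) for b in fingerprint]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def authenticate(response):
    return hamming(response, fingerprint) <= THRESHOLD

print("genuine readout accepted:", authenticate(puf_response()))
impostor = [random.randint(0, 1) for _ in range(N_BITS)]   # random guess
print("random guess accepted:", authenticate(impostor))
```

An attacker who can characterize the fingerprint physically (the cloning techniques the paper reviews) defeats this scheme completely, since any response within the noise threshold is accepted.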
  • Patrick Koeberl, Steffen Schulz, Ahmad-Reza Sadeghi, Vijay Varadharajan, "TrustLite: a Security Architecture For Tiny Embedded Devices," EuroSys '14 Proceedings of the Ninth European Conference on Computer Systems, April 2014, Article No. 10. (ID#:14-1673) Available at: http://dl.acm.org/citation.cfm?id=2592798.2592824&coll=DL&dl=GUIDE&CFID=485004180&CFTOKEN=38695484 Embedded systems are increasingly pervasive, interdependent and in many cases critical to our everyday life and safety. Tiny devices that cannot afford sophisticated hardware security mechanisms are embedded in complex control infrastructures, medical support systems and entertainment products [51]. As such devices are increasingly subject to attacks, new hardware protection mechanisms are needed to provide the required resilience and dependency at low cost. In this work, we present the TrustLite security architecture for flexible, hardware-enforced isolation of software modules. We describe mechanisms for secure exception handling and communication between protected modules, enabling seamless interoperability with untrusted operating systems and tasks. TrustLite scales from providing a simple protected firmware runtime to advanced functionality such as attestation and trusted execution of userspace tasks. Our FPGA prototype shows that these capabilities are achievable even on low-cost embedded systems. Keywords: (not provided)
  • Lucas Davi, Patrick Koeberl, Ahmad-Reza Sadeghi, "Hardware-Assisted Fine-Grained Control-Flow Integrity: Towards Efficient Protection of Embedded Systems Against Software Exploitation," DAC '14 Proceedings of the 51st Annual Design Automation Conference, June 2014. (ID#:14-1674) Available at: http://dl.acm.org/citation.cfm?id=2593069.2596656&coll=DL&dl=GUIDE&CFID=485004180&CFTOKEN=38695484 Embedded systems have become pervasive and are built into a vast number of devices such as sensors, vehicles, mobile and wearable devices. However, due to resource constraints, they fail to provide sufficient security, and are particularly vulnerable to runtime attacks (code injection and ROP). Previous works have proposed the enforcement of control-flow integrity (CFI) as a general defense against runtime attacks. However, existing solutions either suffer from performance overhead or only enforce coarse-grain CFI policies that a sophisticated adversary can undermine. In this paper, we tackle these limitations and present the design of novel security hardware mechanisms to enable fine-grained CFI checks. Our CFI proposal is based on a state model and a per-function CFI label approach. In particular, our CFI policies ensure that function returns can only transfer control to active call sites (i.e., return landing pads of functions currently executing). Further, we restrict indirect calls to target the beginning of a function, and lastly, deploy behavioral heuristics for indirect jumps. Keywords: (not provided)
  • Shabir A. Parah, Javaid A. Sheikh, Abdul M. Hafiz, G. M. Bhat, "Data Hiding In Scrambled Images: A New Double Layer Security Data Hiding Technique," Computers and Electrical Engineering , Volume 40 Issue 1, January 2014, (Pages 70-82). (ID#:14-1675) Available at: http://dl.acm.org/citation.cfm?id=2577586.2577707&coll=DL&dl=GUIDE&CFID=485004180&CFTOKEN=38695484 Contemporary multimedia and communication technology has made it possible to replicate and distribute digital media easier and faster. This ease of availability exposes transmitted digital data on the network to the risk of being copied or intercepted illegally. Many cryptographic techniques are in vogue to encrypt the data before transmission to avert security problems. However, the disguised appearance of encrypted data makes the adversary suspicious and increases the chances of malicious attack. In such a scenario, data hiding has received significant attention as an alternate way to ensure data security. This paper presents a data hiding technique based on the concepts of scrambling and pseudorandom data hiding, to provide a data hiding system with two-layer security for the embedded data, and good perceptual transparency of the stego images. The proposed system uses the novel concept of embedding the secret data in scrambled (encrypted) cover images. The data embedding is carried out in the intermediate-significant and least-significant bit planes of the encrypted image at predetermined locations pointed to by the Pseudorandom Address Space (PAS) and Address Space Direction Pointer (ASDP). Experimental results prove the efficacy of the scheme vis-à-vis various parameters of interest. Keywords: (not provided)
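The pseudorandom-addressing half of such a scheme can be sketched compactly. The toy below is only a hedged illustration: it embeds secret bits in the least-significant bits of pixels at key-derived pseudorandom positions, and omits the cover-scrambling layer and the paper's PAS/ASDP addressing, which are more elaborate.

```python
# Toy LSB data hiding at pseudorandom, key-derived pixel positions.
# The cover is a flat list of 8-bit pixel values.
import random

def embed(cover, secret_bits, key):
    rng = random.Random(key)                     # key-seeded PRNG
    stego = cover[:]
    positions = rng.sample(range(len(cover)), len(secret_bits))
    for pos, bit in zip(positions, secret_bits):
        stego[pos] = (stego[pos] & 0xFE) | bit   # overwrite the LSB
    return stego

def extract(stego, n_bits, key):
    rng = random.Random(key)                     # same key -> same positions
    positions = rng.sample(range(len(stego)), n_bits)
    return [stego[pos] & 1 for pos in positions]

cover = [random.randrange(256) for _ in range(1024)]
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, secret, key="shared-key")
print("recovered:", extract(stego, len(secret), key="shared-key"))
```

Changing each pixel by at most one intensity level is what gives LSB schemes their perceptual transparency; the key-dependent addressing is the second security layer, since an adversary without the key does not know which pixels carry payload.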



Internet of Things



The term Internet of Things (IoT) refers to the advanced connectivity of the Internet with devices, systems and services that includes both machine-to-machine (M2M) communications and a variety of protocols, domains and applications. Since the concept incorporates literally billions of devices, the security implications are huge. The articles presented here include some of the work presented at the IEEE World Forum on the Internet of Things in March 2014. In the first six months of 2014, more than 300 articles were published globally on IoT. This selection focuses on security and security research.

  • Skarmeta, A.F.; Hernandez-Ramos, J.L.; Moreno, M.V., "A Decentralized Approach For Security And Privacy Challenges In The Internet Of Things," Internet of Things (WF-IoT), 2014 IEEE World Forum on , vol., no., pp.67,72, 6-8 March 2014. (ID#:14-1568) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803122&isnumber=6803102 The strong development of the Internet of Things (IoT) is dramatically changing traditional perceptions of the current Internet towards an integrated vision of smart objects interacting with each other. While in recent years many technological challenges have already been solved through the extension and adaptation of wireless technologies, security and privacy still remain the main barriers to IoT deployment on a broad scale. In this emerging paradigm, typical scenarios manage particularly sensitive data, and any leakage of information could severely damage the privacy of users. This paper provides a concise description of some of the major challenges related to these areas that still need to be overcome in the coming years for full acceptance by all IoT stakeholders involved. In addition, we propose a distributed capability-based access control mechanism which is built on public key cryptography in order to cope with some of these challenges. Specifically, our solution is based on the design of a lightweight token used for access to CoAP Resources, and an optimized implementation of the Elliptic Curve Digital Signature Algorithm (ECDSA) inside the smart object. The results obtained from our experiments demonstrate the feasibility of the proposal and show promise for covering more complex scenarios in the future, as well as for its application in specific IoT use cases. 
Keywords: Internet of Things; authorization; computer network security; data privacy; digital signatures; personal area networks; public key cryptography; 6LoWPAN; CoAP resources; ECDSA; IoT deployment; IoT stakeholders; distributed capability-based access control mechanism; elliptic curve digital signature algorithm; information leakage; lightweight token; security challenges; sensitive data management; user privacy; wireless technologies; Authentication; Authorization; Cryptography; Internet; Privacy; Security; cryptographic primitives; distributed access control
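The flow of a capability-based access token like the one the paper describes can be sketched briefly. Note the substitution: the paper signs tokens with ECDSA on the smart object, but since that requires an external crypto library, this sketch uses HMAC-SHA256 purely to show the issue/verify flow; all field names are hypothetical, not the paper's token format.

```python
# Sketch of a capability token granting access to a CoAP resource.
# HMAC-SHA256 stands in for the paper's ECDSA signature.
import hmac, hashlib, json, time

ISSUER_KEY = b"demo-issuer-key"   # stand-in for the issuer's signing key

def issue_token(subject, resource, permissions, ttl=3600):
    claims = {"sub": subject, "res": resource,
              "perm": permissions, "exp": time.time() + ttl}
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "tag": tag}

def authorize(token, resource, action):
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False                              # forged or altered token
    c = token["claims"]
    return (c["res"] == resource and action in c["perm"]
            and time.time() < c["exp"])

tok = issue_token("node-42", "coap://sensor/temp", ["GET"])
print(authorize(tok, "coap://sensor/temp", "GET"))   # permitted action
print(authorize(tok, "coap://sensor/temp", "PUT"))   # not granted
```

With ECDSA instead of HMAC, the constrained device only needs the issuer's public key to verify, which is what makes the scheme decentralized.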
  • Singh, D.; Tripathi, G.; Jara, A.J., "A survey of Internet-of-Things: Future vision, Architecture, Challenges And Services," Internet of Things (WF-IoT), 2014 IEEE World Forum on, vol., no., pp.287,292, 6-8 March 2014. (ID#:14-1569) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803174&isnumber=6803102 The Internet-of-Things (IoT) is the convergence of the Internet with RFID, sensors and smart objects. IoT can be defined as "things belonging to the Internet" that supply and access all real-world information. Billions of devices are expected to be associated into the system, which will require huge distribution of networks as well as the process of transforming raw data into meaningful inferences. IoT is the biggest promise of technology today, but it still lacks a novel mechanism, which can be perceived through the lenses of the Internet, things and semantic vision. This paper presents a novel architecture model for IoT with the help of a Semantic Fusion Model (SFM). This architecture introduces the use of a Smart Semantic framework to encapsulate the processed information from sensor networks. The smart embedded system has semantic logic and semantic value-based information to make it an intelligent system. This paper presents a discussion of Internet-oriented applications, services, visual aspects and challenges for the Internet of Things using RFID, 6LoWPAN and sensor networks. Keywords: Internet of Things; radiofrequency identification; Internet oriented applications; Internet-of-Things; IoT; RFID; SFM; real-world information; semantic fusion model; semantic logic; semantic value based information; smart embedded system; smart objects; smart semantic framework; Computer architecture; Internet; Logic gates; Monitoring; Radiofrequency identification; Semantics; Wireless sensor networks; 6LoWPAN; Architecture; Internet Services; Internet-of-Things; Semantic Web; Sensor Networks
  • Copigneaux, B., "Semi-autonomous, Context-Aware, Agent Using Behaviour Modelling And Reputation Systems To Authorize Data Operation In The Internet Of Things," Internet of Things (WF-IoT), 2014 IEEE World Forum on , vol., no., pp.411,416, 6-8 March 2014. (ID#:14-1570) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803201&isnumber=6803102 In this paper we address the issue of gathering the "informed consent" of an end user in the Internet of Things. We start by evaluating the legal importance and some of the problems linked with this notion of informed consent in the specific context of the Internet of Things. From this assessment we propose an approach based on a semi-autonomous, rule based agent that centralize all authorization decisions on the personal data of a user and that is able to take decision on his behalf. We complete this initial agent by integrating context-awareness, behavior modeling and community based reputation system in the algorithm of the agent. The resulting system is a "smart" application, the "privacy butler" that can handle data operations on behalf of the end-user while keeping the user in control. We finally discuss some of the potential problems and improvements of the system. Keywords: Internet of Things; authorisation; ubiquitous computing; Internet of Things; authorization decisions; authorize data operation; behavior modeling; behaviour modelling; community based reputation system; context-awareness; personal data; privacy butler; specific context; Authorization; Communities; Context; Data privacy; Europe; Internet; Privacy; Informed consent; agent; authorization; behaviour modelling; context-aware; data operation; reputation systems
  • Guo Xie-Chao, "The Research and Application of PCA Algorithm Based Recognition Technology in the Internet of Things," Measuring Technology and Mechatronics Automation (ICMTMA), 2014 Sixth International Conference on , vol., no., pp.737,740, 10-11 Jan. 2014. (ID#:14-1571) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6802799&isnumber=6802614 With the rapid development of technology, the Internet of Things is becoming one of the most important components of our society. The Internet of Things can connect everything to the Internet, making things convenient to recognize and manage. In the process of recognizing things in the Internet of Things, irrelevant and vague data appear that cannot be analyzed and identified effectively. Focusing on the work of analyzing data, this paper proposes a PCA-algorithm-based recognition technology and applies it to the Internet of Things. Through theory and experimental results, the authors show that the proposed technology can select the main information from prolix data and complete the recognition task. Keywords: Internet of Things; data analysis; pattern recognition; principal component analysis; Internet of Things; PCA algorithm based recognition technology; data analysis; data recognition; principal component analysis; prolix data; Automation; Mechatronics; PCA; The Internet of things; pattern recognition; recognition technology
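The core PCA step this abstract relies on — separating the "main information" from noisy, irrelevant dimensions — can be sketched in pure Python. This is a generic illustration of PCA, not the paper's pipeline: center the data, then find the dominant principal component by power iteration on the covariance matrix.

```python
# Pure-Python PCA sketch: dominant principal component via power
# iteration on the sample covariance matrix. Toy 2-D data.

def transpose(m):
    return [list(col) for col in zip(*m)]

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def pca_first_component(data, iters=100):
    n, d = len(data), len(data[0])
    means = [sum(col) / n for col in zip(*data)]
    centered = [[x - mu for x, mu in zip(row, means)] for row in data]
    xt = transpose(centered)
    # Sample covariance matrix C = X^T X / n.
    cov = [[sum(xt[i][k] * xt[j][k] for k in range(n)) / n
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):               # power iteration
        w = matvec(cov, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Points spread mostly along the y = x direction.
data = [[1, 1.1], [2, 1.9], [3, 3.2], [4, 3.8], [5, 5.1]]
pc = pca_first_component(data)
print(pc)   # roughly the y = x axis
```

Projecting observations onto the top few components is what lets a recognizer discard the "prolix" dimensions the abstract mentions.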
  • Cherrier, S.; Ghamri-Doudane, Y.M.; Lohier, S.; Roussel, G., "Fault-recovery and coherence in Internet of Things choreographies," Internet of Things (WF-IoT), 2014 IEEE World Forum on , vol., no., pp.532,537, 6-8 March 2014. (ID#:14-1572) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803224&isnumber=6803102 Facilitating the creation of Internet of Things (IoT) applications is a major concern to increase its development. D-LITe, our previous work, is a framework for that purpose. In D-LITe, Objects are considered as part of a whole application. They offer a REST web service that describes Object capabilities, receives the logic to be executed, and interacts with other stakeholders. Then, the complete application is seen as a choreography dynamically deployed on various objects. But the main issue of choreographies is the loss of coherence. Because of their unreliability, some networks used in IoT may introduce de-synchronization between Objects, leading to errors and failures. In this paper, we propose a solution to re-introduce coherence in the application, in order to keep the advantages of choreography while dealing with this main issue. An overlay of logical check-points at the application layer defines links between the coherent states of a set of objects and triggers re-synchronization messages. Correcting statements are thus spread through the network, which enables fault recovery in Choreographies. This paper ends with a comparison between the checking cost and the reliability improvement. Keywords: Internet of Things; Web services; system recovery; Internet of Things choreographies; IoT applications; REST Web service; fault coherence; fault recovery; object capabilities; reliability improvement; resynchronization messages; Coherence; Error analysis; Hardware; Radiation detectors; Reliability; Web services; Choreography; Fault-recovery; Fault-tolerance; Internet of Things
  • Nitti, M.; Girau, R.; Atzori, L., "Trustworthiness Management in the Social Internet of Things," Knowledge and Data Engineering, IEEE Transactions on , vol.26, no.5, pp.1253,1266, May 2014. (ID#:14-1573) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6547148&isnumber=6814899 The integration of social networking concepts into the Internet of things has led to the Social Internet of Things (SIoT) paradigm, according to which objects are capable of establishing social relationships in an autonomous way with respect to their owners with the benefits of improving the network scalability in information/service discovery. Within this scenario, we focus on the problem of understanding how the information provided by members of the social IoT has to be processed so as to build a reliable system on the basis of the behavior of the objects. We define two models for trustworthiness management starting from the solutions proposed for P2P and social networks. In the subjective model each node computes the trustworthiness of its friends on the basis of its own experience and on the opinion of the friends in common with the potential service providers. In the objective model, the information about each node is distributed and stored making use of a distributed hash table structure so that any node can make use of the same information. Simulations show how the proposed models can effectively isolate almost any malicious nodes in the network at the expenses of an increase in the network traffic for feedback exchange. Keywords: Communication/Networking and Information Technology; Computer Systems Organization; Distributed Systems; General; Internet of things; social networks; trustworthiness management
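The "subjective" model described above — each node rating a provider from its own experience plus the opinions of common friends — can be sketched as follows. The weighting scheme (a fixed blend `alpha` and credibility-weighted averaging) is an illustrative assumption, not the paper's formula.

```python
# Sketch of subjective trust: blend direct experience with the
# credibility-weighted opinions of common friends.

def subjective_trust(own_experience, friend_opinions, alpha=0.6):
    """own_experience: trust in [0,1] from past interactions, or None.
    friend_opinions: {friend_id: (opinion in [0,1], credibility > 0)}."""
    if friend_opinions:
        total_cred = sum(c for _, c in friend_opinions.values())
        indirect = sum(o * c for o, c in friend_opinions.values()) / total_cred
    else:
        indirect = 0.5                      # neutral prior, no information
    if own_experience is None:
        return indirect
    return alpha * own_experience + (1 - alpha) * indirect

opinions = {"friendA": (0.9, 1.0), "friendB": (0.7, 0.5)}
print(subjective_trust(0.8, opinions))
```

The objective model in the paper differs in where this computation happens: the same kind of aggregate is stored in a distributed hash table so every node sees identical trust values.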
  • Chen Jun; Chen Chi, "Design of Complex Event-Processing IDS in Internet of Things," Measuring Technology and Mechatronics Automation (ICMTMA), 2014 Sixth International Conference on , vol., no., pp.226,229, 10-11 Jan. 2014. (ID#:14-1574) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6802673&isnumber=6802614 With the development of the Internet of Things (IoT), more and more services and applications are deployed in physical spaces and information systems. Massive numbers of situation-aware sensors and devices are embedded in IoT environments, continuously producing huge amounts of data for IoT systems and platforms. Processing the data streams generated by IoT networks with different patterns has raised new challenges for the real-time performance of intrusion detection systems (IDS) in IoT environments, which have to react quickly to hacking attacks and malicious activities against IoT. In recent years, Complex Event Processing (CEP) technology has provided new solutions in the field of complex pattern identification and real-time data processing, which can be used to improve the performance of traditional IDS in IoT environments. An IDS integrated with CEP can be used to deal with patterns among events and process large volumes of messages with low latency. In this paper we propose an event-processing IDS architecture for IoT environments on the basis of a security-requirements analysis for IDS. Implementation details for real-time event processing are also presented, developed with Esper, a CEP engine for complex event processing and event-series analysis. 
Keywords: Internet; Internet of Things; security of data; CEP technology; Internet of Things; IoT networks; complex event-processing IDS design; data stream processing; event series analysis; information systems; intrusion detection system; physical spaces; real-time event processing; security requirements analysis; situation-aware sensors; Automation; Mechatronics
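A flavor of the windowed pattern matching that such a CEP-based IDS delegates to Esper can be given in a few lines. Esper itself is a Java engine driven by EPL queries; the Python stand-in below is only an illustration of the sliding-window idea, with a hypothetical "too many failed logins" rule.

```python
# Toy CEP-style IDS rule: alert when one source produces >= threshold
# failed-login events inside a sliding time window.
from collections import defaultdict, deque

class FailedLoginRule:
    def __init__(self, threshold=3, window=60.0):
        self.threshold, self.window = threshold, window
        self.events = defaultdict(deque)        # source -> event timestamps

    def on_event(self, source, timestamp):
        q = self.events[source]
        q.append(timestamp)
        while q and timestamp - q[0] > self.window:   # expire stale events
            q.popleft()
        if len(q) >= self.threshold:
            return f"ALERT: {len(q)} failed logins from {source} in {self.window}s"
        return None

rule = FailedLoginRule()
stream = [("10.0.0.5", 1.0), ("10.0.0.5", 10.0), ("10.0.0.9", 12.0),
          ("10.0.0.5", 30.0), ("10.0.0.5", 200.0)]
for src, ts in stream:
    alert = rule.on_event(src, ts)
    if alert:
        print(alert)
```

A production engine like Esper evaluates many such windowed patterns declaratively and concurrently over the event stream, which is what gives the IDS its low-latency property.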
  • Kai Kang; Zhibo Pang; Li Da Xu; Liya Ma; Cong Wang, "An Interactive Trust Model for Application Market of the Internet of Things," Industrial Informatics, IEEE Transactions on , vol.10, no.2, pp.1516,1526, May 2014. (ID#:14-1575) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6742593&isnumber=6809862 The Internet of Things (IoT) application market (IAM) is supposed to be an effective approach for service distribution in the era of IoT. To protect the privacy and security of users, a systematic mechanism to determine the trustworthiness of the applications in the IAM is demanded. In this paper, an interactive trust model (ITM) is proposed based on interaction between the application market and end users. In this model, application trustworthiness (AT) is quantitatively evaluated by the similarity between the application's behavior and the behavior expected by the user. In particular, by using the evaluation vector and the feedback vector, features of applications in the marketplace and the behavior of applications on end devices can be exchanged in mathematical form to establish the connection between market and users. A behavior-based detecting agent on the user's device gives strong evidence of what applications have done with respect to privacy and security. Indicators derived from this model are presented in the market along with the application, helping users to select the most appropriate application from the market more efficiently. Keywords: Internet; Internet of Things; IAM; ITM; Internet of Things; IoT application market; application trustworthiness; behavior-based detecting agent; feedback vector feature; interactive trust model; user privacy; user security; Ecosystems; Mathematical model; Mobile communication; Privacy; Security; Smart phones; Vectors; Evaluation vector; Internet of Things (IoT); IoT application market (IAM); feedback vector; interactive trust model (ITM)
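The trust indicator described above — similarity between an application's observed behavior vector and the behavior the user expects — can be sketched with cosine similarity. This is a plausible choice for illustration only; the paper's exact metric and behavior dimensions may differ, and the dimensions below are hypothetical.

```python
# Application trustworthiness as similarity between expected and
# observed behavior vectors (cosine similarity as an example metric).
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical behavior dimensions, per hour:
# [network calls, contacts reads, location reads, SMS sends]
expected  = [20, 0, 2, 0]     # what the user consented to
observed  = [22, 0, 3, 0]     # a well-behaved app
malicious = [25, 40, 3, 10]   # leaks contacts and sends SMS

print(round(cosine_similarity(expected, observed), 3))
print(round(cosine_similarity(expected, malicious), 3))
```

The on-device detecting agent would populate the observed vector, and the market would publish the resulting score next to the application listing.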
  • Antonio J. Jara, Socrates Varakliotis, Antonio F. Skarmeta, Peter Kirstein, "Extending the Internet of Things to the Future Internet through IPv6 Support," Mobile Information Systems - Internet of Things, Volume 10 Issue 1, January 2014, ( Pages 3-17). (ID#:14-1576) Available at: http://dl.acm.org/citation.cfm?id=2590365.2590367&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 This work takes a look at integrating the Internet of Things (IoT) into the Internet by extending, adapting, and bridging using IPv6, while still ensuring backwards compatibility with legacy networks. The authors of this paper explore an extended Internet stack with adaptation layers, enabling ubiquitous access for all applications and services. Keywords: Backwards Compatibility, Internet Of Things, Internetworking, Ipv6, Network Communications, System Architecture, Wireless Sensor Networks
  • Jordi Mongay Batalla, Piotr Krawiec, "Conception Of ID Layer Performance At The Network Level For Internet Of Things," Personal and Ubiquitous Computing , Volume 18 Issue 2, February 2014. (ID#:14-1577) Available at: http://dl.acm.org/author_page.cfm?id=87259682657&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 The authors of this paper propose an original ID-layer architecture for the Internet of Things (IoT), expounding on human-readable, hierarchical ID-based unified addressing for connected devices and services. Keywords: Future Internet, ID-based routing, Internet of Things, Name data networking, Networking Named Content
  • Julien Montavont, Damien Roth, Thomas Noel, "Mobile IPv6 in Internet of Things: Analysis, Experimentations and Optimizations," Ad Hoc Networks, Volume 14, March, 2014, (Pages 15-25). (ID#:14-1578) Available at: http://dl.acm.org/citation.cfm?id=2580129.2580640&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 This work explores the projected impact of Internet of Things (IoT) on ubiquitous IP connectivity and the corresponding mobility management protocol used. The authors of this publication propose a study of Mobile IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN), a standard that facilitates connection to IPv6 networks for constrained devices. This paper also details a proposed mechanism for detecting movement, based on passive overhearings, as current standard procedures cannot be applied without modification. Keywords: 6LoWPAN, Internet of Things, Mobile IPv6, Mobility support, WSN
  • Keoh, S.; Kumar, S.; Tschofenig, H., "Securing the Internet of Things: A Standardization Perspective," Internet of Things Journal, IEEE , vol.PP, no.99, pp.1,1, 16 May 2014. (ID#:14-1579) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6817545&isnumber=6702522 The Internet-of-Things (IoT) is the next wave of innovation that promises to improve and optimize our daily life based on intelligent sensors and smart objects working together. Through IP connectivity, devices can now be connected to the Internet, thus allowing them to be read, controlled and managed at any time and any place. Security is an important aspect for IoT deployments. However, proprietary security solutions do not help in formulating a coherent security vision to enable IoT devices to securely communicate with each other in an interoperable manner. This paper gives an overview of the efforts in the Internet Engineering Task Force (IETF) to standardize security solutions for the IoT ecosystem. We first provide an in-depth review of the communication security solutions for IoT, specifically the standard security protocols to be used in conjunction with the Constrained Application Protocol (CoAP), an application protocol specifically tailored to the needs of adapting to the constraints of IoT devices. Since Datagram Transport Layer Security (DTLS) has been chosen as the channel security underneath CoAP, this paper also discusses the latest standardization efforts to adapt and enhance the DTLS for IoT applications. This includes the use of (i) raw public key in DTLS, (ii) extending DTLS Record Layer to protect group (multicast) communication, and (iii) profiling of DTLS for reducing the size and complexity of implementations on embedded devices. We also provide an extensive review of compression schemes that are being proposed in IETF to mitigate message fragmentation issues in DTLS. Keywords: (not provided)
  • Gu, Lize; Wang, Jingpei; Sun, Bin, "Trust management mechanism for Internet of Things," Communications, China , vol.11, no.2, pp.148,156, Feb 2014. (ID#:14-1580) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821746&isnumber=6821729 Trust management has been proven to be a useful technology for providing security services and as a consequence has been used in many applications such as P2P, Grid, and ad hoc networks. However, little research on trust mechanisms for the Internet of Things (IoT) can be found in the literature, though we argue that there is considerable necessity for applying trust mechanisms to IoT. In this paper, we establish a formal trust management control mechanism based on architecture modeling of IoT. We decompose the IoT into three layers, the sensor layer, core layer and application layer, from the perspective of network composition. Each layer is controlled by trust management for a special purpose: self-organization, effective routing and multi-service, respectively. The final decision-making is performed by the service requester according to the collected trust information as well as the requester's policy. Finally, we use formal semantics-based and fuzzy set theory to realize the above trust mechanism, the result of which provides a general framework for the development of trust models for IoT. Keywords: Decision making; Internet; Legged locomotion; Multiplexing; Security; Internet of Things; formal semantics; trust decision making; trust management
  • Stankovic, J.A., "Research Directions for the Internet of Things," Internet of Things Journal, IEEE, vol.1, no.1, pp.3,9, Feb. 2014. (ID#:14-1581) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6774858&isnumber=6810798 Many technical communities are vigorously pursuing research topics that contribute to the Internet of Things (IoT). Nowadays, as sensing, actuation, communication, and control become even more sophisticated and ubiquitous, there is a significant overlap in these communities, sometimes from slightly different perspectives. More cooperation between communities is encouraged. To provide a basis for discussing open research problems in IoT, a vision for how IoT could change the world in the distant future is first presented. Then, eight key research topics are enumerated and research problems within these topics are discussed. Keywords: Actuators; Internet; Medical services; Privacy; Real-time systems; Security; Sensors; Cyber physical systems; Internet of Things (IoT); mobile computing; pervasive computing; wireless sensor networks
  • Gyrard, A.; Bonnet, C.; Boudaoud, K., "Enrich machine-to-machine data with semantic web technologies for cross-domain applications," Internet of Things (WF-IoT), 2014 IEEE World Forum on, vol., no., pp.559,564, 6-8 March 2014. (ID#:14-1582) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803229&isnumber=6803102 The Internet of Things, more specifically the Machine-to-Machine (M2M) standard, enables machines and devices such as sensors to communicate with each other without human intervention. M2M devices provide a great deal of M2M data, mainly used for specific M2M applications such as weather forecasting, healthcare, or building automation. Existing applications are domain-specific and use their own descriptions of devices and measurements. A major challenge is to combine M2M data provided by these heterogeneous domains and by different projects. Understanding the meaning of M2M data in order to later reason about it is a difficult task. We propose a semantic-based approach to automatically combine, enrich, and reason about M2M data to provide promising cross-domain M2M applications. A proof-of-concept validating our approach is published online at http://sensormeasurement.appspot.com/. Keywords: Internet of Things; data analysis; semantic Web; Internet of Things; M2M devices; M2M standard; building automation; cross-domain applications; healthcare; human intervention; machine-to-machine data; machine-to-machine standard; semantic Web technology; weather forecasting; Diseases; Meteorology; Ontologies; Semantic Web; Semantics; Sensors; Temperature measurement; Cross-Domain Applications; Domain Ontologies; Internet of Things; Linked Open Data; Linked Open Rules; Linked Open Vocabularies; Machine-to-Machine (M2M); Naturopathy; Reasoning; Rules; SWRL; Semantic Web of Things; Semantic Web technologies
  • Puliafito, A., "SensorCloud: An Integrated System for Advanced Multi-risk Management," Network Cloud Computing and Applications (NCCA), 2014 IEEE 3rd Symposium on , vol., no., pp.1,8, 5-7 Feb. 2014. (ID#:14-1583) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786755&isnumber=6786745 This paper intends to contribute to the design of a pervasive infrastructure where new generation services interact with the surrounding environment, thus creating new opportunities for contextualization and geo-awareness. The architecture proposal is based on Sensor Web Enablement standard specifications and makes use of the Contiki Operating System for accomplishing the Internet of Things. We present both "data driven" and "device driven" solutions introducing the concept of Sensor and Actuator as a Service (SAaaS). Smart cities are assumed as the reference scenario. We present a data driven application specifically designed to monitor an industrial site with particular attention to power consumption. We also introduce an example of SAaaS service related to traffic monitoring. Keywords: Internet of Things; cloud computing; distributed sensors; power consumption; power engineering computing; risk management; traffic engineering computing; Contiki Operating System; Internet of Things; SAaaS; Sensor Web Enablement standard specifications; Sensor and Actuator as a Service; SensorCloud; advanced multirisk management; contextualization; data driven application; data driven solution; device driven solution; geo-awareness; industrial site monitoring; integrated system; new generation services; pervasive infrastructure design; power consumption; smart cities; traffic monitoring; Actuators; Cities and towns; Cloud computing; Computer architecture; Monitoring; Sensors; Internet of Things; cloud computing; sensor networks; smart cities
  • Duan, J.; Gao, D.; Yang, D.; Foh, C.H.; Chen, H., "An Energy-Aware Trust Derivation Scheme With Game Theoretic Approach in Wireless Sensor Networks for IoT Applications," Internet of Things Journal, IEEE, vol.1, no.1, pp.58,69, Feb. 2014. (ID#:14-1584) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779650&isnumber=6810798 Trust evaluation plays an important role in securing wireless sensor networks (WSNs), which is one of the most popular network technologies for the Internet of Things (IoT). The efficiency of the trust evaluation process is largely governed by the trust derivation, as it dominates the overhead in the process, and performance of WSNs is particularly sensitive to overhead due to the limited bandwidth and power. This paper proposes an energy-aware trust derivation scheme using game theoretic approach, which manages overhead while maintaining adequate security of WSNs. A risk strategy model is first presented to stimulate WSN nodes' cooperation. Then, a game theoretic approach is applied to the trust derivation process to reduce the overhead of the process. We show with the help of simulations that our trust derivation scheme can achieve both intended security and high efficiency suitable for WSN-based IoT networks. Keywords: Computational modeling; Electronic mail; Energy consumption; Games; Internet; Security; Wireless sensor networks; Energy awareness; Internet of Things (IoT); game theory; security; trust evaluation; wireless sensor network (WSN)
  • Piro, G.; Boggia, G.; Grieco, L.A., "A Standard Compliant Security Framework for IEEE 802.15.4 Networks," Internet of Things (WF-IoT), 2014 IEEE World Forum on, vol., no., pp.27,30, 6-8 March 2014. (ID#:14-1585) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803111&isnumber=6803102 The IEEE 802.15.4 standard is widely recognized as one of the most successful enabling technologies for short range low rate wireless communications. It covers all the details related to the MAC and PHY layers of the protocol stack. In addition, it supports protecting MAC packets using symmetric-key cryptography techniques and offers several security options. At the same time, however, the standard relies on upper layers to orchestrate the usage of the plethora of security profiles and configuration settings it makes available, as well as to handle the creation and exchange of encryption keys. In support of this functionality, this work describes a standard compliant security framework proposing: (i) different kinds of security architectures, (ii) an efficient mechanism for initializing a secure IEEE 802.15.4 domain, and (iii) a lightweight mechanism to negotiate link keys among devices. Keywords: Zigbee; cryptography; IEEE 802.15.4 networks; MAC layers; PHY layers; encryption keys; lightweight mechanism; security architectures; short range low rate wireless communications; standard compliant security framework; symmetric-key cryptography techniques; Authentication; Cryptography; IEEE 802.15 Standards; Internet; Protocols; IEEE 802.15.4; key management protocol; security framework
  • Riahi, A.; Natalizio, E.; Challal, Y.; Mitton, N.; Iera, A., "A systemic and cognitive approach for IoT security," Computing, Networking and Communications (ICNC), 2014 International Conference on, vol., no., pp.183,188, 3-6 Feb. 2014. (ID#:14-1586) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785328&isnumber=6785290 The Internet of Things (IoT) will enable objects to become active participants of everyday activities. Introducing objects into the control processes of complex systems makes IoT security very difficult to address. Indeed, the Internet of Things is a complex paradigm in which people interact with the technological ecosystem based on smart objects through complex processes. The interactions of these four IoT components, person, intelligent object, technological ecosystem, and process, highlight a systemic and cognitive dimension within security of the IoT. The interaction of people with the technological ecosystem requires the protection of their privacy. Similarly, their interaction with control processes requires the guarantee of their safety. Processes must ensure their reliability and realize the objectives for which they are designed. We believe that the move towards a greater autonomy for objects will bring the security of technologies and processes and the privacy of individuals into sharper focus. Furthermore, in parallel with the increasing autonomy of objects to perceive and act on the environment, IoT security should move towards a greater autonomy in perceiving threats and reacting to attacks, based on a cognitive and systemic approach. In this work, we will analyze the role of each of the mentioned actors in IoT security and their relationships, in order to highlight the research challenges and present our approach to these issues based on a holistic vision of IoT security. Keywords: Internet of Things; data privacy; security of data; IoT security; cognitive dimension; control processes; internet of things; object autonomy; privacy protection; safety guarantee; smart objects; systemic dimension; technological ecosystem; Context; Data privacy; Ecosystems; Privacy; Reliability; Safety; Security
  • Israa Alqassem, "Privacy and Security Requirements Framework For The Internet Of Things (IoT)," ICSE Companion 2014: Companion Proceedings of the 36th International Conference on Software Engineering, May 2014, (Pages 739-741). (ID#:14-1587) Available at: http://dl.acm.org/citation.cfm?id=2591062.2591201&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 This article addresses the earliest planning stages of the Internet of Things (IoT) in terms of projected privacy and security requirements. In order to plan for a mission-critical IoT, the authors advise developing an engineering framework for privacy and security requirements. Keywords: Internet of Things, RFID, privacy, requirements elicitation, requirements engineering, security
  • Jinshu Su, Dan Cao, Baokang Zhao, Xiaofeng Wang, Ilsun You, "ePASS: An Expressive Attribute-Based Signature Scheme With Privacy And An Unforgeability Guarantee for the Internet of Things," Future Generation Computer Systems, Volume 33, April 2014, (Pages 11-18). (ID#:14-1588) Available at: http://dl.acm.org/citation.cfm?id=2576237.2576308&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 This article addresses vulnerabilities in user privacy and the distinct need for policy-focused authentication in the emerging Internet of Things (IoT). The authors present ePASS, an Attribute-Based Signature (ABS) scheme that effectively restricts users from forging signatures with non-existent or feigned attributes. Only a user whose attributes satisfy the policy may affirm the message, and the signer remains anonymous. The method reduces computational cost and signature size. Keywords: Attribute-based signature, Diffie-Hellman, Internet of Things, Policy, Privacy, Security, Unforgeability
  • Lee W. Lerner, Zane R. Franklin, William T. Baumann, Cameron D. Patterson, "Using High-Level Synthesis And Formal Analysis To Predict And Preempt Attacks On Industrial Control Systems," FPGA '14 Proceedings of the 2014 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, Feb 2014, (Pages 209-212). (ID#:14-1589) Available at: http://dl.acm.org/citation.cfm?id=2554688.2554759&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 Industrial control systems (ICSes) have the conflicting requirements of security and network access. In the event of large-scale hostilities, factories and infrastructure would more likely be targeted by computer viruses than the bomber squadrons used in WWII. ICS zero-day exploits are now a commodity sold on brokerages to interested parties including nations. We mitigate these threats not by bolstering perimeter security, but rather by assuming that potentially all layers of ICS software have already been compromised and are capable of launching a latent attack while reporting normal system status to human operators. In our approach, application-specific configurable hardware is the final authority for scrutinizing controller commands and process sensors, and can monitor and override operations at the lowest (I/O pin) level of a configurable system-on-chip platform. The process specifications, stability-preserving backup controller, and switchover logic are specified and formally verified as C code, and synthesized into hardware to resist software reconfiguration attacks. To provide greater assurance that the backup controller can be invoked before the physical process becomes unstable, copies of the production controller task and plant model are accelerated to preview the controller's behavior in the near future. Keywords: formal analysis, high-level synthesis, industrial control systems, reconfigurable platform, security
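The ePASS entry above hinges on a user proving that his or her attributes satisfy a signing policy without revealing which attributes are held. As a hedged toy illustration of the policy-satisfaction check only (not the ePASS cryptography), the sketch below evaluates a threshold policy over a hypothetical attribute set; the attribute names and the threshold are invented for illustration and do not come from the paper.

```python
# Toy illustration of attribute-based policy satisfaction (not ePASS itself):
# a threshold policy of the form "at least k of these attributes".
def satisfies_threshold_policy(user_attrs, policy_attrs, k):
    """Return True if the user holds at least k of the policy's attributes."""
    return len(set(user_attrs) & set(policy_attrs)) >= k

# Hypothetical policy requiring 2 of {doctor, cardiology, on-call}.
policy = {"doctor", "cardiology", "on-call"}
print(satisfies_threshold_policy({"doctor", "on-call", "nurse"}, policy, 2))  # True
print(satisfies_threshold_policy({"nurse"}, policy, 2))  # False
```

In an ABS scheme such as ePASS, this check is enforced cryptographically, so a verifier learns only that some satisfying attribute set exists, not which attributes the signer actually holds.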

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Multicore Computing Security (Update)

Multicore Computing Security (Update)


As high performance computing has evolved into larger and faster computing solutions, new approaches to security have been identified. The articles cited here address security issues in multicore environments, including a new secure processor that obfuscates its memory access trace, proactive dynamic load balancing on multicore systems, and an experimental OS tailored to multicore processors of interest in signal processing. These materials were published in the first half of 2014.

  • Marat Zhanikeev, "A Software Design and Algorithms for Multicore Capture in Data Center Forensics," SFCS '14 Proceedings of the 2nd International Workshop on Security and Forensics in Communication Systems, June 2014, Pages 11-18. (ID#:14-1699) URL: http://dl.acm.org/citation.cfm?id=2598918.2598923&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 or http://dx.doi.org/10.1145/2598918.2598923 With the rapid dissemination of cloud computing, data centers are quickly turning into platforms that host highly heterogeneous collections of services. Traditional approaches to security and performance management find it difficult to cope in such environments. Specifically, it is becoming increasingly difficult to capture and process all the necessary information at data centers in real time, where packet capture at data center gateways serves as a practical example. This paper proposes a generic design for capturing and processing information on multicore architectures. The two main parts of the proposal are (1) an optimization formulation for distributing tasks across cores and (2) a practical design and implementation of a shared memory that can be used for communication between processes in a non-traditional way that requires neither memory locking nor message passing. Keywords: data center forensics, information capture, lock-free design, multicore architecture, multicore capture, packet capture, parallel processing, shared memory
  • Ruby Lee, Weidong Shi, Proceedings of the Third Workshop on Hardware and Architectural Support for Security and Privacy, June 2014. (ID#:14-1700) URL: http://dl.acm.org/citation.cfm?id=2611765&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 It is our great pleasure to introduce the technical program for the 3rd International Workshop on Hardware and Architectural Support for Security and Privacy (HASP 2014), which will be held in conjunction with the 41st International Symposium on Computer Architecture (ISCA 2014) in Minneapolis, MN, USA, on June 15, 2014. Although much attention has been directed to the study of security at the system and application levels, security and privacy research focusing on hardware and architecture aspects is at a new frontier. In the era of cloud computing, pervasive intelligent systems, and nano-scale devices, practitioners and researchers have to address new challenges and requirements in order to meet the ever-changing landscape of security research and new demands from consumers, enterprises, governments, defense and other industries. The goal of HASP is to bring together researchers, developers, and practitioners from academia and industry, to share practical insights, experiences and implementations related to all aspects of hardware and architectural support for security and privacy, and to discuss future trends in research and applications. We encourage contributions describing innovative work on hardware and architectural support for trust management, security of cloud computing, smartphones and Internet of Things, FPGA, SOC and multicore security, etc.
  • Bryan Jeffery Parno, Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers, Association for Computing Machinery and Morgan & Claypool, New York, NY, USA (c)2014. ISBN: 978-1-62705-477-5. (ID#:14-1701) As ever more sensitive information is digitized, it is imperative that we adopt adequate security protections. This mandate conflicts with consumer expectations of commodity computers. The author discusses aspects of trust, performance, and features of commodity devices and services with regard to security. Keywords: multicore computing, security
  • Shih-Hao Hung, Po-Hsun Chiu, Chia-Heng Tu, Wei-Ting Chou, Wen-Long Yang, "Message-Passing Programming for Embedded Multicore Signal-Processing Platforms," Journal of Signal Processing Systems, Volume 75 Issue 2, May 2014, Pages 123-139. (ID#:14-1702) Recently, embedded multicore platforms have become popular for signal processing, but software development for such platforms is still very slow. The authors suggest the use of a standard message-passing programming interface, such as a light-weight MPI, to support message passing on popular embedded multicore signal-processing platforms. Keywords: Embedded systems, Message-passing, Multicore, Performance optimization, Signal processing, Software portability
  • Berger, M.; Erlacher, F.; Sommer, C.; Dressler, F., "Adaptive load allocation for combining Anomaly Detectors using controlled skips," Computing, Networking and Communications (ICNC), 2014 International Conference on , vol., no., pp.792,796, 3-6 Feb. 2014. (ID#:14-1703) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785438&isnumber=6785290 Traditional Intrusion Detection Systems (IDS) can be complemented by an Anomaly Detection Algorithm (ADA) to also identify unknown attacks. We argue that, as each ADA has its own strengths and weaknesses, it might be beneficial to rely on multiple ADAs to obtain deeper insights. ADAs are very resource intensive; thus, real-time detection with multiple algorithms is even more challenging in high-speed networks. To handle such high data rates, we developed a controlled load allocation scheme that adaptively allocates multiple ADAs on a multi-core system. The key idea of this concept is to utilize as many algorithms as possible without causing random packet drops, which is the typical system behavior in overload situations. We developed a proof of concept anomaly detection framework with a sample set of ADAs. Our experiments confirm that the detection performance can substantially benefit from using multiple algorithms and that the developed framework is also able to cope with high packet rates. Keywords: multiprocessing systems; real-time systems; resource allocation; security of data; ADA; IDS; adaptive load allocation; anomaly detection algorithm; controlled load allocation; controlled skips; high-speed networks; intrusion detection systems; multicore system; multiple algorithms; real-time detection; resource intensive; unknown attacks; High-speed networks; Intrusion detection; Probabilistic logic; Reliability; Uplink; World Wide Web
  • Kong, J.; Koushanfar, F., "Processor-Based Strong Physical Unclonable Functions With Aging-Based Response Tuning," Emerging Topics in Computing, IEEE Transactions on , vol.2, no.1, pp.16,29, March 2014. (ID#:14-1704) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6656920&isnumber=6824880 A strong physically unclonable function (PUF) is a circuit structure that extracts an exponential number of unique chip signatures from a bounded number of circuit components. The strong PUF unique signatures can enable a variety of low-overhead security and intellectual property protection protocols applicable to several computing platforms. This paper proposes a novel lightweight (low overhead) strong PUF based on the timings of a classic processor architecture. A small amount of circuitry is added to the processor for on-the-fly extraction of the unique timing signatures. To achieve desirable strong PUF properties, we develop an algorithm that leverages intentional post-silicon aging to tune the inter- and intra-chip signatures variation. Our evaluation results show that the new PUF meets the desirable inter- and intra-chip strong PUF characteristics, whereas its overhead is much lower than the existing strong PUFs. For the processors implemented in 45 nm technology, the average inter-chip Hamming distance for 32-bit responses is increased by 16.1% after applying our post-silicon tuning method; the aging algorithm also decreases the average intra-chip Hamming distance by 98.1% (for 32-bit responses). Keywords: Aging; Circuit optimization; Delays; Logic gates; Microprocessors; Multicore processing; Network security; Silicon; Temperature measurement; Circuit aging; Multi-core processor; Negative bias temperature instability; Physically unclonable function; Post-silicon tuning; Secure computing platform; circuit aging; multi-core processor; negative bias temperature instability; postsilicon tuning; secure computing platform
  • Kishore, N.; Kapoor, B., "An efficient parallel algorithm for hash computation in security and forensics applications," Advance Computing Conference (IACC), 2014 IEEE International, vol., no., pp.873,877, 21-22 Feb. 2014. (ID#:14-1705) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779437&isnumber=6779283 Hashing algorithms are used extensively in information security and digital forensics applications. This paper presents an efficient parallel algorithm for hash computation. It is a modification of the SHA-1 algorithm for faster parallel implementation in applications such as digital signatures and data preservation in digital forensics. The algorithm implements a recursive hash to break the chain dependencies of the standard hash function. We discuss the theoretical foundation for the work, including the collision probability and the performance implications. The algorithm is implemented using the OpenMP API and experiments performed using machines with multicore processors. The results show a performance gain of more than a factor of 3 when running on the 8-core configuration of the machine. Keywords: application program interfaces; cryptography; digital forensics; digital signatures; file organization; parallel algorithms; probability; OpenMP API; SHA-1 algorithm; collision probability; data preservation; digital forensics; digital signature; hash computation; hashing algorithms; information security; parallel algorithm; standard hash function; Algorithm design and analysis; Conferences; Cryptography; Multicore processing; Program processors; Standards; Cryptographic Hash Function; Digital Forensics; Digital Signature; MD5; Multicore Processors; OpenMP; SHA-1
  • Dean Michael Ancajas, Koushik Chakraborty, Sanghamitra Roy, "Fort-NoCs: Mitigating the Threat of a Compromised NoC," DAC '14 Proceedings of the 51st Annual Design Automation Conference on Design Automation Conference, June 2014, Pages 1-6. (ID#:14-1706) URL: http://dl.acm.org/citation.cfm?id=2593069.2593144&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 or http://dx.doi.org/10.1145/2593069.2593144 In this paper, we uncover a novel and imminent threat to an emerging computing paradigm: MPSoCs built with 3rd party IP NoCs. We demonstrate that a compromised NoC (C-NoC) can enable a range of security attacks with an accomplice software component. To counteract these threats, we propose Fort-NoCs, a series of techniques that work together to provide protection from a C-NoC in an MPSoC. Fort-NoCs's foolproof protection disables covert backdoor activation, and reduces the chance of a successful side-channel attack by "clouding" the information obtained by an attacker. Compared to recently proposed techniques, Fort-NoCs offers a substantially better protection with lower overheads. Keywords: (not provided)
  • Kekai Hu, Tilman Wolf, Thiago Teixeira, Russell Tessier, "System-Level Security for Network Processors with Hardware Monitors," DAC '14 Proceedings of the 51st Annual Design Automation Conference on Design Automation Conference, June 2014, Pages 1-6. (ID#:14-1707) URL: http://dl.acm.org/citation.cfm?id=2593069.2593226&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 or http://dx.doi.org/10.1145/2593069.2593226 New attacks are emerging that target the Internet infrastructure. Modern routers use programmable network processors that may be exploited by merely sending suitably crafted data packets into a network. Hardware monitors that are co-located with processor cores can detect attacks that change processor behavior with high probability. In this paper, we present a solution to the problem of secure, dynamic installation of hardware monitoring graphs on these devices. We also address the problem of how to overcome the homogeneity of a network with many identical devices, where a successful attack, albeit possible only with small probability, may have devastating effects. Keywords: (not provided)
  • Richard L. Moore, Chaitan Baru, Diane Baxter, Geoffrey C. Fox, Amit Majumdar, Phillip Papadopoulos, Wayne Pfeiffer, Robert S. Sinkovits, Shawn Strande, Mahidhar Tatineni, Richard P. Wagner, Nancy Wilkins-Diehr, Michael L. Norman, "Gateways to Discovery: Cyberinfrastructure for the Long Tail of Science," XSEDE '14 Proceedings of the 2014 Annual Conference on Extreme Science and Engineering Discovery Environment, July 2014, Article No. 39. (ID#:14-1708) URL: http://dl.acm.org/citation.cfm?id=2616498.2616540&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 or http://dx.doi.org/10.1145/2616498.2616540 NSF-funded computing centers have primarily focused on delivering high-performance computing resources to academic researchers with the most computationally demanding applications. But now that computational science is so pervasive, there is a need for infrastructure that can serve more researchers and disciplines than just those at the peak of the HPC pyramid. Here we describe SDSC's Comet system, which is scheduled for production in January 2015 and was designed to address the needs of a much larger and more expansive science community: the "long tail of science". Comet will have a peak performance of 2 petaflop/s, mostly delivered using Intel's next generation Xeon processor. It will include some large-memory and GPU-accelerated nodes, node-local flash memory, 7 PB of Performance Storage, and 6 PB of Durable Storage. These features, together with the availability of high performance virtualization, will enable users to run complex, heterogeneous workloads on a single integrated resource. Keywords: GPU, High performance computing, high throughput computing, parallel file system, science gateways, scientific applications, solid-state drive, user support, virtualization
  • Chia-Che Tsai, Kumar Saurabh Arora, Nehal Bandi, Bhushan Jain, William Jannen, Jitin John, Harry A. Kalodner, Vrushali Kulkarni, Daniela Oliveira, Donald E. Porter, "Cooperation and Security Isolation of Library OSes for Multi-Process Applications," EuroSys '14 Proceedings of the Ninth European Conference on Computer Systems, April 2014, Article No. 9. (ID#:14-1709) URL: http://dl.acm.org/citation.cfm?id=2592798.2592812&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 or http://dx.doi.org/10.1145/2592798.2592812 Library OSes are a promising approach for applications to efficiently obtain the benefits of virtual machines, including security isolation, host platform compatibility, and migration. Library OSes refactor a traditional OS kernel into an application library, avoiding overheads incurred by duplicate functionality. When compared to running a single application on an OS kernel in a VM, recent library OSes reduce the memory footprint by an order-of-magnitude. Previous library OS (libOS) research has focused on single-process applications, yet many Unix applications, such as network servers and shell scripts, span multiple processes. Key design challenges for a multi-process libOS include management of shared state and minimal expansion of the security isolation boundary. This paper presents Graphene, a library OS that seamlessly and efficiently executes both single and multi-process applications, generally with low memory and performance overheads. Graphene broadens the libOS paradigm to support secure, multi-process APIs, such as copy-on-write fork, signals, and System V IPC. Multiple libOS instances coordinate over pipe-like byte streams to implement a consistent, distributed POSIX abstraction. These coordination streams provide a simple vantage point to enforce security isolation. Keywords: (not provided)
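The parallel hash entry above (Kishore and Kapoor) turns on one idea: breaking a hash function's sequential chaining by hashing fixed-size chunks independently and then hashing the concatenated digests. The sketch below is a minimal, hedged illustration of that generic chunk-and-recombine pattern using Python's hashlib; it is not the authors' modified SHA-1, and the chunk size and thread pool are assumptions chosen for illustration.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def parallel_hash(data: bytes, chunk_size: int = 1 << 20) -> str:
    """Hash fixed-size chunks independently, then hash the concatenated digests.

    Note: this intentionally yields a different value than plain SHA-1 over
    the whole input, so both sides of any comparison must use the same chunking.
    """
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # Each chunk's hash has no dependency on the others, so they can run in parallel.
    with ThreadPoolExecutor() as pool:
        digests = list(pool.map(lambda c: hashlib.sha1(c).digest(), chunks))
    return hashlib.sha1(b"".join(digests)).hexdigest()
```

Threads help here because CPython's hashlib releases the global interpreter lock while digesting large buffers; an OpenMP implementation in C, as in the paper, parallelizes the same structure across cores directly.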

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Open Systems

Open Systems


Open systems historically seemed "immune" to cyber-attacks because hackers used the same software. Increasingly, however, open systems vulnerabilities are being exploited. The seven articles cited here explore various aspects of open systems security, including resource sharing, software specifications, attack vectors, and dependability. The first paper, comparing open and closed systems, was presented at HotSoS 2014, the Symposium and Bootcamp on the Science of Security, held April 8-9, 2014, in Raleigh, North Carolina.

  • Joan Feigenbaum, Aaron D. Jaggard, Rebecca N. Wright, "Open vs. Closed Systems for Accountability," 2014 HotSoS, Symposium and Bootcamp on the Science of Security, Raleigh, NC. (To be published in the Journal of the ACM, 2014) (ID#:14-1409) Available at: http://www.hot-sos.org/2014/proceedings/papers.pdf This article explores the correspondence between accountability and identity in online activities by surveying principal-directed relationships, system identities (nyms), and actions performed using those nyms. Taking into consideration that punishment correlates with accountability, the authors devised a utility-theoretic framework to map the parallel between violators and the identities used to perform malicious activity. The paper also explores the correlation between bound identity and accountability. Keywords: Accountability, Identity, Utility, Open Systems, closed systems
  • Asberg, M.; Nolte, T.; Behnam, M., "Resource Sharing Using The Rollback Mechanism In Hierarchically Scheduled Real-Time Open Systems" Real-Time and Embedded Technology and Applications Symposium (RTAS), 2013 IEEE 19th , vol., no., pp.129,140, 9-11 April 2013. (ID#:14-1410) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6531086&isnumber=6531071 In this paper we present a new synchronization protocol called RRP (Rollback Resource Policy) which is compatible with hierarchically scheduled open systems and specialized for resources that can be aborted and rolled back. We conduct an extensive event-based simulation and compare RRP against all equivalent existing protocols in hierarchical fixed priority preemptive scheduling; SIRAP (Subsystem Integration and Resource Allocation Policy), OPEN-HSRPnP (open systems version of Hierarchical Stack Resource Policy no Payback) and OPEN-HSRPwP (open systems version of Hierarchical Stack Resource Policy with Payback). Our simulation study shows that RRP has better average-case response-times than the state-of-the-art protocol in open systems, i.e., SIRAP, and that it performs better than OPEN-HSRPnP/OPEN-HSRPwP in terms of schedulability of randomly generated systems. The simulations consider both resources that are compatible with rollback as well as resources incompatible with rollback (only abort), such that the resource-rollback overhead can be evaluated. We also measure CPU overhead costs (in VxWorks) related to the rollback mechanism of tasks and resources. We use the eXtremeDB (embedded real-time) database to measure the resource-rollback overhead1. 
Keywords: open systems; protocols; real-time systems; resource allocation; scheduling; synchronisation; CPU overhead cost; OPEN-HSRPnP protocol; RRP synchronization protocol; SIRAP protocol; average-case response time; embedded realtime database; event-based simulation; hierarchical fixed priority preemptive scheduling; open systems version of hierarchical stack resource policy with payback; realtime open system; resource sharing; resource-rollback overhead; rollback mechanism; rollback resource policy; subsystem integration and resource allocation policy; hierarchical scheduling; open systems; real-time systems; resource sharing; synchronization protocol
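The abort-and-rollback property that RRP targets, resources whose partial updates can be discarded when a critical section is aborted, can be illustrated with a minimal sketch. The class and method names below are hypothetical and single-threaded, for exposition only; they are not the authors' VxWorks/eXtremeDB implementation:

```python
# Minimal sketch of an abortable, rollback-capable shared resource, in the
# spirit of RRP (hypothetical API, not the paper's implementation).
import copy

class RollbackResource:
    def __init__(self, state):
        self._state = state
        self._snapshot = None

    def begin(self):
        # Snapshot the state so a preempted critical section can be undone.
        self._snapshot = copy.deepcopy(self._state)

    def commit(self):
        # Keep the updates; forget the snapshot.
        self._snapshot = None

    def abort(self):
        # Roll back: discard partial updates made since begin().
        self._state = self._snapshot
        self._snapshot = None

    @property
    def state(self):
        return self._state

res = RollbackResource({"balance": 100})
res.begin()
res.state["balance"] -= 30       # partial update inside the critical section
res.abort()                      # e.g. a higher-priority task forces an abort
print(res.state["balance"])      # 100: the partial update was rolled back
```

The rollback cost paid here (a snapshot on entry) is exactly the overhead the paper measures against protocols such as SIRAP that never abort resource accesses.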
  • Bahtijar Vogel, "Towards Open Architecture System," Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering, August 2013 (Pages 731-734) (ID#:14-1411) Available at: http://dl.acm.org/citation.cfm?id=2491411.2492407&coll=DL&dl=GUIDE&CFID=449793911&CFTOKEN=46643839 or http://dx.doi.org/10.1145/2491411.2492407 The use of diverse standards while developing web and mobile technologies brings new challenges when it comes to flexibility, interoperability, customizability and extensibility of software systems. In addition, such systems are in most cases closed, thus making the development and customization process a challenging effort for system designers, developers and end-users. All these developments require further research attention. This work addresses these challenges from an open system architecture perspective. The proposed approach is based on practical development efforts and on theoretical research, including a survey of state-of-the-art projects and definitions related to open architectures. The initial results indicate that a combination of service-oriented approaches with open source components and open standard data formats paves the way towards an open, extensible architecture. The core contributions of this research will be (a) an open architecture model, (b) the developed system itself based on the model, and (c) the benefits of applying open architecture approaches throughout the development process. Keywords: Open architecture, customizability, evolvability, extensibility, flexibility, model, validation, web and mobile software
  • Galina M. Antonova, "Simulation of Information Flow on Transport Layer of Open System Interconnection-Model," EUROSIM '13 Proceedings of the 2013 8th EUROSIM Congress on Modelling and Simulation, September 2013 (Pages 567-572) (ID#:14-1412) Available at: http://dl.acm.org/citation.cfm?id=2547778.2547818&coll=DL&dl=GUIDE&CFID=449793911&CFTOKEN=46643839 or http://dx.doi.org/10.1109/EUROSIM.2013.100 Network protocols at the transport layer of the Open System Interconnection (OSI) model must solve very difficult problems to deliver all messages to the necessary places at the designated time. There are no exact mathematical methods for solving the various optimization problems posed by dynamic network characteristics. Some of these problems may be successfully solved by means of modeling, simulation optimization, and other methods of modern cybernetics. The dynamic character of a network raises the problem of providing sufficient traffic capacity and admissible time delay under variable information flow or variable channel load. Sometimes the number of users grows so large that the network operating system refuses service and a set of messages is lost. It is therefore important to test a network for stable operation under different kinds of noise, in both amplitude and distribution density. The main goal of the paper is to consider one of the numerous ways of preliminarily testing network operation while taking its dynamical features into account. Keywords: Modeling, Monte-Carlo simulation, Information technologies, algorithm, Open Systems
  • Walt Scacchi, Thomas A. Alspaugh, "Processes in Securing Open Architecture Software Systems" Proceedings of the 2013 International Conference on Software and System Process May 2013 (Pages 126-135) (ID#:14-1413) Available at: http://dl.acm.org/citation.cfm?id=2486046.2486068&coll=DL&dl=GUIDE&CFID=449793911&CFTOKEN=46643839 or http://doi.acm.org/10.1145/2486046.2486068 Our goal is to identify and understand issues that arise in the development and evolution processes for securing open architecture (OA) software systems. OA software systems are those developed with a mix of closed source and open source software components that are configured via an explicit system architectural specification. Such a specification may serve as a reference model or product line model for a family of concurrently sustained OA system versions/variants. We employ a case study focusing on an OA software system whose security must be continually sustained throughout its ongoing development and evolution. We limit our focus to software processes surrounding the architectural design, continuous integration, release deployment, and evolution found in the OA system case study. We also focus on the role automated tools, software development support mechanisms, and development practices play in facilitating or constraining these processes through the case study. Our purpose is to identify issues that impinge on modeling (specification) and integration of these processes, and how automated tools mediate these processes, as emerging research problems areas for the software process research community. Finally, our study is informed by related research found in the prescriptive versus descriptive practice of these processes and tool usage in studies of conventional and open source software development projects. Keywords: Open architecture, configuration, continuous software development, process integration, process modeling, security
  • Yokote, Y.; Nagayama, T., "Dependability of open systems," Software Reliability Engineering Workshops (ISSREW), 2013 IEEE International Symposium on, vol., no., pp.25,35, 4-7 Nov. 2013. (ID#:14-1415) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6688859&isnumber=6688826 This presentation demonstrates an innovative way to build a target system maintaining its dependability in an open system environment, where the boundary of the target system is blurred in the sense that interaction with its surrounding environment is always altered due to several environmental changes such as business objectives, stakeholders' requirements, regulations, and performance requirements. What we call open systems is inherently providing such a nature, and recent IT systems particularly including cloud-based services are categorized in it. Keywords: cloud computing; open systems; software maintenance; software reliability; IT systems; business objectives; cloud-based services; dependability maintenance; open system dependability; performance requirements; Business; Databases; Educational institutions; Industries; Open systems; Safety; Standards

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Oscillating Behavior

Oscillating Behavior


The three articles cited here examine oscillating behavior in circuits and networks. The first paper was presented at HOT SoS 2014, the Symposium and Bootcamp on the Science of Security (HotSoS), a research event centered on the Science of Security held April 8-9, 2014 in Raleigh, North Carolina.

  • Anupam Das, Nikita Borisov, and Matthew Caesar, "Analyzing an Adaptive Reputation Metric for Anonymity Systems," HOT SoS 2014 (To be published 2014 in Journals of the ACM) (ID#:14-1366) Available at: http://www.hot-sos.org/2014/proceedings/papers.pdf This paper focuses on low-latency anonymity systems, which use numerous intermediary relays to forward traffic. Tor, free software designed to obscure user location and identity, is a popular example of such a system. These relays, besides being unreliable, may also be subject to maliciously coordinated relay failures, which jeopardize anonymity. The authors propose a reputation metric, based on users' past experiences, to help gauge relay reliability. The presented framework uses a proportional-integral-derivative (PID) reputation metric, which addresses the challenge of capturing malicious actors who deliberately alternate between hostile and benign behavior; over time, the framework assigns such actors a low reputation score. Another challenge addressed is the difficulty of isolating which relay caused an anonymous communication to fail. The authors propose a filtering scheme that eliminates relays with the largest accumulation of attacks. Live data is collected from the Tor network, and the results of the study are discussed. Keywords: Anonymity, Reputation Model, Tor Network, PID controller.
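The intuition behind a PID-style reputation, that the integral term remembers accumulated misbehavior even when an actor temporarily behaves well, can be sketched as follows. The gains, the update rule, and the clamping are assumptions made for this illustration, not the metric defined in the paper:

```python
# Illustrative PID-style reputation over a relay's observed circuit outcomes
# (1 = success, 0 = failure). Gains and update rule are assumptions for this
# sketch, not the paper's definitions.
def pid_reputation(outcomes, kp=0.3, ki=0.5, kd=0.2):
    integral = 0.0
    prev_err = 0.0
    score = 1.0
    for i, obs in enumerate(outcomes, start=1):
        err = 1.0 - obs                    # proportional: current misbehavior
        integral += err                    # integral: remembered misbehavior
        deriv = max(err - prev_err, 0.0)   # derivative: sudden turn hostile
        prev_err = err
        penalty = kp * err + ki * (integral / i) + kd * deriv
        score = max(0.0, 1.0 - penalty)
    return score

print(round(pid_reputation([1] * 20), 3))        # 1.0   consistently reliable
print(round(pid_reputation([1, 0] * 10), 3))     # 0.25  flits between behaviors
print(round(pid_reputation([0] + [1] * 19), 3))  # 0.975 one early failure, then reliable
```

The integral term is what separates the last two cases: a single early failure fades as successes accumulate, while persistent oscillation keeps the accumulated error, and hence the penalty, high.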
  • Stavrinides, S.G.; Karagiorgos, N.F.; Papathanasiou, K.; Nikolaidis, S.; Anagnostopoulos, A.N., "A Digital Nonautonomous Chaotic Oscillator Suitable for Information Transmission," Circuits and Systems II: Express Briefs, IEEE Transactions on , vol.60, no.12, pp.887,891, Dec. 2013 (ID#:14-1367) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6654269&isnumber=6679724 In this brief, an all-digital chaotic operating electronic circuit, which is suitable for information modulation and chaotic transmission, is introduced. The chaotic oscillating circuit is a nonautonomous one, and it is designed in such a way that signals at all stages are digital ones. No analog subcircuit is involved in generating chaos. Oscillator design and experimental demonstration of its chaotic behavior are provided, together with the evaluation of the chaotic properties that it possesses, employing established nonlinear dynamics tools. Keywords: {chaos; digital integrated circuits; nonlinear dynamical systems; oscillators; all-digital chaotic circuit; chaotic oscillating circuit; chaotic properties; chaotic transmission; digital nonautonomous chaotic oscillator; information modulation; information transmission; nonlinear dynamics tools; operating electronic circuit; Chaotic communication; Digital circuits; Entropy; Oscillators; Synchronization; Time series analysis; Chaotic circuit; chaotic modulation; digital oscillators; secure communication
  • JunSeong Kim; Jongsu Yi; Ho-Hyun Park, "A case study on oscillating behavior of end-to-end network latency," Information Networking (ICOIN), 2012 International Conference on , vol., no., pp.512,516, 1-3 Feb. 2012 (ID#:14-1368) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6164430&isnumber=6164338 Understanding network latency is important for providing consistent and acceptable levels of services in network-based applications. Due to the difficulty of estimating applications' network demands and the difficulty of predicting network load, however, the management of network resources has often been ignored in network-based systems. This paper presents network traffic oscillating behavior that has been observed in real operational networks. The basic idea on the study is that a variation of network latency is strongly correlated with the past history of the latency. Four typical network traffic status are defined based on the stability and the burstiness of latencies. Observations of network latency are an open research area across multi-time scale levels and the proposed network status would be helpful to simplify issues in the area. Keywords: {telecommunication network management; telecommunication traffic; end-to-end network latency; multitime scale level; network resources management; network traffic oscillating behavior; real operational network; Delay; History; Internet; Numerical models; Predictive models; Telecommunication traffic; end-to-end network latency; network burstiness; network stability; network traffic status; time series
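The paper's four traffic statuses, derived from the stability and burstiness of observed latencies, lend themselves to a compact sketch. The measures of spread and burstiness, the thresholds, and the status names below are illustrative assumptions, not the definitions from the paper:

```python
# Classify a window of end-to-end latency samples (ms) into one of four
# traffic statuses based on stability (relative spread around the mean) and
# burstiness (peak over median). Thresholds and names are assumptions.
import statistics

def traffic_status(latencies_ms, spread_thresh=0.2, burst_thresh=2.0):
    mean = sum(latencies_ms) / len(latencies_ms)
    median = statistics.median(latencies_ms)
    stable = statistics.pstdev(latencies_ms) / mean < spread_thresh
    bursty = max(latencies_ms) / median > burst_thresh
    if stable and not bursty:
        return "stable"
    if stable and bursty:
        return "stable-with-bursts"
    if not stable and not bursty:
        return "drifting"
    return "unstable-bursty"

print(traffic_status([50, 52, 49, 51, 50]))    # stable
print(traffic_status([50, 52, 49, 51, 200]))   # unstable-bursty
```

Because the paper observes that latency variation correlates strongly with its recent history, the status of the last window is a plausible predictor for the next one, which is how such a classification could simplify resource-management decisions.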


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Phishing

Phishing


Phishing remains a primary method for social engineering access to computers and information. Much research work has been done in this area in recent months. The 12 works cited here present research about detection, filtering, and profiling. The first paper was presented at HOT SoS 2014, the Symposium and Bootcamp on the Science of Security (HotSoS), a research event centered on the Science of Security held April 8-9, 2014 in Raleigh, North Carolina.

  • Rucha Tembe, Olga Zielinska, Yuqi Liu, Kyung Wha Hong, Emerson Murphy-Hill, Chris Mayhorn and Xi Ge, "Phishing in International Waters: Exploring Cross-National Differences in Phishing Conceptualizations between Chinese, Indian and American Samples," HOT SoS 2014. (ID#:14-1340) Available at: http://www.hot-sos.org/2014/proceedings/papers.pdf This paper discusses the results of surveying 164 subjects from the United States, India, and China on their experiences with phishing and whether they practiced online safety. The study refuted the popular notion that these subjects share significant similarities in phishing attack characteristics, in the types of media where phishing is most prevalent, and in the ramifications of phishing. Further, the study determined that age and education had no influence on agreement between subjects on these topics. Results are discussed, including the finding that both Indian and Chinese participants are less likely than Americans to notice the padlock security icon. Results from this study would be beneficial in designing culturally-inclusive defenses against phishing. Keywords: Phishing, cultural differences, nationality, online privacy, India, China, susceptibility
  • Weibo Chu; Zhu, B.B.; Feng Xue; Xiaohong Guan; Zhongmin Cai, "Protect sensitive sites from phishing attacks using features extractable from inaccessible phishing URLs," Communications (ICC), 2013 IEEE International Conference on , vol., no., pp.1990,1994, 9-13 June 2013. (ID#:14-1342) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6654816&isnumber=6654691 Phishing is the third cyber-security threat globally and the first cyber-security threat in China. There were 61.69 million phishing victims in China alone from June 2011 to June 2012, with the total annual monetary loss more than 4.64 billion US dollars. These phishing attacks were highly concentrated in targeting at a few major Websites. Many phishing Webpages had a very short life span. In this paper, we assume the Websites to protect against phishing attacks are known, and study the effectiveness of machine learning based phishing detection using only lexical and domain features, which are available even when the phishing Webpages are inaccessible. We propose several novel highly effective features, and use the real phishing attack data against Taobao and Tencent, two main phishing targets in China, in studying the effectiveness of each feature, and each group of features. We then select an optimal set of features in our phishing detector, which has achieved a detection rate better than 98%, with a false positive rate of 0.64% or less. The detector is still effective when the distribution of phishing URLs changes. Keywords: Web sites; computer crime; feature extraction; learning (artificial intelligence); China; Taobao; Tencent; Web sites; cyber-security threat; domain features; inaccessible phishing URL; lexical features; machine learning based phishing detection; phishing Web pages; phishing attack data; sensitive site protection; Detectors; Electronic mail; Feature extraction; Google; Security; Superluminescent diodes; Web sites
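The kind of lexical and domain features the authors rely on, extractable from the URL alone even after the phishing page has been taken down, can be sketched like this. The specific features, the brand parameter, and the example URL are illustrative assumptions, not the paper's feature set:

```python
# Sketch: lexical/domain features computable from a URL string alone, i.e.
# without fetching the (possibly inaccessible) page. Feature choices are
# illustrative; the paper's exact feature set is not reproduced here.
from urllib.parse import urlparse

def lexical_features(url, brand="taobao"):
    p = urlparse(url)
    host = p.hostname or ""
    return {
        "url_length": len(url),
        "num_dots_in_host": host.count("."),
        "num_hyphens_in_host": host.count("-"),
        "has_ip_host": host.replace(".", "").isdigit(),
        # Brand name embedded in the host but not as the registered domain,
        # a classic phishing pattern like taobao.com.evil.example.cn.
        "brand_in_subdomain": brand in host and not host.endswith(brand + ".com"),
        "path_depth": len([s for s in p.path.split("/") if s]),
        "has_at_symbol": "@" in url,
    }

feats = lexical_features("http://taobao.com.secure-login.example.cn/pay/confirm")
print(feats["brand_in_subdomain"], feats["num_dots_in_host"])  # True 4
```

A classifier trained on vectors like these is what, per the paper, achieves a detection rate better than 98% with a false positive rate of 0.64% or less, using the authors' own (richer) feature set.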
  • DeBarr, D.; Ramanathan, V.; Wechsler, H., "Phishing detection using traffic behavior, spectral clustering, and random forests," Intelligence and Security Informatics (ISI), 2013 IEEE International Conference on , vol., no., pp.67,72, 4-7 June 2013. (ID#:14-1343) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6578788&isnumber=6578763 Phishing is an attempt to steal a user's identity. This is typically accomplished by sending an email message to a user, with a link directing the user to a web site used to collect personal information. Phishing detection systems typically rely on content filtering techniques, such as Latent Dirichlet Allocation (LDA), to identify phishing messages. In the case of spear phishing, however, this may be ineffective because messages from a trusted source may contain little content. In order to handle such emerging spear phishing behavior, we propose as a first step the use of Spectral Clustering to analyze messages based on traffic behavior. In particular, Spectral Clustering analyzes the links between URL substrings for web sites found in the message contents. Cluster membership is then used to construct a Random Forest classifier for phishing. Data from the Phishing Email Corpus and the Spam Assassin Email Corpus are used to evaluate this approach. Performance evaluation metrics include the Area Under the receiver operating characteristic Curve (AUC), as well as accuracy, precision, recall, and the (harmonic mean) F measure. Performance of the integrated Spectral Clustering and Random Forest approach is found to provide significant improvements in all the metrics listed, compared to a content filtering technique such as LDA coupled with text message deletion done randomly or in an adaptive fashion using adversarial learning. The Spectral Clustering approach is robust against the absence of content. 
In particular, we show that Spectral Clustering yields (99.8%, 97.8%) for (AUC, F measure) compared to LDA that yields (94.6%, 89.4%) and (79.6%, 57.9%) when the content of the messages is reduced to 10% of their original size using random and adversarial deletion, respectively. The difference is most striking at low False Positive (FP) rates. Keywords: Web sites; computer crime; learning (artificial intelligence);pattern classification; pattern clustering; performance evaluation; random processes; unsolicited e-mail; AUC; URL substrings; Web site; adversarial deletion; adversarial learning; area under the receiver operating characteristic curve; cluster membership; email message; false positive rates; integrated spectral clustering; message contents; performance evaluation metrics; personal information collection; phishing detection systems; phishing email corpus; random deletion; random forest classifier; spam assassin email corpus; spear phishing behavior; text message deletion; traffic behavior; trusted source; Electronic mail; Laplace equations; Training; Vegetation; Web servers; Web sites; Latent Dirichlet Allocation; Link Analysis; Phishing; Spear Phishing; Spectral Clustering
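The first step of this approach, linking messages through shared URL substrings to form the affinity structure that spectral clustering operates on, can be sketched as follows. The tokenization and the Jaccard affinity are assumptions for this sketch; cluster memberships derived from such a matrix would then feed the Random Forest classifier:

```python
# Sketch: build a message-to-message affinity matrix from shared URL
# substrings, the input a spectral clustering step would consume. The
# tokenization and Jaccard affinity are illustrative assumptions.
import re
from itertools import combinations

def url_tokens(message):
    urls = re.findall(r"https?://[^\s\"'>]+", message)
    tokens = set()
    for u in urls:
        tokens.update(t for t in re.split(r"[./:?=_-]+", u) if t)
    return tokens

def affinity_matrix(messages):
    toks = [url_tokens(m) for m in messages]
    n = len(messages)
    A = [[0.0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        union = toks[i] | toks[j]
        if union:
            # Jaccard similarity of the URL token sets.
            A[i][j] = A[j][i] = len(toks[i] & toks[j]) / len(union)
    return A

msgs = [
    "Verify now: http://paypa1-secure.example.com/login",
    "Account alert http://paypa1-secure.example.com/verify",
    "Lunch? http://intranet.corp/menu",
]
A = affinity_matrix(msgs)
print(A[0][1] > A[0][2])  # True: the two phishing messages share URL tokens
```

Because the affinity depends only on embedded links, not message text, this is also why the approach stays robust when the content of a spear-phishing message is sparse or deleted.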
  • Hamid, I.R.A.; Abawajy, J.H., "Profiling Phishing Email Based on Clustering Approach," Trust, Security and Privacy in Computing and Communications (TrustCom), 2013 12th IEEE International Conference on , vol., no., pp.628,635, 16-18 July 2013. (ID#:14-1344) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6680895&isnumber=6680793 In this paper, an approach for profiling email-born phishing activities is proposed. Profiling phishing activities is useful in determining the activity of an individual or a particular group of phishers. By generating profiles, phishing activities can be well understood and observed. Typically, work in the area of phishing is aimed at the detection of phishing emails, whereas we concentrate on profiling the phishing email. We formulate the profiling problem as a clustering problem using the various features in the phishing emails as feature vectors. Further, we generate profiles based on clustering predictions. These predictions are further utilized to generate complete profiles of these emails. The performance of the clustering algorithms at the earlier stage is crucial for the effectiveness of this model. We carried out an experimental evaluation to determine the performance of many classification algorithms by incorporating a clustering approach in our model. Our proposed profiling email-born phishing algorithm (ProEP) demonstrates promising results with the RatioSize rules for selecting the optimal number of clusters.
Keywords: electronic mail; pattern classification; pattern clustering; program diagnostics; unsolicited e-mail; ProEP algorithm; RatioSize rules; classification algorithms; clustering approach; clustering predictions; e-mail-borne phishing activity profiling; feature vectors; optimal cluster number selection; performance evaluation; phishing email detection; profiling e-mail-born phishing algorithm; Classification algorithms; Clustering algorithms; Computational modeling; Data models; Electronic mail; Feature extraction; Prediction algorithms; Clustering Algorithm; Phishing; Profiling
  • Shian-Shyong Tseng, Ching-Heng Ku, Ai-Chin Lu, Yuh-Jye Wang, Guang-Gang Geng, "Building a Self-Organizing Phishing Model Based upon Dynamic EMCUD," IIH-MSP '13 Proceedings of the 2013 Ninth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, October 2013 (Pages 509-512) (ID#:14-1348) Available at: http://dl.acm.org/citation.cfm?id=2571271.2571675&coll=DL&dl=GUIDE&CFID=445385349&CFTOKEN=72920989 or http://dx.doi.org/10.1109/IIH-MSP.2013.132 In recent years, with the rapid growth of Internet applications and services, phishing attacks seriously threaten web security. Due to the versatile and dynamic nature of phishing patterns, the development and maintenance of anti-phishing prevention systems is difficult and costly. Hence, how to acquire and update the phishing knowledge and the phishing model in an anti-phishing detection system becomes an important issue. In this study, we use the EMCUD (Extended Embedded Meaning Capturing and Uncertainty Deciding) method to build up phishing attack knowledge according to the identification of phishing attributes. Since users have become aware of some anti-phishing methods, phishers often evolve their attacks to evade them, so the phishing attack knowledge also needs to evolve dynamically over time; how to systematically evolve it is a major concern of this study. Hence, we use the VODKA (Variant Objects Discovering Knowledge Acquisition) method, a dynamic EMCUD, to evolve existing phishing knowledge. These methods facilitate the acquisition of new inference rules for the phishing attack knowledge and the observation of the variation and trend of phishing attacks. In the experiment, 1,762 phishing URLs from the APNOW (Anti-Phishing Notification Window) phishing database of Taiwan were partitioned into 7 representative phishing cases, and 10 phishing attributes were obtained by the VODKA method. Finally, we successfully evolve detection rules of the phishing models and observe the trend of the phishing attack model to show the feasibility of this study. Keywords: (not available)
  • M. Pandey, V. Ravi, "Phishing Detection Using PSOAANN Based One-Class Classifier," ICETET '13 Proceedings of the 2013 6th International Conference on Emerging Trends in Engineering and Technology, September 2013 (Pages 148-153) (ID#:14-1349) Available at: http://dl.acm.org/citation.cfm?id=2606260.2606321&coll=DL&dl=GUIDE&CFID=445385349&CFTOKEN=72920989 or http://dx.doi.org/10.1109/ICETET.2013.46 We propose to detect phishing emails and websites using a particle swarm optimization (PSO) trained auto-associative neural network (PSOAANN), which is employed as a one-class classifier. PSOAANN achieved better results when compared to previous efforts. In the study, we also developed a new feature selection method based on the weights from the input to hidden layers of the PSOAANN and compared its performance with other methods. Keywords: (not available)
  • Bastian Braun, Martin Johns, Johannes Koestler, Joachim Posegga, "PhishSafe: leveraging modern JavaScript API's for transparent and robust protection," Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014 (Pages 61-72) (ID#:14-1350) Available at: http://dl.acm.org/citation.cfm?id=2557553&dl=ACM&coll=DL&CFID=445385349&CFTOKEN=72920989 or http://dx.doi.org/10.1145/2557547.2557553 The term "phishing" describes a class of social engineering attacks on authentication systems that aim to steal the victim's authentication credential, e.g., the username and password. The severity of phishing has been recognized since the mid-1990's, and a considerable amount of attention has been devoted to the topic. However, currently deployed or proposed countermeasures are either incomplete, cumbersome for the user, or incompatible with standard browser technology. In this paper, we show how modern JavaScript API's can be utilized to build PhishSafe, a robust authentication scheme that is immune against phishing attacks, easily deployable using the current browser generation, and requires little change in the end-user's interaction with the application. We evaluate the implementation and find that it is applicable to web applications with low effort and causes no tangible overhead. Keywords: (not available)
  • Philippe De Ryck, Nick Nikiforakis, Lieven Desmet, Wouter Joosen. "TabShots: client-side detection of tabnabbing attacks" Proceedings of the 8th ACM SIGSAC symposium on Information, computer and communications security May 2013 (Pages 447-456) (ID#:14-1351) Available at: http://dl.acm.org/citation.cfm?id=2484313.2484371&coll=DL&dl=GUIDE&CFID=445385349&CFTOKEN=72920989 or http://dx.doi.org/10.1145/2484313.2484371 As the web grows larger and larger and as the browser becomes the vehicle-of-choice for delivering many applications of daily use, the security and privacy of web users is under constant attack. Phishing is as prevalent as ever, with anti-phishing communities reporting thousands of new phishing campaigns each month. In 2010, tabnabbing, a variation of phishing, was introduced. In a tabnabbing attack, an innocuous-looking page, opened in a browser tab, disguises itself as the login page of a popular web application, when the user's focus is on a different tab. The attack exploits the trust of users for already opened pages and the user habit of long-lived browser tabs. To combat this recent attack, we propose TabShots. TabShots is a browser extension that helps browsers and users to remember what each tab looked like, before the user changed tabs. Our system compares the appearance of each tab and highlights the parts that were changed, allowing the user to distinguish between legitimate changes and malicious masquerading. Using an experimental evaluation on the most popular sites of the Internet, we show that TabShots has no impact on 78% of these sites, and very little on another 19%. Thereby, TabShots effectively protects users against tabnabbing attacks without affecting their browsing habits and without breaking legitimate popular sites. 
Keywords: management of computing and information systems; Security and Protection; Invasive software (e.g., viruses, worms, Trojan horses); information storage and retrieval ; On-line Information Services; Computing Milieux; Authentication
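TabShots' core mechanism, remembering what a tab looked like and diffing it tile-by-tile when the user returns, can be sketched without browser APIs. Here screenshots are modeled as 2-D grids of pixel values; the tile size and function names are assumptions for this sketch, not the extension's actual code:

```python
# Sketch of tile-based tab diffing in the spirit of TabShots: remember a
# tab's appearance and, on refocus, report which tiles changed so the user
# can distinguish legitimate updates from malicious masquerading.
def changed_tiles(before, after, tile=2):
    rows, cols = len(before), len(before[0])
    changed = []
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            # Compare the corresponding tile in both captures.
            tb = [before[i][c:c + tile] for i in range(r, min(r + tile, rows))]
            ta = [after[i][c:c + tile] for i in range(r, min(r + tile, rows))]
            if tb != ta:
                changed.append((r // tile, c // tile))
    return changed

before = [[0] * 4 for _ in range(4)]
after = [row[:] for row in before]
after[3][3] = 9                      # the page quietly redrew one corner
print(changed_tiles(before, after))  # [(1, 1)]
```

An unchanged page yields an empty diff, which is consistent with the paper's finding that the approach has no impact on the large majority of legitimate sites.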
  • Le Xu, Li Li, Vijayakrishnan Nagarajan, Dijiang Huang, Wei-Tek Tsai, "Secure Web Referral Services for Mobile Cloud Computing," SOSE '13 Proceedings of the 2013 IEEE Seventh International Symposium on Service-Oriented System Engineering, March 2013 (Pages 584-593) (ID#:14-1352) Available at: http://dl.acm.org/citation.cfm?id=2497618.2497627&coll=DL&dl=GUIDE&CFID=445385349&CFTOKEN=72920989 or http://dx.doi.org/10.1109/SOSE.2013.94 Security has become a major concern for mobile devices when mobile users browse malicious websites. Existing security solutions may rely on human factors to achieve a good result against phishing websites and SSL Strip-based Man-In-The-Middle (MITM) attacks. This paper presents a secure web referral service, called Secure Search Engine (SSE), for mobile devices. The system uses mobile cloud-based virtual computing and provides each user a Virtual Machine (VM) as a personal security proxy through which all Web traffic is redirected. Within the VM, the SSE uses web crawling technology with a set of checking services to validate IP addresses and certificate chains. A Phishing Filter is also used to check given URLs with an optimized execution time. The system also uses private and anonymously shared caches to protect user privacy and improve performance. The evaluation results show that SSE is non-intrusive and consumes no power or computation on the client device, while producing fewer false positives and false negatives than existing web browser-based anti-phishing solutions. Keywords: (not available)
  • Mark Scanlon, M-Tahar Kechadi. "Universal Peer-to-Peer Network Investigation Framework" ARES '13 Proceedings of the 2013 International Conference on Availability, Reliability and Security September 2013 (Pages 694-700). (ID#:14-1353) Available at: http://dl.acm.org/citation.cfm?id=2545118.2545137&coll=DL&dl=GUIDE&CFID=445385349&CFTOKEN=72920989 or http://dx.doi.org/10.1109/ARES.2013.91 Peer-to-Peer (P2P) networking has fast become a useful technological advancement for a vast range of cyber criminal activities. Cyber crimes from copyright infringement and spamming, to serious, high financial impact crimes, such as fraud, distributed denial of service attacks (DDoS) and phishing can all be aided by applications and systems based on the technology. The requirement for investigating P2P based systems is not limited to the more well-known cyber crimes listed above, as many more legitimate P2P based applications may also be pertinent to a digital forensic investigation, e.g., VoIP and instant messaging communications, etc. Investigating these networks has become increasingly difficult due to the broad range of network topologies and the ever increasing and evolving range of P2P based applications. This paper introduces the Universal Peer-to-Peer Network Investigation Framework (UP2PNIF), a framework which enables significantly faster and less labor intensive investigation of newly discovered P2P networks through the exploitation of the commonalities in network functionality. In combination with a reference database of known network protocols and characteristics, it is envisioned that any known P2P network can be instantly investigated using the framework. The framework can intelligently determine the best methodology dependent on the focus of the investigation resulting in a significantly expedited evidence gathering process. Keywords: (not available)
  • Chen, Zhen; Han, Fuye; Cao, Junwei; Jiang, Xin; Chen, Shuo, "Cloud computing-based forensic analysis for collaborative network security management system," Tsinghua Science and Technology , vol.18, no.1, pp.40,50, Feb. 2013. (ID#:14-1354) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6449406&isnumber=6449400 Internet security problems remain a major challenge with many security concerns such as Internet worms, spam, and phishing attacks. Botnets, well-organized distributed network attacks, consist of a large number of bots that generate huge volumes of spam or launch Distributed Denial of Service (DDoS) attacks on victim hosts. New emerging botnet attacks degrade the status of Internet security further. To address these problems, a practical collaborative network security management system is proposed with an effective collaborative Unified Threat Management (UTM) and traffic probers. A distributed security overlay network with a centralized security center leverages a peer-to-peer communication protocol used in the UTMs collaborative module and connects them virtually to exchange network events and security rules. Security functions for the UTM are retrofitted to share security rules. In this paper, we propose a design and implementation of a cloud-based security center for network security forensic analysis. We propose using cloud storage to keep collected traffic data and then processing it with cloud computing platforms to find the malicious attacks. As a practical example, phishing attack forensic analysis is presented and the required computing and storage resources are evaluated based on real trace data. The cloud-based security center can instruct each collaborative UTM and prober to collect events and raw traffic, send them back for deep analysis, and generate new security rules. These new security rules are enforced by collaborative UTM and the feedback events of such rules are returned to the security center. 
By this type of closed-loop control, the collaborative network security management system can identify and address new distributed attacks more quickly and effectively. Keywords: Cloud computing; Collaboration; Collaborative work; Computer crime; Computer security; Digital forensics; Forensics; Network security; Web and internet services; amazon web service; anti-botnet; anti-phishing; cloud computing; collaborative network security system; computer forensics; eucalyptus; hadoop file system; overlay network
  • Min-Sheng Lin; Chien-Yi Chiu; Yuh-Jye Lee; Hsing-Kuo Pao, "Malicious URL filtering -- A big data application," Big Data, 2013 IEEE International Conference on , vol., no., pp.589,596, 6-9 Oct. 2013. (ID#:14-1355) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6691627&isnumber=6690588 Malicious URLs have become a channel for Internet criminal activities such as drive-by-download, spamming and phishing. Applications for the detection of malicious URLs are accurate but slow (because they need to download the content or query some Internet host information). In this paper we present a novel lightweight filter based only on the URL string itself to use before existing processing methods. We run experiments on a large dataset and demonstrate a 75% reduction in workload size while retaining at least 90% of malicious URLs. Existing methods do not scale well with the hundreds of millions of URLs encountered every day as the problem is a heavily-imbalanced, large-scale binary classification problem. Our proposed method is able to handle nearly two million URLs in less than five minutes. We generate two filtering models by using lexical features and descriptive features, and then combine the filtering results. The on-line learning algorithms are applied here not only for dealing with large-scale data sets but also for fitting the very short lifetime characteristics of malicious URLs. Our filter can significantly reduce the volume of URL queries on which further analysis needs to be performed, saving both computing time and bandwidth used for content retrieval. 
Keywords: Internet; computer crime; learning (artificial intelligence); pattern classification; query processing; Internet criminal activities; URL queries; URL string; big data application; content retrieval; drive-by-download; heavily-imbalanced large-scale binary classification problem; lifetime characteristics; lightweight filter; malicious URL filtering; on-line learning algorithms; phishing; spamming; Dictionaries; Feature extraction; IP networks; Prediction algorithms; Predictive models; Training; Web sites; Data Mining; Information Filtering; Information Security; Machine learning
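The string-only filtering idea above lends itself to a compact illustration: extract cheap lexical features from the URL itself, with no content download, and train an online learner on the stream. This is a hedged sketch, not the authors' system; the feature set and the perceptron update rule are assumptions chosen for brevity.

```python
# Illustrative sketch of lexical URL filtering (not the paper's implementation).
# The features and the simple perceptron learner are assumptions for demonstration.

def lexical_features(url):
    """Cheap features computed from the URL string alone."""
    return [
        len(url),                          # long URLs are suspicious
        url.count('.'),                    # many subdomains
        url.count('-'),                    # hyphen-heavy hostnames
        sum(c.isdigit() for c in url),     # digit density
        1.0 if '@' in url else 0.0,        # '@' redirection tricks
        1.0,                               # bias term
    ]

class OnlinePerceptron:
    """Minimal online learner: suits streams of short-lived malicious URLs."""
    def __init__(self, n_features):
        self.w = [0.0] * n_features

    def predict(self, x):
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) > 0 else 0

    def update(self, x, label):
        if self.predict(x) != label:       # mistake-driven update
            sign = 1 if label == 1 else -1
            self.w = [wi + sign * xi for wi, xi in zip(self.w, x)]

model = OnlinePerceptron(6)
stream = [  # hypothetical labeled stream: 1 = malicious, 0 = benign
    ("http://paypal.com.secure-login-44.example-bad.ru/a@b", 1),
    ("https://www.wikipedia.org/wiki/URL", 0),
]
for url, label in stream:
    model.update(lexical_features(url), label)
```

A filter like this errs on the side of retaining suspicious URLs, matching the paper's goal of cutting workload while keeping most malicious URLs for deeper analysis.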
  • Smitha, A.; Manohara Pai, M.M.; Ajam, N.; Mouzna, J., "An optimized adaptive algorithm for authentication of safety critical messages in VANET," Communications and Networking in China (CHINACOM), 2013 8th International ICST Conference on , vol., no., pp.149,154, 14-16 Aug. 2013. (ID#:14-1356) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6694582&isnumber=6694549 Authentication is one of the essential frameworks to ensure safe and secure message dissemination in Vehicular Adhoc Networks (VANETs). But an optimized authentication algorithm with reduced computational overhead is still a challenge. In this paper, we propose a novel classification of safety critical messages and provide an adaptive algorithm for authentication in VANETs using the concepts of the Merkle tree and the Elliptic Curve Digital Signature Algorithm (ECDSA). Here, the Merkle tree is constructed to store the hashed values of public keys at the leaf nodes. This algorithm addresses Denial of Service (DoS), man-in-the-middle, and phishing attacks. Experimental results show that the algorithm reduces the computational delay by 20 percent compared to existing schemes. Keywords: digital signatures; information dissemination; pattern classification; public key cryptography; telecommunication security; tree data structures; vehicular ad hoc networks; ECDSA; Merkle tree; VANET; computational delay reduction; computational overhead reduction; denial-of-service attack; elliptic curve digital signature algorithm; leaf nodes; man-in-the-middle attack; optimized adaptive algorithm; phishing attack; public keys; safe message dissemination; safety critical message authentication; safety critical message classification; secure message dissemination; vehicular adhoc networks; Authentication; Computer crime; Public key; Receivers; Safety; Vehicles; Vehicular ad hoc networks; DoS attack; ECDSA; Entity Authentication; Merkle tree; Non repudiation
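The Merkle-tree construction mentioned above can be sketched concretely: leaf nodes hold hashes of public keys, and a short authentication path lets a verifier check one key against a single trusted root. The layout, proof format, and the hypothetical `vehicle-key` names below are illustrative assumptions, not the paper's scheme.

```python
# Hedged sketch of a Merkle tree over hashed public keys (illustrative only).
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash the leaves, then pairwise-hash levels up to a single root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                     # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf to root; bool = sibling sits on the right."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, right in proof:
        node = h(node + sibling) if right else h(sibling + node)
    return node == root

keys = [b"vehicle-key-%d" % i for i in range(5)]   # hypothetical public keys
root = merkle_root(keys)
```

Verification touches only O(log n) hashes, which is the source of the reduced computational overhead such schemes aim for.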
  • Alarifi, A.; Alsaleh, M.; Al-Salman, A.-M., "Security analysis of top visited Arabic Web sites," Advanced Communication Technology (ICACT), 2013 15th International Conference on , vol., no., pp.173,178, 27-30 Jan. 2013. (ID#:14-1358) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6488165&isnumber=6488107 The richness and effectiveness of client-side vulnerabilities contributed to an accelerated shift toward client-side Web attacks. In order to understand the volume and nature of such malicious Web pages, we perform a detailed analysis of a subset of top visited Web sites using Google Trends. Our study is limited to the Arabic content in the Web and thus only the top Arabic searching terms are considered. To carry out this study, we analyze more than 7,000 distinct domain names by traversing all the visible pages within each domain. To identify different types of suspected phishing and malware pages, we use the API of Sucuri SiteCheck, McAfee SiteAdvisor, Google Safe Browsing, Norton, and AVG website scanners. The study shows the existence of malicious contents across a variety of types of Web pages. The results indicate that a significant number of these sites carry some known malware, are in a blacklisting status, or have some out-of-date software. Throughout our analysis, we characterize the impact of the detected malware families and speculate as to how the reported positive Web servers got infected. 
Keywords: web sites; client-server systems; information retrieval; invasive software; API; AVG Website scanner; Arabic content; Arabic searching terms; Google Safe Browsing; Google Trends; McAfee SiteAdvisor; Norton; Sucuri SiteCheck; blacklisting status; client-side Web attack; client-side vulnerability; distinct domain name; infection; malicious Web page; malicious content; malware family detection; malware page; out-of-date software; positive Web server; security analysis; suspected phishing page; top visited Arabic Web sites; visible page; Malicious links; Malware; Search engine spam; Web spam; Web vulnerabilities


Quantum Computing (Update)


While quantum computing is still in an early stage of development, large-scale quantum computers promise to solve certain problems much more quickly than any classical computer running the best currently known algorithms. Quantum algorithms, such as Simon's algorithm, run faster than any possible probabilistic classical algorithm. For the Science of Security, the speed, capacity, and flexibility of qubits over digital processing offer still greater promise, and they pose a hard problem of interest to cryptography. The research work presented here was published in the first half of 2014. Elements discussed include cryptography, proxy signatures, key distribution, reversible logic, and cloud computing.

  • Kirsten Eisentrager, Sean Hallgren, Alexei Kitaev, Fang Song, "A Quantum Algorithm For Computing The Unit Group Of An Arbitrary Degree Number Field," STOC '14 Proceedings of the 46th Annual ACM Symposium on Theory of Computing, May 2014, Pages 293-302. (ID#:14-1733) URL: http://dl.acm.org/citation.cfm?id=2591796.2591860&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 Computing the group of units in a field of algebraic numbers is one of the central tasks of computational algebraic number theory. It is believed to be hard classically, which is of interest for cryptography. In the quantum setting, efficient algorithms were previously known for fields of constant degree. We give a quantum algorithm that is polynomial in the degree of the field and the logarithm of its discriminant. This is achieved by combining three new results. The first is a classical algorithm for computing a basis for certain ideal lattices with doubly exponentially large generators. The second shows that a Gaussian-weighted superposition of lattice points, with an appropriate encoding, can be used to provide a unique representation of a real-valued lattice. The third is an extension of the hidden subgroup problem to continuous groups and a quantum algorithm for solving the HSP over the group Rn. Keywords: computational algebraic number theory, quantum algorithms, unit group, cryptography
  • Walter O. Krawec, "Using Evolutionary Techniques To Analyze The Security Of Quantum Key Distribution Protocols," GECCO Comp '14 Proceedings of the 2014 Conference Companion On Genetic And Evolutionary Computation Companion, July 2014, Pages 171-172. (ID#:14-1734) URL: http://dl.acm.org/citation.cfm?id=2598394.2598410&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 In this paper, we describe a new real-coded GA which may be used to analyze the security of quantum key distribution (QKD) protocols by estimating the maximally tolerated error rate - an important statistic that, for many newer, more complicated protocols, is still unknown. Our algorithm takes advantage of several nice features of QKD protocols to simplify the search process. It was evaluated on several protocols and can even detect security flaws in a protocol, demonstrating its usefulness in protocol design. Keywords: quantum computing, quantum key distribution
  • Singh, H.; Sachdev, A, "The Quantum Way of Cloud Computing," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on , vol., no., pp.397,400, 6-8 Feb. 2014. (ID#:14-1735) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798362&isnumber=6798279 Quantum Computing and Cloud Computing are technologies which have the capability to shape the future of computing. Quantum computing focuses on creating super-fast computers using the concepts of quantum physics, whereas Cloud computing allows computing power to be provided as a service. This paper presents a theoretical approach towards the possibility of a Quantum-Cloud, i.e., quantum computing as a service. This will combine the fields of quantum computing and cloud computing, resulting in an evolutionary technology. The paper also discusses its possible advantages in the near future. Keywords: cloud computing; quantum computing; cloud computing; quantum computing; super-fast computers; Cryptography; Hardware; Quantum computing; Cloud Computing; Quantum Cloud; Quantum Computing; Qubit
  • Ali Javadi Abhari, Shruti Patil, Daniel Kudrow, Jeff Heckey, Alexey Lvov, Frederic T. Chong, Margaret Martonosi, "ScaffCC: a Framework For Compilation And Analysis Of Quantum Computing Programs," CF '14 Proceedings of the 11th ACM Conference on Computing Frontiers, May 2014, Article No. 1. (ID#:14-1736) URL: http://dl.acm.org/citation.cfm?id=2597917.2597939&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 Quantum computing is a promising technology for high-performance computation, but requires mature tool flows that can map large-scale quantum programs onto targeted hardware. In this paper, we present a scalable compiler for large-scale quantum applications, and show the opportunities for reducing compilation and analysis time, as well as output code size. We discuss the similarities and differences between compiling for a quantum computer as opposed to a classical computer, and present a state-of-the-art approach for compilation of classical circuits into quantum circuits. Our work also highlights the importance of high-level quantum compilation for logical circuit translation, quantitative analysis of algorithms, and optimization of circuit lengths. Keywords: compilers, quantum computation, reversible logic
  • Sarker, Ankur; M.Shamiul Amin; Bose, Avishek; Islam, Nafisah, "An optimized design of binary comparator circuit in quantum computing," Informatics, Electronics & Vision (ICIEV), 2014 International Conference on , vol., no., pp.1,5, 23-24 May 2014. (ID#:14-1737) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850768&isnumber=6850678 Reversible logic, which transforms logic signals in a way that allows the original input signals to be recovered from the produced outputs, has attracted great attention because of its application in diverse areas such as quantum computing, low power computing, nanotechnology, DNA computing, quantum dot cellular automata, and optical computing. In this paper, we design low power binary comparators using reversible logic gates. First, single-bit binary reversible comparator circuits are designed using different reversible gates along with a proposed gate named the Newly Proposed Gate. Then, these procedures are generalized for constructing a binary n-bit reversible comparator circuit. The design synthesis consists of two parts: a Comparator Cell and a Propagator Cell. An algorithm based on our proposed design shows that the proposed circuit reduces overall cost and outperforms existing sequential comparator circuits. Compared with the existing tree-based comparator circuit, the proposed design also significantly reduces quantum cost, garbage output, and gate count, which means better improvement, as the cost of any quantum circuit is directly associated with quantum cost, garbage output, and gate count. Keywords: Algorithm design and analysis; Clocks; Conferences; Informatics; Logic gates; Quantum computing; Vectors; binary comparator; low power computing; quantum computing; reversible logic gates
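Reversible gates such as those used in comparator designs are bijections on bit vectors: the inputs can always be recovered from the outputs. A minimal sketch using the standard Toffoli (CCNOT) primitive illustrates the defining property; the paper's comparator construction and its Newly Proposed Gate are not reproduced here.

```python
# The Toffoli (CCNOT) gate: flips the target bit iff both controls are 1.
# It is its own inverse, which is the defining property of reversible logic.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

# Reversibility: applying the gate twice recovers every input triple.
all_triples = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
assert all(toffoli(*toffoli(a, b, c)) == (a, b, c) for a, b, c in all_triples)

# Toffoli is universal for classical reversible logic; e.g. AND falls out
# by fixing the target bit to 0 (the controls pass through as "garbage-free"
# copies of the inputs).
def reversible_and(a, b):
    return toffoli(a, b, 0)[2]
```

Extra constant inputs and unconsumed outputs in larger circuits are exactly the "garbage output" that the cited design tries to minimize.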
  • Rigui Zhou, Jian Cao, "Quantum Novel Genetic Algorithm Based On Parallel Subpopulation Computing And Its Application," Artificial Intelligence Review, Volume 41 Issue 3, March 2014, Pages 359-371. (ID#:14-1738) URL: http://dl.acm.org/citation.cfm?id=2580951.2580962&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 The authors of this paper present an original quantum genetic algorithm, based on subpopulation parallel computing, which aims to improve genetic computing methods by changes to quantum coding and rotation angle. This paper compares the new algorithm with its traditional counterpart, with details and resulting discovery discussed. Keywords: Quantum genetic algorithm, Space division, Subpopulation parallel computing
  • Cody Jones, "Distillation Protocols For Fourier States In Quantum Computing," Quantum Information & Computation, Volume 14 Issue 7&8, May 2014, Pages 560-576. (ID#:14-1739) URL: http://dl.acm.org/citation.cfm?id=2638682.2638684&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 This paper proposes a lowest-overhead method for constructing the fundamental n-qubit Fourier state using distillation protocols. Protocol analysis, using methods from digital signal processing, is discussed. Keywords: QEC, quantum computation
  • A. Manju, M. J. Nigam, "Applications of Quantum Inspired Computational Intelligence: A Survey," Artificial Intelligence Review, Volume 42 Issue 1, June 2014, Pages 79-156. (ID#:14-1740) URL: http://dl.acm.org/citation.cfm?id=2629835.2629882&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 This paper surveys numerous applications of Quantum inspired computational intelligence (QCI) techniques, discussing challenges and obstacles, with a view to help researchers understand QCI as a problem-solving application. Keywords: Computational intelligence, Quantum computing, Quantum mechanics
  • Andris Ambainis, Ronald Wolf, "How Low Can Approximate Degree and Quantum Query Complexity be for Total Boolean Functions?," Computational Complexity, Volume 23 Issue 2, June 2014, Pages 305-322. (ID#:14-1741) URL: http://dl.acm.org/citation.cfm?id=2630022.2630056&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 This paper discusses the "approximate degree" and "bounded-error" quantum query complexity of total Boolean functions, and how low each of these measures can be. Keywords: 06E30, 41A10, 68Q12, 68Q17, Boolean functions, Quantum computing, computational complexity, polynomial approximations, quantum algorithms
  • Walter O. Krawec, "An Algorithm For Evolving Multiple Quantum Operators For Arbitrary Quantum Computational Problems," GECCO Comp '14 Proceedings of the 2014 Conference Companion On Genetic And Evolutionary Computation Companion, July 2014, Pages 59-60. (ID#:14-1742) URL: http://dl.acm.org/citation.cfm?id=2598394.2598408&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 We design and analyze a real-coded genetic algorithm for the use in evolving collections of quantum unitary operators (not circuits) which act on pure or mixed states over arbitrary quantum systems while interacting with fixed, problem specific operators (e.g., oracle calls) and intermediate partial measurements. Our algorithm is general enough so as to allow its application to multiple, very different, areas of quantum computation research. Keywords: quantum algorithms, quantum computing, real coded genetic algorithm
  • Siddhartha Bhattacharyya, Pankaj Pal, Sandip Bhowmik, "A Quantum Multilayer Self Organizing Neural Network for Object Extraction from a Noisy Background," CSNT '14 Proceedings of the 2014 Fourth International Conference on Communication Systems and Network Technologies, April 2014, Pages 512-517. (ID#:14-1743) URL: http://dl.acm.org/citation.cfm?id=2624304.2624898&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 Proper extraction of objects from a noisy background is a challenging task in the computer vision research community. Several intelligent research paradigms have been focused on this aspect over the years. Notable among them is the multilayer self organizing neural network (MLSONN) architecture assisted by fuzzy measure guided back propagation of errors. In this article, we propose a quantum version of the MLSONN architecture which operates using single qubit rotation gates. The proposed QMLSONN architecture comprises three processing layers viz., input, hidden and output layers. The nodes of the processing layers are represented by qubits and the interconnection weights are represented by quantum gates. A quantum measurement at the output layer destroys the quantum states of the processed information thereby inducing incorporation of linear indices of fuzziness as the network system errors used to adjust network interconnection weights through a proposed quantum back propagation algorithm. Results of application of the QMLSONN are demonstrated on a synthetic and a real life spanner image with various degrees of Gaussian noise. A comparative study with the performance of the classical MLSONN architecture reveals the time efficiency of the proposed QMLSONN architecture. Keywords: Object extraction, Multilayer Self Organizing Neural Network, Quantum Computing, Quantum Multilayer Self Organizing Neural Network
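The single-qubit rotation gates underlying the QMLSONN proposal can be illustrated without quantum hardware: a qubit is a unit vector in C^2, and a rotation gate is a 2x2 unitary acting on it. A toy sketch in plain Python; the Ry convention and state representation are standard textbook choices, not drawn from the paper.

```python
import math

# A qubit state is a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1.  The Ry(theta) rotation gate is the unitary
#   [[cos(theta/2), -sin(theta/2)],
#    [sin(theta/2),  cos(theta/2)]]
def ry(theta, state):
    a, b = state
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return (c * a - s * b, s * a + c * b)

zero = (1.0, 0.0)                      # the |0> state
plus = ry(math.pi / 2, zero)           # equal superposition of |0> and |1>

def probabilities(state):
    """Measurement probabilities for outcomes |0> and |1> (Born rule)."""
    a, b = state
    return abs(a) ** 2, abs(b) ** 2
```

Because rotations are unitary, the state norm (and hence total probability) is preserved through any sequence of gates, which is what lets such a network propagate signals losslessly until the final measurement.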
  • Michael Elkin, Hartmut Klauck, Danupon Nanongkai, Gopal Pandurangan, "Can Quantum Communication Speed Up Distributed Computation?," PODC '14 Proceedings of the 2014 ACM Symposium On Principles Of Distributed Computing, July 2014, Pages 166-175. (ID#:14-1744) URL: http://dl.acm.org/citation.cfm?id=2611462.2611488&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 The focus of this paper is on quantum distributed computation, where we investigate whether quantum communication can help in speeding up distributed network algorithms. Our main result is that for certain fundamental network problems such as minimum spanning tree, minimum cut, and shortest paths, quantum communication does not help in substantially speeding up distributed algorithms for these problems compared to the classical setting. In order to obtain this result, we extend the technique of Das Sarma et al. [SICOMP 2012] to obtain a uniform approach to prove non-trivial lower bounds for quantum distributed algorithms for several graph optimization (both exact and approximate versions) as well as verification problems, some of which are new even in the classical setting, e.g. tight randomized lower bounds for Hamiltonian cycle and spanning tree verification, answering an open problem of Das Sarma et al., and a lower bound in terms of the weight aspect ratio, matching the upper bounds of Elkin [STOC 2004]. Our approach introduces the Server model and Quantum Simulation Theorem which together provide a connection between distributed algorithms and communication complexity. The Server model is the standard two-party communication complexity model augmented with additional power; yet, most of the hardness in the two-party model is carried over to this new model. The Quantum Simulation Theorem carries this hardness further to quantum distributed computing. 
Our techniques, except the proof of the hardness in the Server model, require very little knowledge of quantum computing, and this can help overcome a common impediment in proving bounds on quantum distributed algorithms. In particular, if one can prove a lower bound for distributed algorithms for a certain problem using the technique of Das Sarma et al., it is likely that such a lower bound can be extended to the quantum setting using tools provided in this paper, without needing knowledge of quantum computing. Keywords: congest model, distributed computing, graph algorithms, lower bound, quantum communication, time complexity
  • Shaohua Tang, Lingling Xu, "Towards Provably Secure Proxy Signature Scheme Based On Isomorphisms of Polynomials," Future Generation Computer Systems, Volume 30, January, 2014, Pages 91-97. (ID#:14-1745) URL: http://dl.acm.org/citation.cfm?id=2562354.2562819&coll=DL&dl=GUIDE&CFID=390360820&CFTOKEN=56962601 This paper proposes a proxy signature scheme based on the Isomorphism of Polynomials (IP) challenge, under the umbrella of Multivariate Public Key Cryptography (MPKC). This signature scheme would ideally be able to resist projected quantum computing attacks, a particularly constructive gain in understanding provable security for MPKCs. Keywords: Isomorphism of Polynomials, Multivariate Public Key Cryptography, Post-Quantum Cryptography, Provable security, Proxy signature
  • Alshammari, Hamoud; Elleithy, Khaled; Almgren, Khaled; Albelwi, Saleh, "Group signature entanglement in e-voting system," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island , vol., no., pp.1,4, 2-2 May 2014. (ID#:14-1746) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845186&isnumber=6845183 In any security system, there are many security issues that are related to either the sender or the receiver of the message. Quantum computing has proven to be a plausible approach to solving many security issues such as eavesdropping, replay attack and man-in-the-middle attack. In the e-voting system, one of these issues has been solved, namely, the integrity of the data (ballot). In this paper, we propose a scheme that solves the problem of repudiation that could occur when the voter denies the value of the ballot either for cheating purposes or for a real change in the value by a third party. By using an entanglement concept between two parties randomly, the person who is going to verify the ballots will create the entangled state and keep it in a database to use it in the future for the purpose of the non-repudiation of any of these two voters. Keywords: Authentication; Electronic voting; Protocols; Quantum computing; Quantum entanglement; Receivers; E-voting System; Entangled State; Entanglement; Quantum Computing; Qubit
  • Bennett, C.H.; Devetak, I; Harrow, AW.; Shor, P.W.; Winter, A, "The Quantum Reverse Shannon Theorem and Resource Tradeoffs for Simulating Quantum Channels," Information Theory, IEEE Transactions on , vol.60, no.5, pp.2926,2959, May 2014. (ID#:14-1747) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6757002&isnumber=6800061 Dual to the usual noisy channel coding problem, where a noisy (classical or quantum) channel is used to simulate a noiseless one, reverse Shannon theorems concern the use of noiseless channels to simulate noisy ones, and more generally the use of one noisy channel to simulate another. For channels of nonzero capacity, this simulation is always possible, but for it to be efficient, auxiliary resources of the proper kind and amount are generally required. In the classical case, shared randomness between sender and receiver is a sufficient auxiliary resource, regardless of the nature of the source, but in the quantum case, the requisite auxiliary resources for efficient simulation depend on both the channel being simulated, and the source from which the channel inputs are coming. For tensor power sources (the quantum generalization of classical memoryless sources), entanglement in the form of standard ebits (maximally entangled pairs of qubits) is sufficient, but for general sources, which may be arbitrarily correlated or entangled across channel inputs, additional resources, such as entanglement-embezzling states or backward communication, are generally needed. Combining existing and new results, we establish the amounts of communication and auxiliary resources needed in both the classical and quantum cases, the tradeoffs among them, and the loss of simulation efficiency when auxiliary resources are absent or insufficient. 
In particular, we find a new single-letter expression for the excess forward communication cost of coherent feedback simulations of quantum channels (i.e., simulations in which the sender retains what would escape into the environment in an ordinary simulation), on nontensor-power sources in the presence of unlimited ebits but no other auxiliary resource. Our results on tensor power sources establish a strong converse to the entanglement-assisted capacity theorem. Keywords: channel capacity; channel coding; quantum communication; quantum entanglement; auxiliary resources; backward communication; channel capacity; channel inputs; coherent feedback simulations; communication resources; entanglement-assisted capacity theorem; entanglement-embezzling states; forward communication cost; information theory; memoryless source quantum generalization; noiseless channels; noisy channel coding problem; nontensor-power sources; quantum channel simulation; quantum reverse Shannon theorem; resource tradeoffs; standard ebits; tensor power sources; Channel capacity; Channel coding; Noise measurement; Quantum entanglement; Receivers; Standards; Quantum computing; channel capacity; information theory; quantum entanglement; rate-distortion
  • Qawaqneh, Zakariya; Elleithy, Khaled; Alotaibi, Bandar; Alotaibi, Munif, "A new hardware quantum-based encryption algorithm," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island , vol., no., pp.1,5, 2-2 May 2014. (ID#:14-1748) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845201&isnumber=6845183 Cryptography is entering a new age since the first steps that have been made towards quantum computing, which also poses a threat to the classical cryptosystem in general. In this paper, we introduce a new novel encryption technique and algorithm to improve quantum cryptography. The aim of the suggested scheme is to generate a digital signature in quantum computing. An arbitrated digital signature is introduced instead of the directed digital signature to avoid the sender denying having sent the message by pretending that the sender's private key was stolen or lost and the signature has been forged. The one-time pad operation used by most quantum cryptography algorithms proposed in the past is avoided to decrease the possibility of channel eavesdropping. The algorithm presented in this paper uses quantum gates to perform the encryption and decryption processes. In addition, new quantum gates are introduced, analyzed, and investigated in the encryption and decryption processes. The authors believe the gates that are used in the proposed algorithm improve the security for both classical and quantum computing. The proposed gates in the paper have plausible properties that position them as suitable candidates for encryption and decryption processes in quantum cryptography. To demonstrate the security features of the algorithm, it was simulated using the MATLAB simulator, in particular through the Quack Quantum Library. Keywords: Encryption; Logic gates; Protocols; Quantum computing; Quantum mechanics; algorithms; quantum; quantum cryptography; qubit key; secure communications
  • Alshowkan, Muneer; Elleithy, Khaled, "Authenticated multiparty secret key sharing using quantum entanglement swapping," American Society for Engineering Education (ASEE Zone 1), 2014 Zone 1 Conference of the , vol., no., pp.1,6, 3-5 April 2014. (ID#:14-1749) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6820637&isnumber=6820618 In this paper we propose a new protocol for multiparty secret key sharing by using quantum entanglement swapping. Quantum Entanglement swapping is a process that allows two non-interacting quantum systems to be entangled. Further, to increase the security level and to make sure that the users are legitimate, authentication for both parties will be required by a trusted third party. In this protocol, a trusted third party will authenticate the sender and the receiver and help them forming a secret key. Furthermore, the proposed protocol will perform entanglement swapping between the sender and the receiver. The result from the entanglement swapping will be an Einstein-Podolsky-Rosen (EPR) pair that will help them in forming and sending the secret key without having the sender to send any physical quantum states to the receiver. This protocol will provide the required authentication of all parties to the trusted party and it will provide the required secure method in transmitting the secret key. Keywords: Authentication; Logic gates; Protocols; Quantum computing; Quantum entanglement; Receivers; Teleportation; EPR; cryptography; entanglement; multiparty; quantum swapping
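An EPR pair of the kind produced by entanglement swapping in the protocol above can be written down directly as a two-qubit state vector. The textbook Hadamard-plus-CNOT construction below illustrates what such a maximally entangled state looks like; it is a standard demonstration, not the paper's swapping protocol.

```python
import math

# Two-qubit state as four amplitudes over the basis |00>, |01>, |10>, |11>.
def hadamard_on_first(state):
    """Apply the Hadamard gate to the first qubit."""
    a00, a01, a10, a11 = state
    r = 1 / math.sqrt(2)
    return (r * (a00 + a10), r * (a01 + a11), r * (a00 - a10), r * (a01 - a11))

def cnot(state):
    """Flip the second qubit when the first qubit is 1 (swap |10> and |11>)."""
    a00, a01, a10, a11 = state
    return (a00, a01, a11, a10)

# Starting from |00>, H on the first qubit then CNOT yields the Bell/EPR pair
# (|00> + |11>) / sqrt(2): measurements on the two qubits are perfectly correlated.
bell = cnot(hadamard_on_first((1.0, 0.0, 0.0, 0.0)))
probs = [abs(a) ** 2 for a in bell]
```

The perfect 00/11 correlation, with no 01 or 10 outcomes, is what lets two authenticated parties derive a shared secret bit from such a pair.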
  • Ashikhmin, A, "Fidelity Lower Bounds for Stabilizer and CSS Quantum Codes," Information Theory, IEEE Transactions on , vol.60, no.6, pp.3104,3116, June 2014. (ID#:14-1750) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6763069&isnumber=6816018 In this paper, we estimate the fidelity of stabilizer and CSS codes. First, we derive a lower bound on the fidelity of a stabilizer code via its quantum enumerator. Next, we find the average quantum enumerators of the ensembles of finite length stabilizer and CSS codes. We use the average quantum enumerators for obtaining lower bounds on the average fidelity of these ensembles. We further improve the fidelity bounds by estimating the quantum enumerators of expurgated ensembles of stabilizer and CSS codes. Finally, we derive fidelity bounds in the asymptotic regime when the code length tends to infinity. These results tell us which code rate we can afford for achieving a target fidelity with codes of a given length. The results also show that in the symmetric depolarizing channel a typical stabilizer code has better performance, in terms of fidelity and code rate, compared with a typical CSS code, and that balanced CSS codes significantly outperform other CSS codes. Asymptotic results demonstrate that CSS codes have a fundamental performance loss compared with stabilizer codes. Keywords: Arrays; Cascading style sheets; Linear codes; Quantum computing; Quantum mechanics; Standards; Vectors; CSS codes; fidelity bounds; quantum codes; stabilizer codes
  • Elmannai, Wafa; Elleithy, Khaled; Pande, Varun; Geddeda, Elham, "Quantum security using property of a quantum wave function," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island , vol., no., pp.1,5, 2-2 May 2014. (ID#:14-1751) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845217&isnumber=6845183 Secure communication is essential, especially for critical applications in the military, education, and financial domains. Unfortunately, many security mechanisms can be broken with new developments in classical computers. Quantum computers are capable of performing high-speed simultaneous computations, rather than the sequential computations of classical computers. In order to break a symmetric encryption algorithm, an attacker needs to try all the potential key combinations to find the right one. In this paper, we introduce a secure protocol that treats the qubit as a wave function. The proposed protocol is based on a time stamp and can only be broken with the correct set of time value, wave function, qubit position, and other attributes such as the velocity and phase of the qubit. Moreover, quantum tunneling strengthens the proposed protocol by providing a strong password protection mechanism using just one qubit. Keywords: Computer science; Computers; Educational institutions; Protocols; Quantum computing; Security; Wave functions; Quantum Security; Quantum Wave Function; Qubit position; quantum tunneling
  • Tillich, J.-P.; Zemor, G., "Quantum LDPC Codes With Positive Rate and Minimum Distance Proportional to the Square Root of the Blocklength," Information Theory, IEEE Transactions on , vol.60, no.2, pp.1193,1202, Feb. 2014. (ID#:14-1752) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6671468&isnumber=6714461 The current best asymptotic lower bound on the minimum distance of quantum LDPC codes with a fixed non-zero rate is logarithmic in the blocklength. We propose a construction of quantum LDPC codes with fixed non-zero rate and prove that the minimum distance grows proportionally to the square root of the blocklength. Keywords: parity check codes; quantum communication; asymptotic lower bound; fixed nonzero rate; quantum LDPC codes; Cascading style sheets; Decoding; Parity check codes; Quantum computing; Quantum mechanics; Sparse matrices; Vectors; CSS codes; LDPC codes; quantum codes
  • Mogos, Gabriela, "Cubic Quantum Security," Computational Science and Computational Intelligence (CSCI), 2014 International Conference on , vol.2, no., pp.249,252, 10-13 March 2014. (ID#:14-1753) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822341&isnumber=6822285 Quantum cryptography provides new methods of securing communications. Compared to classical cryptography, which relies on mathematical algorithms to secure information, quantum cryptography focuses on the physical carrier of the information. Information is transmitted or stored via a physical medium, for example photons transmitted over optical fiber or electrons in an electric current. Communication security can be regarded as securing the physical carrier of the information - in our case, the photons in the optical fiber. Consequently, how and what an attacker can find out depends exclusively on the laws of physics. The paper presents a symmetric encryption method based on the mixing method of the Rubik's cube. Keywords: Encryption; Photonics; Physics; Protocols; Quantum computing; quantum cryptography; qubits; symmetric cryptography
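The Rubik's-cube mixing idea has a simple classical analogue: scrambling symbol positions with a key-derived permutation, much as cube rotations scramble facelets. The sketch below is illustrative only (the function names `keyed_permutation`, `encrypt`, and `decrypt` are ours, and the paper's qubit-based scheme is far more involved):

```python
import random

def keyed_permutation(key, n):
    """Derive a deterministic permutation of range(n) from a key string."""
    rng = random.Random(key)  # same key always yields the same shuffle
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def encrypt(plaintext, key):
    """Transposition cipher: position i of the ciphertext takes
    the plaintext symbol at position perm[i]."""
    perm = keyed_permutation(key, len(plaintext))
    return "".join(plaintext[perm[i]] for i in range(len(plaintext)))

def decrypt(ciphertext, key):
    """Invert the transposition by sending ciphertext[i] back to perm[i]."""
    perm = keyed_permutation(key, len(ciphertext))
    out = [None] * len(ciphertext)
    for i, p in enumerate(perm):
        out[p] = ciphertext[i]
    return "".join(out)
```

A pure transposition only reorders symbols (it leaks frequency information), which is exactly why the paper anchors security in quantum physical properties rather than in the mixing alone.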
  • von Maurich, I; Guneysu, T., "Lightweight code-based cryptography: QC-MDPC McEliece encryption on reconfigurable devices," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014 , vol., no., pp.1,6, 24-28 March 2014. (ID#:14-1754) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800252&isnumber=6800201 With quantum computing threatening to break the RSA and ECC cryptosystems, asymmetric code-based cryptography is an established alternative and a potential replacement. A major drawback is the large keys, ranging from 50 kByte to several MByte, which have so far prevented real-world applications of code-based cryptosystems. A recent proposal by Misoczki et al. showed that quasi-cyclic moderate density parity-check (QC-MDPC) codes can be used in McEliece encryption, reducing the public key to just 0.6 kByte at an 80-bit security level. Despite reasonably small key sizes that could also enable small designs, previous work reported only high-performance implementations with high resource consumption of more than 13,000 slices on a large Xilinx Virtex-6 FPGA for a combined en-/decryption unit. In this work we focus on lightweight implementations of code-based cryptography and demonstrate that McEliece encryption using QC-MDPC codes can be implemented with a significantly smaller resource footprint, still achieving reasonable performance sufficient for many applications, e.g., challenge-response protocols or hybrid firmware encryption. More precisely, our design requires just 68 slices for the encryption unit and around 150 slices for the decryption unit, and is able to en-/decrypt an input block in 2.2 ms and 13.4 ms, respectively.
Keywords: cyclic codes; field programmable gate arrays; parity check codes; public key cryptography; quantum computing; reconfigurable architectures; ECC cryptosystems; QC-MDPC McEliece encryption; QC-MDPC codes; RSA cryptosystems; Xilinx Virtex-6 FPGA; combined encryption-decryption unit; lightweight code-based cryptography; quantum computing; quasicyclic moderate density parity-check codes; reconfigurable devices; resource consumption; resource footprint; security level; word length 80 bit; Decoding; Elliptic curve cryptography; Encryption; Field programmable gate arrays; Generators; Vectors
  • Lobino, M.; Laing, A; Pei Zhang; Aungskunsiri, K.; Martin-Lopez, E.; Wabnig, J.; Nock, R.W.; Munns, J.; Bonneau, D.; Pisu Jiang; Hong Wei Li; Rarity, J.G.; Niskanen, AO.; Thompson, M.G.; O'Brien, J.L., "Quantum key distribution with integrated optics," Design Automation Conference (ASP-DAC), 2014 19th Asia and South Pacific , vol., no., pp.795,799, 20-23 Jan. 2014. (ID#:14-1755) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6742987&isnumber=6742831 We report on a quantum key distribution (QKD) experiment where a client with an on-chip polarisation rotator can access a server through a telecom-fibre link. Large resources such as photon source and detectors are situated at server-side. We employ a reference frame independent QKD protocol for polarisation qubits and show that it overcomes detrimental effects of drifting fibre birefringence in a polarisation maintaining fibre. Keywords: birefringence; integrated optics; optical fibre communication; optical fibre polarisation; optical rotation; quantum communication; quantum computing; quantum cryptography; QKD; detectors; fibre birefringence; integrated optics; on-chip polarisation rotator; photon source; polarisation maintaining fibre; polarisation qubits; quantum key distribution; telecom-fibre link; Detectors; Educational institutions; Electronic mail; Noise; Photonics; Protocols; Servers
  • de Jesus Lopes Soares, E.; Alencar Mendonca, F.; Viana Ramos, R., "Quantum Random Number Generator Using Only One Single-Photon Detector," Photonics Technology Letters, IEEE , vol.26, no.9, pp.851,853, May 1, 2014. (ID#:14-1756) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6729066&isnumber=6783816 Quantum random number generators (QRNGs) have important applications in cryptographic protocols, games, and lotteries, among others. In contrast with software-based pseudorandom number generators, the number sequence generated is truly random. Most QRNGs found in the literature are based on single-photon sources and detectors. In this letter, we discuss the advantages and disadvantages of a QRNG that uses only one single-photon detector and weak coherent or thermal states as light source. Keywords: light coherence; photodetectors; quantum computing; quantum optics; random number generation; QRNG; coherent states; light source; quantum random number generator; single-photon detector; thermal states; Detectors; Generators; Light sources; Logic gates; Optical detectors; Photonics; Radiation detectors; Quantum random number generator; coherent and thermal states; single-photon detector
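Single-detector QRNG designs must post-process raw detection events to remove bias. One standard technique for this is the von Neumann extractor, sketched below (the letter's actual post-processing scheme may differ; this is only the textbook method):

```python
def von_neumann_extract(bits):
    """Debias a stream of biased but independent bits.

    Raw bits are consumed in non-overlapping pairs:
    (0,1) emits 0, (1,0) emits 1, and (0,0)/(1,1) are discarded.
    If the input bits are i.i.d., the output is exactly unbiased,
    at the cost of discarding a fraction of the raw stream.
    """
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out
```

The trade-off is throughput: a heavily biased detector stream yields few unequal pairs, so fewer output bits per raw bit.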



Safe Coding

Safe Coding



Coding standards encourage programmers to follow a set of uniform rules and guidelines determined by the requirements of the project and organization, rather than by the programmer's personal familiarity or preference. Developers and software designers apply these coding standards during software development to create secure systems. The development of secure coding standards is a work in progress by security researchers, language experts, and software developers. The articles cited here cover topics such as software entropy, traceability, embedded systems, and reliability.

  • Suvrojit Das, Debayan Chatterjee, D. Ghosh, Narayan C. Debnath, "Extracting the System Call Identifier From Within VFS: A Kernel Stack Parsing-Based Approach," International Journal of Information and Computer Security Volume 6 Issue 1, March 2014, (Pages 12-50). (ID#:14-1423) Available at: http://dl.acm.org/citation.cfm?id=2597545.2597547&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 This paper addresses the extraction of system call information from the VFS layer of the Linux kernel. The authors propose a system call identifier method, with a view to bolstering file timestamp metadata logs. Keywords: (not available).
  • Aggarwal, P.K.; Dharmendra; Jain, P.; Verma, T., "Adaptive approach for Information Hiding in WWW pages," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on , vol., no., pp.113,118, 7-8 Feb. 2014. (ID#:14-1424) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781262&isnumber=6781240 This paper opens a new horizon for safe communication through information hiding on the Internet. Steganography in WWW pages makes it possible to send data without it being altered, intercepted, or traced back to the sender. Various steganographic techniques have been designed to date which ensure the integrity and confidentiality of data maintained in HTML documents. The technique proposed in this paper is based on the lines of the source code of HTML web pages. It hides data in the lines of the source code without affecting the content or originality of the source code or the rendered web page. Keywords: Internet; data encapsulation; hypermedia markup languages; steganography; HTML document; Internet; WWW pages; adaptive approach; information hiding; steganographic technique; Cryptography; HTML; Head; Ice; Indexes; Embed data; HTML tags; HTML web page; Steganography
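Line-oriented HTML steganography of this general flavor can be sketched minimally: hide one message bit per source line in a way that does not change the rendered page. The trailing-space encoding and the `embed`/`extract` helpers below are our illustrative assumptions, not the authors' actual scheme:

```python
def embed(html_lines, message):
    """Hide one bit per line: a trailing space encodes 1, none encodes 0.

    Trailing whitespace is invisible in the rendered page, so the
    cover HTML displays identically after embedding.
    """
    bits = [int(b) for ch in message for b in format(ord(ch), "08b")]
    if len(bits) > len(html_lines):
        raise ValueError("cover text too short for message")
    out = []
    for i, line in enumerate(html_lines):
        line = line.rstrip()                  # normalize the cover line
        if i < len(bits) and bits[i]:
            line += " "                       # bit 1 -> trailing space
        out.append(line)
    return out

def extract(html_lines, n_chars):
    """Recover n_chars characters from the per-line trailing-space bits."""
    bits = ["1" if line.endswith(" ") else "0" for line in html_lines]
    chars = []
    for i in range(n_chars):
        byte = "".join(bits[i * 8:(i + 1) * 8])
        chars.append(chr(int(byte, 2)))
    return "".join(chars)
```

Whitespace channels like this are fragile (an HTML minifier destroys them), which motivates the more robust adaptive embedding the paper pursues.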
  • Richard Baskerville, Paolo Spagnoletti, Jongwoo Kim, "Incident-Centered Information Security: Managing A Strategic Balance Between Prevention And Response," Information and Management, Volume 51 Issue 1, January, 2014, (Pages 138-151). (ID#:14-1425) Available at: http://dl.acm.org/citation.cfm?id=2566268.2566362&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 This paper highlights the importance of achieving balance between the information security response and prevention paradigms, which have historically been pitted against each other as superior. The paper offers a broad security framework centered on balancing the two paradigms. A case study and results are discussed. Keywords: Case study, Incident-centered analysis, Information security management, Prevention paradigm, Response paradigm, Security balance
  • Traci J. Hess, Anna L. McNab, K. Asli Basoglu, "Reliability Generalization Of Perceived Ease Of Use, Perceived Usefulness, And Behavioral Intentions," MIS Quarterly, Volume 38 Issue 1, March 2014, (Pages 1-1). (ID#:14-1426) Available at: http://dl.acm.org/citation.cfm?id=2600518.2600520&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 This paper details a reliability generalization study of the perceived ease of use, perceived usefulness, and behavioral intention constructs from the technology acceptance model (TAM). To conduct this study, 380 articles were reviewed and used to perform reliability generalization, revealing differences in reliability coefficients across the three technology acceptance constructs. Keywords: behavioral intentions, ease of use, effect size attenuation, meta-analysis, reliability, reliability generalization, technology acceptance model (TAM), usefulness
  • Philip Axer, Rolf Ernst, Heiko Falk, Alain Girault, Daniel Grund, Nan Guan, Bengt Jonsson, Peter Marwedel, Jan Reineke, Christine Rochange, Maurice Sebastian, Reinhard Von Hanxleden, Reinhard Wilhelm, Wang Yi, "Building Timing Predictable Embedded Systems," ACM Transactions on Embedded Computing Systems (TECS), Volume 13 Issue 4, February 2014 Issue-in-Progress, Article No. 82. (ID#:14-1428) Available at: http://dl.acm.org/citation.cfm?id=2592905.2560033&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 This paper discusses current research on building performant predictable systems. Predictability concerns, in embedded system design, language-based programming approaches for predictable timing, and multicore predictability are discussed. Randomly occurring errors are taken into consideration when the authors discuss predictability in network embedded systems. Keywords: Embedded systems, predictability, resource sharing, safety-critical systems, timing analysis
  • Carol Smidts, Chetan Mutha, Manuel Rodriguez, Matthew J. Gerber, "Software Testing With an Operational Profile: OP Definition," ACM Computing Surveys (CSUR),Volume 46 Issue 3, January 2014 Article No. 39. (ID#:14-1429) Available at: http://dl.acm.org/citation.cfm?id=2578702.2518106&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 This article is devoted to the survey, analysis, and classification of operational profiles (OP) that characterize the type and frequency of software inputs and are used in software testing techniques. The survey follows a mixed method based on systematic maps and qualitative analysis. This article is articulated around a main dimension, that is, OP classes, which are a characterization of the OP model and the basis for generating test cases. The classes are organized as a taxonomy composed of common OP features (e.g., profiles, structure, and scenarios), software boundaries (which define the scope of the OP), OP dependencies (such as those of the code or in the field of interest), and OP development (which specifies when and how an OP is developed). To facilitate understanding of the relationships between OP classes and their elements, a meta-model was developed that can be used to support OP standardization. Many open research questions related to OP definition and development are identified based on the survey and classification. Keywords: Software testing, operational profile, software reliability, taxonomy, usage models
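The core idea of operational-profile-based testing, selecting test operations in proportion to their frequency in field usage, can be sketched as follows. The `profile` values and `sample_operations` helper are hypothetical, and the article's formal OP meta-model is far richer:

```python
import random

def sample_operations(profile, k, seed=0):
    """Draw k test operations according to an operational profile.

    `profile` maps operation names to occurrence probabilities that
    should sum to 1; operations users invoke more often are tested
    more often, so testing effort mirrors expected field usage.
    """
    rng = random.Random(seed)  # seeded for reproducible test suites
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return rng.choices(ops, weights=weights, k=k)
```

Reliability estimates derived from such a suite then reflect the failure rate users would actually observe, which is the central motivation for OPs in software reliability engineering.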
  • Jitender Choudhari, Ugrasen Suman, "Extended Iterative Maintenance Life Cycle using eXtreme Programming," ACM SIGSOFT Software Engineering Notes, Volume 39 Issue 1, January 2014, (Pages 1-12). (ID#:14-1430) Available at: http://dl.acm.org/citation.cfm?id=2557833.2557845&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 Software maintenance is the continuous process of enhancing the operational life of software. The existing approaches to software maintenance, derived from the traditional approaches to development, are unable to resolve the problems of unstructured code, team morale, poor visibility of the project, lack of communication, and lack of proper test suites. Alternatively, extreme programming practices such as test driven development, refactoring, pair programming, continuous integration, small releases, and collective ownership help to resolve the aforesaid problems. In this paper, a process model is proposed for software maintenance using extreme programming practices to resolve maintenance issues in an improved manner. The proposed approach speeds up the maintenance process and produces more maintainable code with less effort for future maintenance and evolution. The proposed model is validated by applying it on several maintenance projects in an academic environment. It has been observed that the approach provides higher quality code. The proposed model based on extreme programming enhances both learning and productivity of the team by improving the morale, courage, and confidence of the team, which supports higher motivation during maintenance. Keywords: extreme programming, software maintenance, safe coding
  • Abdallah Qusef, Gabriele Bavota, Rocco Oliveto, Andrea De Lucia, Dave Binkley, "Recovering Test-To-Code Traceability Using Slicing And Textual Analysis," Journal of Systems and Software, Volume 88, February, 2014, (Pages 147-168). (ID#:14-1431) Available at: http://dl.acm.org/citation.cfm?id=2565887.2566083&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 Test suites are a valuable source of up-to-date documentation as developers continuously modify them to reflect changes in the production code and preserve an effective regression suite. While maintaining traceability links between unit test and the classes under test can be useful to selectively retest code after a change, the value of having traceability links goes far beyond this potential savings. One key use is to help developers better comprehend the dependencies between tests and classes and help maintain consistency during refactoring. Despite its importance, test-to-code traceability is not common in software development and, when needed, traceability information has to be recovered during software development and evolution. We propose an advanced approach, named SCOTCH+ (Source code and COncept based Test to Code traceability Hunter), to support the developer during the identification of links between unit tests and tested classes. Given a test class, represented by a JUnit class, the approach first exploits dynamic slicing to identify a set of candidate tested classes. Then, external and internal textual information associated with the classes retrieved by slicing is analyzed to refine this set of classes and identify the final set of candidate tested classes. The external information is derived from the analysis of the class name, while internal information is derived from identifiers and comments. The approach is evaluated on five software systems. The results indicate that the accuracy of the proposed approach far exceeds the leading techniques found in the literature. 
Keywords: Dynamic slicing, Information retrieval, Test-to-code traceability
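SCOTCH+ goes well beyond naming conventions, but the simple naming-based baseline such approaches improve upon can be sketched as follows (`match_test_to_class` is an illustrative helper, not part of the paper):

```python
def match_test_to_class(test_name, class_names):
    """Guess the class under test from a JUnit test class name.

    Strips the conventional 'Test' prefix or suffix (e.g. 'ParserTest'
    or 'TestParser') and looks for an exact match among the known
    production class names. Returns None when no convention matches,
    which is precisely where slicing and textual analysis are needed.
    """
    candidates = []
    if test_name.startswith("Test"):
        candidates.append(test_name[len("Test"):])
    if test_name.endswith("Test"):
        candidates.append(test_name[:-len("Test")])
    for cand in candidates:
        if cand in class_names:
            return cand
    return None
```

The baseline fails whenever teams deviate from the convention or one test exercises several classes, motivating the dynamic-slicing-plus-textual-analysis pipeline the paper evaluates.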
  • Daniel Perelman, Sumit Gulwani, Dan Grossman, Peter Provost, "Test-driven Synthesis," PLDI '14 Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2014, (Pages 408-418). (ID#:14-1432) Available at: http://dl.acm.org/citation.cfm?id=2594291.2594297&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 Programming-by-example technologies empower end-users to create simple programs merely by providing input/output examples. Existing systems are designed around solvers specialized for a specific set of data types or domain-specific language (DSL). We present a program synthesizer which can be parameterized by an arbitrary DSL that may contain conditionals and loops and therefore is able to synthesize programs in any domain. In order to use our synthesizer, the user provides a sequence of increasingly sophisticated input/output examples along with an expert-written DSL definition. These two inputs correspond to the two key ideas that allow our synthesizer to work in arbitrary domains. First, we developed a novel iterative synthesis technique inspired by test-driven development---which also gives our technique the name of test-driven synthesis---where the input/output examples are consumed one at a time as the program is refined. Second, the DSL allows our system to take an efficient component-based approach to enumerating possible programs. We present applications of our synthesis methodology to end-user programming for transformations over strings, XML, and table layouts. We compare our synthesizer on these applications to state-of-the-art DSL-specific synthesizers as well as to the general purpose synthesizer Sketch. Keywords: end-user programming, program synthesis, test driven development
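The enumerate-and-check core of such a synthesizer can be sketched for a toy string DSL of unary components. This is a deliberate simplification (the paper's component-based enumeration and one-example-at-a-time refinement are considerably more sophisticated), with `components` and `examples` as assumed inputs:

```python
from itertools import product

def synthesize(components, examples, max_depth=3):
    """Search compositions of DSL components that satisfy all examples.

    `components` maps names to unary functions; `examples` is a list
    of (input, output) pairs. Programs are enumerated shortest-first,
    so the first consistent composition found is a smallest one.
    """
    for depth in range(1, max_depth + 1):
        for prog in product(components, repeat=depth):
            def run(x, prog=prog):
                # Apply the candidate pipeline left to right.
                for name in prog:
                    x = components[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return list(prog)
    return None  # no program of length <= max_depth fits the examples
```

Even this brute-force sketch shows why examples act as tests: each candidate program is simply run against all of them, and any failure prunes it.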
  • Luke Stark, Matt Tierney, "Lockbox: Mobility, Privacy and Values in Cloud Storage," Ethics and Information Technology, Volume 16 Issue 1, March 2014, (Pages 1-13). (ID#:14-1433) Available at: http://dl.acm.org/citation.cfm?id=2597586.2597601&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 This paper examines one particular problem of values in cloud computing: how individuals can take advantage of the cloud to store data without compromising their privacy and autonomy. Through the creation of Lockbox, an encrypted cloud storage application, we explore how designers can use reflection in designing for human values to maintain both privacy and usability in the cloud. Keywords: Autonomy, Cloud computing, Cryptography, Human---Computer interaction (HCI), Mobility, Privacy, Reflective Design, Usability, User Empowerment, Values and Design
  • Gerardo Canfora, Luigi Cerulo, Marta Cimitile, Massimiliano Di Penta, "How Changes Affect Software Entropy: An Empirical Study," Empirical Software Engineering, Volume 19 Issue 1, February 2014 (Pages 1-38). (ID#:14-1434) Available at: http://dl.acm.org/citation.cfm?id=2578395.2578409&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 Software systems continuously change for various reasons, such as adding new features, fixing bugs, or refactoring. Changes may either increase the source code complexity and disorganization, or help to reduce it. Aim: This paper empirically investigates the relationship of source code complexity and disorganization--measured using source code change entropy--with four factors, namely the presence of refactoring activities, the number of developers working on a source code file, the participation of classes in design patterns, and the different kinds of changes occurring on the system, classified in terms of their topics extracted from commit notes. We carried out an exploratory study on an interval of the life-time span of four open source systems, namely ArgoUML, Eclipse-JDT, Mozilla, and Samba, analyzing the relationship between source code change entropy and these four factors. Results: The study shows that (i) the change entropy decreases after refactoring, (ii) files changed by a higher number of developers tend to exhibit a higher change entropy than others, (iii) classes participating in certain design patterns exhibit a higher change entropy than others, and (iv) changes related to different topics exhibit different change entropy; for example, bug fixes exhibit a limited change entropy while changes introducing new features exhibit a high change entropy. Conclusions: The results indicate that the nature of changes (in particular changes related to refactorings), the software design, and the number of active developers are factors related to change entropy. Our findings contribute to understanding the software aging phenomenon and are preliminary to identifying better ways to counteract it. Keywords: Mining software repositories, Software complexity, Software entropy
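Change entropy in this line of work is essentially the Shannon entropy of how changes distribute over files in a period. A minimal sketch follows; normalization choices vary across studies, so treat this as illustrative rather than the paper's exact measure:

```python
from math import log2

def change_entropy(change_counts):
    """Normalized Shannon entropy of changes across files in a period.

    `change_counts` maps file names to how often each file was changed.
    Returns a value in [0, 1]: 0 when all changes hit a single file
    (localized, low-risk change), approaching 1 when changes are
    spread evenly across many files (scattered, high-risk change).
    """
    total = sum(change_counts.values())
    probs = [c / total for c in change_counts.values() if c > 0]
    if len(probs) <= 1:
        return 0.0  # all activity in one file: no dispersion at all
    h = -sum(p * log2(p) for p in probs)
    return h / log2(len(probs))  # normalize by the maximum entropy
```

Under this reading, finding (i) above says refactoring tends to re-localize change activity, lowering the entropy of subsequent periods.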
  • Christos Margiolas, Michael F. P. O'Boyle, "Portable and Transparent Host-Device Communication Optimization for GPGPU Environments," Proceedings of Annual IEEE/ACM International Symposium on Code Generation and Optimization, February 2014. (ID#:14-1435) Available at: http://dl.acm.org/citation.cfm?id=2581122.2544156&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 General purpose graphics processing units (GPUs) provide the potential for high computational performance with reduced cost and power. Typically they are employed in heterogeneous settings acting as accelerators. Here an application resides on a host multi-core, dispatching work to the GPU. However, workload dispatch is frequently accompanied by large scale data transfers between the host main memory and the dedicated memories of the GPUs. For many applications, memory allocation and communication overhead can severely reduce the benefits of GPU acceleration. This paper develops an approach that reduces host-device communication overhead for OpenCL applications. It does this without modification or recompilation of the application source code and is portable across platforms. It achieves this by tracing and analyzing calls to the runtime made by the application and then selecting the best platform specific memory allocation and communication policy. This approach was applied to 12 existing OpenCL benchmarks from the Parboil and Rodinia suites on 3 different platforms, where it gives on average a speedup of 1.51, 1.31 and 1.48, respectively. In certain cases, our approach leads to up to a factor of three improvement over current approaches. Keywords: GPU, OpenCL, communication optimization, heterogeneous computing, profiling, runtime, tracing



Secure File Sharing

Secure File Sharing


Data leakage during file sharing continues to be a major problem for cybersecurity, especially with the advent of cloud storage. The articles cited here were presented in the first half of 2014 and cover topics including secure storage, cryptosystems, pattern-driven security systems, and access control enforcement.

  • Albahdal, Abdullah A; Alsolami, Fahad; Alsaadi, Fawaz, "Evaluation of Security Supporting Mechanisms in Cloud Storage," Information Technology: New Generations (ITNG), 2014 11th International Conference on, vol., no., pp.285,292, 7-9 April 2014. (ID#:14-1781) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822212&isnumber=6822158 Cloud storage is one of the most promising services of cloud computing. It holds promise for unlimited, scalable, flexible, and low cost data storage. However, security of data stored at the cloud is the main concern that hinders the adoption of cloud storage model. In the literature, there are many proposed mechanisms to improve the security of cloud storage. These proposed mechanisms differ in many aspects and provide different levels of security. In this paper, we evaluate five different mechanisms for supporting the security of the cloud storage. We begin with a brief description of these mechanisms. Then we evaluate these mechanisms based on the following criteria: security, support of writing serializability and reading freshness, workload distribution between the client and cloud, performance, financial cost, support of accountability between the client and cloud, support of file sharing between users, and ease of deployment. The evaluation section of this paper forms a guide for individuals and organizations to select or design an appropriate mechanism that satisfies their requirements for securing cloud storage. Keywords: Availability; Cloud computing; Encryption; Secure storage; Writing; Cloud Computing; Cloud Security; Cloud Storage
  • Cheng-Kang Chu; Chow, S.S.M.; Wen-Guey Tzeng; Jianying Zhou; Deng, R.H., "Key-Aggregate Cryptosystem for Scalable Data Sharing in Cloud Storage," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.2, pp.468,477, Feb. 2014. (ID#:14-1782) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6497048&isnumber=6689796 Data sharing is an important functionality in cloud storage. In this paper, we show how to securely, efficiently, and flexibly share data with others in cloud storage. We describe new public-key cryptosystems that produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, but encompassing the power of all the keys being aggregated. In other words, the secret key holder can release a constant-size aggregate key for flexible choices of ciphertext set in cloud storage, but the other encrypted files outside the set remain confidential. This compact aggregate key can be conveniently sent to others or be stored in a smart card with very limited secure storage. We provide formal security analysis of our schemes in the standard model. We also describe other applications of our schemes. In particular, our schemes give the first public-key patient-controlled encryption for flexible hierarchy, which was yet to be known. Keywords: cloud computing; private key cryptography; public key cryptography; smart cards; storage management; ciphertext set; cloud storage; compact aggregate key; constant-size ciphertexts; data sharing security; decryption rights; file encryption; formal security analysis; key-aggregate cryptosystem; public-key cryptosystems; public-key patient-controlled encryption; scalable data sharing; secret key holder; smart card; Cloud storage; data sharing; key-aggregate encryption; patient-controlled encryption
  • Skillen, A; Mannan, M., "Mobiflage: Deniable Storage Encryption for Mobile Devices," Dependable and Secure Computing, IEEE Transactions on, vol.11, no.3, pp.224,237, May-June 2014. (ID#:14-1783) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6682886&isnumber=6813632 Data confidentiality can be effectively preserved through encryption. In certain situations, this is inadequate, as users may be coerced into disclosing their decryption keys. Steganographic techniques and deniable encryption algorithms have been devised to hide the very existence of encrypted data. We examine the feasibility and efficacy of deniable encryption for mobile devices. To address obstacles that can compromise plausibly deniable encryption (PDE) in a mobile environment, we design a system called Mobiflage. Mobiflage enables PDE on mobile devices by hiding encrypted volumes within random data in a device's free storage space. We leverage lessons learned from deniable encryption in the desktop environment, and design new countermeasures for threats specific to mobile systems. We provide two implementations for the Android OS, to assess the feasibility and performance of Mobiflage on different hardware profiles. MF-SD is designed for use on devices with FAT32 removable SD cards. Our MF-MTP variant supports devices that instead share a single internal partition for both apps and user accessible data. MF-MTP leverages certain Ext4 file system mechanisms and uses an adjusted data-block allocator. These new techniques for storing hidden volumes in Ext4 file systems can also be applied to other file systems to enable deniable encryption for desktop OSes and other mobile platforms. Keywords: Androids; Encryption; Humanoid robots; Law; Mobile communication; Mobile handsets; File system security; deniable encryption; mobile platform security; storage encryption
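The core Mobiflage idea, hiding an encrypted volume inside a region of free space that has been filled with random bytes, so the volume is indistinguishable from the fill without the password, can be sketched in a few lines. The sketch below is a toy illustration, not Mobiflage's actual design: the PBKDF2-derived offset, the SHA-256 counter-mode "keystream," and all sizes are simplifying assumptions, and the XOR construction is not a real cipher.

```python
import hashlib, secrets

def derive_offset(password: str, salt: bytes, region_size: int, vol_size: int) -> int:
    """Deterministic, password-derived offset of the hidden volume in the region."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return int.from_bytes(digest[:8], "big") % (region_size - vol_size)

def keystream(password: str, salt: bytes, length: int) -> bytes:
    """Toy keystream from iterated SHA-256 (illustration only, not a real cipher)."""
    seed = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def hide_volume(free_space: bytearray, password: str, salt: bytes, plaintext: bytes) -> None:
    """Overwrite part of the random fill with ciphertext; it stays indistinguishable."""
    off = derive_offset(password, salt, len(free_space), len(plaintext))
    ks = keystream(password, salt, len(plaintext))
    free_space[off:off + len(plaintext)] = bytes(p ^ k for p, k in zip(plaintext, ks))

def reveal_volume(free_space: bytes, password: str, salt: bytes, vol_size: int) -> bytes:
    """With the right password, recompute the offset and keystream to decrypt."""
    off = derive_offset(password, salt, len(free_space), vol_size)
    ks = keystream(password, salt, vol_size)
    return bytes(c ^ k for c, k in zip(free_space[off:off + vol_size], ks))
```

Without the password, an observer sees only uniform random bytes, which is the basis of the plausible deniability.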
  • Uzunov, Anton V.; Fernandez, Eduardo B.; Falkner, Katrina, "A Comprehensive Pattern-Driven Security Methodology for Distributed Systems," Software Engineering Conference (ASWEC), 2014 23rd Australian, vol., no., pp.142, 151, 7-10 April 2014. (ID#:14-1784) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6824119&isnumber=6824087 Incorporating security features is one of the most important and challenging tasks in designing distributed systems. Over the last decade, researchers and practitioners have come to recognize that the incorporation of security features should proceed by means of a systematic approach, combining principles from both software and security engineering. Such systematic approaches, particularly those implying some sort of process aligned with the development life-cycle, are termed security methodologies. One of the most important classes of such methodologies is based on the use of security patterns. While the literature presents a number of pattern-driven security methodologies, none of them are designed specifically for general distributed systems. Going further, there are also currently no methodologies with mixed specific applicability, e.g. for both general and peer-to-peer distributed systems. In this paper we aim to fill these gaps by presenting a comprehensive pattern-driven security methodology specifically designed for general distributed systems, which is also capable of taking into account the specifics of peer-to-peer systems. Our methodology takes the principle of encapsulation several steps further, by employing patterns not only for the incorporation of security features (via security solution frames), but also for the modeling of threats, and even as part of its process. We illustrate and evaluate the presented methodology via a realistic example -- the development of a distributed system for file sharing and collaborative editing. 
In both the presentation of the methodology and example our focus is on the early life-cycle phases (analysis and design). Keywords: Analytical models; Computer architecture; Context; Object oriented modeling; Security; Software; Taxonomy; distributed systems security; secure software engineering; security methodologies; security patterns; security solution frames; threat patterns
  • Kaaniche, Nesrine; Laurent, Maryline, "A Secure Client Side Deduplication Scheme in Cloud Storage Environments," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, vol., no., pp.1,7, March 30 2014-April 2 2014. (ID#:14-1785) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814002&isnumber=6813963 Recent years have witnessed the trend of leveraging cloud-based services for large scale content storage, processing, and distribution. Security and privacy are among top concerns for the public cloud environments. Towards these security challenges, we propose and implement, on OpenStack Swift, a new client-side deduplication scheme for securely storing and sharing outsourced data via the public cloud. The originality of our proposal is twofold. First, it ensures better confidentiality towards unauthorized users. That is, every client computes a per data key to encrypt the data that he intends to store in the cloud. As such, the data access is managed by the data owner. Second, by integrating access rights in metadata file, an authorized user can decipher an encrypted file only with his private key. Keywords: (Not provided)
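The per-data key in this scheme echoes the classic convergent-encryption idea behind secure client-side deduplication: deriving the key from the content itself makes identical files produce identical ciphertexts, which the server can deduplicate without ever learning keys. The sketch below illustrates that general idea only, not the paper's actual OpenStack Swift construction; the XOR "cipher" and the in-memory store are toy stand-ins.

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    """Per-data key derived from the content itself, so identical files
    encrypt to identical ciphertexts and can be deduplicated."""
    return hashlib.sha256(data).digest()

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR keystream for illustration only; a real system would use an AEAD cipher.
    ks, counter = b"", 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, ks))

class DedupStore:
    """Cloud side: keeps one ciphertext per fingerprint and never sees any key."""
    def __init__(self):
        self.blobs = {}

    def upload(self, ciphertext: bytes):
        tag = hashlib.sha256(ciphertext).hexdigest()
        duplicate = tag in self.blobs        # a second client uploading the same
        if not duplicate:                    # file is deduplicated transparently
            self.blobs[tag] = ciphertext
        return tag, duplicate
```

Because XOR with the same keystream is its own inverse, `toy_encrypt` also decrypts in this sketch.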
  • Alsolami, Fahad; Boult, Terrance E., "CloudStash: Using Secret-Sharing Scheme to Secure Data, Not Keys, in Multi-clouds," Information Technology: New Generations (ITNG), 2014 11th International Conference on, vol., no., pp.315,320, 7-9 April 2014. (ID#:14-1786) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822216&isnumber=6822158 Cloud storage offers many attractive features to individuals and organizations storing and sharing data, but security and key management remain the central concerns. Managing and protecting keys is a problem for existing approaches, and opens the risk of attackers brute-force cracking the decryption offline and/or surreptitiously obtaining the key and using it offline. To address these issues, we propose the CloudStash scheme, a system that applies a secret-sharing scheme directly to the file, storing multiple shares of a file across multiple clouds. CloudStash uses secret-sharing, low-cost cloud storage, and multi-threading to improve confidentiality, availability, performance, and fault tolerance. It achieves this improvement by splitting a file into multiple secret shares and distributing them to multiple clouds simultaneously, where a threshold of shares is required to reconstruct the file. Our experiments show that CloudStash is statistically significantly faster for small files, and even for large files the added cost is not statistically worse, so the added security benefits are nearly free from the users' perspective. Keywords: Availability; Cloud computing; Encryption; Nickel; Standards; Cloud storage security; key management; multi-clouds; performance; secret-sharing
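The split-and-distribute step that CloudStash applies to files is classic Shamir secret sharing: a secret becomes n points on a random polynomial, and any k of them reconstruct it by Lagrange interpolation, while fewer reveal nothing. A minimal sketch over a fixed Mersenne-prime field (the field choice and share handling are assumptions, not the paper's parameters):

```python
import random

P = 2**127 - 1  # prime field; secrets must be smaller than P

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k of them reconstruct it (Shamir's scheme)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):          # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P) recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

In a CloudStash-style deployment each share would be uploaded to a different cloud provider, so no single provider (or stolen account) holds enough to recover the file.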
  • Yukyeong Wi, Jin Kwak, "Secure Data Management Scheme In The Cloud Data Center," International Journal of Advanced Media and Communication, Volume 5 Issue 2/3, April 2014, Pages 225-232. (ID#:14-1788) URL: http://dl.acm.org/citation.cfm?id=2608768.2608779&coll=DL&dl=GUIDE&CFID=514607536&CFTOKEN=40141344 Recent research on cloud computing services has focused on synchronizing a user's data across multiple devices, anywhere and at any time. Providing such synchronization securely requires effective technology for managing and sharing the data stored in the cloud data center. However, the cloud data center is exposed to security threats from unauthorized internal access by malicious attackers, such as forgery or leakage of stored data and the upload of unauthorized data. Therefore, in this paper, we propose a secure data management scheme for the cloud data center that categorizes data by attributes such as importance, type, and file size. Keywords: (not provided)
  • Dinh Tien Tuan Anh, Anwitaman Datta, "Streamforce: Outsourcing Access Control Enforcement For Stream Data To The Clouds," CODASPY '14 Proceedings of the 4th ACM Conference On Data And Application Security And Privacy , March 2014, Pages 13-24. (ID#:14-1789) URL: http://dl.acm.org/citation.cfm?id=2557547.2557556&coll=DL&dl=GUIDE&CFID=514607536&CFTOKEN=40141344 or http://dx.doi.org/10.1145/2557547.2557556 In this paper, we focus on the problem of data privacy on the cloud, particularly on access controls over stream data. The nature of stream data and the complexity of sharing data make access control a more challenging issue than in traditional archival databases. We present Streamforce -- a system allowing data owners to securely outsource their data to an untrusted (curious-but-honest) cloud. The owner specifies fine-grained policies which are enforced by the cloud. The latter performs most of the heavy computations, while learning nothing about the data content. To this end, we employ a number of encryption schemes, including deterministic encryption, proxy-based attribute based encryption and sliding-window encryption. In Streamforce, access control policies are modeled as secure continuous queries, which entails minimal changes to existing stream processing engines, and allows for easy expression of a wide-range of policies. In particular, Streamforce comes with a number of secure query operators including Map, Filter, Join and Aggregate. Finally, we implement Streamforce over an open-source stream processing engine (Esper) and evaluate its performance on a cloud platform. The results demonstrate practical performance for many real-world applications, and although the security overhead is visible, Streamforce is highly scalable. Keywords: access control, cloud computing, outsourced databases, stream processing
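Streamforce expresses access-control policies as secure continuous queries built from operators such as Map, Filter, Join, and Aggregate. Setting the encryption machinery aside, the composition idea can be sketched with plain Python generators; the sensor names, record fields, and rounding policy below are invented for illustration, and a real policy would be enforced over ciphertexts by the cloud.

```python
def Filter(pred):
    """Operator: pass through only events satisfying the predicate."""
    def op(stream):
        return (e for e in stream if pred(e))
    return op

def Map(fn):
    """Operator: transform (e.g. coarsen) each event before disclosure."""
    def op(stream):
        return (fn(e) for e in stream)
    return op

def policy(*ops):
    """Compose operators into one continuous query enforcing an access policy."""
    def run(stream):
        for op in ops:
            stream = op(stream)
        return stream
    return run

# Example policy: this consumer may see only sensor "A", and only a rounded
# value (fine-grained disclosure control over the stream).
view = policy(
    Filter(lambda e: e["sensor"] == "A"),
    Map(lambda e: {"sensor": e["sensor"], "value": round(e["value"], 0)}),
)
```

Because the policy is itself a query, it slots into an existing stream engine (Esper, in the paper) with minimal changes.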
  • Junbeom Hur, Kyungtae Kang, "Secure Data Retrieval for Decentralized Disruption-Tolerant Military Networks," IEEE/ACM Transactions on Networking (TON), Volume 22 Issue 1, February 2014, Page 16-26. (ID#:14-1790) URL: http://dl.acm.org/citation.cfm?id=2591204.2591205&coll=DL&dl=GUIDE&CFID=514607536&CFTOKEN=40141344 or http://dx.doi.org/10.1109/TNET.2012.2210729 Mobile nodes in military environments such as a battlefield or a hostile region are likely to suffer from intermittent network connectivity and frequent partitions. Disruption-tolerant network (DTN) technologies are becoming successful solutions that allow wireless devices carried by soldiers to communicate with each other and access the confidential information or command reliably by exploiting external storage nodes. Some of the most challenging issues in this scenario are the enforcement of authorization policies and the policies update for secure data retrieval. Ciphertext-policy attribute-based encryption (CP-ABE) is a promising cryptographic solution to the access control issues. However, the problem of applying CP-ABE in decentralized DTNs introduces several security and privacy challenges with regard to the attribute revocation, key escrow, and coordination of attributes issued from different authorities. In this paper, we propose a secure data retrieval scheme using CP-ABE for decentralized DTNs where multiple key authorities manage their attributes independently. We demonstrate how to apply the proposed mechanism to securely and efficiently manage the confidential data distributed in the disruption-tolerant military network. Keywords: (not provided)
  • Qin Liu, Guojun Wang, Jie Wu, "Time-based Proxy Re-Encryption Scheme For Secure Data Sharing In A Cloud Environment," Information Sciences: an International Journal, Volume 258, February, 2014, Pages 355-370. (ID#:14-1791) URL: http://dl.acm.org/citation.cfm?id=2563733.2564106&coll=DL&dl=GUIDE&CFID=514607536&CFTOKEN=40141344 or http://dx.doi.org/10.1016/j.ins.2012.09.034 To simultaneously achieve fine-grained access control on encrypted data and scalable user revocation, existing work combines attribute-based encryption (ABE) and proxy re-encryption (PRE) to delegate the cloud service provider (CSP) to execute re-encryption. Keywords: Attribute-based encryption, Cloud computing, Proxy re-encryption, Time
  • Mordechai Guri, Gabi Kedma, Buky Carmeli, Yuval Elovici, "Limiting Access To Unintentionally Leaked Sensitive Documents Using Malware Signatures," (ID#:14-1793) URL: http://dl.acm.org/citation.cfm?id=2613087.2613103&coll=DL&dl=GUIDE&CFID=514607536&CFTOKEN=40141344 or http://dx.doi.org/10.1145/2613087.2613103 Organizations are repeatedly embarrassed when their sensitive digital documents go public or fall into the hands of adversaries, often as a result of unintentional or inadvertent leakage. Such leakage has been traditionally handled either by preventive means, which are evidently not hermetic, or by punitive measures taken after the main damage has already been done. Yet, the challenge of preventing a leaked file from spreading further among computers and over the Internet is not resolved by existing approaches. This paper presents a novel method, which aims at reducing and limiting the potential damage of a leakage that has already occurred. The main idea is to tag sensitive documents within the organization's boundaries by attaching a benign detectable malware signature (DMS). While the DMS is masked inside the organization, if a tagged document is somehow leaked out of the organization's boundaries, common security services such as Anti-Virus (AV) programs, firewalls or email gateways will detect the file as a real threat and will consequently delete or quarantine it, preventing it from spreading further. This paper discusses various aspects of the DMS, such as signature type and attachment techniques, along with proper design considerations and implementation issues. The proposed method was implemented and successfully tested on various file types including documents, spreadsheets, presentations, images, executable binaries and textual source code. The evaluation results have demonstrated its effectiveness in limiting the spread of leaked documents. Keywords: anti-virus program, data leakage, detectable malware signature, sensitive document
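The DMS mechanics, tag a document with a signature that perimeter AV engines flag while masking it internally, can be sketched in a few functions. This is a toy illustration of the idea only: the signature string below is a made-up placeholder (a real deployment would use a string that anti-virus engines actually detect), and masking here is simple removal rather than the paper's techniques.

```python
# Hypothetical benign signature; NOT a string any real AV engine detects.
DMS = b"X-DMS-TAGGED-SENSITIVE-DOCUMENT-SIGNATURE"

def tag_document(doc: bytes) -> bytes:
    """Attach the detectable signature to a sensitive document (idempotent)."""
    return doc if doc.endswith(DMS) else doc + DMS

def perimeter_scan(doc: bytes) -> str:
    """What an AV gateway OUTSIDE the organization would do with a leaked copy."""
    return "quarantine" if DMS in doc else "pass"

def internal_mask(doc: bytes) -> bytes:
    """Inside the organization the signature is masked so local tools stay quiet."""
    return doc.replace(DMS, b"")
```

The point of the design is that the quarantine step needs no cooperation from outside parties: existing AV, firewalls, and mail gateways do the containment for free.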
  • John Criswell, Nathan Dautenhahn, Vikram Adve, "Virtual Ghost: Protecting Applications From Hostile Operating Systems," ASPLOS '14 Proceedings of the 19th International Conference On Architectural Support For Programming Languages And Operating Systems , February 2014, Pages 81-96. (ID#:14-1794) URL: http://dl.acm.org/citation.cfm?id=2541940.2541986&coll=DL&dl=GUIDE&CFID=514607536&CFTOKEN=40141344 or http://dx.doi.org/10.1145/2541940.2541986 Applications that process sensitive data can be carefully designed and validated to be difficult to attack, but they are usually run on monolithic, commodity operating systems, which may be less secure. An OS compromise gives the attacker complete access to all of an application's data, regardless of how well the application is built. We propose a new system, Virtual Ghost, that protects applications from a compromised or even hostile OS. Virtual Ghost is the first system to do so by combining compiler instrumentation and run-time checks on operating system code, which it uses to create ghost memory that the operating system cannot read or write. Virtual Ghost interposes a thin hardware abstraction layer between the kernel and the hardware that provides a set of operations that the kernel must use to manipulate hardware, and provides a few trusted services for secure applications such as ghost memory management, encryption and signing services, and key management. Unlike previous solutions, Virtual Ghost does not use a higher privilege level than the kernel. Virtual Ghost performs well compared to previous approaches; it outperforms InkTag on five out of seven of the LMBench microbenchmarks with improvements between 1.3x and 14.3x. For network downloads, Virtual Ghost experiences a 45% reduction in bandwidth at most for small files and nearly no reduction in bandwidth for large files and web traffic. 
An application we modified to use ghost memory shows a maximum additional overhead of 5% due to the Virtual Ghost protections. We also demonstrate Virtual Ghost's efficacy by showing how it defeats sophisticated rootkit attacks. Keywords: control-flow integrity, inlined reference monitors, malicious operating systems, software fault isolation, software security



Swarm Intelligence Security



Swarm Intelligence is a concept that uses the metaphor of insect colonies to describe decentralized, self-organized systems. The method is often used in artificial intelligence, and there are about a dozen variants ranging from ant colony optimization to stochastic diffusion. For cybersecurity, these systems have significant value both offensively and defensively. The research cited below, published in the first half of 2014, focuses on botnets and malware, intrusion detection, cryptanalysis, and security risk analysis.

  • Dadhich, A; Gupta, A; Yadav, S., "Swarm Intelligence based linear cryptanalysis of four-round Data Encryption Standard algorithm," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, vol., no., pp.378,383, 7-8 Feb. 2014. (ID#:14-1807) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781312&isnumber=6781240 The proliferation of computers, internet and wireless communication capabilities into the physical world has led to ubiquitous availability of computing infrastructure. With the expanding number and type of internet capable devices and the enlarged physical space of distributed and cloud computing, computer systems are evolving into complex and pervasive networks. Amidst this rapid growth in technology, secure transmission of data is equally important. The amount of sensitive information deposited and transmitted over the internet is absolutely critical and needs principles that enforce legal and restricted use and interpretation of data. The data needs to be protected from eavesdroppers and potential attackers who undermine the security processes and perform actions in excess of their permissions. Cryptography algorithms form a central component of the security mechanisms used to safeguard network transmissions and data storage. Because the security of encrypted data largely depends on the techniques used to create, manage, and distribute the keys, a cryptographic algorithm might be rendered useless by poor management of the keys. This paper presents a novel computational intelligence based approach for known ciphertext-only cryptanalysis of the four-round Data Encryption Standard algorithm. In a ciphertext-only attack, the encryption algorithm used and the ciphertext to be decoded are known to the cryptanalyst; this is termed the most difficult attack encountered in cryptanalysis.
The proposed approach uses Swarm Intelligence to deduce optimum keys according to their fitness values and identifies the best keys through a statistical probability based fitness function. The results suggest that the proposed approach is intelligent in finding missing key bits of the Data Encryption Standard algorithm. Keywords: cloud computing; cryptography; probability; statistical analysis; swarm intelligence; Internet; ciphertext-only attack; ciphertext-only cryptanalysis; computational intelligence based approach; data storage; distributed computing; four-round data encryption standard algorithm; network transmissions; secure data transmission; statistical probability based fitness function; swarm intelligence based linear cryptanalysis; MATLAB; NIST; Ciphertext; Cryptanalysis; Information Security; Language model; Particle Swarm Optimization; Plaintext; Swarm Intelligence
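The swarm search over candidate keys can be illustrated with binary Particle Swarm Optimization, the kind of swarm-driven key deduction the paper describes. In the paper the fitness is a statistical test on text decrypted under the candidate key; the toy oracle below simply counts recovered bits against a hidden reference key, and the key length and all swarm parameters are illustrative assumptions.

```python
import math, random

random.seed(7)
KEY_BITS = 16
TRUE_KEY = [random.randint(0, 1) for _ in range(KEY_BITS)]  # hidden reference key

def fitness(candidate):
    """Toy stand-in for the paper's statistical fitness on decrypted ciphertext:
    simply the fraction of key bits recovered."""
    return sum(a == b for a, b in zip(candidate, TRUE_KEY)) / KEY_BITS

def binary_pso(swarm_size=20, iters=150, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.randint(0, 1) for _ in range(KEY_BITS)] for _ in range(swarm_size)]
    vel = [[0.0] * KEY_BITS for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = max(range(swarm_size), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(swarm_size):
            for d in range(KEY_BITS):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Sigmoid of the velocity is the probability the bit is set to 1.
                pos[i][d] = 1 if random.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

The sigmoid-velocity update is the standard binary PSO rule; swapping in a ciphertext-statistics fitness function turns the same loop into the paper's cryptanalytic search.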
  • Fink, Glenn A.; Haack, Jereme N.; McKinnon, A. David; Fulp, Errin W., "Defense on the Move: Ant-Based Cyber Defense," Security & Privacy, IEEE, vol.12, no.2, pp.36,43, Mar.-Apr. 2014. (ID#:14-1808) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798536&isnumber=6798534 Many common cyberdefenses (like firewalls and intrusion-detection systems) are static, giving attackers the freedom to probe them at will. Moving-target defense (MTD) adds dynamism, putting the systems to be defended in motion, potentially at great cost to the defender. An alternative approach is a mobile resilient defense that removes attackers' ability to rely on prior experience without requiring motion in the protected infrastructure. The defensive technology absorbs most of the cost of motion, is resilient to attack, and is unpredictable to attackers. The authors' mobile resilient defense, Ant-Based Cyber Defense (ABCD), is a set of roaming, bio-inspired, digital-ant agents working with stationary agents in a hierarchy headed by a human supervisor. ABCD provides a resilient, extensible, and flexible defense that can scale to large, multi-enterprise infrastructures such as the smart electric grid. Keywords: Computer crime; Computer security; Cyberspace; Database systems; Detectors; Malware; Mobile communication; Particle swarm intelligence; Statistics; Target tracking; MTD; cybersecurity; digital ants; moving-target defense; swarm intelligence
  • Aniello Castiglione, Roberto De Prisco, Alfredo De Santis, Ugo Fiore, Francesco Palmieri, "A Botnet-Based Command And Control Approach Relying On Swarm Intelligence," Journal of Network and Computer Applications, Volume 38, February, 2014, Pages 22-33. (ID#:14-1809) URL: http://dl.acm.org/citation.cfm?id=2567003.2567217&coll=DL&dl=GUIDE&CFID=390598023&CFTOKEN=68395339 This work presents a new botnet-based command and control architecture that addresses the survivability and scalability challenges of ubiquitous networked systems deployed in questionable communication contexts. The architecture does away with rigid master-slave relationships and makes bot operating roles autonomous. It relies on swarm intelligence, especially stigmergic communication, to provide fault tolerance, dynamic adaptation, and spontaneous yet complete coordination and collaboration among the autonomous bot agents. Keywords: Ant colony optimization, Botnets, Command and control, Malware-based management for homeland defense, Stigmergy, Swarm intelligence
  • Abhishek Gupta, Om Jee Pandey, Mahendra Shukla, Anjali Dadhich, Anup Ingle, Vishal Ambhore, "Intelligent Perpetual Echo Attack Detection on User Datagram Protocol Port 7 Using Ant Colony Optimization," ICESC '14 Proceedings of the 2014 International Conference on Electronic Systems, Signal Processing and Computing Technologies, January 2014, Pages 419-424. (ID#:14-1810) URL: http://dl.acm.org/citation.cfm?id=2586119.2587455&coll=DL&dl=GUIDE&CFID=390598023&CFTOKEN=68395339 The escalating complexity of computer networks on a daily basis has increased the probability of malicious exploitation. Even a rare vulnerability in a single computer might compromise the network security of an entire organisation. Intrusion Detection Systems form an integral component of the mechanisms designed to protect internet and data communication systems from such attacks. The attacks on the network comprise information gathering and modification through unauthorized access to resources and denial of service to legitimate users. IDS play a key role in detecting patterns of behaviour on the network that might be indicative of impending attacks. The majority of groundbreaking research on IDS is carried out on the KDD'99 dataset and focuses on either all the attacks in the network or the attacks corresponding to the TCP/IP protocol. This paper presents a step forward in this direction, where the IDS model addresses a specific part of the network attacks commonly detected at port 7 in UDP. Port scans in UDP account for a sizable portion of the internet traffic, and comparatively little research characterizes security in UDP port scan activity. To meet the growing trend of attacks and other security challenges in the constantly evolving internet arena, this paper presents a computationally intelligent intrusion detection mechanism using the swarm intelligence paradigm, particularly ant colony optimisation, to analyze sample network traces in UDP port scans.
This work aims at generating customised and efficient network intrusion detection systems using soft computing to increase general network security through specific network security. Keywords: Intrusion Detection Systems (IDS), port scans, User Datagram Protocol (UDP), network security, attacks, Ant Colony Optimisation (ACO), perpetual echo
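ACO's core loop, probabilistic solution construction guided by pheromone and a heuristic, followed by pheromone reinforcement of good solutions, is easiest to see on a toy shortest-tour instance; the paper applies the same loop to score UDP trace features. Everything below (the distance matrix, the parameters alpha, beta, rho, Q) is an illustrative assumption, not the paper's model.

```python
import itertools, math, random

random.seed(1)
# Toy symmetric cost matrix over 5 nodes; a stand-in for the cost structure an
# IDS would build over trace features.
D = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]
N = len(D)

def tour_len(t):
    return sum(D[t[i]][t[(i + 1) % N]] for i in range(N))

def aco(n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, Q=100.0):
    tau = [[1.0] * N for _ in range(N)]            # pheromone trails
    best, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(N)]
            while len(tour) < N:
                i = tour[-1]
                choices = [j for j in range(N) if j not in tour]
                # Transition probability ~ pheromone^alpha * (1/cost)^beta
                weights = [tau[i][j] ** alpha * (1.0 / D[i][j]) ** beta for j in choices]
                tour.append(random.choices(choices, weights)[0])
            tours.append(tour)
        for t in tours:
            if tour_len(t) < best_len:
                best, best_len = t, tour_len(t)
        tau = [[(1 - rho) * x for x in row] for row in tau]   # evaporation
        for t in tours:                                       # deposit by quality
            L = tour_len(t)
            for i in range(N):
                a, b = t[i], t[(i + 1) % N]
                tau[a][b] += Q / L
                tau[b][a] += Q / L
    return best, best_len
```

On this tiny instance the colony reliably recovers the optimal tour; the same construct-evaluate-reinforce cycle is what gets specialized to port-scan detection in the paper.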
  • Alexandros Giagkos, Myra S. Wilson, "BeeIP - A Swarm Intelligence Based Routing For Wireless Ad Hoc Networks," Information Sciences: an International Journal, Volume 265, May, 2014, Pages 23-35. (ID#:14-1811) URL: http://dl.acm.org/citation.cfm?id=2580107.2580277&coll=DL&dl=GUIDE&CFID=390598023&CFTOKEN=68395339 This paper takes a detailed look at Swarm Intelligence-based routing protocols, as well as a newly-proposed routing protocol which aims to deliver wireless ad hoc multi-path routing for mobile nodes. Keywords: Ant-inspired, Bee-inspired, Mobile ad hoc network, Sensor network, Swarm intelligence, Wireless
  • Fangjun Kuang, Weihong Xu, Siyang Zhang, "A Novel Hybrid KPCA and SVM With GA Model For Intrusion Detection," Applied Soft Computing, Volume 18, May, 2014, Pages 178-184. (ID#:14-1812) URL: http://dl.acm.org/citation.cfm?id=2611832.2611904&coll=DL&dl=GUIDE&CFID=390598023&CFTOKEN=68395339 The authors propose an intrusion-detection model that combines a support vector machine (SVM) with kernel principal component analysis (KPCA) for feature extraction and a genetic algorithm (GA) for parameter optimization. Experimental results are detailed, showing how the proposed model achieves higher predictive accuracy, faster convergence, and significantly better generalization. Keywords: Genetic algorithm, Intrusion detection, Kernel function, Kernel principal component analysis, Support vector machines
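The GA's role in such hybrids is to search the SVM parameter space (e.g. penalty C and kernel width gamma) for the best cross-validated score. The KPCA+SVM pipeline itself needs a learning library, so the sketch below replaces it with a toy stand-in objective whose peak (C=3.0, gamma=0.5) is a hypothetical choice; the selection, crossover, and mutation scheme is a generic GA, not necessarily the paper's.

```python
import random

random.seed(3)

def cv_accuracy(C, gamma):
    """Toy stand-in for cross-validated accuracy of a KPCA+SVM pipeline;
    peaks at the (hypothetical) best parameters C=3.0, gamma=0.5."""
    return 1.0 - (C - 3.0) ** 2 / 25.0 - (gamma - 0.5) ** 2

def ga(pop_size=30, gens=60, sigma=(0.3, 0.05)):
    # Each individual is a (C, gamma) pair.
    pop = [(random.uniform(0.1, 10), random.uniform(0.01, 1)) for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda p: cv_accuracy(*p), reverse=True)
        elite = scored[: pop_size // 3]          # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            w = random.random()                  # blend crossover
            child = tuple(w * x + (1 - w) * y for x, y in zip(a, b))
            child = tuple(max(1e-3, g + random.gauss(0, s))   # gaussian mutation
                          for g, s in zip(child, sigma))
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda p: cv_accuracy(*p))
```

Replacing `cv_accuracy` with an actual cross-validation run of the KPCA+SVM pipeline recovers the paper's setup.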
  • Nan Feng, Harry Jiannan Wang, Minqiang Li, "A Security Risk Analysis Model for Information Systems: Causal Relationships Of Risk Factors And Vulnerability Propagation Analysis," Information Sciences: an International Journal, Volume 256, January, 2014, Pages 57-73. (ID#:14-1813) URL: http://dl.acm.org/citation.cfm?id=2542832.2543066&coll=DL&dl=GUIDE&CFID=390598023&CFTOKEN=68395339 A novel security risk analysis model (SRAM) is proposed in this paper, which aims to identify causal relationships among risk factors and to analyze vulnerability propagation. Keywords: Ant colony optimization, Bayesian networks, Information systems, Security risk, Vulnerability propagation
  • Joanna Kolodziej, Samee Ullah Khan, Lizhe Wang, Marek Kisiel-Dorohinicki, Sajjad A. Madani, Ewa Niewiadomska-Szynkiewicz, Albert Y. Zomaya, Cheng-Zhong Xu, "Security, Energy, And Performance-Aware Resource Allocation Mechanisms For Computational Grids," Future Generation Computer Systems, Volume 31, February, 2014, Pages 77-92. (ID#:14-1814) URL: http://dl.acm.org/citation.cfm?id=2564944.2565285&coll=DL&dl=GUIDE&CFID=390598023&CFTOKEN=68395339 This paper recognizes the challenges of modeling relationships between computing hardware and physical environments for Distributed Cyber Physical Systems (DCPSs), to ensure efficiency, thermal safety, and continuous operation. The authors of this paper use the Dynamic Voltage Scaling (DVS) methodology to reduce power strain by system resources. Discussed are the developed algorithms and heuristics, and their effectiveness as a solution, for the optimization challenge. Keywords: Distributed cyber physical systems, Dynamic voltage scaling, Energy optimization, Evolutionary algorithm, Resource reliability, Scheduling, Secure computational grid
  • Gupta, A; Pandey, O.J.; Shukla, M.; Dadhich, A; Ingle, A; Ambhore, V., "Intelligent Perpetual Echo Attack Detection on User Datagram Protocol Port 7 Using Ant Colony Optimization," Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference on, vol., no., pp.419,424, 9-11 Jan. 2014. (ID#:14-1815) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6745415&isnumber=6745317 The escalating complexity of computer networks on a daily basis has increased the probability of malicious exploitation. Even a rare vulnerability in a single computer might compromise the network security of an entire organisation. Intrusion Detection Systems form an integral component of the mechanisms designed to protect internet and data communication systems from such attacks. The attacks on the network comprise information gathering and modification through unauthorized access to resources and denial of service to legitimate users. IDS play a key role in detecting patterns of behaviour on the network that might be indicative of impending attacks. The majority of groundbreaking research on IDS is carried out on the KDD'99 dataset and focuses on either all the attacks in the network or the attacks corresponding to the TCP/IP protocol. This paper presents a step forward in this direction, where the IDS model addresses a specific part of the network attacks commonly detected at port 7 in UDP. Port scans in UDP account for a sizable portion of the Internet traffic, and comparatively little research characterizes security in UDP port scan activity. To meet the growing trend of attacks and other security challenges in the constantly evolving internet arena, this paper presents a computationally intelligent intrusion detection mechanism using the swarm intelligence paradigm, particularly ant colony optimization, to analyze sample network traces in UDP port scans.
This work aims at generating customized and efficient network intrusion detection systems using soft computing to increase general network security through specific network security. Keywords: ant colony optimization; computer network security; transport protocols; Internet traffic; TCP/IP protocol; computer networks escalating complexity; denial of service; intelligent intrusion detection mechanism; intelligent perpetual echo attack detection; malicious exploitation probability; unauthorized access; user datagram protocol port 7; Computers; Internet; Intrusion detection; Ports (Computers); Protocols; Real-time systems; Ant Colony Optimization (ACO); Intrusion Detection Systems (IDS); User Datagram Protocol (UDP); attacks; network security; perpetual echo; port scans
  • Zhongshan Zhang; Keping Long; Jianping Wang; Dressler, F., "On Swarm Intelligence Inspired Self-Organized Networking: Its Bionic Mechanisms, Designing Principles and Optimization Approaches," Communications Surveys & Tutorials, IEEE , vol.16, no.1, pp.513,537, First Quarter 2014. (ID#:14-1816) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6553299&isnumber=6734841 Inspired by swarm intelligence observed in social species, the artificial self-organized networking (SON) systems are expected to exhibit some intelligent features (e.g., flexibility, robustness, decentralized control, and self-evolution, etc.) that may have made social species so successful in the biosphere. Self-organized networks with swarm intelligence as one possible solution have attracted a lot of attention from both academia and industry. In this paper, we survey different aspects of bio-inspired mechanisms and examine various algorithms that have been applied to artificial SON systems. The existing well-known bio-inspired algorithms such as pulse-coupled oscillators (PCO)-based synchronization, ant- and/or bee-inspired cooperation and division of labor, immune systems inspired network security and Ant Colony Optimization (ACO)-based multipath routing have been surveyed and compared. The main contributions of this survey include 1) providing principles and optimization approaches of variant bio-inspired algorithms, 2) surveying and comparing critical SON issues from the perspective of physical-layer, Media Access Control (MAC)-layer and network-layer operations, and 3) discussing advantages, drawbacks, and further design challenges of variant algorithms, and then identifying their new directions and applications. 
In consideration of the development trends of communications networks (e.g., large-scale, heterogeneity, spectrum scarcity, etc.), some open research issues, including SON designing tradeoffs, Self-X capabilities in the 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE)/LTE-Advanced systems, cognitive machine-to-machine (M2M) self-optimization, cross-layer design, resource scheduling, and power control, etc., are also discussed in this survey. Keywords: 3G mobile communication; Long Term Evolution; ant colony optimization; cooperative communication; oscillators; power control; scheduling; synchronization; telecommunication network routing; telecommunication security; 3GPP; 3rd Generation Partnership Project; LTE-Advanced systems; Long Term Evolution; MAC layer; PCO; ant colony optimization-based multipath routing; ant-inspired cooperation; artificial SON systems; artificial self-organized networking; bee-inspired cooperation; bio-inspired mechanisms; bionic mechanisms; cognitive machine-to-machine self-optimization; cross-layer design; immune systems inspired network security; media access control layer; network layer; physical layer; power control; pulse coupled oscillators-based synchronization; resource scheduling; swarm intelligence inspired self-organized networking; Adaptive Routing; Bio-Inspired; Cognitive Radio; Cooperation; Heterogeneous; Load Balancing; Machine-to-Machine; Network Security; Self-Organized Networking; Swarm Intelligence; Synchronization
  • Enache, Adriana-Cristina; Patriciu, Victor Valeriu, "Intrusions detection based on Support Vector Machine optimized with swarm intelligence," Applied Computational Intelligence and Informatics (SACI), 2014 IEEE 9th International Symposium on , vol., no., pp.153,158, 15-17 May 2014. (ID#:14-1817) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6840052&isnumber=6840030 Intrusion Detection Systems (IDS) have become a necessary component of almost every security infrastructure. Recently, Support Vector Machines (SVM) have been employed to provide potential solutions for IDS. With its many variants for classification, SVM is a state-of-the-art machine learning algorithm. However, the performance of SVM depends on the selection of appropriate parameters. In this paper we propose an IDS model based on Information Gain for feature selection combined with the SVM classifier. The parameters for SVM will be selected by a swarm intelligence algorithm (Particle Swarm Optimization or Artificial Bee Colony). We use the NSL-KDD data set and show that our model can achieve a higher detection rate and lower false alarm rate than regular SVM. Keywords: ABC and NSL-KDD; Intrusion Detection; PSO; SVM
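The parameter-selection step described in the entry above can be illustrated with a minimal particle swarm optimization loop. This is a hedged sketch, not the authors' implementation: the `err` function below is a hypothetical stand-in for the cross-validated error of an SVM as a function of its (log C, log gamma) parameters, and all weights and bounds are illustrative defaults.

```python
import random

def pso_minimize(objective, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimization over a box-bounded search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialise particle positions uniformly inside the bounds; velocities start at zero.
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive, and social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp each coordinate back into its bound.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical stand-in for cross-validated SVM error over (log C, log gamma);
# its minimum sits at (1.0, -2.0) on this toy surface.
err = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best, val = pso_minimize(err, [(-5, 5), (-5, 5)])
```

In the paper's setting, `err` would instead run an SVM train/validate cycle on NSL-KDD features selected by Information Gain.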
  • Weiming Hu; Jun Gao; Yanguo Wang; Ou Wu; Maybank, S., "Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection," Cybernetics, IEEE Transactions on , vol.44, no.1, pp.66,82, Jan. 2014. (ID#:14-1818) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6488798&isnumber=6683070 Current network intrusion detection systems lack adaptability to the frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used where decision stumps are used as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.
Keywords: Gaussian processes; computer architecture; computer network security; distributed processing; learning (artificial intelligence); particle swarm optimization; support vector machines; GMM; PSO; SVM-based algorithm; distributed architectures; dynamic distributed network intrusion detection; local parameterized detection model; network attack detection; network information security; online Adaboost process; online Adaboost-based intrusion detection algorithms; online Adaboost-based parameterized methods; online Gaussian mixture models; particle swarm optimization; support vector machines; weak classifiers; Dynamic distributed detection; network intrusions; online Adaboost learning; parameterized model
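The first of the two algorithms above (traditional AdaBoost with decision stumps as weak classifiers) can be sketched in a few lines. This is an illustrative batch version, not the paper's online variant; the one-dimensional "connection feature" data and the function names are invented for the example.

```python
import math

def adaboost_stumps(X, y, rounds=10):
    """Train AdaBoost using 1-D threshold 'decision stumps' as weak classifiers.
    X: list of feature values, y: labels in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n                       # example weights
    ensemble = []                           # list of (alpha, threshold, polarity)
    thresholds = sorted(set(X))
    for _ in range(rounds):
        best = None
        # Exhaustively pick the stump with the lowest weighted error.
        for t in thresholds:
            for pol in (1, -1):
                preds = [pol if x > t else -pol for x in X]
                e = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                if best is None or e < best[0]:
                    best = (e, t, pol, preds)
        e, t, pol, preds = best
        if e >= 0.5:
            break                           # no weak learner better than chance
        e = max(e, 1e-10)                   # avoid division by zero
        alpha = 0.5 * math.log((1 - e) / e)
        ensemble.append((alpha, t, pol))
        # Re-weight: boost the weight of misclassified examples.
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (pol if x > t else -pol) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy "connection feature" data: values above 5 labelled anomalous (+1).
X = [1, 2, 3, 4, 6, 7, 8, 9]
y = [-1, -1, -1, -1, 1, 1, 1, 1]
model = adaboost_stumps(X, y)
```

The paper's online variant updates the weights and weak classifiers incrementally per sample; the weighting and alpha computation shown here are the shared core.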
  • Zafar, S.; Soni, M.K., "Trust based QOS protocol (TBQP) using meta-heuristic genetic algorithm for optimizing and securing MANET," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on , vol., no., pp.173,177, 6-8 Feb. 2014. (ID#:14-1819) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798315&isnumber=6798279 This paper presents a prospective approach to developing a trust based QOS protocol (TBQP) using a meta-heuristic genetic algorithm for optimizing and securing MANETs. The genetic algorithm helps maintain Quality of Service (QOS) by selecting the fittest, i.e., shortest, route, hence providing better performance. Intelligent optimization approaches, or meta-heuristic algorithms, such as genetic algorithms (GA), neural networks (NN) based on artificial intelligence (AI), the particle swarm optimization (PSO) technique, and simulated annealing (SA), have addressed QOS issues well in recent years. Ad-hoc networks face the primary challenge of restraining attacks against data, such as unauthorized data modification and impersonation, caused by malicious nodes in the network. This problem is tackled by the trust application in our proposed approach, which helps secure ad-hoc networks. Keywords: artificial intelligence; genetic algorithms; mobile ad hoc networks; mobile computing; neural nets; protocols; quality of service; simulated annealing; telecommunication security; AI; MANET optimization; MANET security; PSO; TBQP; ad-hoc networks; artificial intelligence; impersonation; intelligent optimization approaches; malicious nodes; metaheuristic genetic algorithm; neural networks; particle swarm optimization; quality of service; route; simulated annealing; trust application; trust based QOS protocol; unauthorized data modification; Ad hoc networks; Encryption; Face; Mobile communication; Mobile computing; Quality of service; Routing; Meta-Heuristic Algorithm; security challenges confronted by MANET; trust; user authentication
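The "fittest, i.e., shortest, route" selection that TBQP delegates to a genetic algorithm can be sketched as follows. This is a hedged toy version, not the authors' protocol: routes are encoded as visiting orders over a small distance matrix, selection keeps the shortest half of the population, mutation swaps two hops, and the trust evaluation is omitted entirely.

```python
import random

def ga_shortest_route(dist, iters=200, pop_size=30, seed=1):
    """Toy genetic algorithm: evolve visiting orders of intermediate nodes,
    with fitness = total route length (shorter is fitter), as in a
    TBQP-style fittest-route selection. dist[i][j] is the hop cost."""
    rng = random.Random(seed)
    n = len(dist)
    nodes = list(range(1, n))                 # node 0 is both source and sink

    def length(order):
        route = [0] + order + [0]
        return sum(dist[a][b] for a, b in zip(route, route[1:]))

    # Initial population: random visiting orders.
    pop = [rng.sample(nodes, len(nodes)) for _ in range(pop_size)]
    for _ in range(iters):
        pop.sort(key=length)
        survivors = pop[:pop_size // 2]       # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            child = rng.choice(survivors)[:]
            i, j = rng.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]   # mutation: swap two hops
            children.append(child)
        pop = survivors + children
    best = min(pop, key=length)
    return best, length(best)
```

A real TBQP-style fitness function would combine route length with per-node trust values; here fitness is route length alone.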

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Theoretical Cryptography

Theoretical Cryptography


Cryptography can exist only if there is a mathematical hardness constructed so as to maintain a desired functionality, even under malicious attempts to change or destroy the prescribed functionality. Hence the foundations of theoretical cryptography are the paradigms, approaches, and techniques used to conceptualize, define, and provide solutions to natural "security concerns" mathematically, using probability-based definitions, various constructions, complexity-theoretic primitives, and proofs of security. Research into theoretical cryptography addresses the question of how to get from X to Z without allowing the adversary to go backwards from Z to X. The work presented here covers a range of approaches and methods, including block ciphers, random grids, obfuscation, and provable security.

  • Xiaotian Wu, Wei Sun, "Improved Tagged Visual Cryptography By Random Grids," Signal Processing, Volume 97, April, 2014, Pages 64-82. (ID#:14-1820) Available at: http://dl.acm.org/citation.cfm?id=2565617.2565792&coll=DL&dl=GUIDE&CFID=511224941&CFTOKEN=63502203 This paper introduces a (k,n) Tagged Visual Cryptography (TVC) scheme using the concept of a random grid (RG). The method addresses known drawbacks of (k,n) TVC, such as pixel expansion and the need for code-book-based encoding, and also provides cheating prevention. Keywords: Cheat preventing, Random grid, Tagged-share, Visual cryptography, Visual secret sharing
  • Andrey Bogdanov, Elif Bilge Kavun, Elmar Tischhauser, Tolga Yalcin, "Large-Scale High-Resolution Computational Validation of Novel Complexity Models In Linear Cryptanalysis," Journal of Computational and Applied Mathematics, Volume 259, March, 2014, Pages 592-598. (ID#:14-1821) Available at: http://dl.acm.org/citation.cfm?id=2542822.2563814&coll=DL&dl=GUIDE&CFID=511224941&CFTOKEN=63502203 This paper studies a new theoretical model, based on an enhanced wrong-key randomization hypothesis, for evaluating the complexity of linear cryptanalysis attacks. The paper examines this model for larger ciphers (block sizes of 32 bits and above), as the model had previously been verified only for ciphers with a 20-bit block size. Keywords: Block ciphers, Data complexity, Linear cryptanalysis, Wrong key randomization hypothesis, Theoretical cryptography
  • Vipul Goyal, Rafail Ostrovsky, Alessandra Scafuro, Ivan Visconti, "Black-box Non-Black-Box Zero Knowledge," STOC '14 Proceedings of the 46th Annual ACM Symposium on Theory of Computing, May 2014, Pages 515-524. (ID#:14-1822) Available at: http://dl.acm.org/citation.cfm?id=2591796.2591879&coll=DL&dl=GUIDE&CFID=511224941&CFTOKEN=63502203 Motivated by theoretical and practical interest, the challenging task of designing cryptographic protocols having only black-box access to primitives has generated various breakthroughs in the last decade. Despite such positive results, even though nowadays we know black-box constructions for secure two-party and multi-party computation even in constant rounds, there still are in Cryptography several constructions that critically require non-black-box use of primitives in order to securely realize some fundamental tasks. As such, the study of the gap between black-box and non-black-box constructions still includes major open questions. In this work we make progress towards filling the above gap. We consider the case of black-box constructions for computations requiring that even the size of the input of a player remains hidden. We show how to commit to a string of arbitrary size and to prove statements over the bits of the string. Both the commitment and the proof are succinct, hide the input size and use standard primitives in a black-box way. We achieve such a result by giving a black-box construction of an extendable Merkle tree that relies on a novel use of the "MPC in the head" paradigm of Ishai et al. [STOC 2007]. We show the power of our new techniques by giving the first black-box constant-round public-coin zero knowledge argument for NP. To achieve this result we use the non-black-box simulation technique introduced by Barak [FOCS 2001], the PCP of Proximity introduced by Ben-Sasson et al. [STOC 2004], together with a black-box public-coin witness indistinguishable universal argument that we construct along the way.
Keywords: black-box use of primitives, cryptography, input-size hiding protocols, public-coin zero-knowledge, Theoretical cryptography
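The extendable Merkle tree at the heart of the construction above follows the standard commit/open pattern, which a short sketch makes concrete. This is a generic Merkle tree over SHA-256 (a hash chosen here purely for illustration), not the authors' black-box "MPC in the head" construction: committing to the root commits to every leaf, and any single leaf can later be opened with a logarithmic-size authentication path.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Merkle-tree root over a list of byte-string leaves."""
    level = [h(b"\x00" + leaf) for leaf in leaves]    # domain-separate leaves
    while len(level) > 1:
        if len(level) % 2:                            # duplicate last node if odd
            level.append(level[-1])
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Authentication path for leaves[index]: the sibling hash at each level."""
    level = [h(b"\x00" + leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[index ^ 1])                 # sibling of current node
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return path

def merkle_verify(root, leaf, index, path):
    """Recompute the root from a claimed leaf and its authentication path."""
    node = h(b"\x00" + leaf)
    for sib in path:
        node = h(b"\x01" + node + sib) if index % 2 == 0 else h(b"\x01" + sib + node)
        index //= 2
    return node == root
```

The "extendable" property in the paper additionally allows appending leaves without recommitting; this sketch shows only the basic commit-and-open mechanics.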
  • Shafi Goldwasser, Guy N. Rothblum, "On Best-Possible Obfuscation," Journal of Cryptology, Volume 27 Issue 3, July 2014, Pages 480-505. (ID#:14-1823) Available at: http://dl.acm.org/citation.cfm?id=2628702.2628716&coll=DL&dl=GUIDE&CFID=511224941&CFTOKEN=63502203 This work explores the notion of best-possible obfuscation, which rivals current black-box obfuscation. Contrary to black-box obfuscation, which leaks no information, best-possible obfuscation leaks as little information as any other program of similar size and functionality. The idea is that any information not hidden by the obfuscated program is similarly not hidden by any other comparable program. Results of the study are discussed in detail.
  • Keita Emura, Goichiro Hanaoka, Yunlei Zhao, Proceedings Of The 2nd ACM Workshop On ASIA Public-Key Cryptography, Kyoto, Japan -- June 03 - 06, 2014. (ID#:14-1824) Available at: http://dl.acm.org/citation.cfm?id=2600694&coll=DL&dl=GUIDE&CFID=511224941&CFTOKEN=63502203 It is our great pleasure to welcome you to The 2nd ACM Asia Public-Key Cryptography Workshop -- AsiaPKC'14, held on June 3, 2014, in conjunction with The 9th ACM Symposium on Information, Computer and Communications Security (ASIACCS'14). Public key cryptography plays an essential role in ensuring many security properties required in data processing of various kinds. The theme of this workshop is novel public key cryptosystems for solving a wide range of real-life application problems. This workshop solicits original contributions on both applied and theoretical aspects of public key cryptography. The call for papers attracted 22 submissions from Asia, Europe, North America, and South America. The program committee accepted 6 papers based on their overall quality and novelty (acceptance ratio: 27%). In addition, the program includes an invited talk by Dr. Miyako Ohkubo of the National Institute of Information and Communications Technology (NICT), Japan ("Introduction of Structure-Preserving Signatures"). We hope these proceedings will serve as a valuable reference for researchers and practitioners in the field of public-key cryptography and its applications. Keywords: Theoretical cryptography
  • Michael Backes, Catalin Hritcu, Matteo Maffei, "Union, Intersection And Refinement Types And Reasoning About Type Disjointness For Secure Protocol Implementations," Journal of Computer Security - Foundational Aspects of Security, Volume 22 Issue 2, March 2014, Pages 301-353. (ID#:14-1825) Available at: http://dl.acm.org/citation.cfm?id=2595841.2595845&coll=DL&dl=GUIDE&CFID=511224941&CFTOKEN=63502203 This article examines a new system for type-based analysis of reference protocol implementations. The new type system includes refinement, union, intersection, and polymorphic types, which support the analysis of protocol implementations that make heavy use of asymmetric cryptography, authenticity and integrity properties, and zero-knowledge proofs. Keywords: Concurrent Lambda-Calculus, Intersection Types, Mechanized Metatheory, Reference Implementations, Refinement Types, Security Protocols, Type Systems, Union Types, Verification, Zero-Knowledge Proofs
  • Jiqiang Lu, Yongzhuang Wei, Jongsung Kim, Enes Pasalic, "The Higher-Order Meet-In-The-Middle Attack and Its Application To The Camellia Block Cipher," Theoretical Computer Science, Volume 527, March, 2014, Pages 102-122. (ID#:14-1826) Available at: http://dl.acm.org/citation.cfm?id=2608869.2609228&coll=DL&dl=GUIDE&CFID=511224941&CFTOKEN=63502203 The Camellia block cipher has a 128-bit block length, a user key of 128, 192 or 256 bits long, and a total of 18 rounds for a 128-bit key and 24 rounds for a 192 or 256-bit key. It is a Japanese CRYPTREC-recommended e-government cipher, a European NESSIE selected cipher and an ISO international standard. The meet-in-the-middle attack is a technique for analysing the security of a block cipher. In this paper, we propose an extension of the meet-in-the-middle attack, which we call the higher-order meet-in-the-middle (HO-MitM) attack; the core idea of the HO-MitM attack is to use multiple plaintexts to cancel some key-dependent component(s) or parameter(s) when constructing a basic unit of "value-in-the-middle". Then we introduce a novel approach, which combines integral cryptanalysis with the meet-in-the-middle attack, to construct HO-MitM attacks on 10-round Camellia with the FL/FL^-1 functions under 128 key bits, 11-round Camellia with the FL/FL^-1 functions under 192 key bits and 12-round Camellia with the FL/FL^-1 functions under 256 key bits. Finally, we apply an existing approach to construct HO-MitM attacks on 14-round Camellia without the FL/FL^-1 functions under 192 key bits and 16-round Camellia without the FL/FL^-1 functions under 256 key bits. The HO-MitM attack can potentially be used to cryptanalyse other block ciphers. Keywords: Block cipher, Camellia, Cryptology, Integral cryptanalysis, Meet-in-the-middle attack
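The basic meet-in-the-middle idea that the HO-MitM attack extends can be demonstrated on a toy double cipher. The 16-bit `enc`/`dec` rounds below are invented for the example (they are not Camellia); the point is the time/memory trade-off: tabulate the forward half-encryption under every first-half key, then decrypt one layer under every second-half key and look for a meeting value, instead of brute-forcing the joint key.

```python
def enc(key, x):
    """Toy invertible 16-bit cipher round function (illustration only)."""
    for r in range(3):
        x = (x + key) & 0xFFFF
        x = ((x << 5) | (x >> 11)) & 0xFFFF   # 16-bit rotate left by 5
        x ^= (key * (r + 1)) & 0xFFFF
    return x

def dec(key, x):
    """Inverse of enc: undo the rounds in reverse order."""
    for r in reversed(range(3)):
        x ^= (key * (r + 1)) & 0xFFFF
        x = ((x >> 5) | (x << 11)) & 0xFFFF   # rotate right by 5
        x = (x - key) & 0xFFFF
    return x

def meet_in_the_middle(p, c, key_space=256):
    """Recover candidate (k1, k2) pairs from one plaintext/ciphertext pair of
    the double cipher c = enc(k2, enc(k1, p)) by meeting in the middle."""
    # Forward table: middle value -> list of k1 producing it.
    table = {}
    for k1 in range(key_space):
        table.setdefault(enc(k1, p), []).append(k1)
    # Backward pass: one decryption per k2, then a table lookup.
    matches = []
    for k2 in range(key_space):
        mid = dec(k2, c)
        for k1 in table.get(mid, []):
            matches.append((k1, k2))
    return matches
```

With 8-bit half-keys this costs roughly 2 x 256 cipher operations plus a table, versus 65,536 for naive joint-key search; the paper's higher-order variant additionally uses multiple plaintexts to cancel key-dependent components in the middle value.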
  • Shaohua Tang, Lingling Xu, "Towards Provably Secure Proxy Signature Scheme Based On Isomorphisms Of Polynomials," Future Generation Computer Systems, Volume 30, January, 2014, Pages 91-97. (ID#:14-1827) Available at: http://dl.acm.org/citation.cfm?id=2562354.2562819&coll=DL&dl=GUIDE&CFID=511224941&CFTOKEN=63502203 This paper proposes a proxy signature scheme based on the Isomorphism of Polynomials (IP) challenge, under the umbrella of Multivariate Public Key Cryptography (MPKC). This signature scheme would ideally be able to resist projected quantum computing attacks, a particularly constructive gain in understanding provable security for MPKCs. Keywords: Isomorphism of Polynomials, Multivariate Public Key Cryptography, Post-Quantum Cryptography, Provable security, Proxy signature
  • Osterweil, E.; Massey, D.; McPherson, D.; Lixia Zhang, "Verifying Keys through Publicity and Communities of Trust: Quantifying Off-Axis Corroboration," Parallel and Distributed Systems, IEEE Transactions on , vol.25, no.2, pp.283,291, Feb. 2014. (ID#:14-1828) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6550862&isnumber=6689796 The DNS Security Extensions (DNSSEC) arguably make DNS the first core Internet system to be protected using public key cryptography. The success of DNSSEC not only protects the DNS, but has generated interest in using this secured global database for new services such as those proposed by the IETF DANE working group. However, continued success is only possible if several important operational issues can be addressed. For example, .gov and .arpa have already suffered misconfigurations where DNS continued to function properly, but DNSSEC failed (thus, orphaning their entire subtrees in DNSSEC). Internet-scale verification systems must tolerate this type of chaos, but what kind of verification can one derive for systems with dynamism like this? In this paper, we propose to achieve robust verification with a new theoretical model, called Public Data, which treats operational deployments as Communities of Trust (CoTs) and makes them the verification substrate. Using a realization of the above idea, called Vantages, we quantitatively show that using a reasonable DNSSEC deployment model and a typical choice of a CoT, an adversary would need to be able to have visibility into and perform on-path Man-in-the-Middle (MitM) attacks on arbitrary traffic into and out of up to 90 percent of all of the Autonomous Systems (ASes) in the Internet before having even a 10 percent chance of spoofing a DNSKEY. Further, our limited deployment of Vantages has outperformed the verifiability of DNSSEC and has properly validated its data up to 99.5 percent of the time.
Keywords: Internet; public key cryptography; trusted computing; .arpa; .gov; AS; CoT; DNS security extensions; DNSKEY spoofing; DNSSEC deployment model; IETF DANE working group; Internet-scale verification systems; MitM; Vantages; autonomous systems; communities of trust; core Internet system; key verification; man-in-the-middle attacks; off-axis corroboration; public data; public key cryptography; secured global database; DNSKEY; DNSSEC; p2p; verification
  • Young-Chang Hou; Shih-Chieh Wei; Chia-Yin Lin, "Random-Grid-Based Visual Cryptography Schemes," Circuits and Systems for Video Technology, IEEE Transactions on , vol.24, no.5, pp.733,744, May 2014. (ID#:14-1829) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6587735&isnumber=6809234 This paper discusses a random-grid-based nonexpanded visual cryptography scheme for generating both meaningful and noise-like shares. First, the distribution of black pixels on the share images and the stack image is analyzed. A probability allocation method is then proposed that is capable of producing the best contrast in both the share images and the stack image. With our method, not only can different cover images be used to hide the secret image, but the contrast can be adjusted as needed. The most important result is the improvement of the visual quality of both the share images and the stack image to their theoretical maximum. Our meaningful visual secret sharing method is shown in experiments to be superior to past methods. Keywords: cryptography; image processing; probability; black pixels; probability allocation method; random-grid-based nonexpanded visual cryptography scheme; secret image; share images; stack image; visual secret sharing method; Encryption; Image color analysis; Information management; Stacking; Visualization; Meaningful shares; random grid; secret sharing; visual cryptography
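The (2,2) random-grid idea underlying both random-grid papers in this section can be sketched directly, though without the probability-allocation refinement this paper proposes. In the basic scheme below (the standard RG construction, not the authors' improved method), stacking the two shares by pixel-wise OR leaves every black secret pixel fully black and every white pixel 50% grey on average, which is where the contrast comes from.

```python
import random

def rg_shares(secret, seed=0):
    """(2,2) random-grid visual secret sharing. `secret` is a list of rows of
    0 (white) / 1 (black) pixels. Share 1 is a pure random grid; share 2
    copies it where the secret is white and complements it where black."""
    rng = random.Random(seed)
    s1 = [[rng.randint(0, 1) for _ in row] for row in secret]
    s2 = [[s1[i][j] if secret[i][j] == 0 else 1 - s1[i][j]
           for j in range(len(row))] for i, row in enumerate(secret)]
    return s1, s2

def stack(s1, s2):
    """Physically stacking transparencies acts as pixel-wise OR."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]
```

No pixel expansion and no code book are needed: each share is the same size as the secret, and each share on its own is a uniformly random grid that reveals nothing.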
  • Portmann, C., "Key Recycling in Authentication," Information Theory, IEEE Transactions on , vol.60, no.7, pp.4383,4396, July 2014. (ID#:14-1830) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6797875&isnumber=6832684 In their seminal work on authentication, Wegman and Carter propose that to authenticate multiple messages, it is sufficient to reuse the same hash function as long as each tag is encrypted with a one-time pad. They argue that because the one-time pad is perfectly hiding, the hash function used remains completely unknown to the adversary. Since their proof is not composable, we revisit it using a composable security framework. It turns out that the above argument is insufficient: if the adversary learns whether a corrupted message was accepted or rejected, information about the hash function is leaked, and after a bounded finite amount of rounds it is completely known. We show however that this leak is very small: Wegman and Carter's protocol is still ε-secure, if ε-almost strongly universal_2 hash functions are used. This implies that the secret key corresponding to the choice of hash function can be reused in the next round of authentication without any additional error than this ε. We also show that if the players have a mild form of synchronization, namely that the receiver knows when a message should be received, the key can be recycled for any arbitrary task, not only new rounds of authentication. Keywords: Abstracts; Authentication; Computational modeling; Cryptography; Protocols; Recycling; Cryptography; authentication; composable security; information-theoretic security
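The Wegman-Carter construction that the paper analyzes — hash the message with an almost-universal hash, then one-time-pad encrypt the tag — can be sketched with a polynomial hash over a prime field. This is a generic illustration, not the paper's composable-security analysis; the modulus and the integer block encoding are illustrative choices.

```python
P = (1 << 61) - 1   # Mersenne prime modulus for the polynomial hash

def poly_hash(key, msg_blocks):
    """Almost-universal polynomial hash over GF(P): evaluate the message,
    viewed as polynomial coefficients, at the secret point `key`."""
    acc = 0
    for m in msg_blocks:
        acc = (acc * key + m) % P
    return acc

def tag(hash_key, pad, msg_blocks):
    """Wegman-Carter tag: universal hash, then one-time-pad encrypt the tag.
    The hash key is reused across messages; the pad must be fresh per message."""
    return poly_hash(hash_key, msg_blocks) ^ pad

def verify(hash_key, pad, msg_blocks, t):
    """Receiver side: recompute the tag and compare."""
    return tag(hash_key, pad, msg_blocks) == t
```

The paper's point concerns exactly this reuse: each accept/reject decision leaks a little about `hash_key`, but with ε-almost strongly universal hashing the leak per round is bounded by ε, so the hash key can be safely recycled.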
  • Singh, H.; Sachdev, A, "The Quantum way of Cloud Computing," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on , vol., no., pp.397,400, 6-8 Feb. 2014. (ID#:14-1831) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798362&isnumber=6798279 Quantum Computing and Cloud Computing are technologies which have the capability to shape the future of computing. Quantum computing focuses on creating super-fast computers using the concepts of quantum physics, whereas Cloud computing allows computing power to be provided as a service. This paper presents a theoretical approach towards the possibility of a Quantum-Cloud, i.e., quantum computing as a service. This will combine the fields of quantum computing and cloud computing, resulting in an evolutionary technology. The paper also discusses the possible advantages of this in the near future. Keywords: cloud computing; quantum computing; cloud computing; quantum computing; super-fast computers; Cryptography; Hardware; Quantum computing; Cloud Computing; Quantum Cloud; Quantum Computing; Qubit
  • Kishore, N.; Kapoor, B., "An efficient parallel algorithm for hash computation in security and forensics applications," Advance Computing Conference (IACC), 2014 IEEE International , vol., no., pp.873,877, 21-22 Feb. 2014. (ID#:14-1832) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779437&isnumber=6779283 Hashing algorithms are used extensively in information security and digital forensics applications. This paper presents an efficient parallel algorithm for hash computation. It is a modification of the SHA-1 algorithm for faster parallel implementation in applications such as digital signatures and data preservation in digital forensics. The algorithm implements a recursive hash to break the chain dependencies of the standard hash function. We discuss the theoretical foundation for the work, including the collision probability and the performance implications. The algorithm is implemented using the OpenMP API and experiments performed using machines with multicore processors. The results show a performance gain by more than a factor of 3 when running on the 8-core configuration of the machine. Keywords: application program interfaces; cryptography; digital forensics; digital signatures; file organization; parallel algorithms; probability; OpenMP API; SHA-1 algorithm; collision probability; data preservation; digital forensics; digital signature; hash computation; hashing algorithms; information security; parallel algorithm; standard hash function; Algorithm design and analysis; Conferences; Cryptography; Multicore processing; Program processors; Standards; Cryptographic Hash Function; Digital Forensics; Digital Signature; MD5; Multicore Processors; OpenMP; SHA-1
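The chunk-and-recombine idea behind the paper's recursive hash can be sketched with Python's `hashlib` (the paper itself modifies SHA-1 and uses OpenMP; this stand-in only illustrates the structure). Each fixed-size chunk is hashed independently, which breaks the chain dependency and allows parallel workers, and the chunk digests are then hashed together into one final digest.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def parallel_hash(data: bytes, chunk_size=1 << 16, workers=4):
    """Recursive-hash sketch: hash fixed-size chunks independently (the
    parallelizable step), then hash the concatenated chunk digests.
    Note: this deliberately defines a DIFFERENT digest from plain SHA-1
    over `data`, which is also true of the paper's modified algorithm."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        digests = list(ex.map(lambda c: hashlib.sha1(c).digest(), chunks))
    return hashlib.sha1(b"".join(digests)).hexdigest()
```

The digest is deterministic for a given chunk size, so both parties of a forensic comparison must agree on the chunking parameters.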
  • Ferretti, L.; Colajanni, M.; Marchetti, M., "Distributed, Concurrent, and Independent Access to Encrypted Cloud Databases," Parallel and Distributed Systems, IEEE Transactions on , vol.25, no.2, pp.437,446, Feb. 2014. (ID#:14-1833) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6522403&isnumber=6689796 Placing critical data in the hands of a cloud provider should come with the guarantee of security and availability for data at rest, in motion, and in use. Several alternatives exist for storage services, while data confidentiality solutions for the database as a service paradigm are still immature. We propose a novel architecture that integrates cloud database services with data confidentiality and the possibility of executing concurrent operations on encrypted data. This is the first solution supporting geographically distributed clients to connect directly to an encrypted cloud database, and to execute concurrent and independent operations including those modifying the database structure. The proposed architecture has the further advantage of eliminating intermediate proxies that limit the elasticity, availability, and scalability properties that are intrinsic in cloud-based solutions. The efficacy of the proposed architecture is evaluated through theoretical analyses and extensive experimental results based on a prototype implementation subject to the TPC-C standard benchmark for different numbers of clients and network latencies. 
Keywords: cloud computing; cryptography; database management systems; TPC-C standard benchmark; availability property; cloud database services; concurrent access; data confidentiality; database structure modification; distributed access; elasticity property; encrypted cloud database; encrypted data concurrent operation execution; geographically distributed clients; independent access; intermediate proxies elimination; network latencies; scalability property; Cloud; SecureDBaaS; confidentiality; database; security
  • Alahmadi, A; Abdelhakim, M.; Jian Ren; Tongtong Li, "Defense Against Primary User Emulation Attacks in Cognitive Radio Networks Using Advanced Encryption Standard," Information Forensics and Security, IEEE Transactions on , vol.9, no.5, pp.772,781, May 2014. (ID#:14-1834) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6763060&isnumber=6776454 This paper considers primary user emulation attacks in cognitive radio networks operating in the white spaces of the digital TV (DTV) band. We propose a reliable AES-assisted DTV scheme, in which an AES-encrypted reference signal is generated at the TV transmitter and used as the sync bits of the DTV data frames. By allowing a shared secret between the transmitter and the receiver, the reference signal can be regenerated at the receiver and used to achieve accurate identification of the authorized primary users. In addition, when combined with the analysis on the autocorrelation of the received signal, the presence of the malicious user can be detected accurately whether or not the primary user is present. We analyze the effectiveness of the proposed approach through both theoretical analysis and simulation examples. It is shown that with the AES-assisted DTV scheme, the primary user, as well as malicious user, can be detected with high accuracy under primary user emulation attacks. It should be emphasized that the proposed scheme requires no changes in hardware or system structure except for a plug-in AES chip. Potentially, it can be applied directly to today's DTV system under primary user emulation attacks for more efficient spectrum sharing. 
Keywords: cognitive radio; correlation methods; public key cryptography; AES-encrypted reference signal; DTV data frames; TV transmitter; advanced encryption standard; authorized primary users; autocorrelation; cognitive radio networks; digital TV band; malicious user; plug-in chip; primary user emulation attacks; reference signal; spectrum sharing; sync bits; white spaces; Digital TV; Emulation; Manganese; Random variables; Receivers; Synchronization; Transmitters; Network security; dynamic spectrum access (DSA); eight-level vestigial sideband (8-VSB); primary user emulation attacks (PUEA); secure spectrum sensing
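The shared-secret reference signal at the core of the AES-assisted DTV scheme can be sketched as follows. Python's standard library has no AES, so HMAC-SHA256 stands in as the keyed pseudorandom function (an assumption of this sketch, not the paper's design); the idea carried over is that transmitter and receiver derive the same pseudorandom sync-bit sequence from a shared secret, so the receiver can correlate against it to authenticate the primary user.

```python
import hmac
import hashlib

def reference_signal(shared_key: bytes, frame_index: int, nbits=64):
    """Derive a per-frame pseudorandom sync-bit sequence from a shared secret.
    HMAC-SHA256 is a stand-in for the paper's AES encryption of the
    reference signal; only the holder of shared_key can regenerate it."""
    stream = hmac.new(shared_key, frame_index.to_bytes(8, "big"),
                      hashlib.sha256).digest()
    return [(stream[i // 8] >> (i % 8)) & 1 for i in range(nbits)]

def correlate(a, b):
    """Normalized agreement between two bit sequences (1.0 = identical).
    A receiver would threshold this to decide if the sender knows the key."""
    return sum(1 if x == y else -1 for x, y in zip(a, b)) / len(a)
```

An emulation attacker without `shared_key` cannot reproduce the sequence, so its transmissions correlate poorly with the regenerated reference at the receiver.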
  • Hu, Pengfei; Xing, Kai; Cheng, Xiuzhen; Wei, Hao; Zhu, Haojin, "Information leaks out: Attacks and countermeasures on compressive data gathering in wireless sensor networks," INFOCOM, 2014 Proceedings IEEE , vol., no., pp.1258,1266, April 27 2014-May 2 2014. (ID#:14-1835) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848058&isnumber=6847911 Compressive sensing (CS) has been viewed as a promising technology to greatly improve the communication efficiency of data gathering in wireless sensor networks. However, this new data collection paradigm may bring new threats, and few studies have paid attention to preventing information leakage during compressive data gathering. In this paper, we identify two statistical inference attacks and demonstrate that traditional compressive data gathering may suffer from serious information leakage under these attacks. In our theoretical analysis, we quantitatively analyze the estimation error of compressive data gathering through extensive statistical analysis, based on which we propose a new secure compressive data aggregation scheme by adaptively changing the measurement coefficients at each sensor and correspondingly at the sink, without the need of time synchronization. In our analysis, we show that the proposed scheme could significantly improve data confidentiality at light computational and communication overhead. Keywords: Compressed sensing; Conferences; Cryptography; matching pursuit algorithms; Vectors; Wireless sensor networks
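The countermeasure the authors propose — adaptively changing the measurement coefficients at each round — can be sketched as seeded random linear measurements. This toy version uses a shared seed as a stand-in for whatever state the sensors and sink share; it omits the sparse-reconstruction step and the paper's statistical analysis entirely.

```python
import random

def measure(x, m, seed):
    """One compressive data-gathering step: m random linear measurements
    y = phi * x of the length-n reading vector x. Re-seeding each round
    changes the measurement matrix phi, so an eavesdropper cannot build up
    statistics over rounds; the sink, knowing the seed, regenerates phi."""
    rng = random.Random(seed)
    phi = [[rng.gauss(0, 1) for _ in x] for _ in range(m)]
    y = [sum(p * xi for p, xi in zip(row, x)) for row in phi]
    return phi, y
```

The sink recovers the sparse reading vector from `y` via a reconstruction algorithm (e.g., matching pursuit, as in the paper's keyword list); only the coefficient-refresh mechanism is illustrated here.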


Threat Vectors


As systems become larger and more complex, the surface that hackers can attack also grows. In this set of recent research articles, the topics explored include smartphone malware, zero-day polymorphic worm detection, source identification, drive-by download attacks, two-factor face authentication, semantic security, and code structures.

  • Peng, Sancheng; Yu, Shui; Yang, Aimin, "Smartphone Malware and Its Propagation Modeling: A Survey," Communications Surveys & Tutorials, IEEE, vol.16, no.2, pp.925,941, Second Quarter 2014. (ID#:14-1459) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6563277&isnumber=6811383 Smartphones are pervasively used in society, and have been both the target and victim of malware writers. Motivated by the significant threat that malware presents to legitimate users, we survey the current smartphone malware status and their propagation models. The content of this paper is presented in two parts. In the first part, we review the short history of mobile malware evolution since 2004, and then list the classes of mobile malware and their infection vectors. At the end of the first part, we enumerate the possible damage caused by smartphone malware. In the second part, we focus on smartphone malware propagation modeling. In order to understand the propagation behavior of smartphone malware, we recall generic epidemic models as a foundation for further exploration. We then extensively survey the smartphone malware propagation models. At the end of this paper, we highlight issues of the current smartphone malware propagation models and discuss possible future trends based on our understanding of this topic. Keywords: Bluetooth; Grippers; Mobile communication; Mobile handsets; Software; Trojan horses; mobile malware; propagation modeling; simulator; smartphone
  • Kaur, R.; Singh, M., "A Survey on Zero-Day Polymorphic Worm Detection Techniques," Communications Surveys & Tutorials, IEEE, vol.PP, no.99, pp.1,30, 14 March 2014. (ID#:14-1460) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6766917&isnumber=5451756 Zero-day polymorphic worms pose a serious threat to Internet security. With their ability to rapidly propagate, these worms increasingly threaten Internet hosts and services. Not only can they exploit unknown vulnerabilities, they can also change their own representations on each new infection or encrypt their payloads using a different key per infection. The many variations in the signatures of the same worm make fingerprinting very difficult. Therefore, signature-based defenses and traditional security layers miss these stealthy and persistent threats. This paper provides a detailed survey to outline the research efforts in relation to detection of modern zero-day malware in the form of zero-day polymorphic worms. Keywords: Grippers; Internet; Malware; Monitoring; Payloads; Vectors; Detection Systems; Polymorphic worms; Signature Generation; Zero-day attacks; Zero-day malware
  • Murvay, P.-S.; Groza, B., "Source Identification Using Signal Characteristics in Controller Area Networks," Signal Processing Letters, IEEE, vol.21, no.4, pp.395,399, April 2014. (ID#:14-1461) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6730667&isnumber=6732989 The CAN (Controller Area Network) bus, i.e., the de facto standard for connecting ECUs inside cars, is increasingly becoming exposed to some of the most sophisticated security threats. Due to its broadcast nature and ID-oriented communication, each node is blind to the source of the received messages, and assuring source identification is a difficult challenge. While recent research has focused on devising security in CAN networks by the use of cryptography at the protocol layer, such solutions are not always an alternative due to increased communication and computational overheads, not to mention backward compatibility issues. In this work we take steps toward a distinct approach, namely, we try to take authentication up to unique physical characteristics of the frames that are placed by each node on the bus. For this we analyze the frames by taking measurements of the voltage, filtering the signal, and examining mean square errors and convolutions in order to uniquely identify each potential sender. Our experimental results show that distinguishing between certain nodes is clearly possible and by clever choices of transceivers and frame IDs each message can be precisely linked to its sender.
Keywords: controller area networks; convolution; cryptography; filtering theory; mean square error methods; transceivers; CAN networks; ID-oriented communication; communication overhead; computational overhead; controller area networks; convolution; cryptography; mean square errors; protocol layer; security threats; signal filtering; source identification; transceivers; Authentication; Convolution; Cryptography; Physical layer; Transceivers; Vectors; CAN bus; physical fingerprinting; source identification
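In its simplest caricature, the mean-square-error matching step described in the abstract above reduces to nearest-template classification over sampled voltage traces. The template values and function names below are invented for illustration, not taken from the paper.

```python
def mse(trace, template):
    # Mean squared error between a sampled voltage trace and a
    # stored per-ECU template of the same length.
    return sum((a - b) ** 2 for a, b in zip(trace, template)) / len(trace)

def identify_sender(trace, templates):
    # Attribute a frame to the ECU whose voltage template is closest
    # (lowest MSE) to the trace measured on the bus.
    return min(templates, key=lambda ecu: mse(trace, templates[ecu]))
```

With templates learned offline per ECU, each received frame's voltage samples can then be attributed to the nearest-matching sender, which is the physical-fingerprinting idea in miniature.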
  • Gaya K. Jayasinghe, J. Shane Culpepper, Peter Bertok, "Efficient and Effective Realtime Prediction Of Drive-By Download Attacks," Journal of Network and Computer Applications, Volume 38, February, 2014, (Pages 135-149). (ID#:14-1462) Available at: http://dl.acm.org/citation.cfm?id=2567003.2567230&coll=DL&dl=GUIDE&CFID=343285328&CFTOKEN=39974052 This article recognizes the flaws in current mitigation techniques for drive-by download attacks, techniques which are constrained to static and semi-dynamic analysis and are vulnerable to evasion methods. The authors present an original drive-by download detection method that minimizes the resource drain other methods have previously required. The proposed method operates by inspecting the bytecode stream for web browsers at runtime. Keywords: Anomaly detection, Drive-by downloads, Dynamic analysis, Machine learning, Web client exploits
  • Andrew F. Tappenden, James Miller, "Automated Cookie Collection Testing," ACM Transactions on Software Engineering and Methodology (TOSEM) Volume 23 Issue 1, February 2014, Article No. 3. (ID#:14-1463) Available at: http://dl.acm.org/citation.cfm?id=2582050.2559936&coll=DL&dl=GUIDE&CFID=343285328&CFTOKEN=39974052 Cookies are used by over 80% of Web applications utilizing dynamic Web application frameworks. Applications deploying cookies must be rigorously verified to ensure that the application is robust and secure. Given the intense time-to-market pressures faced by modern Web applications, testing strategies that are low cost and automatable are required. Automated Cookie Collection Testing (CCT) is presented, and is empirically demonstrated to be a low-cost and highly effective automated testing solution for modern Web applications. Automatable test oracles and evaluation metrics specifically designed for Web applications are presented, and are shown to be significant diagnostic tests. Automated CCT is shown to detect faults within five real-world Web applications. A case study of over 580 test results for a single application is presented demonstrating that automated CCT is an effective testing strategy. Moreover, CCT is found to detect security bugs in a Web application released into full production. Keywords: Cookies, Web application testing, adaptive random testing, automated testing, software testing, test generation, test strategies
  • Christos Kalloniatis, Haralambos Mouratidis, Manousakis Vassilis, Shareeful Islam, Stefanos Gritzalis, Evangelia Kavakli, " Towards the Design Of Secure And Privacy-Oriented Information Systems In The Cloud: Identifying The Major Concepts," Computer Standards & Interfaces, Volume 36 Issue 4, June, 2014, (Pages 759-775). (ID#:14-1464) Available at: http://dl.acm.org/citation.cfm?id=2588915.2589310&coll=DL&dl=GUIDE&CFID=343285328&CFTOKEN=39974052 This paper discusses the imperative nature of fully understanding the security challenges of cloud environments, as well as how cloud architecture differs from common distributed systems. Comprehensive consideration of all threats, best practices, and security measures supports the design of a secure cloud system. Keywords: Cloud computing, Concepts, Privacy, Requirements, Security, Security and Privacy Issues
  • Jeonil Kang, Daehun Nyang, Kyunghee Lee, "Two-factor Face Authentication Using Matrix Permutation Transformation and a User Password," Information Sciences: an International Journal, Volume 269, June, 2014, (Pages 1-20). (ID#:14-1465) Available at: http://dl.acm.org/citation.cfm?id=2598931.2599012&coll=DL&dl=GUIDE&CFID=343285328&CFTOKEN=39974052 This article sheds light on the use of biometrics for authentication, and accompanying challenges such as inevitable inconsistencies in bio-information per authentication attempt. The authors of this paper suggest a two-factor face authentication, in which matrix transformations and a password are integrated. Possible attacks, suggestions for bolstered security, and experimental results are discussed. Keywords: Biometrics security, Face authentication, User privacy
  • Ruixuan Li, Zhiyong Xu, Wanshang Kang, Kin Choong Yow, Cheng-Zhong Xu, "Efficient Multi-Keyword Ranked Query Over Encrypted Data in Cloud Computing," Future Generation Computer Systems, Volume 30, January, 2014, (Pages 179-190). (ID#:14-1466) Available at: http://dl.acm.org/citation.cfm?id=2562354.2562799&coll=DL&dl=GUIDE&CFID=343285328&CFTOKEN=39974052 This article considers the accessibility obstacles for secure cloud storage, particularly challenges in applying keyword-based queries and result-ranking on the encrypted data. As a solution to the aforementioned difficulties, this paper presents a flexible multi-keyword query scheme (MKQE). MKQE decreases maintenance overhead for a dynamic keyword dictionary, and considers user access history. Experimental process and results are discussed. Keywords: Cloud computing, Data encryption, Multi-keyword query, Privacy preserving, Ranked query, Top-k query
  • Abdul Razzaq, Khalid Latif, H. Farooq Ahmad, Ali Hur, Zahid Anwar, Peter Charles Bloodsworth, "Semantic Security Against Web Application Attacks," Information Sciences: an International Journal, Volume 254, January, 2014, (Pages 19-38). (ID#:14-1467) Available at: http://dl.acm.org/citation.cfm?id=2535053.2535251&coll=DL&dl=GUIDE&CFID=343285328&CFTOKEN=39974052 This paper presents a web application attack detection and classification method, relying on ontology-based techniques in lieu of traditional signature-based methodology. The semantic rules used in this proposed method identify application context, possible attacks, and protocol. Processes and experimental results for this fully platform- and technology-independent method are discussed. Keywords: Application security, Semantic rule engine, Semantic security
  • Guillermo Suarez-Tangil, Juan E. Tapiador, Pedro Peris-Lopez, Jorge Blasco, "Dendroid: A Text Mining Approach to Analyzing and Classifying Code Structures In Android Malware Families," Expert Systems with Applications: An International Journal, Volume 41 Issue 4, March, 2014, (Pages 1104-1117). (ID#:14-1468) Available at: http://dl.acm.org/citation.cfm?id=2560969.2561397&coll=DL&dl=GUIDE&CFID=343285328&CFTOKEN=39974052 The authors of this paper present Dendroid as a solution to help automate phases of the malware analysis process. With the widespread dependence on smartphones, it has become increasingly difficult to analyze the accompanying new strains of malicious apps. Dendroid utilizes text mining and information retrieval techniques to help identify similarities between malware specimen, which are then subject to automated classification and groupings. Process and results are discussed. Keywords: Android OS, Information retrieval, Malware analysis, Smartphones, Software similarity and classification, Text mining
  • Andrei Giurgiu, Rachid Guerraoui, Kevin Huguenin, Anne-Marie Kermarrec, "Computing in Social Networks," Information and Computation, Volume 234, February, 2014, (Pages 3-16). (ID#:14-1469) Available at: http://dl.acm.org/citation.cfm?id=2580115.2580402&coll=DL&dl=GUIDE&CFID=343285328&CFTOKEN=39974052 This paper discusses the challenge of S^3, Scalable Secure computing in a Social Network. A novel protocol is presented, which makes use of the social component of the network -- recognizing that nodes are conscious of their reputation and are careful of being isolated as untrusted. Keywords: Distributed computing, Privacy, Security, Social networks
  • Chun Guo, Yajian Zhou, Yuan Ping, Zhongkun Zhang, Guole Liu, Yixian Yang, "A Distance Sum-Based Hybrid Method for Intrusion Detection," Applied Intelligence, Volume 40 Issue 1, January 2014, (Pages 178-188). (ID#:14-1470) Available at: http://dl.acm.org/citation.cfm?id=2583608.2583622&coll=DL&dl=GUIDE&CFID=343285328&CFTOKEN=39974052 The authors of this paper discuss the complexity and high cost of hybrid intrusion detection systems, and present distance sum-based vector machine (DSSVM), a novel hybrid learning method. Conducted tests verify the effectiveness of DSSVM as an intrusion detection model. Results are discussed. Keywords: Euclidean distance function, Hybrid classifiers, Intrusion detection, Pattern recognition, Support vector machine
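The "distance sum" idea behind DSSVM can be caricatured as a feature transform: each sample is mapped to its distances from per-class centers, and the resulting low-dimensional vector is what a downstream SVM would classify. Everything below, including the function names and the use of plain Euclidean centers, is an illustrative assumption rather than the paper's exact construction.

```python
import math

def center(points):
    # Componentwise mean of a list of equal-length vectors.
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def distance_sum_features(sample, centers):
    # Map a raw sample to its distances from each class center; this
    # small vector is what a downstream classifier (e.g., an SVM)
    # would be trained on instead of the raw high-dimensional sample.
    return [dist(sample, c) for c in centers]
```

The appeal of such a transform is cost: the classifier operates on as many features as there are classes, rather than on the raw traffic features.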
  • Matthew Brown, Bo An, Christopher Kiekintveld, Fernando Ordonez, Milind Tambe, "An Extended Study On Multi-Objective Security Games," Autonomous Agents and Multi-Agent Systems, Volume 28 Issue 1, January 2014, (Pages 31-71). (ID#:14-1471) Available at: http://dl.acm.org/citation.cfm?id=2560802.2560820&coll=DL&dl=GUIDE&CFID=343285328&CFTOKEN=39974052 This paper addresses security games, derived from real mission-critical security operations against dynamic, malicious opponents. The difficulty lies in weighing many different factors when considering an appropriate security strategy. Considering this, the authors present multi-objective security games (MOSGs), which challenge the decision-maker to consider the opportunity costs between varying objectives. Results and methods are discussed. Keywords: Game theory, Multi-objective optimization, Security
  • Hien Thi Thu Truong, Eemil Lagerspetz, Petteri Nurmi, Adam J. Oliner, Sasu Tarkoma, N. Asokan, Sourav Bhattacharya, "The Company You Keep: Mobile Malware Infection Rates And Inexpensive Risk Indicators," Proceedings of the 23rd International Conference On World Wide Web, April 2014, (Pages 39-50). (ID#:14-1473) Available at: http://dl.acm.org/citation.cfm?id=2566486.2568046&coll=DL&dl=GUIDE&CFID=343285328&CFTOKEN=39974052 This paper recognizes the lack of public information about mobile malware infection rates, and introduces a pioneering independent study of such rates. The authors hypothesized that advertising of potentially malicious applications offered by some application stores may be considered an infection vector. This technique is argued to be a complement to malware scanning, and process and results are discussed. Keywords: android, infection rate, malware detection, mobile malware
  • Chang Liu, Liehuang Zhu, Mingzhong Wang, Yu-An Tan, "Search Pattern Leakage In Searchable Encryption: Attacks And New Construction," Information Sciences: an International Journal, Volume 265, May, 2014, (Pages 176-188). (ID#:14-1474) Available at: http://dl.acm.org/citation.cfm?id=2580107.2580271&coll=DL&dl=GUIDE&CFID=343285328&CFTOKEN=39974052 This paper addresses the challenges of searchable encryption, recognizing that several searchable encryption schemes reveal information about user searches. The authors present two attack methods that demonstrate the consequences of leaking the user search pattern, and propose a grouping-based construction (GBC) as a solution. Keywords: Cloud computing, Fake query, Grouping-based construction, Search pattern, Searchable encryption
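The leakage at issue is easy to see in a toy model: a deterministic search trapdoor lets the server link repeated queries for the same keyword. The sketch below shows an HMAC-based trapdoor and a simplified grouping-style countermeasure that hides the real query among fakes; it is not the paper's actual GBC construction, and all names and parameters are illustrative.

```python
import hashlib
import hmac
import secrets

def trapdoor(key, keyword):
    # Deterministic search token: identical queries produce identical
    # tokens, so the server can match them -- and also link them,
    # which is exactly the search-pattern leakage.
    return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

def grouped_query(key, keyword, group):
    # Grouping-style countermeasure (simplified): issue the real
    # trapdoor alongside fake trapdoors for the other keywords in the
    # same group, shuffled, so the server cannot tell which token in
    # the batch was the intended query.
    tokens = [trapdoor(key, w) for w in group if w != keyword]
    tokens.append(trapdoor(key, keyword))
    secrets.SystemRandom().shuffle(tokens)
    return tokens
```

The client filters the extra results locally; the price of hiding the pattern is the bandwidth and server work spent on the fake queries.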
  • John Criswell, Nathan Dautenhahn, Vikram Adve, "Virtual Ghost: Protecting Applications From Hostile Operating Systems," ASPLOS '14 Proceedings of the 19th International Conference On Architectural Support For Programming Languages And Operating Systems, March 2014, (Pages 81-96). (ID#:14-1475) Available at: http://dl.acm.org/citation.cfm?id=2541940.2541986&coll=DL&dl=GUIDE&CFID=343285328&CFTOKEN=39974052 Applications that process sensitive data can be carefully designed and validated to be difficult to attack, but they are usually run on monolithic, commodity operating systems, which may be less secure. An OS compromise gives the attacker complete access to all of an application's data, regardless of how well the application is built. We propose a new system, Virtual Ghost, that protects applications from a compromised or even hostile OS. Virtual Ghost is the first system to do so by combining compiler instrumentation and run-time checks on operating system code, which it uses to create ghost memory that the operating system cannot read or write. Virtual Ghost interposes a thin hardware abstraction layer between the kernel and the hardware that provides a set of operations that the kernel must use to manipulate hardware, and provides a few trusted services for secure applications such as ghost memory management, encryption and signing services, and key management. Unlike previous solutions, Virtual Ghost does not use a higher privilege level than the kernel. Virtual Ghost performs well compared to previous approaches; it outperforms InkTag on five out of seven of the LMBench microbenchmarks with improvements between 1.3x and 14.3x. For network downloads, Virtual Ghost experiences a 45% reduction in bandwidth at most for small files and nearly no reduction in bandwidth for large files and web traffic. An application we modified to use ghost memory shows a maximum additional overhead of 5% due to the Virtual Ghost protections. 
We also demonstrate Virtual Ghost's efficacy by showing how it defeats sophisticated rootkit attacks. Keywords: control-flow integrity, inlined reference monitors, malicious operating systems, software fault isolation, software security



Virtual Machines



Arguably, virtual machines are more secure than actual machines, based on the notion that an attacker cannot jump the gap between the virtual and the actual. The growth of interest in cloud computing suggests it is time for a fresh look at the vulnerabilities of virtual machines. The articles cited below address these security concerns in some interesting ways: they show how competition between I/O workloads can be exploited, describe a "gathering storm" of VM security issues, and discuss digital forensics in the cloud.
  • Chiang, R.; Rajasekaran, S.; Zhang, N.; Huang, H., "Swiper: Exploiting Virtual Machine Vulnerability in Third-Party Clouds with Competition for I/O Resources," Parallel and Distributed Systems, IEEE Transactions on, vol.PP, no.99, pp.1,1, June 2014. (ID#:14-1836) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6824231&isnumber=4359390 The emerging paradigm of cloud computing, e.g., Amazon Elastic Compute Cloud (EC2), promises a highly flexible yet robust environment for large-scale applications. Ideally, while multiple virtual machines (VM) share the same physical resources (e.g., CPUs, caches, DRAM, and I/O devices), each application should be allocated to an independently managed VM and isolated from one another. Unfortunately, the absence of physical isolation inevitably opens doors to a number of security threats. In this paper, we demonstrate in EC2 a new type of security vulnerability caused by competition between virtual I/O workloads - i.e., by leveraging the competition for shared resources, an adversary could intentionally slow down the execution of a targeted application in a VM that shares the same hardware. In particular, we focus on I/O resources such as hard-drive throughput and/or network bandwidth - which are critical for data-intensive applications. We design and implement Swiper, a framework which uses a carefully designed workload to incur significant delays on the targeted application and VM with minimum cost (i.e., resource consumption). We conduct a comprehensive set of experiments in EC2, which clearly demonstrates that Swiper is capable of significantly slowing down various server applications while consuming a small amount of resources. Keywords: Cloud computing; Delays; IP networks; Security; Synchronization; Throughput; Virtualization
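The contention effect Swiper leverages can be reproduced in miniature on a single host: time a sequential read alone, then again while a competing thread writes to the same disk. This sketch only demonstrates the measurement, not the attack itself; the file sizes, thread structure, and function names are arbitrary choices for illustration.

```python
import os
import threading
import time

def timed_read(path, chunk=1 << 20):
    # Wall-clock time to stream the file sequentially in 1 MiB chunks.
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk):
            pass
    return time.perf_counter() - t0

def measure_contention(victim_path, attacker_path, max_writes=64):
    # Baseline: the victim's read while the disk is quiet.
    quiet = timed_read(victim_path)
    stop = threading.Event()

    def attacker():
        # Competing workload: stream writes until told to stop
        # (bounded by max_writes so the sketch cannot fill the disk).
        with open(attacker_path, "wb") as f:
            for _ in range(max_writes):
                if stop.is_set():
                    break
                f.write(os.urandom(1 << 20))

    t = threading.Thread(target=attacker)
    t.start()
    contended = timed_read(victim_path)  # victim read under contention
    stop.set()
    t.join()
    return quiet, contended
```

On genuinely shared storage the contended time is often noticeably larger; that slowdown, induced deliberately against a co-located tenant, is the vulnerability the paper demonstrates in EC2.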
  • Soni, G.; Kalra, M., "A Novel Approach For Load Balancing In Cloud Data Center," Advance Computing Conference (IACC), 2014 IEEE International , vol., no., pp.807,812, 21-22 Feb. 2014. (ID#:14-1837) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779427&isnumber=6779283 In a large-scale cloud computing environment the cloud data centers and end users are geographically distributed across the globe. The biggest challenge for cloud data centers is how to handle and service the millions of requests that are arriving very frequently from end users efficiently and correctly. In cloud computing, load balancing is required to distribute the dynamic workload evenly across all the nodes. Load balancing helps to achieve a high user satisfaction and resource utilization ratio by ensuring an efficient and fair allocation of every computing resource. Proper load balancing aids in minimizing resource consumption, implementing fail-over, enabling scalability, avoiding bottlenecks and over-provisioning etc. In this paper, we propose "Central Load Balancer" a load balancing algorithm to balance the load among virtual machines in cloud data center. Results show that our algorithm can achieve better load balancing in a large-scale cloud computing environment as compared to previous load balancing algorithms. Keywords: cloud computing; computer centers; resource allocation; virtual machines; central load balancer algorithm; cloud data center; dynamic workload; large-scale cloud computing environment; load balancing; resource allocation; resource utilization; virtual machines; Algorithm design and analysis; Cloud computing; Computational modeling; Heuristic algorithms; Load management; Resource management; Virtual machining; Cloud Data Center; CloudAnalyst; Live Virtual Machine Migration; Load balancing; Virtualization
  • Vijay Varadharajan, Udaya Tupakula, "Counteracting Security Attacks In Virtual Machines In The Cloud Using Property Based Attestation," Journal of Network and Computer Applications, Volume 40, April, 2014, (Pages 31-45). (ID#:14-1838) Available at: http://dl.acm.org/citation.cfm?id=2608850.2608932&coll=DL&dl=GUIDE&CFID=376780186&CFTOKEN=34932578 This paper expounds on the emergence of embedded Trusted Platform Modules in devices like PCs and smartphones. A trust-enhanced security model for cloud services is proposed, which aims to detect and prevent attacks, using trusted attestation methods. For this model, a multi-tenant virtualized system is considered, for which the proposed model will allow cloud service providers to certify certain tenant security properties. If a deviation from normal behavior for the tenant virtual machines occurs, such that it does not correspond with the certified properties, the model may dynamically isolate the suspicious cause. Keywords: Cloud, Malware, Rootkits, TPM attestation, Trusted computing, Virtual machine monitors, Zero day attacks
  • Gabor Pek, Andrea Lanzi, Abhinav Srivastava, Davide Balzarotti, Aurelien Francillon, Christoph Neumann, "On the Feasibility Of Software Attacks On Commodity Virtual Machine Monitors Via Direct Device Assignment," ASIA CCS '14 Proceedings of the 9th ACM Symposium On Information, Computer And Communications Security, June 2014, (Pages 305-316). (ID#:14-1839) Available at: http://dl.acm.org/citation.cfm?id=2590296.2590299&coll=DL&dl=GUIDE&CFID=376780186&CFTOKEN=34932578 The security of virtual machine monitors (VMMs) is a challenging and active field of research. In particular, due to the increasing significance of hardware virtualization in cloud solutions, it is important to clearly understand existing and arising VMM-related threats. Unfortunately, there is still a lot of confusion around this topic as many attacks presented in the past have never been implemented in practice or tested in a realistic scenario. In this paper, we shed light on VM-related threats and defenses by implementing, testing, and categorizing a wide range of known and unknown attacks based on directly assigned devices. We executed these attacks on an exhaustive set of VMM configurations to determine their potential impact. Our experiments suggest that most of the previously known attacks are ineffective in current VMM setups. We also developed an automatic tool, called PTFuzz, to discover hardware-level problems that affect current VMMs. By using PTFuzz, we found several cases of unexpected hardware behavior, and a major vulnerability on Intel platforms that potentially impacts a large set of machines used in the wild. These vulnerabilities affect unprivileged virtual machines that use a directly assigned device (e.g., network card) and have all the existing hardware protection mechanisms enabled. Such vulnerabilities either allow an attacker to generate a host-side interrupt or hardware faults, violating expected isolation properties.
These can cause host software (e.g., VMM) halt as well as they might open the door for practical VMM exploitations. We believe that our study can help cloud providers and researchers to better understand the limitations of their current architectures to provide secure hardware virtualization and prepare for future attacks. Keywords: DMA attack, I/O virtualization, MMIO, PIO, interrupt attack, passthrough, virtual machine monitor
  • Fangzhou Yao, Read Sprabery, Roy H. Campbell, "CryptVMI: a Flexible And Encrypted Virtual Machine Introspection System In The Cloud," SCC '14 Proceedings of the 2nd international workshop on Security in cloud computing, June 2014, (Pages 11-18). (ID#:14-1840) Available at: http://dl.acm.org/citation.cfm?id=2600075.2600078&coll=DL&dl=GUIDE&CFID=376780186&CFTOKEN=34932578 Virtualization has demonstrated its importance in both public and private cloud computing solutions. In such environments, multiple virtual instances run on the same physical machine concurrently. Thus, the isolation in the system is not guaranteed by the physical infrastructure anymore. Reliance on logical isolation makes a system vulnerable to attacks. Thus, Virtual Machine Introspection techniques become essential, since they simplify the process to acquire evidence for further analysis in this complex system. However, Virtual Machine Introspection tools for the cloud are usually written specifically for a single system and do not provide a standard interface to work with other security monitoring systems. Moreover, this technique breaks down the borders of the segregation between multiple tenants, which should be avoided in a public cloud computing environment. In this paper, we focus on building a flexible and encrypted Virtual Machine Introspection system, CryptVMI, to address the above concerns. Our approach maintains a client application on the user end to send queries to the cloud, as well as parse the results returned in a standard form. We also have a handler that cooperates with an introspection application in the cloud infrastructure to process queries and return encrypted results. This work shows our design and implementation of this system, and the benchmark results prove that it does not incur much performance overhead. Keywords: cloud computing, confidentiality, virtual machine introspection, virtualization
  • Junghwan Rhee; Riley, R.; Zhiqiang Lin; Xuxian Jiang; Dongyan Xu, "Data-Centric OS Kernel Malware Characterization," Information Forensics and Security, IEEE Transactions on , vol.9, no.1, pp.72,87, Jan. 2014. (ID#:14-1842) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6671356&isnumber=6684617 Traditional malware detection and analysis approaches have been focusing on code-centric aspects of malicious programs, such as detection of the injection of malicious code or matching malicious code sequences. However, modern malware has been employing advanced strategies, such as reusing legitimate code or obfuscating malware code to circumvent the detection. As a new perspective to complement code-centric approaches, we propose a data-centric OS kernel malware characterization architecture that detects and characterizes malware attacks based on the properties of data objects manipulated during the attacks. This framework consists of two system components with novel features: First, a runtime kernel object mapping system which has an un-tampered view of kernel data objects resistant to manipulation by malware. This view is effective at detecting a class of malware that hides dynamic data objects. Second, this framework consists of a new kernel malware detection approach that generates malware signatures based on the data access patterns specific to malware attacks. This approach has an extended coverage that detects not only the malware with the signatures, but also the malware variants that share the attack patterns by modeling the low level data access behaviors as signatures. Our experiments against a variety of real-world kernel rootkits demonstrate the effectiveness of data-centric malware signatures. 
Keywords: data encapsulation; digital signatures; invasive software; operating system kernels; attack patterns; code-centric approach; data access patterns; data object manipulation; data-centric OS kernel malware characterization architecture; dynamic data object hiding; low level data access behavior modeling; malware attack characterization; malware signatures; real-world kernel rootkits; runtime kernel object mapping system; Data structures; Dynamic scheduling; Kernel; Malware; Monitoring; Resource management; Runtime; OS kernel malware characterization; data-centric malware analysis; virtual machine monitor
  • Nikolai, J.; Yong Wang, "Hypervisor-based Cloud Intrusion Detection System," Computing, Networking and Communications (ICNC), 2014 International Conference on , vol., no., pp.989,993, 3-6 Feb. 2014. (ID#:14-1843) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785472&isnumber=6785290 Shared resources are an essential part of cloud computing. Virtualization and multi-tenancy provide a number of advantages for increasing resource utilization and for providing on demand elasticity. However, these cloud features also raise many security concerns related to cloud computing resources. In this paper, we propose an architecture and approach for leveraging the virtualization technology at the core of cloud computing to perform intrusion detection security using hypervisor performance metrics. Through the use of virtual machine performance metrics gathered from hypervisors, such as packets transmitted/received, block device read/write requests, and CPU utilization, we demonstrate and verify that suspicious activities can be profiled without detailed knowledge of the operating system running within the virtual machines. The proposed hypervisor-based cloud intrusion detection system does not require additional software installed in virtual machines and has many advantages compared to host-based and network based intrusion detection systems which can complement these traditional approaches to intrusion detection. 
Keywords: cloud computing; computer network security; software architecture; software metrics; virtual machines; virtualization; CPU utilization; block device read requests; block device write requests; cloud computing resources; cloud features; hypervisor performance metrics; hypervisor-based cloud intrusion detection system; intrusion detection security; multitenancy; operating system; packet transmission; received packets; shared resource utilization; virtual machine performance metrics; virtualization; virtualization technology; Cloud computing; Computer crime; Intrusion detection; Measurement; Virtual machine monitors; Virtual machining; Cloud Computing; hypervisor; intrusion detection
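The core idea in the entry above, profiling VM behavior purely from hypervisor-visible metrics with no agent inside the guest, can be illustrated with a minimal sketch. The metric names, sample values, and z-score threshold below are invented for illustration and are not the authors' actual implementation:

```python
# Sketch of profiling a VM from hypervisor-visible metrics only (no agent
# inside the guest). Metric names, samples, and the z-score threshold are
# invented for illustration.

from statistics import mean, stdev

def flag_suspicious(history, current, z_threshold=3.0):
    """Flag each metric whose current value deviates strongly from history."""
    flags = {}
    for metric, samples in history.items():
        mu, sigma = mean(samples), stdev(samples)
        if sigma == 0:
            flags[metric] = current[metric] != mu
        else:
            flags[metric] = abs(current[metric] - mu) / sigma > z_threshold
    return flags

# Hypothetical baseline gathered from the hypervisor for one VM:
history = {
    "packets_tx": [1000, 1100, 950, 1050, 1020],
    "cpu_util":   [0.20, 0.25, 0.22, 0.24, 0.21],
}
current = {"packets_tx": 45000, "cpu_util": 0.23}  # e.g. a traffic flood
flags = flag_suspicious(history, current)
```

A real deployment would sample such counters periodically per VM and use a more robust model than a plain z-score, but the principle of detecting anomalies without looking inside the guest is the same.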
  • Thethi, N.; Keane, A., "Digital Forensics Investigations in the Cloud," Advance Computing Conference (IACC), 2014 IEEE International , vol., no., pp.1475,1480, 21-22 Feb. 2014. (ID#:14-1844) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779543&isnumber=6779283 The essentially infinite storage space offered by Cloud Computing is quickly becoming a problem for forensics investigators with regard to evidence acquisition, forensic imaging, and extended time for data analysis. It is apparent that the amount of stored data will at some point become impossible to practically image for the forensic investigators to complete a full investigation. In this paper, we address these issues by determining the relationship between acquisition times on the different storage capacities, using remote acquisition to obtain data from virtual machines in the cloud. A hypothetical case study is used to investigate the importance of using a partial and full approach for acquisition of data from the cloud and to determine how each approach affects the duration and accuracy of the forensics investigation and outcome. Our results indicate that the relation between the time taken for image acquisition and different storage volumes is not linear, owing to several factors affecting remote acquisition, especially over the Internet. Performing the acquisition using cloud resources showed a considerable reduction in time when compared to the conventional imaging method. For a 30GB storage volume, the least time was recorded for the snapshot functionality of the cloud and dd command. The time using this method is reduced by almost 77 percent. FTK Remote Agent proved to be most efficient showing an almost 12 percent reduction in time over other methods of acquisition. 
Furthermore, the timelines produced with the help of the case study showed that the hybrid approach should be preferred to the complete approach for performing acquisition from the cloud, especially in time-critical scenarios. Keywords: cloud computing; data analysis; digital forensics; operating systems (computers);virtual machines; FTK remote agent; cloud computing; data analysis; digital forensics investigations; evidence acquisition; extended time; forensic imaging; image acquisition; remote acquisition; storage capacities; virtual machines; Cloud computing; Conferences; Digital forensics; Imaging; Virtual machining; Cloud evidence acquisition; Cloud forensics
  • Sheng-Wei Lee; Fang Yu, "Securing KVM-Based Cloud Systems via Virtualization Introspection," System Sciences (HICSS), 2014 47th Hawaii International Conference on , vol., no., pp.5028,5037, 6-9 Jan. 2014. (ID#:14-1845) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759220&isnumber=6758592 Linux Kernel Virtual Machine (KVM) is one of the most commonly deployed hypervisor drivers in the IaaS layer of cloud computing ecosystems. The hypervisor provides a full-virtualization environment that intends to virtualize as much hardware and systems as possible, including CPUs, network interfaces and chipsets. With KVM, heterogeneous operating systems can be installed in Virtual Machines (VMs) in a homogeneous environment. However, it has been shown that various breaches due to software defects may cause damage to such a cloud ecosystem. We propose a new Virtualization Introspection System (VIS) to protect the host as well as VMs running on a KVM-based cloud structure from malicious attacks. VIS detects and intercepts attacks from VMs by collecting their static and dynamic status. We then replay the attacks on VMs and leverage artificial intelligence techniques to derive effective decision rules via unsupervised learning. The preliminary result shows the promise of the presented approach against several modern attacks on CVE-based vulnerabilities. 
Keywords: Linux; cloud computing; computer network security; device drivers; operating system kernels; unsupervised learning; virtual machines; virtualization; CVE-based vulnerabilities; IaaS layer; KVM-based cloud structure; KVM-based cloud system security; Linux kernel virtual machine; artificial intelligence techniques; cloud computing ecosystems; cloud ecosystem; decision rules; dynamic status; full-virtualization environment; heterogeneous operating systems; homogeneous environment; hypervisor drivers; malicious attacks; software defects; static status; unsupervised learning; virtualization introspection system; Analytical models; Computer hacking; Monitoring; Software; Virtual machine monitors; Virtualization; GHSOM; cloud systems; monitor; security; virtualization
  • Datta, E.; Goyal, N., "Security Attack Mitigation Framework For The Cloud," Reliability and Maintainability Symposium (RAMS), 2014 Annual , vol., no., pp.1,6, 27-30 Jan. 2014. (ID#:14-1846) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798457&isnumber=6798433 Cloud computing brings in a lot of advantages for enterprise IT infrastructure; virtualization technology, which is the backbone of cloud, provides easy consolidation of resources, reduction of cost, space and management efforts. However, security of critical and private data is a major concern which still keeps back a lot of customers from switching over from their traditional in-house IT infrastructure to a cloud service. Existence of techniques to physically locate a virtual machine in the cloud, proliferation of software vulnerability exploits, and cross-channel attacks in-between virtual machines all together increase the risk of business data leaks and privacy losses. This work proposes a framework to mitigate such risks and engineer customer trust towards enterprise cloud computing. Every day new vulnerabilities are being discovered even in well-engineered software products, and hacking techniques are getting sophisticated over time. In this scenario, an absolute guarantee of security in an enterprise-wide information processing system seems a remote possibility; software systems in the cloud are vulnerable to security attacks. A practical solution for the security problems lies in a well-engineered attack mitigation plan. On the positive side, cloud computing has a collective infrastructure which can be effectively used to mitigate the attacks if an appropriate defense framework is in place. We propose such an attack mitigation framework for the cloud. Software vulnerabilities in the cloud have different severities and different impacts on the security parameters (confidentiality, integrity, and availability). 
By using a Markov model, we continuously monitor and quantify the risk of compromise in different security parameters (e.g., change in the potential to compromise data confidentiality). Whenever there is a significant change in risk, our framework would facilitate the tenants in calculating the Mean Time to Security Failure (MTTSF) of the cloud and allow them to adopt a dynamic mitigation plan. This framework is an add-on security layer in the cloud resource manager and it could improve the customer trust on enterprise cloud solutions. Keywords: Markov processes; cloud computing; security of data; virtualization; MTTSF cloud; Markov model; attack mitigation plan; availability parameter; business data leaks; cloud resource manager; cloud service; confidentiality parameter; cross-channel attacks; customer trust; enterprise IT infrastructure; enterprise cloud computing; enterprise cloud solutions; enterprise wide information processing system; hacking techniques; information technology; integrity parameter; mean time to security failure; privacy losses; private data security; resource consolidation; security attack mitigation framework; security guarantee; software products; software vulnerabilities; software vulnerability exploits; virtual machine; virtualization technology; Cloud computing; Companies; Security; Silicon; Virtual machining; Attack Graphs; Cloud computing; Markov Chain; Security; Security Administration
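To make the MTTSF notion in the entry above concrete: for a discrete-time absorbing Markov chain whose transient-state transition matrix is Q, the expected number of steps to absorption t solves (I - Q)t = 1. The states and transition probabilities below are invented for illustration; the paper's actual model and parameters may differ:

```python
# Sketch: expected time to absorption ("security failure") in a small
# discrete-time Markov chain. Transient states and probabilities are
# invented for illustration. Solves (I - Q) t = 1 by Gaussian elimination.

def mttsf(Q):
    """Expected number of steps to absorption from each transient state."""
    n = len(Q)
    # Augmented matrix [I - Q | 1]
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
         for i in range(n)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

# Transient states: 0 = secure, 1 = degraded; the remaining probability
# mass in each row leads to the absorbing "failed" state.
Q = [[0.95, 0.04],
     [0.00, 0.90]]
steps = mttsf(Q)  # roughly [28.0, 10.0] steps from each state
```

Under these invented numbers, the expected time to security failure from the secure state is about 28 steps; mitigation corresponds to shrinking the probabilities that lead out of the secure state.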
  • Wang, L.; Kalbarczyk, Z.; Iyer, R.; Iyengar, A., "VM-mCheckpoint: Design, Modeling, and Assessment of Lightweight In-Memory VM Checkpointing," Dependable and Secure Computing, IEEE Transactions on, vol. PP, no.99, pp.1,1, June 2014. (ID#:14-1847) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6824750&isnumber=4358699 Checkpointing and rollback techniques enhance reliability and availability of virtual machines and their hosted IT services. This paper proposes VM-mCheckpoint, a lightweight pure-software mechanism for high-frequency checkpointing and rapid recovery for VMs. Compared with existing techniques of VM checkpointing, VM-mCheckpoint tries to minimize checkpoint overhead and speed up recovery by means of copy-on-write, dirty-page prediction and in-place recovery, as well as saving incremental checkpoints in volatile memory. Moreover, VM-mCheckpoint deals with the issue that latency in error detection potentially results in corrupted checkpoints, particularly when checkpointing frequency is high. We also constructed Markov models to study the availability improvements provided by VM-mCheckpoint (from 99% to 99.98% on reasonably reliable hypervisors). We designed and implemented VM-mCheckpoint in the Xen VMM. The evaluation results demonstrate that VM-mCheckpoint incurs an average of 6.3% overhead (in terms of program execution time) for 50ms checkpoint intervals when executing the SPEC CINT 2006 benchmark. Error injection experiments demonstrate that VM-mCheckpoint, combined with error detection techniques in RMK, provides high coverage of recovery. Keywords: Availability; Checkpointing; Computer crashes; Pins; Transient analysis; Virtual machine monitors
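The dirty-page bookkeeping behind incremental, in-memory checkpointing as described above can be modeled in a few lines. This is a toy sketch under stated assumptions: page contents are plain integers and writes are tracked by explicit calls, whereas a real VMM would rely on hardware dirty bits and copy-on-write:

```python
# Toy model of incremental in-memory checkpointing with in-place recovery.
# Page contents are plain integers and writes are tracked explicitly;
# a real VMM would use hardware dirty bits and copy-on-write instead.

class CheckpointedMemory:
    def __init__(self, num_pages):
        self.pages = {i: 0 for i in range(num_pages)}
        self.dirty = set()                        # written since last checkpoint
        self.checkpoint_pages = dict(self.pages)  # full initial checkpoint

    def write(self, page, value):
        self.pages[page] = value
        self.dirty.add(page)

    def checkpoint(self):
        # Incremental: save only the pages dirtied since the last checkpoint.
        for p in self.dirty:
            self.checkpoint_pages[p] = self.pages[p]
        self.dirty.clear()

    def rollback(self):
        # In-place recovery: restore only the pages dirtied since the checkpoint.
        for p in self.dirty:
            self.pages[p] = self.checkpoint_pages[p]
        self.dirty.clear()

mem = CheckpointedMemory(4)
mem.write(0, 42)
mem.checkpoint()   # page 0 captured; dirty set cleared
mem.write(0, 99)   # an error is detected before the next checkpoint...
mem.rollback()     # ...so page 0 is restored in place
```

Because only dirty pages are saved or restored, both checkpoint and recovery cost scale with the write set rather than with total memory, which is what makes high-frequency checkpointing affordable.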
  • Bakshi, Kapil, "Secure Hybrid Cloud Computing: Approaches And Use Cases," Aerospace Conference, 2014 IEEE , vol., no., pp.1,8, 1-8 March 2014. (ID#:14-1848) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6836198&isnumber=6836156 Hybrid cloud is defined as a cloud infrastructure composed of two or more cloud infrastructures (private, public, and community clouds) that remain unique entities, but are bound together via technologies and approaches for the purposes of application and data portability. This paper will review a novel approach for implementing a secure hybrid cloud. Keywords: Cloud computing; Computer architecture; Switches; Virtual machine monitors; Virtual machining
  • Guenane, Fouad; Boujezza, Hajer; Nogueira, Michele; Pujolle, Guy, "An Architecture To Manage Performance And Reliability On Hybrid Cloud-Based Firewalling," Network Operations and Management Symposium (NOMS), 2014 IEEE , vol., no., pp.1,5, 5-9 May 2014. (ID#:14-1849) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838334&isnumber=6838210 Firewalls are the first line of defense for networking services and applications. With the advent of virtualization and Cloud Computing and the explosive growth of network-based services, investigations have emphasized the limitations of conventional firewalls. However, despite being significant for improving security, cloud-based firewalling approaches still experience severe performance and reliability issues that can lead companies not to use these services. Hence, our work presents an efficient architecture to manage performance and reliability on a hybrid cloud-based firewalling service. Being composed of a physical and a virtual part, the architecture follows an approach that supports and complements basic physical firewall functionalities with virtual ones. The architecture was deployed, and experimental results show that the proposed approach improves the computational power of traditional firewalls with the support of a cloud-based firewalling service. Keywords: Authentication; Cloud computing; Computer architecture; Firewalls (computing); Monitoring; Virtual machining; Firewall; Network security; Secaas; Security as a Service
  • Himmel, M.A.; Grossman, F., "Security on Distributed Systems: Cloud Security Versus Traditional IT," IBM Journal of Research and Development , vol.58, no.1, pp.3:1,3:13, Jan.-Feb. 2014. (ID#:14-1850) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6717051&isnumber=6717043 Cloud computing is a popular subject across the IT (information technology) industry, but many risks associated with this relatively new delivery model are not yet fully understood. In this paper, we use a qualitative approach to gain insight into the vectors that contribute to cloud computing risks in the areas of security, business, and compliance. The focus is on the identification of risk vectors affecting cloud computing services and the creation of a framework that can help IT managers in their cloud adoption process and risk mitigation strategy. Economic pressures on businesses are creating a demand for an alternative delivery model that can provide flexible payments, dramatic cuts in capital investment, and reductions in operational cost. Cloud computing is positioned to take advantage of these economic pressures with low-cost IT services and a flexible payment model, but with certain security and privacy risks. The frameworks offered by this paper may assist IT professionals in obtaining a clearer understanding of the risk tradeoffs associated with cloud computing environments. Keywords: Automation; Cloud computing; Computer security; Information technology; Risk management; Virtual machine monitors
  • Mapp, Glenford; Aiash, Mahdi; Ondiege, Brian; Clarke, Malcolm, "Exploring a New Security Framework for Cloud Storage Using Capabilities," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on , vol., no., pp.484,489, 7-11 April 2014. (ID#:14-1851) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830953&isnumber=6825948 We are seeing the deployment of new types of networks such as sensor networks for environmental and infrastructural monitoring, social networks such as Facebook, and e-Health networks for patient monitoring. These networks are producing large amounts of data that need to be stored, processed and analysed. Cloud technology is being used to meet these challenges. However, a key issue is how to provide security for data stored in the Cloud. This paper addresses this issue in two ways. It first proposes a new security framework for Cloud security which deals with all the major system entities. Secondly, it introduces a Capability ID system based on modified IPv6 addressing which can be used to implement a security framework for Cloud storage. The paper then shows how these techniques are being used to build an e-Health system for patient monitoring. Keywords: Cloud computing; Companies; Monitoring; Protocols; Security; Servers; Virtual machine monitors; Capability Systems; Cloud Storage; Security Framework; e-Health Monitoring
  • Lin, Ying-Dar; Lee, Chia-Yin; Wu, Yu-Sung; Ho, Pei-Hsiu; Wang, Fu-Yu; Tsai, Yi-Lang, "Active versus Passive Malware Collection," Computer , vol.47, no.4, pp.59,65, Apr. 2014. (ID#:14-1852) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6544525&isnumber=6798540 An exploration of active and passive malware honeypots reveals that the two systems yield vastly different malware collections and that peer-to-peer file sharing is an important, but often overlooked, malware source. Keywords: Databases; Malware; Peer-to-peer computing; Telecommunication traffic; Trojan horses; Virtual machining; honeypots; malware collection and detection; network security; network vulnerability
  • Elwell, Jesse; Riley, Ryan; Abu-Ghazaleh, Nael; Ponomarev, Dmitry, "A Non-Inclusive Memory Permissions Architecture For Protection Against Cross-Layer Attacks," High Performance Computer Architecture (HPCA), 2014 IEEE 20th International Symposium on , vol., no., pp.201,212, 15-19 Feb. 2014. (ID#:14-1853) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6835931&isnumber=6835920 Protecting modern computer systems and complex software stacks against the growing range of possible attacks is becoming increasingly difficult. The architecture of modern commodity systems allows attackers to subvert privileged system software often using a single exploit. Once the system is compromised, inclusive permissions used by current architectures and operating systems easily allow a compromised high-privileged software layer to perform arbitrary malicious activities, even on behalf of other software layers. This paper presents a hardware-supported page permission scheme for the physical pages that is based on the concept of non-inclusive sets of memory permissions for different layers of system software such as hypervisors, operating systems, and user-level applications. Instead of viewing privilege levels as an ordered hierarchy with each successive level being more privileged, we view them as distinct levels each with its own set of permissions. Such a permission mechanism, implemented as part of a processor architecture, provides a common framework for defending against a range of recent attacks. We demonstrate that such a protection can be achieved with negligible performance overhead, low hardware complexity and minimal changes to the commodity OS and hypervisor code. Keywords: Hardware; Memory management; Permission; System software; Virtual machine monitors
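The non-inclusive permission idea in the entry above can be sketched as a lookup table in which each layer holds its own, independent permission set per physical page, with no hierarchy implied between layers. The layer names, pages, and permission assignments below are hypothetical, not the paper's hardware design:

```python
# Hypothetical sketch of non-inclusive memory permissions: each software
# layer (application, OS, hypervisor) holds its own permission set per
# physical page. Unlike hierarchical schemes, a more privileged layer
# does not automatically inherit access to a less privileged layer's pages.

R, W, X = 1, 2, 4  # read / write / execute permission bits

# Invented example pages and per-layer permissions:
page_perms = {
    "app_heap":  {"app": R | W, "os": 0,     "hyp": 0},
    "os_kernel": {"app": 0,     "os": R | X, "hyp": 0},
    "hyp_code":  {"app": 0,     "os": 0,     "hyp": R | X},
}

def check(layer, page, wanted):
    """True if the layer's own permission set grants every wanted bit."""
    return (page_perms[page][layer] & wanted) == wanted

# Even a compromised hypervisor layer cannot read application data:
assert check("app", "app_heap", R | W)
assert not check("hyp", "app_heap", R)
```

The point of the non-inclusive design is visible in the last assertion: compromising a "higher" layer no longer grants arbitrary access to every other layer's memory.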
  • Weng, C.; Zhan, J.; Luo, Y., "TSAC: Enforcing Isolation of Virtual Machines in Clouds," Computers, IEEE Transactions on, vol. PP, no.99, pp.1,1, May 2014. (ID#:14-1854) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6812169&isnumber=4358213 Virtualization plays a vital role in building the infrastructure of Clouds, and isolation is considered as one of its important features. However, we demonstrate with practical measurements that there exist two kinds of isolation problems in current virtualized systems, due to cache interference in a multi-core processor. That is, one virtual machine could degrade the performance or obtain the load information of another virtual machine running on the same physical machine. Then we present a time-sensitive contention management approach (TSAC) for allocating resources dynamically in the virtual machine monitor, in which virtual machines are controlled to share some physical resources (e.g., CPU or page color) in a dynamic manner, in order to enforce isolation between the virtual machines without sacrificing performance of the virtualized system. We have implemented a working prototype based on Xen and evaluated it with experiments, and experimental results show that TSAC could significantly improve isolation of virtualization. Specifically, compared to the default Xen, TSAC could improve the performance of the victim virtual machine by up to about 78%, and performs well in blocking its cache-based load information leakage. Keywords: Access control; Central Processing Unit; Operating systems; Resource management; Virtual machine monitors; Virtual machining; Virtualization
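Page coloring, one of the physical resources the entry above mentions, can be sketched briefly: physical frames whose low-order frame-number bits (the "color") differ map to different cache sets, so giving each VM a disjoint color set partitions the shared cache. The parameters and color assignments below are illustrative assumptions, not TSAC's actual policy:

```python
# Illustrative page-coloring allocator. The color of a physical frame is
# taken from its low-order frame-number bits, which (for a physically
# indexed cache) select which cache sets the frame maps to. Assigning
# each VM a disjoint set of colors partitions the cache. COLOR_BITS and
# the color assignments below are illustrative assumptions.

COLOR_BITS = 4  # 16 colors; in practice derived from cache and page size

def frame_color(frame_number):
    return frame_number & ((1 << COLOR_BITS) - 1)

def allocate(free_frames, vm_colors):
    """Give a VM only frames whose color is in its assigned color set."""
    return [f for f in free_frames if frame_color(f) in vm_colors]

free_frames = list(range(64))
vm_a = allocate(free_frames, {0, 1, 2, 3})  # colors 0-3
vm_b = allocate(free_frames, {4, 5, 6, 7})  # colors 4-7

# The two VMs' frames map to disjoint cache partitions:
assert not set(vm_a) & set(vm_b)
```

Because the two VMs never share a cache set, one VM can neither evict the other's cached data (performance interference) nor observe its access pattern (the cache-based information leakage the paper measures).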
  • Aiash, Mahdi; Mapp, Glenford; Gemikonakli, Orhan, "Secure Live Virtual Machines Migration: Issues and Solutions," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on , vol., no., pp.160,165, 13-16 May 2014. (ID#:14-1855) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844631&isnumber=6844560 In recent years, there has been a huge trend towards running network-intensive applications, such as Internet servers and Cloud-based services, in virtual environments, where multiple virtual machines (VMs) running on the same machine share the machine's physical and network resources. In such environments, the virtual machine monitor (VMM) virtualizes the machine's resources in terms of CPU, memory, storage, network and I/O devices to allow multiple operating systems running in different VMs to operate and access the network concurrently. A key feature of virtualization is live migration (LM), which allows transfer of a virtual machine from one physical server to another without interrupting the services running in the virtual machine. Live migration facilitates workload balancing, fault tolerance, online system maintenance, consolidation of virtual machines, etc. However, live migration is still in an early stage of implementation and its security is yet to be evaluated. The security concern of live migration is a major factor for its adoption by the IT industry. Therefore, this paper uses the X.805 security standard to investigate attacks on live virtual machine migration. The analysis highlights the main sources of threats and suggests approaches to tackle them. The paper also surveys and compares different proposals in the literature to secure the live migration. Keywords: (not provided)



Spotlight on Current Lablet Activities

Spotlight on Current Lablet Activities


Critical cyber systems must inspire trust and confidence, protect the privacy and integrity of data resources, and perform reliably. To tackle the ongoing challenges of securing tomorrow's systems, we must develop the scientific underpinnings of security to understand what is possible in that domain, as well as build a collaborative community of researchers from government, industry, and academia.

As part of that effort NSA began funding academic "Lablets" focused on the development of a Science of Security (SoS) and a broad, self-sustaining community effort to advance it. A major goal is the creation of a unified body of knowledge that can serve as the basis of a trust engineering discipline, curriculum, and rigorous design methodologies. The results of SoS Lablet research will be extensively documented and widely distributed through the SoS Virtual Organization. The intention is for the SoS VO to be our primary resource for describing Lablet research, and for creating a broad community effort to advance security science.

Currently Funded Research Lablets:

  • Carnegie Mellon University
  • North Carolina State University
  • University of Illinois at Urbana-Champaign
  • University of Maryland

The following sections provide some brief summaries of Lablet activities during the April through June 2014 time period.

(ID#:14-2282)




CMU Lablet Recent Activities

CMU Lablet Activities


The following is a brief summary of recent activities by the CMU Lablet as reported in the SoS Quarterly Summary Report.

Fundamental Research

  • Carnegie Mellon University has conducted several notable studies involving novel analysis techniques for early problem detection, as well as dynamic security assurance. These include a technique based on patterns and predetermined requirements intended to discover possible security flaws in the early design stages. CMU has also developed a stochastic algorithm to help with reasoning in large planning problems, a program logic that navigates attacker-implemented code, a technique to enforce security constraints at runtime, and a dynamic analysis technique for detecting data races at runtime.

Community Interaction

  • In terms of community engagement, CMU hosted the CASOS Summer Institute, which introduces and puts into practice concepts of network analytics. Following the Institute, CMU invited guests from four universities, subcontractors, and government organizations to the first Lablet Community Quarterly Meeting, which included workshop sessions and discussions centered on advancing the scientific process of cybersecurity research. The 10th Symposium on Usable Privacy and Security was also held, chaired by a notable CMU faculty member.

Educational

  • Carnegie Mellon is making advances in both undergraduate and graduate education. The Institute for Software Research (ISR) at Carnegie Mellon now offers a Master's degree in Privacy Engineering, while at the undergraduate level, software engineering courses are being revamped, with changes to topics such as security, data analysis, and developer studies. PhD students' shift toward more experimental and data-focused approaches in their work has prompted the university to recognize and reflect that shift in the core graduate curriculum.

For more information about CMU Lablet activities go to Carnegie Mellon University




NC State Lablet Recent Activities

NC State Lablet Activities


The following is a brief summary of recent activities by the North Carolina State University Lablet as reported in the SoS Quarterly Summary Report.

Fundamental Research

  • NCSU has presented several significant research efforts, which feature developments in understanding Resilient Architectures, mental models of computer users of varying skill levels, prevention of phishing attacks through a Google Chrome extension, and human error in open-source software. Recommendations were made for enforcing policies on network traffic in large networks, while a particular study on smart isolation strove to understand the principles and limitations of isolation and existing isolation techniques.

Community Interaction

  • NCSU facilitated various workshops, including a kick-off workshop for the International Research Network for the Science of Security during HotSoS, as well as a summer workshop for PIs and NCSU students. Guidelines for the design of defensible SoS research projects are currently under development, as are guidelines for reporting SoS research results.

Educational

  • NTR

For more information about NCSU Lablet activities go to North Carolina State University




UIUC Lablet Recent Activities

UIUC Lablet Activities


The following is a brief summary of recent activities by the UIUC Lablet as reported in the SoS Quarterly Summary Report.

Fundamental Research

  • UIUC has presented several current research initiatives toward the science of security. In one such project, researchers continued evaluating the usefulness of DASH models in choosing scenarios that would effectively model multi-agent settings, initial hypotheses, and validation of resulting models. On the mobile side, UIUC is developing a tool aimed at helping users distinguish malicious mobile apps by extracting contextual information, thereby allowing the user to make informed decisions. In a network model design project, researchers aim to model network behavior under timing uncertainty; these new models and algorithms are intended to test hypotheses related to issues such as reachability, end-to-end delay, and throughput. A new application of Factor Graphs will be used to help represent real-world security risks and to develop preemptive attack detection methods. UIUC also developed a technique for deciding bounded-time safety properties of deterministic nonlinear hybrid models, which can capture a wide range of cyber-physical systems. Finally, a research team is focused on developing quantitative decision-making tools, with a view to guiding information security investments for public and private industry by incorporating human and technological concerns. Models which incorporate human behavior are particularly valuable in understanding why and how humans attempt to bypass security measures.

Community Interaction

  • UIUC has made notable contributions to the SoS community, including a paper exploring usability challenges within health IT, which was presented at ACySE (the International Workshop on Agents and CyberSecurity) and named among the most significant papers of the year. This joins the ranks of similarly recognized works by UIUC researchers, including a paper on securing industrial control systems, which appears in the proceedings of the 2014 ACM SIGSIM Conference on Principles of Advanced Discrete Simulation.

Educational

  • UIUC hosted a kickoff meeting, during which relevant analysis techniques were discussed and explored. In UIUC's educational pursuits, two summer internships were awarded. These two students were given the opportunity to work on developing tools and methodologies for providing connectivity properties of networks across multiple layers of the network stack.

For more information about UIUC Lablet activities go to University of Illinois at Urbana-Champaign




UMD Lablet Recent Activities

UMD Lablet Activities


The following is a brief summary of recent activities by the University of MD Lablet as reported in the SoS Quarterly Summary Report.

Fundamental Research

  • UMD has presented studies centered on understanding human factors, behavior, and influence in security. A protocol for remote electronic voting, with the human voter serving as a main participant of the protocol, has been explored. The fundamental notion of trust has been explored to help develop models which aid in understanding the costs and benefits of collaboration as a variation of trust; this particular study directly addresses challenges of policy-governed secure collaboration. Researchers at UMD have taken culture and workplace dynamics into consideration, in a study which attempts to discover what encourages or discourages privacy and security. An empirical study highlighting graphical passwords, with a view to understanding user perceptions of security in visual systems, has been conducted to improve system designs that take human perception of security into consideration. On the offensive side, researchers studied honeypots deployed at UMD to better understand the effects of different system-level aspects on intruder behavior. The disparity in security patch deployment was also addressed, in a study which aims to inform the development of quantifiable metrics for assessing the security of systems.

Community Interaction

  • UMD hosted a "kick-off" presentation featuring members from each task in the UMD Lablet, encouraging discussion and feedback on their various projects. Inter-lablet communication and cooperation are underway to help characterize and explain the five hard problems currently being studied. The 2015 ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2015), to be held in Mumbai, India, will feature UMD's proposed tutorial on software contracts.

Educational

  • On the educational front, several Lablet members will be teaching fall computer security courses on topics that include the integration of empirical and behavioral studies.

For more information about UMD Lablet activities go to University of Maryland




Upcoming Events of Interest (2014 - Issue 3)

Upcoming Events of Interest (2014-03)


Mark your calendars! This section features a wide variety of upcoming security-related conferences, workshops, symposiums, competitions, and events happening in the United States and the world. This list also includes several past events with links to proceedings or summaries of the actual activities.

Note: The events may also be found on the SoS Calendar, located by clicking the 'Calendar' tab on the left-hand navigation bar.

  • Maryland Cyber Challenge
    High school, college, and professional teams compete in the Maryland Cyber Challenge. Events include network defense, forensics, capture-the-flag, and more. (ID#:14-2262)
    Event Date: Fri, 08/29/14 - Sat, 08/30/14
    Location: Baltimore Convention Center, Baltimore MD
    URL: http://www.fbcinc.com/e/CyberMDconference/challenge.aspx
  • CSAW CTF Cyber Competition
    CSAW CTF is a competition designed for undergraduate students who are trying to break into cybersecurity. (ID#:14-2263)
    Event Date: Fri, 09/19/14 - Sun, 09/21/14
    Location: NYU School of Engineering, NY
    URL: https://ctf.isis.poly.edu/#
  • ESORICS 2014 - 19th European Symposium on Research in Computer Security
    The Institute of Mathematics and Computer Science at Wroclaw University of Technology provides a forum for professionals in academia and industry to discuss topics including security policy, security in location services, biometrics, ad hoc networks, and applied cryptography, among many others. (ID#:14-2264)
    Event Date: Sun, 09/07/14 - Thurs, 09/11/14
    Location: Wroclaw University of Technology, Wroclaw, Poland
    URL: http://esorics2014.pwr.wroc.pl/
  • TechCrunch Disrupt
    TechCrunch Disrupt is a conference and competition that brings the brightest, most talented technology startups to launch their products and services in front of investors, media, and influential members of the field. (ID#:14-2265)
    Event Date: Mon, 09/08/14 - Wed, 09/10/14
    Location: Pier 48, San Francisco, CA
    URL: http://techcrunch.com/events/disrupt-sf-2014/
  • CSS 2014 : 3rd Int'l Conference on Cryptography and Security Systems
    CSS's primary intent is to present cryptography and network security research and original, unpublished results from current, ongoing studies. CSS encourages theoretical and applied research papers, case studies, and work-in-progress presentations. (ID#:14-2267)
    Event Date: Mon, 09/22/14 - Wed, 09/24/14
    Location: Lublin, Poland
    URL: http://www.css.umcs.lublin.pl/home
  • IT-SA 2014
    A popular conference held in Germany covering a wide range of topics, sponsored by top European information security companies and vendors. Topics to be discussed include cloud security, data protection, mobile security, industrial IT security, e-commerce, and more. (ID#:14-2269)
    Event Date: Tues, 10/07/14 - Thurs, 10/09/14
    Location: Nuremberg, Germany
    URL: http://www.it-sa.de/
  • Microsoft BlueHat v14 - Invitation Only
    An invitation-only, biannual security conference hosted by Microsoft. Its purpose is to educate and inform Microsoft engineers and executives about current and emerging security threats, and it gives security researchers the opportunity to discuss and collaborate with Microsoft engineers. (ID#:14-2270)
    Event Date: Thurs, 10/09/14 - Fri, 10/10/14
    Location: TBA
    URL: http://technet.microsoft.com/en-us/security/dn456542.aspx
  • Black Hat Europe 2014
    A conference that brings together professionals, academics, and enthusiasts interested in information security. Both Briefings and Trainings are offered: Trainings are hosted by various computer security vendors, while Briefings cover the most current, popular topics and include notable keynote speakers. (ID#:14-2271)
    Event Date: Tues, 10/14/14 - Fri, 10/17/14
    Location: Amsterdam Rai, The Netherlands
    URL: https://www.blackhat.com/eu-14/
  • Cyber Security Summit 2014
    This Symantec-sponsored event invites industry, government, and academic interests to collaborate and improve both domestic and international cyber security. Events include keynote speakers, a Q&A, panel discussions on topics such as cyber resiliency, and networking opportunities. (ID#:14-2272)
    Event Date: Tues, 10/21/14 - Wed, 10/22/14
    Location: Minneapolis, MN
    URL: http://www.cybersecuritysummit.org/
  • 21st ACM Conference on Computer and Communications Security (ACMCCS 2014)
    21st ACM Conference on Computer and Communications Security is an environment for intellectual discussion, featuring demos, talks, paper presentations, keynote speakers, and more. (ID#:14-2273)
    Event Date: Mon, 11/03/14 - Fri, 11/07/14
    Location: The Scottsdale Plaza Resort, Scottsdale, Arizona USA
    URL: http://www.sigsac.org/ccs/CCS2014/
  • DeepSec 2014
    This annual European conference is dedicated to informing and networking experts in computer, network, and application security from across the globe. Leading security professionals from academia, government, industry, and the underground community attend. (ID#:14-2274)
    Event Date: Tues, 11/18/14 - Fri, 11/21/14
    Location: Imperial Riding School, Vienna, Austria
    URL: https://deepsec.net/
  • Black Hat Regional Summit Sao Paulo 2014
    The Black Hat Regional Summit is an expo and conference for the information security community. Senior IT and information security professionals benefit from peer discussion of the latest trends and technologies in the field. The Summit invites both regional and global experts to come together for two days of discussion and collaboration. (ID#:14-2275)
    Event Date: Tues, 11/25/14 - Wed, 11/26/14
    Location: Sao Paulo, Brazil
    URL: https://www.blackhat.com/sp-14/
  • BakuTel 2014
    Regarded as the largest telecommunications exhibition in the Caspian basin and the Caucasus, BakuTel brings together international ICT companies, government, and media to discover, discuss, and be informed of the newest ideas, projects, and services for security, communication and networks, analysis, and more. (ID#:14-2276)
    Event Date: Tues, 12/02/14 - Fri, 12/05/14
    Location: Baku, Azerbaijan
    URL: http://www.bakutel.az/2014/?p=index
  • ACSAC 30
    ACSAC includes training, case studies, workshops, panels, and speakers discussing current topics in secure computing, as well as presentations of peer-reviewed research (ACSAC had a 2013 acceptance rate of just 19%). (ID#:14-2277)
    Event Date: Mon, 12/08/14 - Fri, 12/12/14
    Location: Hyatt French Quarter, New Orleans, LA
    URL: https://www.acsac.org/

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.