Science of Security (SoS) Newsletter (2015 - Issue 2)
Each issue of the SoS Newsletter highlights achievements in current research, as conducted by various global members of the Science of Security (SoS) community. All presented materials are openly available, and may link to the original work or web page for the respective program. The SoS Newsletter aims to showcase the wealth of exciting work going on in the security community, and hopes to serve as a portal between colleagues, research projects, and opportunities.
Please feel free to click on any section of the Newsletter below to be taken to the corresponding subsection:
General Topics of Interest
General Topics of Interest reflects today's most popularly discussed challenges and issues in the Cybersecurity space. GToI includes news items related to Cybersecurity, updated information regarding academic SoS research, interdisciplinary SoS research, profiles on leading researchers in the field of SoS, and global research being conducted on related topics.
Publications
The Publications of Interest provides available abstracts and links for suggested academic and industry literature discussing specific topics and research problems in the field of SoS. Please check back regularly for new information, or sign up for the CPSVO-SoS Mailing List.
Table of Contents
Science of Security (SoS) Newsletter (2015 - Issue 2)
(ID#:14-3724)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
General Topics of Interest
This section features topical, current news items of interest to the international security community. These articles and highlights are selected from various popular science and security magazines, newspapers, and online sources.
(ID#:14-3728)
Science of Security (2014 Year in Review)
Many more articles and research studies are appearing with "Science of Security" as a keyword, and in 2014 the number grew substantially. A scan of IEEE Xplore revealed almost 800 articles listing "science of security" as a keyword. The list is misleading, however, as a number of the citations use differing definitions of the term. The work cited here is a year-end compendium of 2014 articles deemed relevant to the Science of Security community by the editors.
Campbell, S., "Open Science, Open Security," High Performance Computing & Simulation (HPCS), 2014 International Conference on, pp. 584-587, 21-25 July 2014. doi: 10.1109/HPCSim.2014.6903739 We propose that, to address the growing problems with complexity and data volumes in HPC security, we need to refactor how we look at data by creating tools that not only select data, but analyze and represent it in a manner well suited to intuitive analysis. We propose a set of rules describing what this means, and provide a number of production-quality tools that represent our current best effort in implementing these ideas.
Keywords: data analysis; parallel processing; security of data; HPC security; data analysis; data representation; data selection; high performance computing; open science; open security; production quality tools; Buildings; Computer architecture; Filtering; Linux; Materials; Production; Security; High Performance Computing; Intrusion Detection; Security (ID#:15-3419)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903739&isnumber=6903651
McDaniel, P.; Rivera, B.; Swami, A., "Toward a Science of Secure Environments," Security & Privacy, IEEE, vol. 12, no. 4, pp. 68-70, July-Aug. 2014. doi: 10.1109/MSP.2014.81 The longstanding debate on a fundamental science of security has led to advances in systems, software, and network security. However, existing efforts have done little to inform how an environment should react to emerging and ongoing threats and compromises. The authors explore the goals and structures of a new science of cyber-decision-making in the Cyber-Security Collaborative Research Alliance, which seeks to develop a fundamental theory for reasoning, under uncertainty, about the best possible action in a given cyber environment. They also explore the needs and limitations of detection mechanisms; agile systems; and the users, adversaries, and defenders that use and exploit them, and conclude by considering how environmental security can be cast as a continuous optimization problem.
Keywords: decision making; optimisation; security of data; agile systems; continuous optimization problem; cyber environment; cyber security collaborative research alliance; cyber-decision-making; detection mechanisms; environmental security; fundamental science; network security; secure environments; software security; Approximation methods; Communities; Computational modeling; Computer security; Decision making; formal security; modeling; science of security; security; systems security (ID#:15-3420)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876248&isnumber=6876237
Srivastava, M., "In Sensors We Trust -- A Realistic Possibility?," Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, p. 1, 26-28 May 2014. doi: 10.1109/DCOSS.2014.65 Sensors of diverse capabilities and modalities, carried by us or deeply embedded in the physical world, have invaded our personal, social, work, and urban spaces. Our relationship with these sensors is a complicated one. On the one hand, these sensors collect rich data that are shared and disseminated, often initiated by us, with a broad array of service providers, interest groups, friends, and family. Embedded in this data is information that can be used to algorithmically construct a virtual biography of our activities, revealing intimate behaviors and lifestyle patterns. On the other hand, we and the services we use increasingly depend, directly and indirectly, on information originating from these sensors for making a variety of decisions, both routine and critical, in our lives. The quality of these decisions and our confidence in them depend directly on the quality of the sensory information and our trust in the sources. Sophisticated adversaries, benefiting from the same technology advances as the sensing systems, can manipulate sensory sources and analyze data in subtle ways to extract sensitive knowledge, cause erroneous inferences, and subvert decisions. The consequences of these compromises will only amplify as our society builds increasingly complex human-cyber-physical systems with greater reliance on sensory information and real-time decision cycles. Drawing upon examples of this two-faceted relationship with sensors in applications such as mobile health and sustainable buildings, this talk will discuss the challenges inherent in designing a sensor information flow and processing architecture that is sensitive to the concerns of both producers and consumers.
For the pervasive sensing infrastructure to be trusted by both, it must be robust to active adversaries who are deceptively extracting private information, manipulating beliefs, and subverting decisions. While completely solving these challenges would require a new science of resilient, secure, and trustworthy networked sensing and decision systems, combining the hitherto separate disciplines of distributed embedded systems, network science, control theory, security, behavioral science, and game theory, this talk will provide some initial ideas. These include an approach to enabling privacy-utility trade-offs that balance the tension between the risk of information sharing to the producer and the value of information sharing to the consumer, and a method to secure systems against physical manipulation of sensed information.
Keywords: information dissemination; sensors; information sharing; processing architecture; secure systems; sensing infrastructure; sensor information flow; Architecture; Buildings; Computer architecture; Data mining; Information management; Security; Sensors (ID#:15-3421)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846138&isnumber=6846129
Uddin, M.P.; Abu Marjan, M.; Binte Sadia, N.; Islam, M.R., "Developing a Cryptographic Algorithm Based On ASCII Conversions And A Cyclic Mathematical Function," Informatics, Electronics & Vision (ICIEV), 2014 International Conference on, pp. 1-5, 23-24 May 2014. doi: 10.1109/ICIEV.2014.6850691 Encryption and decryption of data in an efficient manner is one of the challenging aspects of modern computer science. This paper introduces a new cryptographic algorithm to achieve a higher level of security. The algorithm makes it possible to hide the meaning of a message in unprintable characters. The main aim of this paper is to make the encrypted message undoubtedly unprintable using several rounds of ASCII conversion and a cyclic mathematical function. Dividing the original message into packets, binary matrices are formed for each packet to produce the unprintable encrypted message by making the ASCII value of each character fall below 32. Similarly, several ASCII conversions and the inverse cyclic mathematical function are used to decrypt the unprintable encrypted message. The final encrypted message, received after three rounds of encryption, becomes unprintable text, through which the algorithm provides a higher level of security without increasing the size of the data or losing any data.
Keywords: cryptography; encoding; matrix algebra; ASCII conversions; ASCII value; binary matrices; computer science; cryptographic algorithm; cyclic mathematical function; data decryption; data encryption; unprintable encrypted message; unprintable text; Algorithm design and analysis; Computer science; Encryption; Informatics; Information security; ASCII Conversion; Cryptography; Encryption and Decryption; Higher Level of Security; Unprintable Encrypted Message (ID#:15-3422)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850691&isnumber=6850678
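The abstract does not reproduce the authors' packet-and-matrix construction, but the core idea of producing unprintable ciphertext (every byte below ASCII 32) can be illustrated with a toy cipher. The sketch below is a hypothetical stand-in, not the paper's algorithm, and unlike the paper's scheme it doubles the data size by splitting each masked byte into two 4-bit halves:

```python
def encrypt(message: str, key: bytes) -> bytes:
    """Toy cipher: XOR each byte with a cyclic key, then split it into
    two 4-bit halves so every ciphertext byte is 0..15 (below ASCII 32)."""
    out = bytearray()
    for i, b in enumerate(message.encode("utf-8")):
        x = b ^ key[i % len(key)]
        out.append(x >> 4)      # high nibble
        out.append(x & 0x0F)    # low nibble
    return bytes(out)

def decrypt(cipher: bytes, key: bytes) -> str:
    """Reassemble each byte from its two nibbles and undo the cyclic XOR."""
    out = bytearray()
    for i in range(0, len(cipher), 2):
        x = (cipher[i] << 4) | cipher[i + 1]
        out.append(x ^ key[(i // 2) % len(key)])
    return out.decode("utf-8")
```

Every ciphertext byte lands in the control-character range, so the encrypted message cannot be displayed as printable text, which is the property the paper emphasizes.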
Pal, S.K.; Sardana, P.; Sardana, A., "Efficient search on encrypted data using bloom filter," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp. 412-416, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828170 Efficient and secure search on encrypted data is an important problem in computer science. Users having large amounts of data or information in multiple documents face problems with their storage and security. Cloud services have also become popular due to the reduced cost of storage and flexibility of use, but there is risk of data loss, misuse, and theft. The reliability and security of data stored in the cloud is a matter of concern, specifically for critical applications and those for which the security and privacy of the data is important. Cryptographic techniques provide solutions for preserving the confidentiality of data but make the data unusable for many applications. In this paper we report a novel approach to securely store data at a remote location and perform search in constant time without the need for decryption of documents. We use Bloom filters to perform simple as well as advanced search operations like case-sensitive search, sentence search, and approximate search.
Keywords: cloud computing; cost reduction; cryptography; data structures; document handling; information retrieval; Bloom filter; approximate search; case sensitive search; cloud services; computer science; cryptographic techniques; data loss; data misuse; data theft; document decryption; efficient encrypted data search; search operations; sentence search; storage cost reduction; Cloud computing; Cryptography; Filtering algorithms; Indexes; Information filters; Servers; Approximate Search and Bloom Filter; Cloud Computing; Encrypted Search (ID#:15-3423)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828170&isnumber=6827395
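The abstract does not give the authors' exact construction, but the underlying idea of constant-time keyword lookup without decryption can be sketched with a standard Bloom filter: the server stores only a bit array per document, so membership queries reveal no plaintext. The salted-SHA-256 hashing below is an illustrative choice, not necessarily the paper's:

```python
import hashlib

class BloomFilter:
    """Bit-array membership filter with k salted-SHA-256 hash functions."""
    def __init__(self, m_bits=1024, k=3):
        self.m, self.k, self.bits = m_bits, k, 0

    def _positions(self, word):
        # Derive k bit positions by salting the word with a counter.
        for salt in range(self.k):
            digest = hashlib.sha256(f"{salt}:{word}".encode()).digest()
            yield int.from_bytes(digest, "big") % self.m

    def add(self, word):
        for p in self._positions(word):
            self.bits |= 1 << p

    def __contains__(self, word):
        # May rarely report false positives, never false negatives.
        return all(self.bits >> p & 1 for p in self._positions(word))
```

A server holding only the filter's integer `bits` can answer keyword queries in constant time without ever seeing document text; richer operations like sentence search would index sentences or n-grams instead of single words.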
Jiankun Hu; Pota, H.R.; Song Guo, "Taxonomy of Attacks for Agent-Based Smart Grids," Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no. 7, pp. 1886-1895, July 2014. doi: 10.1109/TPDS.2013.301 Being the most important critical infrastructure in Cyber-Physical Systems (CPSs), a smart grid exhibits the complicated nature of a large-scale, distributed, and dynamic environment. A taxonomy of attacks is an effective tool for systematically classifying attacks, and it has been placed as a top research topic in CPS by a National Science Foundation (NSF) Workshop. Most existing taxonomies of attacks in CPS are inadequate in addressing the tight coupling of the cyber-physical process and/or lack systematic construction. This paper introduces a taxonomy of attacks on agent-based smart grids as an effective tool to provide a structured framework. The proposed idea of introducing the structure of space-time and information flow direction, security features, and cyber-physical causality is innovative, and it can establish a taxonomy design mechanism that systematically constructs the taxonomy of cyber attacks that could impact the normal operation of agent-based smart grids. Based on the cyber-physical relationship revealed in the taxonomy, a concrete physical-process-based cyber attack detection scheme is proposed. A numerical illustrative example is provided to validate the proposed detection scheme.
Keywords: grid computing; security of data; software agents; National Science Foundation Workshop; agent-based smart grids; attack classification; critical infrastructure; cyber attack detection scheme; cyber detection scheme; cyber-physical causality; cyber-physical process; cyber-physical systems; distributed environment; dynamic environment; information flow direction; large scale environment; security feature; taxonomy of attacks; Equations; Generators; Load modeling; Mathematical model; Security; Smart grids; Taxonomy; Cyber Physical Systems (CPS); agents; critical infrastructure; power systems; security; smart grid; taxonomy (ID#:15-3424)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6678518&isnumber=6828815
Fink, G.A.; Griswold, R.L.; Beech, Z.W., "Quantifying Cyber-Resilience Against Resource-Exhaustion Attacks," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp. 1-8, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900093 Resilience in the information sciences is notoriously difficult to define, much less to measure. But in mechanical engineering, the resilience of a substance is mathematically well-defined as an area under the stress-strain curve. We combined inspiration from the mechanics of materials and axioms from queuing theory in an attempt to define resilience precisely for information systems. We first examine the meaning of resilience in linguistic and engineering terms and then translate these definitions to the information sciences. As a general assessment of our approach's fitness, we quantify how resilience may be measured in a simple queuing system. By using a very simple model we allow clear application of established theory while remaining flexible enough to apply to many other engineering contexts in information science and cyber security. We tested our definitions of resilience via simulation and analysis of networked queuing systems. We conclude with a discussion of the results and make recommendations for future work.
Keywords: queueing theory; security of data; cyber security; cyber-resilience quantification; engineering terms; information sciences; linguistic terms; mechanical engineering; networked queuing systems; queuing theory; resource-exhaustion attacks; simple queuing system; stress-strain curve; Information systems; Queueing analysis; Resilience; Servers; Strain; Stress; Resilience; cyber systems; information science; material science; strain; stress (ID#:15-3425)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900093&isnumber=6900080
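The mechanical analogy at the heart of this paper, resilience as the area under a stress-strain curve, translates directly into a numerical computation. A minimal sketch using the trapezoidal rule over sampled (strain, stress) points; the sample data is hypothetical, not drawn from the authors' queuing experiments:

```python
def resilience(strain, stress):
    """Area under a sampled stress-strain curve via the trapezoidal rule.

    `strain` and `stress` are equal-length lists of samples; in the paper's
    analogy, load on the system plays the role of stress and performance
    degradation the role of strain.
    """
    area = 0.0
    for i in range(1, len(strain)):
        area += (strain[i] - strain[i - 1]) * (stress[i] + stress[i - 1]) / 2.0
    return area
```

For a linear (elastic-like) response sampled at strain 0, 0.5, 1.0 with equal stress values, the computed area is 0.5, matching the closed-form triangle area.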
Stanisavljevic, Z.; Stanisavljevic, J.; Vuletic, P.; Jovanovic, Z., "COALA - System for Visual Representation of Cryptography Algorithms," Learning Technologies, IEEE Transactions on, vol. 7, no. 2, pp. 178-190, April-June 2014. doi: 10.1109/TLT.2014.2315992 Educational software systems have an increasingly significant presence in the engineering sciences. They aim to improve students' attitudes and knowledge acquisition, typically through visual representation and simulation of complex algorithms and mechanisms or hardware systems that are often not available to educational institutions. This paper presents a novel software system for CryptOgraphic ALgorithm visuAl representation (COALA), which was developed to support a Data Security course at the School of Electrical Engineering, University of Belgrade. The system allows users to follow the execution of several complex algorithms (DES, AES, RSA, and Diffie-Hellman) on real-world examples in a step-by-step detailed view, with the possibility of forward and backward navigation. Benefits of the COALA system for students are observed through the increase in the percentage of students who passed the exam and in the average exam grade during one school year.
Keywords: computer aided instruction; computer science education; cryptography; data visualisation; educational courses; educational institutions; further education; AES algorithm; COALA system; DES algorithm; Diffie-Hellman algorithm; RSA algorithm; School of Electrical Engineering; University of Belgrade; cryptographic algorithm visual representation; cryptography algorithms; data security course; educational institutions; educational software systems; engineering sciences; student attitudes; student knowledge acquisition; Algorithm design and analysis; Cryptography; Data visualization; Software algorithms; Visualization; AES; DES; Diffie-Hellman; RSA; algorithm visualization; cryptographic algorithms; data security; security education (ID#:15-3426)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6784486&isnumber=6847757
Kadhim, Hakem Adil; AbdulRashid, NurAini, "Maximum-shift string matching algorithms," Computer and Information Sciences (ICCOINS), 2014 International Conference on, pp. 1-6, 3-5 June 2014. doi: 10.1109/ICCOINS.2014.6868423 String matching algorithms have broad applications in many areas of computer science. These areas include operating systems, information retrieval, editors, Internet search engines, security applications, and biological applications. Two important factors used to evaluate the performance of sequential string matching algorithms are the number of attempts and the total number of character comparisons during the matching process. This research proposes to integrate the good properties of three single string matching algorithms, Quick-Search, Zhu-Takaoka, and Horspool, to produce a hybrid string matching algorithm called the Maximum-Shift algorithm. Three datasets are used to test the proposed algorithm: DNA, protein sequence, and English text. The hybrid Maximum-Shift algorithm shows efficient results compared with four string matching algorithms, Quick-Search, Horspool, Smith, and Berry-Ravindran, in terms of the number of attempts and the total number of character comparisons.
Keywords: Arabic String Matching Systems; Horspool; Hybrid String Matching; Quick-Search; Zhu-Takaoka (ID#:15-3427)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868423&isnumber=6868339
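Of the three component algorithms the authors combine, Horspool is the simplest to illustrate: it slides the search window using a bad-character shift table built from all but the last pattern character. A minimal sketch of that component (the paper's hybrid Maximum-Shift algorithm itself is not reproduced here):

```python
def horspool_search(pattern: str, text: str) -> int:
    """Boyer-Moore-Horspool search using only the bad-character rule.
    Returns the index of the first occurrence of pattern, or -1."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # Shift = distance from the last occurrence of c (excluding the final
    # pattern character) to the end of the pattern; default shift is m.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    pos = 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            return pos
        pos += shift.get(text[pos + m - 1], m)
    return -1
```

The two evaluation metrics the paper uses, attempts and character comparisons, would be counted inside the `while` loop and the window comparison, respectively.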
n.a, “Asymmetrical Quantum Encryption Protocol Based On Quantum Search Algorithm," Communications, China, vol. 11, no. 9, pp. 104-111, Sept. 2014. Quantum cryptography and the quantum search algorithm are considered two important research topics in quantum information science. An asymmetrical quantum encryption protocol based on the properties of a quantum one-way function and the quantum search algorithm is proposed. Owing to the no-cloning theorem and the trapdoor one-way functions of the public key, an eavesdropper cannot extract any private information from the public keys or the ciphertext. A key-generation randomized logarithm is introduced to improve the security of the proposed protocol, i.e., one private key corresponds to an exponential number of public keys. Using unitary operations and single-photon measurement, secret messages can be sent directly from the sender to the receiver. The proposed protocol is proved to be information-theoretically secure. Furthermore, compared with symmetrical quantum key distribution, the proposed protocol not only reduces additional communication but is also easier to carry out in practice, because no entangled photons or complex operations are required.
Keywords: asymmetrical encryption; information-theoretical security; quantum cryptography; quantum search algorithms (ID#:15-3428)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6969775&isnumber=6969702
Shukla, S.; Sadashivappa, G., "Secure multi-party computation protocol using asymmetric encryption," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp. 780-785, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828069 Privacy preservation is essential in various real-life applications such as medical science and financial analysis. This paper focuses on the implementation of an asymmetric secure multi-party computation protocol using anonymization and public-key encryption, where all parties have access to a trusted third party (TTP) who (1) adds no contribution to the computation, (2) does not know who owns each received input, (3) has a large number of resources, and (4) holds the decryption key needed to recover the actual inputs for computation of the final result. In this environment, the concern is to design a protocol which deploys the TTP for computation. The protocol is argued to be more proficient (in terms of secure computation and individual privacy) for the parties than other available protocols. The solution incorporates an asymmetric encryption scheme where any party can encrypt a message with the public key, but decryption can be done only by the possessor of the decryption key (private key). As the protocol works on asymmetric encryption and packetization, it ensures: (1) confidentiality (anonymity), (2) security, and (3) privacy (of data).
Keywords: cryptographic protocols; data privacy; private key cryptography; public key cryptography; TTP; anonymity; anonymization; asymmetric encryption scheme; asymmetric secure multiparty computation protocol; confidentiality; decryption key; financial analysis; individual privacy; medical science; message encryption ;packetization; privacy preservation; private key; protocol design; public-key encryption; security; trusted third party; Data privacy; Encryption; Joints; Protocols; Public key; Anonymization; Asymmetric Encryption; Privacy; Secure Multi-Party Computation (SMC); Security; trusted third party (TTP) (ID#:15-3429)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828069&isnumber=6827395
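The protocol's flow, parties encrypting inputs under the TTP's public key and anonymizing them before the TTP decrypts and computes, can be sketched with textbook RSA plus a shuffle. This toy (tiny primes, no padding, sum as the computed function) is insecure and purely illustrative; all parameter and function choices are assumptions, not the authors' construction:

```python
import random

# Toy textbook RSA (tiny primes, no padding): insecure, illustration only.
P, Q = 1009, 1013
N, PHI = P * Q, (P - 1) * (Q - 1)
E = 65537                      # public exponent, coprime to PHI
D = pow(E, -1, PHI)            # private key, held only by the TTP (Py 3.8+)

def encrypt(m):
    """Any party encrypts its private input with the TTP's public key."""
    return pow(m, E, N)

def ttp_sum(ciphertexts):
    """TTP decrypts the (shuffled) ciphertexts and returns only the sum,
    never learning which party contributed which value."""
    return sum(pow(c, D, N) for c in ciphertexts)

inputs = [42, 17, 99]                  # one private value per party
cts = [encrypt(m) for m in inputs]
random.shuffle(cts)                    # anonymization: unlink owner and input
print(ttp_sum(cts))                    # prints 158
```

The shuffle stands in for the paper's anonymization step: the TTP sees plaintext values after decryption but cannot attribute any of them to a particular party.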
Lesk, M., "Staffing for Security: Don't Optimize," Security & Privacy, IEEE, vol. 12, no. 4, pp. 71-73, July-Aug. 2014. doi: 10.1109/MSP.2014.78 Security threats are irregular, sometimes very sophisticated, and difficult to measure in an economic sense. Much published data about them comes from either anecdotes or surveys and is often either not quantified or not quantified in a way that's comparable across organizations. It's hard even to separate the increase in actual danger from year to year from the increase in the perception of danger from year to year. Staffing to meet these threats is still more a matter of judgment than science, and in particular, optimizing staff allocation will likely leave your organization vulnerable at the worst times.
Keywords: personnel; security of data; IT security employees; data security; staff allocation optimization; Computer security; Economics; Organizations; Privacy; Software development; botnets; economics; security; security threats; staffing (ID#:15-3430)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876258&isnumber=6876237
Han, Lansheng; Qian, Mengxiao; Xu, Xingbo; Fu, Cai; Kwisaba, Hamza, "Malicious Code Detection Model Based on Behavior Association," Tsinghua Science and Technology, vol. 19, no. 5, pp. 508-515, Oct. 2014. doi: 10.1109/TST.2014.6919827 Malicious applications can be introduced to attack users and services so as to gain financial rewards, individuals' sensitive information, company and government intellectual property, and remote control of systems. However, traditional methods of malicious code detection, such as signature detection, behavior detection, virtual machine detection, and heuristic detection, have various weaknesses which make them unreliable. This paper surveys existing malicious code detection technologies and proposes a detection model based on behavior association. The behavior points of malicious code are first extracted through API monitoring and integrated into behaviors; then a relation between behaviors is established according to data dependence. Next, a behavior association model is built and a discrimination method using a pushdown automaton is put forth. Finally, actual malicious code is taken as a sample to carry out an experiment on behavior capture, association, and discrimination, thus showing that the theoretical model is viable.
Keywords: Automation; Computers; Grammar; Monitoring; Trojan horses; Virtual machining; behavior association; behavior monitor; malicious code; pushdown automation (ID#:15-3431)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6919827&isnumber=6919815
Huang, X.; Xiang, Y.; Bertino, E.; Zhou, J.; Xu, L., "Robust Multi-Factor Authentication for Fragile Communications," Dependable and Secure Computing, IEEE Transactions on, vol. 11, no. 6, pp. 568-581, Nov.-Dec. 2014. doi: 10.1109/TDSC.2013.2297110 In large-scale systems, user authentication usually needs the assistance from a remote central authentication server via networks. The authentication service however could be slow or unavailable due to natural disasters or various cyber attacks on communication channels. This has raised serious concerns in systems which need robust authentication in emergency situations. The contribution of this paper is two-fold. In a slow connection situation, we present a secure generic multi-factor authentication protocol to speed up the whole authentication process. Compared with another generic protocol in the literature, the new proposal provides the same function with significant improvements in computation and communication. Another authentication mechanism, which we name stand-alone authentication, can authenticate users when the connection to the central server is down. We investigate several issues in stand-alone authentication and show how to add it on multi-factor authentication protocols in an efficient and generic way.
Keywords: Authentication; Biometrics (access control); Digital signatures; Protocols; Servers; Telecommunication services; Authentication; efficiency; multi-factor; privacy; stand-alone (ID#:15-3432)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6701152&isnumber=6949762
Jahanirad, Mehdi; Abdul Wahab, Ainuddin Wahid; Anuar, Nor Badrul; Idna Idris, Mohd Yamani; Ayub, Mohamad Nizam, "Blind Identification of Source Mobile Devices Using VoIP Calls," Region 10 Symposium, 2014 IEEE, pp. 486-491, 14-16 April 2014. doi: 10.1109/TENCONSpring.2014.6863082 Sources such as speakers and recording environments produce signal variations in the audio captured by different communication devices. Despite these convolutions, the signal variations produced by different mobile devices leave intrinsic fingerprints on recorded calls, thus allowing the models and brands of the engaged mobile devices to be tracked. This study investigates the use of recorded Voice over Internet Protocol (VoIP) calls for the blind identification of source mobile devices. The proposed scheme employs a combination of entropy and mel-frequency cepstrum coefficients to extract the intrinsic features of mobile devices and analyzes these features with a multi-class support vector machine classifier. The experimental results lead to accurate identification of 10 source mobile devices with an average accuracy of 99.72%.
Keywords: Pattern recognition; device-based detection technique; entropy; mel-frequency cepstrum coefficients (ID#:15-3433)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6863082&isnumber=6862973
Ajish, S.; Rajasree, R., "Secure Mail using Visual Cryptography (SMVC)," Computing, Communication and Networking Technologies (ICCCNT), 2014 International Conference on, pp. 1-7, 11-13 July 2014. doi: 10.1109/ICCCNT.2014.6963148 E-mail messaging is one of the most popular uses of the Internet, allowing multiple Internet users to exchange messages within a short span of time. Although the security of e-mail messages is an important issue, no such security is supported by the Internet standards. One well-known scheme, called PGP (Pretty Good Privacy), is used for personal security of e-mail messages, but there is an attack on CFB-mode encryption as used by OpenPGP. To overcome the attacks and to improve security, a new model is proposed: "Secure Mail using Visual Cryptography." In this model the message to be transmitted is converted into a grayscale image, and (2, 2) visual cryptographic shares are generated from that image. The shares are encrypted using a chaos-based image encryption algorithm using the wavelet transform, and authenticated using a public-key-based image authentication method. One of the shares is sent to a server and the second share is sent to the recipient's mailbox. The two shares are transmitted through two different transmission media, so a man-in-the-middle attack is not possible. If an adversary has only one of the two shares, then he has absolutely no information about the message. At the receiver side the two shares are fetched, decrypted, and stacked to regenerate the grayscale image, from which the message is reconstructed.
Keywords: Electronic mail; Encryption; Heuristic algorithms; Receivers; Visualization; Wavelet transforms; chaos based image encryption algorithm; dynamic s-box algorithm; low frequency wavelet coefficient; pretty good privacy; visual cryptography; wavelet decomposition (ID#:15-3434)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6963148&isnumber=6962988
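Classical (2, 2) visual cryptography stacks printed shares with an OR operation and subpixel expansion; a simpler XOR-based variant conveys the same security property, namely that either share alone is uniformly random and reveals nothing about the image. A minimal sketch on raw bytes (an illustrative simplification, not the paper's scheme):

```python
import secrets

def make_shares(secret):
    """(2, 2) XOR sharing: share1 is uniformly random, share2 masks the
    secret with it, so either share alone is indistinguishable from noise."""
    share1 = secrets.token_bytes(len(secret))
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def stack(share1, share2):
    """Combining ('stacking') both shares recovers the secret exactly."""
    return bytes(a ^ b for a, b in zip(share1, share2))
```

Sending `share1` and `share2` over two independent channels mirrors the paper's server/mailbox split: intercepting one channel yields only random bytes.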
Veugen, T.; de Haan, R.; Cramer, R.; Muller, F., "A Framework For Secure Computations With Two Non-Colluding Servers And Multiple Clients, Applied To Recommendations," Information Forensics and Security, IEEE Transactions on, vol. PP, no.99, pp.1, 1, 13 November 2014. doi: 10.1109/TIFS.2014.2370255 We provide a generic framework that, with the help of a preprocessing phase that is independent of the inputs of the users, allows an arbitrary number of users to securely outsource a computation to two non-colluding external servers. Our approach is shown to be provably secure in an adversarial model where one of the servers may arbitrarily deviate from the protocol specification, as well as employ an arbitrary number of dummy users. We use these techniques to implement a secure recommender system based on collaborative filtering that becomes more secure, and significantly more efficient than previously known implementations of such systems, when the preprocessing efforts are excluded. We suggest different alternatives for preprocessing, and discuss their merits and demerits.
Keywords: Authentication; Computational modeling; Cryptography; Protocols; Recommender systems; Servers (ID#:15-3435)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6955802&isnumber=4358835
Schneider, S.; Lansing, J.; Fangjian Gao; Sunyaev, A., "A Taxonomic Perspective on Certification Schemes: Development of a Taxonomy for Cloud Service Certification Criteria," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp.4998, 5007, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.614 Numerous cloud service certifications (CSCs) are emerging in practice. However, in their striving to establish the market standard, CSC initiatives proceed independently, resulting in a disparate collection of CSCs that are predominantly proprietary, based on various standards, and differ in terms of scope, audit process, and underlying certification schemes. Although literature suggests that a certification's design influences its effectiveness, research on CSC design is lacking and there are no commonly agreed structural characteristics of CSCs. Informed by data from 13 expert interviews and 7 cloud computing standards, this paper delineates and structures CSC knowledge by developing a taxonomy for criteria to be assessed in a CSC. The taxonomy consists of 6 dimensions with 28 subordinate characteristics and classifies 328 criteria, thereby building foundations for future research to systematically develop and investigate the efficacy of CSC designs as well as providing a knowledge base for certifiers, cloud providers, and users.
Keywords: certification; cloud computing; CSC design; CSC initiatives; audit process; certification schemes; certifiers; cloud computing standards; cloud providers; cloud service certification criteria; structural characteristics; taxonomic perspective; taxonomy; Business; Certification; Cloud computing; Interviews; Security; Standards; Taxonomy; Certification; Cloud Computing; Taxonomy (ID#:15-3436)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759217&isnumber=6758592
Vijayakumar, R.; Selvakumar, K.; Kulothungan, K.; Kannan, A., "Prevention of Multiple Spoofing Attacks With Dynamic MAC Address Allocation For Wireless Networks," Communications and Signal Processing (ICCSP), 2014 International Conference on, pp.1635,1639, 3-5 April 2014. doi: 10.1109/ICCSP.2014.6950125 In wireless networks, the spoofing attack is one of the most common and challenging attacks, and it degrades overall network performance. In this paper, a medoid-based clustering approach is proposed to detect multiple spoofing attacks in wireless networks. In addition, an Enhanced Partitioning Around Medoids (EPAM) method with average silhouette has been integrated with the clustering mechanism to detect multiple spoofing attacks with a higher accuracy rate. In the proposed method, a received-signal-strength-based clustering approach is adopted for medoid clustering to detect attacks. To prevent multiple spoofing attacks, a dynamic MAC address allocation scheme using the MD5 hashing technique is implemented. The experimental results show that the proposed method can detect spoofing attacks with a high accuracy rate and prevent the attacks, improving overall network performance.
Keywords: Accuracy; Broadcasting; Cryptography; Electronic mail; Hardware; Monitoring; Wireless communication; Attacks Detection and Prevention; Dynamic MAC Address allocation; MAC Spoofing attacks; Wireless Network Security (ID#:15-3437)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950125&isnumber=6949766
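The abstract does not spell out the paper's exact MD5-based allocation scheme. One plausible sketch, with hypothetical names and construction, derives a locally administered MAC address from an MD5 digest of a client identifier and a per-session nonce:

```python
import hashlib

def derive_mac(client_id: str, session_nonce: str) -> str:
    """Derive a dynamic MAC address from an MD5 digest of a client
    identifier and a per-session nonce. Hypothetical construction;
    the paper's exact scheme is not given in the abstract."""
    digest = hashlib.md5(f"{client_id}:{session_nonce}".encode()).digest()
    octets = bytearray(digest[:6])          # first 6 digest bytes -> 6 octets
    octets[0] = (octets[0] | 0x02) & 0xFE   # locally administered, unicast
    return ":".join(f"{b:02x}" for b in octets)
```

Setting the locally-administered bit and clearing the multicast bit keeps the derived address valid for station use; refreshing the nonce each session is what would frustrate an attacker replaying a previously observed address.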
Sihan Qing, "Some Issues Regarding Operating System Security," Computer and Information Science (ICIS), 2014 IEEE/ACIS 13th International Conference on, pp.1,1, 4-6 June 2014. doi: 10.1109/ICIS.2014.6912096 Summary form only given. In this presentation, several issues regarding operating system security are investigated, and the general problems of OS security are addressed. We also discuss why the security aspects of the OS should be considered, and when a secure OS is needed. We delve into secure OS design as well, focusing on covert channel analysis. The specific operating systems under consideration include Windows and Android.
Keywords: Android (operating system);security of data; software engineering; Android; Windows; covert channel analysis; operating system security; secure OS design; Abstracts; Focusing; Information security; Laboratories; Operating systems; Standards development (ID#:15-3438)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912096&isnumber=6912089
Manning, F.J.; Mitropoulos, F.J., "Utilizing Attack Graphs to Measure the Efficacy of Security Frameworks across Multiple Applications," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp.4915,4920, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.602 One of the primary challenges when developing or implementing a security framework for any particular environment is determining the efficacy of the implementation. Does the implementation address all of the potential vulnerabilities in the environment, or are there still unaddressed issues? Further, if there is a choice between two frameworks, what objective measure can be used to compare the frameworks? To address these questions, we propose utilizing a technique of attack graph analysis to map the attack surface of the environment and identify the most likely avenues of attack. We show that with this technique we can quantify the baseline state of an application and compare that to the attack surface after implementation of a security framework, while simultaneously allowing for comparison between frameworks in the same environment or a single framework across multiple applications.
Keywords: graph theory; security of data; attack graph analysis; attack surface; security frameworks; Authentication; Information security; Measurement; Servers; Software; Vectors; Attack graphs; information security; measurement (ID#:15-3439)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759205&isnumber=6758592
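As a rough illustration of the comparison the paper above describes, the sketch below enumerates attack paths from entry points to a goal asset in a toy graph, before and after a security framework closes one vector. The graph, node names, and simple path-count metric are illustrative assumptions, not the authors' model:

```python
from collections import deque

def attack_paths(graph, entry_points, goal):
    """Enumerate simple (cycle-free) attack paths from entry nodes to a
    goal asset. `graph` maps each state to states reachable by one exploit."""
    paths = []
    for start in entry_points:
        queue = deque([[start]])
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == goal:
                paths.append(path)
                continue
            for nxt in graph.get(node, []):
                if nxt not in path:          # keep paths simple
                    queue.append(path + [nxt])
    return paths

# Baseline application vs. the same application behind a framework
baseline = {"web": ["app"], "app": ["db"], "ssh": ["app"]}
hardened = {"web": ["app"], "app": ["db"]}   # SSH vector closed
base_paths = attack_paths(baseline, ["web", "ssh"], "db")
hard_paths = attack_paths(hardened, ["web", "ssh"], "db")
```

Here the baseline exposes two paths to the database and the hardened configuration exposes one, giving a simple objective measure for comparing a framework against the baseline, or two frameworks against each other.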
Ma, J.; Zhang, T.; Dong, M., "A Novel ECG Data Compression Method Using Adaptive Fourier Decomposition with Security Guarantee in e-Health Applications," Biomedical and Health Informatics, IEEE Journal of, vol. PP, no. 99, pp.1,1, 12 September 2014. doi: 10.1109/JBHI.2014.2357841 This paper presents a novel electrocardiogram (ECG) compression method for e-health applications, adapting the adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: the first-stage AFD executes efficient lossy compression with high fidelity; the second-stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an averaged compression ratio (CR) of 17.6 to 44.5 and a percentage root mean square difference (PRD) of 0.8% to 2.0% with a highly linear and robust PRD-CR relationship, pushing the compression performance forward into an unexploited region. As such, this work provides an attractive candidate ECG compression method for pervasive e-health applications.
Keywords: Benchmark testing; Electrocardiography; Encoding; Encryption; Informatics; Information security; Transforms (ID#:15-3440)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6897915&isnumber=6363502
Song Li; Qian Zou; Wei Huang, "A New Type Of Intrusion Prevention System," Information Science, Electronics and Electrical Engineering (ISEEE), 2014 International Conference on, vol. 1, no., pp.361, 364, 26-28 April 2014. doi: 10.1109/InfoSEEE.2014.6948132 In order to strengthen network security and improve a network's active-defense intrusion detection capabilities, this paper presents an active-defense intrusion detection system based on a mixed interactive honeypot. The system helps reduce false information and enhances the stability and security of the network. Testing and simulation experiments show that the system improves the network's active defense, increases the honeypot's decoy capability, and strengthens attack prediction, giving it good application and promotion value.
Keywords: computer network security; active defense intrusion detection system; intrusion prevention system; mixed interactive honeypot; network security; Communication networks ;Computer hacking; Logic gates; Monitoring; Operating systems; Servers; Defense; Interaction honeypot; Intrusion detection; network security (ID#:15-3441)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6948132&isnumber=6948054
Al Barghuthi, N.B.; Said, H., "Ethics Behind Cyber Warfare: A Study Of Arab Citizens Awareness," Ethics in Science, Technology and Engineering, 2014 IEEE International Symposium on, pp.1,7, 23-24 May 2014. doi: 10.1109/ETHICS.2014.6893402 Persisting in ignoring the consequences of cyber warfare will bring severe concerns to all people. Hackers and governments alike should understand the boundaries within which their methods operate. Governments use cyber warfare to gain a tactical advantage over other countries, defend themselves from their enemies, or inflict damage upon their adversaries. Hackers use cyber warfare to gain personal information, commit crimes, or reveal sensitive and beneficial intelligence. Although both can serve ethical purposes, the same capabilities can be used at the other end of the spectrum. Understanding these methods will not only strengthen the ability to detect and combat such attacks, but will also provide a means to divulge despotic government plans, as the outcome of cyber warfare can be worse than that of conventional warfare. The paper discusses the concept of ethics and the reasons that led to the use of information technology in military war, the effects of cyber war on civilians, the legality of cyber war, and ways of controlling the use of information technology that may be employed against civilians. This research uses a survey methodology to assess the awareness of Arab citizens of the idea of cyber war, and to provide findings and evidence on the ethics behind offensive cyber warfare. Detailed strategies and approaches should be developed in this area. The authors recommend urging scientific and technological research centers to improve security and develop defensive systems to prevent the use of technology in military war against civilians.
Keywords: computer crime; ethical aspects; government data processing; Arab citizens awareness; cyber war; cyber warfare; despotic government plans; information technology; military war; personal information; scientific research centers; security systems; technological research centers; Computer hacking; Computers; Ethics; Government; Information technology; Law; Military computing; cyber army; cyber attack; cyber security; cyber warfare; defense; ethics; offence (ID#:15-3442)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6893402&isnumber=6893372
Oweis, N.E.; Owais, S.S.; Alrababa, M.A.; Alansari, M.; Oweis, W.G., "A Survey Of Internet Security Risk Over Social Networks," Computer Science and Information Technology (CSIT), 2014 6th International Conference on, pp.1, 4, 26-27 March 2014. doi: 10.1109/CSIT.2014.6805970 Communities vary from country to country; there are civil societies and rural communities, which also differ in terms of geography, climate, and economy. The use of social networks therefore varies from region to region depending on the demographics of the communities. In this paper, we research the most important problems of social networks, as well as the risks arising from the human element. We raise the problems social networks pose as societies are transformed under the influence of the global economy, and how social networking integration, intended to strengthen social ties, gives rise to these problems. We focus on Internet security risks over social networks, study risk management, and then look at resolving various problems that arise from the use of social networks.
Keywords: Internet; risk management; security of data; social networking (online);Internet security risk; civil society; geography climate; global economy; risk management; rural community; social networking integration; social networks; Communities; Computers; Educational institutions; Internet; Organizations; Security; Social network services; Internet risks; crimes social networking; dangers to society; hackers; social network; social risks (ID#:15-3443)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805970&isnumber=6805962
Kumar, S.; Rama Krishna, C.; Aggarwal, N.; Sehgal, R.; Chamotra, S., "Malicious Data Classification Using Structural Information And Behavioral Specifications In Executables," Engineering and Computational Sciences (RAECS), 2014 Recent Advances in, pp.1,6, 6-8 March 2014. doi: 10.1109/RAECS.2014.6799525 With the rise of the underground Internet economy, automated malicious programs, popularly known as malware, have become a major threat to computers and information systems connected to the Internet. Properties such as self-healing, self-hiding, and the ability to deceive security devices make this software hard to detect and mitigate. Therefore, the detection and mitigation of such malicious software is a major challenge for researchers and security personnel. Conventional systems for detecting and mitigating such threats are mostly signature-based; their major drawback is an inability to detect malware samples for which no signature is available in the signature database. Such malware is known as zero-day malware. Moreover, more and more malware writers use obfuscation technologies such as polymorphism, metamorphism, packing, and encryption to avoid detection by antivirus software. The traditional signature-based detection system is therefore neither effective nor efficient for detecting zero-day malware. Hence, to improve the effectiveness and efficiency of malware detection, we use a classification method based on structural information and behavioral specifications. In this paper we use both static and dynamic analysis approaches: in static analysis we extract the features of an executable file and then classify it; in dynamic analysis we take traces of executable files using NtTrace within a controlled environment. Experimental results indicate that our proposed algorithm is effective in extracting the malicious behavior of executables. Further, it can also be used to detect malware variants.
Keywords: Internet; invasive software; pattern classification; program diagnostics; NtTrace; antivirus; automated malicious programs; behavioral specifications; dynamic analysis; executable file; information systems; malicious behavior extraction; malicious data classification; malicious software detection; malicious software mitigation; malware detection system effectiveness improvement; malware detection system efficiency improvement; malwares; obfuscation technology; security devices; signature database; signature-based detection system; static analysis; structural information; threat detection; threat mitigation; underground Internet economy; zero-day malware detection; Algorithm design and analysis; Classification algorithms; Feature extraction; Internet; Malware; Software; Syntactics; behavioral specifications; classification algorithms; dynamic analysis; malware detection; static analysis; system calls (ID#:15-3444)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799525&isnumber=6799496
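As one concrete example of a static structural feature of the kind the paper's static stage might use (the abstract does not list its exact feature set), byte n-gram frequencies can be extracted from an executable with only the standard library:

```python
from collections import Counter

def byte_ngram_features(data: bytes, n: int = 2, top_k: int = 10):
    """Return the top-k most frequent byte n-grams in a binary blob,
    a common static feature for malware classifiers. Illustrative only;
    the paper combines structural features with NtTrace behavioral traces."""
    grams = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    return grams.most_common(top_k)
```

Frequency vectors like these would then feed a classifier alongside the behavioral specifications gathered from the dynamic-analysis runs.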
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
In the News |
This section features topical, current news items of interest to the international security community. These articles and highlights are selected from various popular science and security magazines, newspapers, and online sources.
(ID#:14-3725)
International News |
“In India, is web censorship justified in the name of national security?”, PBS News, 26 January 2015. In a new effort by India’s government to deter terrorist organizations from recruiting members and disseminating propaganda, the government has blocked sites such as GitHub, the Internet Archive, Vimeo, Pastebin, and more, all while keeping the blocks a secret; the Indian government is bound by law to keep these changes under wraps. Critics are calling the move ineffective and a blight on internet freedoms. (ID# 14-70104) See: http://www.pbs.org/newshour/updates/censorship-justified-name-national-security/
“Forbes web site was compromised by Chinese cyberespionage group researchers say”, The Washington Post, 10 February 2015. For three days, Forbes.com was unwittingly redirecting visitors from targeted organizations to a malicious third party site. Researchers are naming Codoso, a Chinese cyberespionage group, as the perpetrators. (ID# 14-70105) See: http://www.washingtonpost.com/blogs/the-switch/wp/2015/02/10/forbes-web-site-was-compromised-by-chinese-cyberespionage-group-researchers-say/
“Implanted RFID chip controls office access for Stockholm workers”, EuroNews, 11 February 2015. Gone are the days of badge or printed-pass authentication – at least in Stockholm, anyway. Employees of the Epicenter office have been newly outfitted with RFID chips, implanted into the hand via syringe. Though wearing the chip is entirely voluntary, the notion has raised several privacy concerns. (ID# 14-70106) See: http://www.euronews.com/2015/02/11/implanted-rfid-chip-controls-office-access-for-stockholm-workers/
“Dutch government website outage caused by cyber attack”, Reuters, 11 February 2015. The Dutch government’s main websites were crippled by DDoS attacks, rendering them inoperable for more than seven hours. The attack capitalized on the complexity and size of government websites to render backups ineffective. (ID# 14-70107) See: http://www.reuters.com/article/2015/02/11/us-netherlands-government-websites-idUSKBN0LF0N320150211
“Microsoft patches security flaw allegedly used by Chinese hackers to target U.S. Government”, IB Times, 11 February 2015. A series of Microsoft patches have been issued for vulnerabilities exploited by Chinese hackers, who compromised several websites, including Forbes.com in a “watering-hole” style attack. Experts identified this type of attack as a “chained zero-day exploit”. (ID# 14-70108) See: http://www.ibtimes.com/microsoft-patches-security-flaw-allegedly-used-chinese-hackers-target-us-government-1812306
“Malware Links on U.S. car-defect website risked infecting users”, Bloomberg, 12 February 2015. A U.S. government database, used by motorists to report car defects, has been the subject of scrutiny after hundreds of infected files had to be removed. The database contained documents with malicious links leading users to a third-party site, where malware could infect their computers. Some of these files have been compromised and undetected for 10 years or more. (ID# 14-70109) See: http://www.bloomberg.com/news/articles/2015-02-12/malware-links-on-u-s-car-defect-website-risked-infecting-users
“U.S. has raised concerns with China about new cyber rules: official”, Reuters, 13 February 2015. New cybersecurity rules in China are seen by the Obama administration as a “major barrier” to trade. The new rules require technology vendors in China to provide source code and to adopt Chinese encryption algorithms. Though China’s Foreign Ministry spokeswoman insisted that China was committed to interacting with the outside world, these newest cybersecurity policies seem to say the opposite. (ID# 14-70110) See: http://www.reuters.com/article/2015/02/13/us-usa-china-cyber-idUSKBN0LG26420150213
“European banks getting targeted by malware”, SC Magazine UK, 13 February 2015. Findings released by Minded Security, a software security company, revealed that at least one in twenty devices used by European banking customers is infected with malware. The infections broke down into three percent unwanted adware, 1.5 percent spyware, and 0.5 percent banking malware. (ID# 14-70111) See: http://www.scmagazineuk.com/european-banks-getting-targeted-by-malware/article/398091/
“Report: Using malware, hackers steal millions from banks”, NPR, 16 February 2015. Hackers have made away with millions of dollars from up to 100 banks around the world. Kaspersky Lab has detailed the process of what it is calling “the most successful criminal cyber campaign”, executed by a combination of phishing bank employees and manipulating ATMs. Upon infecting machines, hackers waited until they hit an administrator computer, upon which keylogging and social engineering was leveraged to gain unauthorized access. Money was transferred to offshore accounts in Russia, Switzerland, Japan, the US, and the Netherlands. (ID# 14-70112) See: http://www.npr.org/blogs/thetwo-way/2015/02/16/386739804/report-using-malware-hacker-steal-millions-from-banks
“UK’s RBS launches fingerprint technology for mobile banking app”, Reuters, 17 February 2015. The Royal Bank of Scotland (RBS) has become the first bank to allow customers to authenticate using their fingerprints while on mobile devices. RBS has introduced the new service for 880,000 customers using Apple iPhones with the downloaded app. (ID# 14-70113) See: http://www.reuters.com/article/2015/02/18/us-rbs-technology-idUSKBN0LM00K20150218
“JP Morgan goes to war”, Bloomberg, 19 February 2015. Recent cyberattacks on JP Morgan have spurred the creation of a security operation staffed largely with ex-military officers. The banking empire is gearing up against potential attacks from China, Iran, and Russia, and blames the US government for being unable to prevent or respond to such breaches. The FBI recently dismissed JP Morgan’s claims that the recent attacks, which were traced back to a data center in St. Petersburg, Russia, were carried out with nation-state backing, attributing them instead to a criminal actor. (ID# 14-70115) See: http://www.bloomberg.com/news/articles/2015-02-19/jpmorgan-hires-cyberwarriors-to-repel-data-thieves-foreign-powers
“Lenovo to stop pre-installing controversial software”, Reuters, 19 February 2015. Lenovo Group Ltd., the China-based PC-making titan, has been under scrutiny for pre-installing the software “Superfish” on consumer laptops, which allows third-party surveillance. The software intercepts connections and declares them secure, even when they are not. Experts have condemned “Superfish” as malicious adware that can expose devices to exploitation. (ID# 14-70116) See: http://www.reuters.com/article/2015/02/19/us-lenovo-cybersecurity-idUSKBN0LN0XI20150219
“Currency security breaches hidden by Indian government: Report”, CNBC, 20 February 2015. Security compromises in the printing of rupees were discovered in 2012, only to be intentionally covered up by Indian officials. An internal investigation revealed that a security thread inserted into rupee notes came from an Islamic nation and was not recognizable. (ID# 14-70117) See: http://www.cnbc.com/id/102439297#
“Bitcoin hack report suggests inside job”, CNBC, 20 February 2015. An investigation into the now defunct Mt Gox, a Tokyo bitcoin exchange, revealed that hundreds of thousands of coins were purchased with fake money by an automated bot. By setting up accounts with fake US dollar balances, the bot was able to buy and withdraw coins. Mt Gox stated that it had lost track of 850,000 coins worth nearly $500 million. (ID# 14-70118) See: http://www.cnbc.com/id/102442027
“Italian privacy watchdog says to conduct inspections at Google U.S. offices”, Reuters, 20 February 2015. Google has agreed to inspections at its headquarters, in what marks the first time a European Union regulator will inspect a company inside U.S. territory. Following investigations by several EU data protection authorities, the Italian data protection authority has emphasized ensuring its citizens’ data is handled in compliance with EU law. (ID# 14-70119) See: http://www.reuters.com/article/2015/02/20/us-google-privacy-italy-idUSKBN0LO22V20150220
(ID#:14-3726)
US News |
"M-Trends report: Nearly 70 percent of breached firms alerted by outside source", SC Magazine, 24 February 2015. According to a report by FireEye, many organizations (including law enforcement) lack sufficient breach detection capabilities. FireEye found that 69% of breached entities were notified by an external source, and that organizations took over 200 days, on average, to detect intrusions. Attackers use a mix of innovative and tried-and-true methods to hack into VPNs, steal credentials, and more. (ID: 14-50220) See http://www.scmagazine.com/m-trends-report-nearly-70-percent-of-breached-firms-alerted-by-outside-source/article/399928/
"New Jersey Congressmen to reintroduce privacy bill", SC Magazine, 23 February 2015. Following the dramatic increase in high-profile data breaches in recent years, Sen. Robert Menendez (D-NJ) and Rep. Albio Sires (D-NJ) urged other lawmakers to reintroduce legislation intended to protect consumers from data breaches. Menendez also persuaded Federal Trade Commission (FTC) Chairwoman Edith Ramirez to ask Congress to give the FTC greater authority to penalize companies that put consumers' data at risk. (ID: 14-50221) See http://www.scmagazine.com/legislation-would-offer-bill-of-rights-as-breach-protection/article/399712/
"After Superfish-Lenovo incident, Facebook probes larger issue of SSL-sniffing adware", SC Magazine, 23 February 2015. In the wake of Lenovo's Superfish scandal, Facebook researchers investigated other applications that use the same SSL decryption library used by Superfish. They found "over a dozen" other applications that work similarly to Superfish, thereby allowing MitM attacks on SSL communications and giving hackers the ability to intercept encrypted communications like online banking. (ID: 14-50222) See http://www.scmagazine.com/superfish-lenovo-incident-sparks-broader-facebook-investigation/article/399706/
"Google Cloud Security Scanner released in beta", SC Magazine, 23 February 2015. Google has released the Google Cloud Security Scanner in beta, a tool designed to scan for vulnerabilities in apps running in the cloud. Unlike similar programs that use basic approaches like HTML scanning, the Cloud Security Scanner uses more advanced parsing and rendering techniques to lower rates of false positives and increase usability. (ID: 14-50223) See http://www.scmagazine.com/google-cloud-security-scanner-nearly-wipes-out-false-positives/article/399700/
"On Patch Tuesday, Microsoft unveils fix for critical Windows flaw 'JASBUG'", SC Magazine, 10 February 2015. Microsoft patched a major vulnerability known as JASBUG, which could give an attacker complete control over a system. Though the vulnerability was reported to Microsoft in January of last year, JASBUG's nature as a "fundamental design flaw" meant that Microsoft had to spend an entire year to "re-engineer core components of the operating system and to add several new features.” (ID: 14-50225) See http://www.scmagazine.com/microsoft-addressed-56-bugs-issues-fix-for-jasbug/article/397477/
"Community debates encryption's value in Anthem incident", SC Magazine, 06 February 2015. Following the breach of health insurer Anthem Inc., questions are being raised as to how much encryption really would have minimized the impact of the attack. Technically, health insurers are not required to encrypt protected health information; however, some believe that even if Anthem's data had been encrypted, the fact that the intruder had elevated credentials would have made encryption useless. (ID: 14-50226) See http://www.scmagazine.com/anthem-breach-sparks-discourse-on-encryption/article/396989/
"BlackBerry Names New Chief Security Officer", Security Magazine, 10 February 2015. David Kleidermacher has replaced Scott Totzke as Chief Security Officer (CSO) at BlackBerry Ltd. BlackBerry hopes that Kleidermacher's experience with the Internet of Things and embedded systems will help it meet its security goals. (ID: 14-50227) See http://www.securitymagazine.com/articles/86093-blackberry-names-new-chief-security-officer
"Bank Hackers Steal Millions via Malware", The New York Times, 14 February 2015. In what experts think could be the biggest bank theft ever, cybercrime group "Carbanak" was found by Kaspersky Lab to have stolen a minimum of $300 million from over 100 financial institutions in 30 countries. The sophisticated attack utilized remote-access tools to monitor the activities of bank employees. The hackers were then able to gain access to ATMs and dispense money on command, or move money between accounts. (ID: 14-50228) See http://www.nytimes.com/2015/02/15/world/bank-hackers-steal-millions-via-malware.html?_r=1
"EU parliament bans the Microsoft mobile Outlook app", Cyber Defense Magazine, 17 February 2015. Due to security and privacy concerns over Microsoft's mobile Outlook app, the EU parliament has decided to ban politicians from using the app. It is feared that sensitive data from politicians could fall into the wrong hands, though Microsoft denies claims that the app is vulnerable because credentials are “double-encrypted using a server per-account unique key”. (ID: 14-50229) See http://www.cyberdefensemagazine.com/eu-parliament-bans-the-microsoft-mobile-outlook-app/
"Obama signed a new Executive Order on sharing cyber threat information", Cyber Defense Magazine, 16 February 2015. U.S. President Obama has signed an executive order that is intended to promote cyber intelligence sharing between industry and government. Throughout his term -- which has likely seen more cyber issues than any other presidency -- President Obama has had a strong focus on promoting cyber security. (ID: 14-50230) See http://www.cyberdefensemagazine.com/obama-signed-a-new-executive-order-on-sharing-cyber-threat-information/
"Dyre banking trojan tweaked to spread Upatre malware via Microsoft Outlook", Cyber Defense Magazine, 04 February 2015. The Dyre banking trojan, which became famous last summer for bypassing SSL and targeting global banks, has been revamped for 2015. Dyre uses Microsoft Outlook, in conjunction with Upatre malware and advanced evasion techniques, to intercept sensitive data and propagate. The University of Florida became a notable victim after hundreds of university computers were infected within hours. (ID: 14-50231) See http://www.cyberdefensemagazine.com/dyre-banking-trojan-tweaked-to-spread-upatre-malware-via-microsoft-outlook/
"Lenovo Releases Superfish Removal Tool", Infosecurity Magazine, 23 February 2015. After facing harsh criticism for shipping laptops pre-loaded with the "Superfish" adware, Lenovo has released a tool that users can use to remove the controversial software. Days before, the US-CERT warned Lenovo customers that Superfish left them vulnerable to SSL spoofing attacks. Because Superfish uses its own CA certificate, it would be very easy for a hacker to trick an affected machine into trusting fake versions of websites. (ID: 14-50232) See http://www.infosecurity-magazine.com/news/lenovo-releases-superfish-removal/
"Gemalto SIM Cards Hacked by American, British Spies—Report", Infosecurity Magazine, 20 February 2015. Dutch SIM card manufacturer Gemalto is investigating a purported breach by British and American intelligence agencies. According to Snowden documents, the NSA and GCHQ have both obtained SIM card encryption keys in order to collect telecommunications data. (ID: 14-50233) See http://www.infosecurity-magazine.com/news/gemalto-sim-cards-hacked-spies/
"Kasperky Lab Unveils ‘Equation’: the Grand Daddy of APT Groups", Infosecurity Magazine, 17 February 2015. Kaspersky Labs has brought to light what it calls the "Equation Group": a 20-year old, well-resourced cyber attack group that has a particular aptitude towards advanced encryption and obfuscation. Equation appears to primarily target governments in the Middle East and Asia, and has also used techniques that are very synonymous with the methods used by Stuxnet. (ID: 14-50234) See http://www.infosecurity-magazine.com/news/kasperky-equation-group-grand/
(ID#:14-3727)
Conferences |
The following pages provide highlights of Science of Security-related research presented at international conferences:
(ID#:14-3729)
International Conferences: Computer Science and Information Systems (2014) Poland |
The 2014 Federated Conference on Computer Science and Information Systems (FedCSIS) was held 7-10 September 2014 in Warsaw, Poland. More than 200 papers were presented. This bibliography is a sampling of papers related to the Science of Security.
Yamamoto, D.; Takenaka, M.; Sakiyama, K.; Torii, N., "Security Evaluation of Bistable Ring PUFs on FPGAs Using Differential and Linear Analysis," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 911-918, 7-10 Sept. 2014. doi: 10.15439/2014F122 A Physically Unclonable Function (PUF) is expected to be an innovation in anti-counterfeiting devices for secure ID generation, authentication, etc. In this paper, we propose novel methods of evaluating the difficulty of predicting PUF responses (i.e., PUF outputs), inspired by well-known differential and linear cryptanalysis. Using the proposed methods, we perform the first third-party evaluation of the Bistable Ring PUF (BR-PUF), proposed in 2011, which has been claimed to resist response prediction. Through experiments on FPGAs, however, we demonstrate that BR-PUFs exhibit two types of correlation between challenges and responses, which may make PUF responses easy to predict. First, the same responses are frequently generated for two challenges (i.e., PUF inputs) with a small Hamming distance: randomly generated challenges and their variants at Hamming distance one generate the same responses with probability 0.88, much larger than the 0.5 of an ideal PUF. Second, particular bits of the challenge have a great impact on the response: the response becomes '1' with probability 0.71 (> 0.5) when just 5 particular bits of a 64-bit random challenge are forced to zero or one. In conclusion, the proposed evaluation methods reveal that BR-PUFs on FPGAs exhibit challenge-response correlations that help an attacker predict responses.
Keywords: cryptography; field programmable gate arrays; BR-PUF; FPGA; Hamming distance; bistable ring PUF security evaluation; challenge-response pairs; differential cryptanalysis; linear cryptanalysis; physically unclonable function; randomly-generated challenges; Cryptography; Education; Field programmable gate arrays; Ink; Logic gates; Wires (ID#: 15-3484)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933112&isnumber=6932982
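The paper's differential-style test, comparing responses for challenge pairs at Hamming distance one, can be illustrated with a small sketch. The `toy_puf` below is a hypothetical stand-in whose response depends on only a few challenge bits; it is not a model of the BR-PUF circuit:

```python
import random

def toy_puf(challenge):
    # Hypothetical stand-in: the response depends only on the top 5 bits,
    # so flipping one random bit usually leaves the response unchanged.
    return 1 if bin(challenge >> 59).count("1") >= 3 else 0

def hd1_agreement(puf, n_bits=64, trials=10000, seed=0):
    """Estimate P(same response) for challenge pairs at Hamming distance 1.
    An ideal PUF gives ~0.5; the paper reports 0.88 for BR-PUFs."""
    rng = random.Random(seed)
    same = 0
    for _ in range(trials):
        c = rng.getrandbits(n_bits)
        flipped = c ^ (1 << rng.randrange(n_bits))  # flip exactly one bit
        if puf(c) == puf(flipped):
            same += 1
    return same / trials

print(hd1_agreement(toy_puf))
```

Because the toy PUF ignores most challenge bits, its agreement probability is far above the ideal 0.5, which is exactly the kind of correlation the evaluation method is designed to expose.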
Naumiuk, R.; Legierski, J., "Anonymization of Data Sets From Service Delivery Platforms," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 955-960, 7-10 Sept. 2014. doi: 10.15439/2014F177 The paper presents the anonymization of telecommunication data sets collected through Service Delivery Platforms (SDP) and describes an example tool, SDPAnonymizer, for performing such operations. Information from an SDP is processed in the form of log files consisting of data sets that show the activity of users of APIs (Application Programming Interfaces). The data sets to be anonymized contain sensitive data, for example names, MSISDN numbers (Mobile Station International Subscriber Directory Numbers), or IP addresses processed by Service Delivery Platforms.
Keywords: Internet; computer network security; telecommunication services; SDPAnonymizer tool; application programming interfaces; log files; service delivery platforms; telecommunication data set anonymization; users API activity; Algorithm design and analysis; Computer science; Data privacy; IP networks; Information systems; Mobile communication (ID#: 15-3485)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933119&isnumber=6932982
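A minimal sketch of the kind of log anonymization the paper describes, assuming a keyed-hash pseudonymization scheme; the regexes, `SECRET` key, and pseudonym format are illustrative and not taken from SDPAnonymizer:

```python
import hmac, hashlib, re

SECRET = b"rotate-me"  # hypothetical per-deployment secret key

def pseudonym(value, length=10):
    # Keyed hash: the same MSISDN/IP always maps to the same pseudonym,
    # preserving analytics, but cannot be reversed without the key.
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:length]

MSISDN = re.compile(r"\b\d{11,15}\b")                 # illustrative pattern
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def anonymize_line(line):
    line = MSISDN.sub(lambda m: "msisdn:" + pseudonym(m.group()), line)
    line = IPV4.sub(lambda m: "ip:" + pseudonym(m.group()), line)
    return line

print(anonymize_line("call from 48123456789 at 10.0.0.7"))
```

The keyed hash (rather than a plain hash) matters: without the secret, an attacker cannot rebuild the pseudonym table by hashing the finite space of phone numbers.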
Wangen, G.; Snekkenes, E.A., "A Comparison Between Business Process Management And Information Security Management," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 901-910, 7-10 Sept. 2014. doi: 10.15439/2014F77 Information security standards such as NIST SP 800-39 and ISO/IEC 27005:2011 are turning their scope towards business process security, and rightly so: introducing an information security control into a business-processing environment is likely to affect business process flow, while redesigning a business process will almost certainly have security implications. In this paper, we therefore investigate the similarities and differences between Business Process Management (BPM) and Information Security Management (ISM), and explore the obstacles and opportunities for integrating the two concepts. We compare three levels of abstraction common to both approaches: top-level implementation strategies, organizational risk views and associated tasks, and domains. With some minor differences, the comparison shows a strong similarity in the implementation strategies, organizational views, and tasks of both methods. The domain comparison shows that ISM maps to the BPM domains; however, some of the BPM domains have only limited support in ISM.
Keywords: ISO standards; business data processing; security of data; BPM; ISM; ISO/IEC 27005:2011 standard; NIST SP 800-39 standard; business process flow; business process management; business process redesign; business process security; business processing environment; information security control; information security management; information security standards; IEC standards; ISO standards; Information security; Organizations; Standards organizations; BPM Methodology Framework; Business Process Management; ISO/IEC 27001; ISO/IEC 27002; ISO/IEC 27005; Information Security; Information Security Risk Management; NIST SP 800-39 (ID#: 15-3486)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933111&isnumber=6932982
Krendelev, S.F.; Yakovlev, M.; Usoltseva, M., "Order-preserving Encryption Schemes Based On Arithmetic Coding And Matrices," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 891-899, 7-10 Sept. 2014. doi: 10.15439/2014F186 In this article we describe two alternative order-preserving encryption (OPE) schemes. The first scheme is based on arithmetic coding, and the second uses a sequence of matrices for data encryption. We begin by briefly reviewing recent related work, then propose the alternative OPE variants and consider them in detail. We examine the drawbacks of these schemes and suggest possible improvements. Finally, we present statistical results from implemented prototypes and discuss further work.
Keywords: arithmetic codes; cryptography; OPE; arithmetic coding; data encryption; order-preserving encryption; Educational institutions; Encoding; Encryption; Generators; Linear approximation; Polynomials (ID#: 15-3487)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933110&isnumber=6932982
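The defining property of any OPE scheme, that ciphertext order matches plaintext order, can be shown with a generic random-table construction. This is an illustration of the property only, not the paper's arithmetic-coding or matrix-based schemes:

```python
import random

def ope_table(domain_size, range_size, seed=42):
    # Sample distinct ciphertext values and sort them: the result is a
    # strictly increasing map from plaintext m to ciphertext table[m].
    rng = random.Random(seed)
    return sorted(rng.sample(range(range_size), domain_size))

table = ope_table(256, 2 ** 16)  # encrypt one byte into a 16-bit ciphertext

def enc(m):
    return table[m]

m1, m2 = 17, 200
assert (m1 < m2) == (enc(m1) < enc(m2))  # comparisons survive encryption
```

Because order leaks by design, an untrusted server can evaluate range queries and sorts on ciphertexts, which is also the source of OPE's security limitations that the paper analyses.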
Shatilov, K.; Boiko, V.; Krendelev, S.; Anisutina, D.; Sumaneev, A., "Solution for Secure Private Data Storage In A Cloud," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 885-889, 7-10 Sept. 2014. doi: 10.15439/2014F43 Cloud computing and, more particularly, cloud databases are a great technology for remote centralized data management. However, there are drawbacks, including privacy issues, insider threats, and potential database theft. Full encryption of a remote database does solve the problem, but disables many operations that could be performed on the DBMS side; the problem therefore requires a much more complex solution and specific encryption schemes. In this paper, we propose a solution for secure private data storage that protects the confidentiality of user data stored in the cloud. The solution uses proprietary order-preserving and homomorphic encryption schemes. The proposed approach includes analysis of the user's SQL queries, encryption of vulnerable data, and decryption of the data selection returned from the DBMS. We have validated our approach through the implementation of a SQL query and DBMS reply processor, which is discussed in this paper. The secure cloud database architecture and the encryption schemes used are also covered.
Keywords: cloud computing; cryptography; data privacy; distributed databases; DBMS replies processor; SQL queries; cloud computing; cloud databases; data selection; database thefts; encryption; privacy issues; remote centralized data managing; remote database; secure cloud database architecture; secure private data storage; user data; vulnerable data; Encoding; Encryption; Query processing; Vectors (ID#: 15-3488)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933109&isnumber=6932982
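The query-rewriting step described above can be sketched as a tiny proxy that replaces numeric literals in WHERE comparisons with their ciphertexts, so the DBMS can still evaluate > and < on encrypted columns. The affine map below is a toy stand-in for the authors' order-preserving cipher, and `rewrite_query` is a hypothetical helper:

```python
import re

def encrypt_value(v):
    # Toy order-preserving cipher: any strictly increasing map preserves
    # the outcome of range comparisons on the server side.
    return 3 * int(v) + 7

def rewrite_query(sql):
    # Replace numeric literals in comparisons with their ciphertexts.
    return re.sub(r"(>|<|=)\s*(\d+)",
                  lambda m: f"{m.group(1)} {encrypt_value(m.group(2))}",
                  sql)

print(rewrite_query("SELECT name FROM staff WHERE salary > 50000"))
```

A real deployment would also encrypt the stored column with the same cipher and decrypt the rows returned by the DBMS, which is the role of the reply processor in the paper.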
Machida, T.; Yamamoto, D.; Iwamoto, M.; Sakiyama, K., "A New Mode Of Operation For Arbiter PUF To Improve Uniqueness on FPGA," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 871-878, 7-10 Sept. 2014. doi: 10.15439/2014F140 The Arbiter-based Physically Unclonable Function (PUF) is a delay-based PUF that uses the time difference between two delay-line signals. Previous work suggests that Arbiter PUFs implemented on Xilinx Virtex-5 FPGAs generate responses with almost no difference between devices, i.e. with low uniqueness. To overcome this problem, the Double Arbiter PUF was proposed, based on a novel technique for generating responses with high uniqueness from duplicated Arbiter PUFs on FPGAs. It has the same cost as a 2-XOR Arbiter PUF, which XORs the outputs of two Arbiter PUFs, but differs in its mode of operation: the wire assignment between the arbiter and the output signals from the final selectors located just before it. In this paper, we evaluate these PUFs in terms of uniqueness, randomness, and steadiness, seeking a new mode of operation for the Arbiter PUF that can be realized on FPGA. To improve the uniqueness of responses, we propose the 3-1 Double Arbiter PUF, which adds another duplicated Arbiter PUF, i.e. it has 3 Arbiter PUFs and outputs a 1-bit response. We compare the 3-1 Double Arbiter PUF to the 3-XOR Arbiter PUF with respect to uniqueness, randomness, and steadiness, and show the difference between these PUFs by considering the mode of operation. From our experimental results, the uniqueness of responses from the 3-1 Double Arbiter PUF is approximately 50%, better than that of the 3-XOR Arbiter PUF, showing that a new mode of operation for the Arbiter PUF can improve uniqueness.
Keywords: asynchronous circuits; field programmable gate arrays; 2-XOR arbiter PUF; 3-1 double arbiter PUF; FPGA; XORs; arbiter-based physically unclonable function; delay-based PUFs; delay-line signals; double Arbiter PUF; time difference; wire assignment; Delays; Electronic mail; Field programmable gate arrays; Hamming weight; Organizations; Wires (ID#: 15-3489)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933107&isnumber=6932982
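The uniqueness metric used to compare the PUF variants, the average pairwise fractional Hamming distance between the responses of different devices (ideally 50%), can be computed as follows; the response strings here are made up for illustration:

```python
from itertools import combinations

def uniqueness(responses):
    """Average pairwise fractional Hamming distance, in percent, between
    the response bit-strings of different PUF instances. Ideal: 50%."""
    pairs = list(combinations(responses, 2))
    total = 0.0
    for a, b in pairs:
        total += sum(x != y for x, y in zip(a, b)) / len(a)
    return 100 * total / len(pairs)

chips = ["1011001110", "0100110100", "1110010011"]  # illustrative responses
print(f"{uniqueness(chips):.1f}%")
```

Low uniqueness (near 0%) means different devices answer challenges almost identically, which is the Virtex-5 problem the Double Arbiter construction addresses.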
Chmielecki, T.; Cholda, P.; Pacyna, P.; Potrawka, P.; Rapacz, N.; Stankiewicz, R.; Wydrych, P., "Enterprise-oriented Cybersecurity Management," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 863-870, 7-10 Sept. 2014. doi: 10.15439/2014F38 Information technology is widely used in processes vital to enterprises. IT systems must therefore meet at least the same level of security as is required of the business processes they support. In this paper, we present a view of cybersecurity management as an enterprise-centered process, and we advocate the use of enterprise architecture in security management. Activities such as risk assessment, selection of security controls, and their deployment and monitoring should be carried out as part of enterprise architecture activity. A set of useful frameworks and tools is presented and discussed.
Keywords: risk management; security of data; business process; enterprise architecture; enterprise-centered process; enterprise-oriented cybersecurity management; information technology; risk assessment; security control selection; security deployment; security monitoring; Computer architecture; Computer security; Monitoring; Risk management; Unified modeling language (ID#: 15-3490)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933106&isnumber=6932982
Ustimenko, V., "On Multivariate Cryptosystems Based On Maps With Logarithmically Invertible Decomposition Corresponding To Walk On Graph," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 631-637, 7-10 Sept. 2014. doi: 10.15439/2014F269 The paper illustrates the concept of a map with logarithmically invertible decomposition. We introduce families of multivariate cryptosystems whose security level is connected with the discrete logarithm problem in the Cremona group. The private key of such a cryptosystem is a modification of graph-based stream ciphers that use stable multivariate maps. The modified version corresponds to a stable map with a single disturbance. If the disturbance (or initial condition) allows fast computation, the modified version is almost as robust as the original. These modifications improve the resistance of such stream ciphers, implemented on the numerical level, to straightforward linearisation attacks.
Keywords: graph theory; private key cryptography; Cremona group; discrete logarithm problem; graph walk; linearisation attacks; logarithmically invertible decomposition; multivariate cryptosystems; multivariate maps; private key cryptosystem; security level; stream cipher; Ciphers; Encryption; Modules (abstract algebra);Polynomials; Resistance (ID#: 15-3491)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933073&isnumber=6932982
Tataru, R.-L., "Image Hashing Secured With Chaotic Sequences," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 735-740, 7-10 Sept. 2014. doi: 10.15439/2014F250 This paper presents an image hashing algorithm using robust features from joint frequency domains. Extracted features are enciphered using a secure chaotic system. The proposed hashing scheme is robust to JPEG compression with low quality factors, and also withstands several image processing attacks such as filtering, noise addition, and some geometric transforms. All attacks were conducted using the Checkmark benchmark. A detailed analysis was performed on a set of 3000 color and gray images from three different image databases. The security of the method rests on the robustness of the chaotic PRNG and the secrecy of the cryptographic key.
Keywords: cryptography; feature extraction; image coding; image colour analysis; Checkmark benchmark; JPEG compression; chaotic PRNG; chaotic sequences; color image; cryptographic key; feature extraction; frequency domain; gray image; image hashing; image processing attack; robust features; secure chaotic system; Chaos; Databases; Discrete cosine transforms; Feature extraction; Image coding; Robustness (ID#: 15-3492)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933086&isnumber=6932982
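The idea of enciphering extracted features with a chaotic keystream can be sketched with the well-known logistic map. This is an illustration of the principle only; it is not the paper's secure chaotic system and is not cryptographically strong:

```python
def logistic_keystream(x0, n, r=3.99):
    # Logistic map x <- r*x*(1-x) in its chaotic regime; the initial
    # state x0 plays the role of the secret key.
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)  # quantize the state to one byte
    return out

def encipher(features, key=0.654321):
    ks = logistic_keystream(key, len(features))
    return [f ^ k for f, k in zip(features, ks)]

features = [12, 200, 33, 91]       # hypothetical extracted feature bytes
sealed = encipher(features)
assert encipher(sealed) == features  # XOR keystream is its own inverse
```

The sensitivity of the map to its initial state is what ties the hash to the key: a different `x0` yields an unrelated keystream and hence an unrelated hash.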
Stojmenovic, I.; Sheng Wen, "The Fog Computing Paradigm: Scenarios and Security Issues," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 1-8, 7-10 Sept. 2014. doi: 10.15439/2014F503 Fog Computing is a paradigm that extends Cloud computing and services to the edge of the network. Like the Cloud, Fog provides data, compute, storage, and application services to end users. In this article, we elaborate on the motivation and advantages of Fog computing and analyse its applications in a series of real scenarios, such as the Smart Grid, smart traffic lights in vehicular networks, and software-defined networks. We discuss the state of the art of Fog computing and similar work under the same umbrella. Security and privacy issues are further examined in light of the current Fog computing paradigm. As an example, we study a typical attack, the man-in-the-middle attack, in our discussion of security in Fog computing, investigating its stealthy features by examining its CPU and memory consumption on a Fog device.
Keywords: cloud computing; data privacy; trusted computing; CPU consumption; Fog device; cloud computing; cloud services; fog computing paradigm; man-in-the-middle attack; memory consumption; privacy issue; security issue; smart grid; smart traffic lights; software defined networks; vehicular networks; Cloud computing; Companies; Intelligent sensors; Logic gates; Security; Wireless sensor networks; Cloud Computing; Fog Computing; Internet of Things; Software Defined Networks (ID#: 15-3493)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6932989&isnumber=6932982
Aref, A.; Tran, T., "Using Fuzzy Logic And Q-Learning For Trust Modeling In Multi-Agent Systems," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 59-66, 7-10 Sept. 2014. doi: 10.15439/2014F482 In multi-agent systems, agents often interact with other agents to fulfill their own goals, so trust is considered essential to making such interactions effective. This work describes a trust model that augments fuzzy logic with Q-learning to help trust-evaluating agents select beneficial trustees for interaction in uncertain, open, dynamic, and untrusted multi-agent systems. The performance of the proposed model is evaluated using simulation. The simulation results indicate that the proper augmentation of a fuzzy subsystem to Q-learning can be useful for trust-evaluating agents, and that the resulting model can respond to dynamic changes in the environment.
Keywords: fuzzy logic; fuzzy systems; learning (artificial intelligence);multi-agent systems; trusted computing; Q-learning; beneficial trustees; fuzzy logic; fuzzy subsystem; multiagent systems; trust evaluating agents; trust modeling; Analytical models; Engines; Estimation; Fuzzy logic; Mathematical model; Multi-agent systems; Suspensions (ID#: 15-3494)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6932997&isnumber=6932982
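The Q-learning half of the model can be sketched as epsilon-greedy trustee selection with a one-step value update. The agent names, honesty rates, and parameters below are hypothetical, and the paper's fuzzy subsystem (which would shape the reward) is omitted:

```python
import random

def select_trustee(q, trustees, eps, rng):
    # Epsilon-greedy: explore occasionally, otherwise pick the trustee
    # with the highest learned trust value.
    if rng.random() < eps:
        return rng.choice(trustees)
    return max(trustees, key=lambda t: q[t])

def run(trials=2000, alpha=0.1, eps=0.1, seed=1):
    rng = random.Random(seed)
    honesty = {"A": 0.9, "B": 0.4}      # hypothetical ground-truth reliability
    q = {t: 0.0 for t in honesty}       # learned trust estimates
    for _ in range(trials):
        t = select_trustee(q, list(honesty), eps, rng)
        reward = 1.0 if rng.random() < honesty[t] else -1.0
        q[t] += alpha * (reward - q[t])  # one-step update (stateless bandit)
    return q

print(run())
```

Over time the estimate for the reliable trustee rises toward its expected reward and the agent interacts with it almost exclusively, which is the selection behavior the trust model aims for.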
Jasiul, B.; Sliwa, J.; Gleba, K.; Szpyrka, M., "Identification of Malware Activities with Rules," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 101-110, 7-10 Sept. 2014. doi: 10.15439/2014F265 The article describes a method of identifying malware activities using an ontology and rules. The method supports malware detection at the host level by observing malware behavior. It sifts through hundreds of thousands of regular events and identifies suspicious ones. These are then passed to a second building block responsible for malware tracking and for matching stored models against observed malicious actions. The presented method was implemented and verified in an infected computer environment. As opposed to signature-based antivirus mechanisms, it can detect malware whose code has been obfuscated.
Keywords: data mining; invasive software; infected computer environment; malware activities identification; malware detection; malware tracking; ontology; signature-based antivirus mechanisms; Computers; Engines; Knowledge based systems; Malware; Monitoring; Ontologies (ID#: 15-3495)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933002&isnumber=6932982
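Rule-based identification of suspicious behavior, as described above, can be sketched as matching an ordered event pattern against a host's event log. The event names and the rule are invented for illustration and do not come from the paper's ontology:

```python
# Hypothetical behavioral rule: these steps, in this order (possibly with
# unrelated events in between), suggest a self-installing dropper.
SUSPICIOUS_SEQUENCE = ["create_autorun_key", "open_network_socket", "write_exe"]

def matches_rule(events, pattern=SUSPICIOUS_SEQUENCE):
    # True if the pattern occurs as a subsequence of the event log:
    # `step in it` advances the iterator, enforcing the ordering.
    it = iter(events)
    return all(step in it for step in pattern)

log = ["open_file", "create_autorun_key", "read_registry",
       "open_network_socket", "write_exe"]
print(matches_rule(log))  # prints True
```

Because the rule describes behavior rather than bytes, it still fires when the malware's code is obfuscated, which is the advantage over signature matching that the paper emphasizes.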
Kalisch, M.; Przystalka, P.; Timofiejczuk, A., "Application of Selected Classification Schemes For Fault Diagnosis Of Actuator Systems," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 1381-1390, 7-10 Sept. 2014. doi: 10.15439/2014F158 The paper presents the application of various classification schemes for actuator fault diagnosis in industrial systems. The main objective of this study is to compare either single or meta-classification strategies that can be successfully used as reasoning means in off-line as well as on-line diagnostic expert systems. The applied research was conducted on the assumption that only classic and well-practised classification methods would be adopted. The comparison study was carried out within the DAMADICS benchmark problem which provides a popular framework for confronting different approaches in the development of fault diagnosis systems.
Keywords: actuators; control engineering computing; diagnostic expert systems; fault diagnosis; manufacturing systems; pattern classification; production engineering computing; DAMADICS benchmark problem; actuator fault diagnosis systems; classification schemes; industrial systems; meta-classification strategies; off-line diagnostic expert systems; on-line diagnostic expert systems; reasoning means; Actuators; Benchmark testing; Computational modeling; Decision trees; Fault detection; Fault diagnosis; Valves (ID#: 15-3496)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933179&isnumber=6932982
Nai-Wei Lo; Yohan, A., "Danger Theory-Based Privacy Protection Model For Social Networks," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 1397-1406, 7-10 Sept. 2014. doi: 10.15439/2014F129 Privacy protection issues in Social Networking Sites (SNS) usually arise from insufficient user privacy controls offered by service providers, unauthorized use of user data by the SNS, and the lack of appropriate privacy protection schemes for user data on SNS servers. In this paper, we propose a privacy protection model based on the danger theory concept to provide automatic detection and blocking of sensitive user information revealed in social communications. By utilizing the dynamic adaptability of danger theory, we show how a privacy protection model for SNS users can be built with system effectiveness and reasonable computing cost. A prototype based on the proposed model is constructed and evaluated. Our experimental results show that the proposed model achieves an average 88.9% detection and blocking rate for user-sensitive data revealed through SNS services.
Keywords: data privacy; social networking (online); SNS; danger theory; dynamic adaptability feature; privacy protection; social communication; social networking sites; user privacy control mechanism; Adaptation models; Cryptography; Data privacy; Databases; Immune system; Privacy; Social network services (ID#: 15-3497)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933181&isnumber=6932982
Zedadra, O.; Seridi, H.; Jouandeau, N.; Fortino, G., "S-MASA: A Stigmergy Based Algorithm For Multi-Target Search," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 1477-1485, 7-10 Sept. 2014. doi: 10.15439/2014F395 We explore the on-line coverage problem, in which multiple agents must find a target whose position is unknown, without prior global information about the environment. In this paper a novel algorithm for multi-target search is described, inspired by water vortex dynamics and based on the principle of pheromone-based communication. Under this algorithm, called S-MASA (Stigmergic Multi Ant Search Area), the agents search near their base incrementally, turning around their center and around each other until the target is found, using only a group of simple distributed cooperative ant-like agents that communicate indirectly by depositing and detecting markers. This work improves search performance in comparison with the random walk and S-random walk (stigmergic random walk) strategies; we show the results obtained using computer simulations.
Keywords: multi-agent systems; search problems; S-MASA; S-random walk strategies; computer simulations; distributed cooperative ant like agents; multiple agents; multitarget search; pheromone-based communication; random walk strategies; stigmergic multiant search area; stigmergic random walk strategies; stigmergy based algorithm; water vortex dynamics; Base stations; Heuristic algorithms; Robot kinematics; Robustness; Search problems; Sensors (ID#: 15-3498)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933192&isnumber=6932982
Chakraborty, M.; Chaki, N.; Cortesi, A., "A New Intrusion Prevention System For Protecting Smart Grids From ICMPv6 Vulnerabilities," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 1539-1547, 7-10 Sept. 2014. doi: 10.15439/2014F287 A Smart Grid is an integrated power grid with a reliable communication network running in parallel to provide two-way communication within the grid. A network like this connects a huge number of IP-enabled devices, so IPv6, which offers a 128-bit address space, becomes an obvious choice in this context. In a smart grid, functionalities like neighborhood discovery, autonomic address configuration of a node, or its router identification may often be invoked whenever new equipment is introduced for capacity enhancement at some level of the hierarchy. In IPv6, these basic functionalities require the use of Internet Control Message Protocol version 6 (ICMPv6), and such usage may lead to security breaches in the grid as a result of possible abuses of the ICMPv6 protocol. In this paper, some potential new attacks on the Smart Grid are discussed. Subsequently, intrusion prevention mechanisms for these attacks are proposed to plug these threats.
Keywords: IP networks; computer network security; power engineering computing; power system protection; smart power grids; transport protocols; ICMPv6 vulnerabilities; IP-enabled devices; Internet Control Message Protocol version 6; intrusion prevention mechanisms; intrusion prevention system; neighborhood discovery; node autonomic address configuration; router identification; smart grid protection; Registers; Routing protocols; Security; Smart grids; Smart meters; Unicast (ID#: 15-3499)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933200&isnumber=6932982
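One simple prevention measure in this spirit, flagging sources that flood Router Advertisements (ICMPv6 type 134, used in neighbor discovery and address autoconfiguration), can be sketched as follows. The threshold, window model, and addresses are hypothetical, not the paper's mechanisms:

```python
from collections import Counter

RA_THRESHOLD = 5  # hypothetical per-window limit on Router Advertisements

def flag_ra_floods(events):
    # events: (source_address, icmpv6_type) pairs seen in one time window;
    # ICMPv6 type 134 is a Router Advertisement.
    ras = Counter(src for src, t in events if t == 134)
    return {src for src, n in ras.items() if n > RA_THRESHOLD}

window = [("fe80::1", 134)] * 20 + [("fe80::2", 134)] * 2 + [("fe80::2", 135)]
print(flag_ra_floods(window))  # only the flooding source is flagged
```

A rogue RA flood can redirect traffic or exhaust address-configuration state on grid devices, so rate-based flagging like this would feed the prevention system's blocking decisions.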
International Conferences: IEEE Information Theory Workshop (2014) |
The 2014 IEEE Information Theory Workshop (ITW) was held 2-5 Nov. 2014 in Hobart, Tasmania, Australia. The program covered a broad range of topics in Coding and Information theory with a variety of new applications. The works cited here are those deemed by the editors to be most relevant to the Science of Security.
Liu, Shuiyin; Hong, Yi; Viterbo, Emanuele, "On Measures Of Information Theoretic Security," Information Theory Workshop (ITW), 2014 IEEE, pp. 309-310, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970843 While information-theoretic security is stronger than computational security, it has long been considered impractical. In this work, we provide new insights into the design of practical information-theoretic cryptosystems. First, from a theoretical point of view, we give a brief introduction to existing information-theoretic security criteria, such as the notions of Shannon's perfect/ideal secrecy in cryptography and the concept of strong secrecy in coding theory. Second, from a practical point of view, we propose the concept of ideal secrecy outage and define an outage probability. Finally, we show how this probability can be made arbitrarily small in a practical cryptosystem.
Keywords: Australia; Cryptography; Entropy; Information theory; Probability; Vectors (ID#: 15-3535)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970843&isnumber=6970773
Iwamoto, Mitsugu; Omino, Tsukasa; Komano, Yuichi; Ohta, Kazuo, "A New Model Of Client-Server Communications Under Information Theoretic Security," Information Theory Workshop (ITW), 2014 IEEE, pp. 511-515, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970884 A new model for a Client-Server Communication (CSC) system satisfying information-theoretic security is proposed, and its fundamental properties are discussed. Our CSC allows n users to upload their respective messages to a server securely, using symmetric-key encryption with their own keys; all ciphertexts are decrypted by the server. If we require all messages in the CSC to be perfectly secure against corrupted clients and against adversaries without any keys, it is proved that a one-time pad or an even less efficient encryption must be used on each communication link between a client and the server. This means that, to realize a more efficient CSC, it is necessary to leak some information about each message. Based on these observations, we formally introduce a new model for such a secure CSC and discuss its fundamental properties. In addition, we propose the optimal construction of a CSC under several constraints on security parameters, called security rates.
Keywords: Correlation; Cryptography; Educational institutions; Electronic mail; Protocols; Servers (ID#: 15-3536)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970884&isnumber=6970773
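The role of the one-time pad in the impossibility result above can be shown concretely: XOR with a uniformly random key as long as the message gives perfect secrecy, and the same operation decrypts, so the server can recover each client's upload with that client's key:

```python
import secrets

def otp_encrypt(message, key):
    # Perfect secrecy requires the key to be as long as the message
    # and used only once, which is exactly the inefficiency the paper
    # trades away by allowing some leakage.
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"meter reading 42"             # hypothetical client upload
key = secrets.token_bytes(len(msg))   # fresh random pad shared with the server
ct = otp_encrypt(msg, key)
assert otp_encrypt(ct, key) == msg    # server decrypts with the same key
```

Since every link needs its own message-length pad under perfect secrecy, the per-link key cost motivates the paper's relaxed security rates.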
Bracher, Annina; Hof, Eran; Lapidoth, Amos, "Distributed Storage For Data Security," Information Theory Workshop (ITW), 2014 IEEE, pp. 506-510, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970883 We study the secrecy of a distributed storage system for passwords. The encoder, Alice, observes a length-n password and describes it using two hints, which she then stores in different locations. The legitimate receiver, Bob, observes both hints. In one scenario we require that the number of guesses it takes Bob to guess the password approach 1 as n tends to infinity, and in the other that the size of the list that Bob must form to guarantee that it contain the password approach 1. The eavesdropper, Eve, sees only one of the hints; Alice cannot control which. For each scenario we characterize the largest normalized (by n) exponent that we can guarantee for the number of guesses it takes Eve to guess the password.
Keywords: Blogs; Encoding; Entropy; Equations; Receivers; Stochastic processes; Upper bound (ID#: 15-3537)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970883&isnumber=6970773
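An illustrative aside on the guessing metric used in the abstract above (the example is ours, not the authors'): the expected number of guesses needed to find a secret is minimized by trying candidate values in decreasing order of probability, so a more uniform password distribution is harder to guess. A small Python sketch:

```python
def expected_guesses(probs):
    """Expected number of guesses when candidate values are tried in
    decreasing order of probability (the optimal guessing strategy)."""
    ordered = sorted(probs, reverse=True)
    return sum(i * p for i, p in enumerate(ordered, start=1))

# A secret uniform over 4 values takes (1 + 2 + 3 + 4) / 4 = 2.5 guesses
# on average; any skewed distribution over 4 values is easier to guess.
uniform = expected_guesses([0.25, 0.25, 0.25, 0.25])
skewed = expected_guesses([0.7, 0.1, 0.1, 0.1])
assert uniform == 2.5 and skewed < uniform
```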
Zamir, Ram, "How to Design An Efficient Lattice Coding Scheme," Information Theory Workshop (ITW), 2014 IEEE, pp. 1-4, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970780 Lattice codes find applications in various digital communications settings, including shaping for power-constrained channels, coding with side information (dirty-paper channel, Wyner-Ziv source), and Gaussian networks. In this paper we deal neither with the construction of a good lattice, nor with algorithms for lattice coding and decoding, but with other elements of a lattice coding system. We consider (1) the two roles of the fundamental cell of the shaping lattice; (2) efficient mappings from information bits to a lattice point; (3) the loss due to a finite alphabet in construction-A lattices; (4) randomization with a simple dither; and (5) how to incorporate a multi-dimensional lattice into a sequential (feedback) scheme. While these are not new issues and observations, they seem to be somewhat overlooked or hidden inside the rich literature on lattice codes.
Keywords: Decoding; Encoding; Lattices; Modulation; Quantization (signal); Vectors; construction A;dither; lattice encoding and decoding; modulo-lattice; prediction and equalization (ID#: 15-3538)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970780&isnumber=6970773
Nazer, Bobak; Gastpar, Michael, "Compute-and-Forward For Discrete Memoryless Networks," Information Theory Workshop (ITW), 2014 IEEE, pp. 5-9, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970781 Consider a receiver that observes multiple interfering codewords. The compute-and-forward technique makes it possible for the receiver to directly decode linear combinations of the codewords. Previous work has focused on compute-and-forward for linear Gaussian networks. This paper explores the corresponding technique for discrete memoryless networks. As a by-product, this leads to a novel way of attaining non-trivial points on the dominant face of the capacity region of discrete memoryless multiple-access channels.
Keywords: Decoding; Interference channels; Linear codes; Receivers; Transmitters; Vectors (ID#: 15-3539)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970781&isnumber=6970773
Zewail, Ahmed A.; Yener, Aylin, "The Multiple Access Channel With An Untrusted Relay," Information Theory Workshop (ITW), 2014 IEEE, pp. 25-29, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970785 This paper considers a Gaussian multiple access channel aided by a relay. Specifically, the relay facilitates communication between multiple sources and a destination to which the sources have no direct link. In this setup, the relay node is considered to be untrusted, i.e., honest but curious, and the source messages need to be kept secret from it. We identify an achievable secrecy rate region utilizing cooperative jamming from the destination and compress-and-forward at the relay. Additionally, an outer bound on the secrecy rate region is derived. Numerical results indicate that the outer bound is tight in some cases of interest.
Keywords: Jamming; Receivers; Relays; Upper bound; Wireless communication; Zinc; Zirconium (ID#: 15-3540)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970785&isnumber=6970773
Che, Pak Hou; Bakshi, Mayank; Chan, Chung; Jaggi, Sidharth, "Reliable Deniable Communication With Channel Uncertainty," Information Theory Workshop (ITW), 2014 IEEE, pp. 30-34, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970786 Alice wishes to potentially communicate with Bob over a compound Binary Symmetric Channel while Willie listens in over a compound Binary Symmetric Channel that is noisier than Bob's. The channel noise parameters for both Bob and Willie are drawn according to a uniform distribution over a range, but none of the three parties knows their exact values. Willie's goal is to infer whether or not Alice is communicating with Bob. We show that Alice can send her messages reliably to Bob while ensuring that even the fact of whether or not she is actively communicating remains deniable to Willie. We find the best rate at which Alice can communicate both deniably and reliably using Shannon's random coding, and prove a converse.
Keywords: Decoding; Noise; Reliability theory; Standards; Uncertainty; Vectors (ID#: 15-3541)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970786&isnumber=6970773
Wang, Pengwei; Safavi-Naini, Reihaneh, "An Efficient Code For Adversarial Wiretap Channel," Information Theory Workshop (ITW), 2014 IEEE, pp. 40-44, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970788 In the (ρr, ρw)-adversarial wiretap (AWTP) channel model of [13], a codeword sent over the communication channel is corrupted by an adversary who observes a fraction ρr of the codeword and adds noise to a fraction ρw of the codeword. The adversary is adaptive and chooses the subsets of observed and corrupted components arbitrarily. In this paper we give the first efficient construction of a code family that provides perfect secrecy in this model and achieves the secrecy capacity.
Keywords: Computational modeling; Decoding; Encoding; Reed-Solomon codes; Reliability; Security; Vectors (ID#: 15-3542)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970788&isnumber=6970773
Xiao, Zhiqing; Li, Yunzhou; Zhao, Ming; Wang, Jing, "Interactive Code To Correct And Detect Omniscient Byzantine Adversaries," Information Theory Workshop (ITW), 2014 IEEE, pp. 45-49, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970789 This paper considers interactive transmissions in the presence of omniscient Byzantine attacks. Unlike prior papers, it is assumed that the number of transmissions, the number of erroneous transmissions therein, and the direction of each transmission are predetermined. In addition, the size of the alphabet in each transmission is unequal and predefined. Using these transmissions, two nodes communicate interactively to send a message. In this model, both attack strategies and coding bounds are considered. Although the codebook cannot fully describe the interactive code, we still assert the existence of successful attack strategies according to the relations between codewords in the codebook. Furthermore, to ensure that the code is able to detect or correct a given number of transmission errors, upper bounds on the size of the code are derived. Finally, the tightness of the bounds is discussed.
Keywords: Decoding; Educational institutions; Encoding; Error correction; Error correction codes; Indexes; Upper bound (ID#: 15-3543)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970789&isnumber=6970773
Tebbi, M. Ali; Chan, Terence H.; Sung, Chi Wan, "Linear Programming Bounds For Robust Locally Repairable Storage Codes," Information Theory Workshop (ITW), 2014 IEEE, pp. 50-54, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970790 Locally repairable codes are used in distributed storage networks to minimise the number of surviving nodes required to repair a failed node. However, the robustness of these codes is a main concern, since the local repair procedure may fail when there are multiple node failures. This paper proposes a new class of robust locally repairable codes which guarantees that a failed node can be repaired locally even when there are multiple node failures. Upper bounds on the size of robust locally repairable codes are obtained using linear programming tools, and examples of robust locally repairable codes attaining these bounds are constructed.
Keywords: Generators; Linear codes; Linear programming; Maintenance engineering; Parity check codes; Robustness; Upper bound (ID#: 15-3544)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970790&isnumber=6970773
Datta, Anwitaman, "Locally Repairable RapidRAID Systematic Codes — One Simple Convoluted Way To Get It All," Information Theory Workshop (ITW), 2014 IEEE, pp. 60-64, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970792 The need to store humongous volumes of data has revived the study of erasure codes, so that reliable, fault-tolerant, distributed (for scaling out) data stores can be built while keeping the overheads low. In the context of storage codes, one of the most vigorously researched aspects in the last half-decade or so is their repairability, which looks into mechanisms to rebuild the data at a new storage node to substitute for the loss of information when an existing node fails. Desirable (sometimes mutually conflicting or reinforcing) repairability properties include reduction in the volume of I/O operations, minimal bandwidth usage, fast repairs, reduction in the number of live nodes to be contacted to carry out a repair (repair locality), repairing multiple failures simultaneously, etc.
Keywords: Convolutional codes; Distributed databases; Encoding; Maintenance engineering; Redundancy; Systematics; Convolutional Codes; Distributed Data Stores; Erasure Codes; Local Repairability; RapidRAID (ID#: 15-3545)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970792&isnumber=6970773
Sprintson, Alex, "Reduction Techniques For Establishing Equivalence Between Different Classes Of Network And Index Coding Problems," Information Theory Workshop (ITW), 2014 IEEE, pp. 75-76, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970795 Reductions, or transformations of one problem into another, are a fundamental tool in complexity theory used for establishing the hardness of discrete optimization problems. Recently, there has been significant interest in using reductions to establish relationships between different classes of problems related to network coding, index coding, and matroid theory. The goal of this paper is to survey the basic reduction techniques for proving equivalence between network coding and index coding, as well as for establishing relations between the index coding problem and the problem of finding a linear representation of a matroid. The paper reviews recent advances in the area and discusses open research problems.
Keywords: Indexes; Interference; Linear codes; Network coding; Vectors (ID#: 15-3546)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970795&isnumber=6970773
Xiang, Yu; Kim, Young-Han, "A Few Meta-Theorems In Network Information Theory," Information Theory Workshop (ITW), 2014 IEEE, pp. 77-81, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970796 This paper reviews the relationship among several notions of capacity regions of a general discrete memoryless network under different code classes and performance criteria, such as average vs. maximal or block vs. bit error probabilities and deterministic vs. randomized codes. Applications of these meta-theorems include several structural results on capacity regions and a simple proof of the network equivalence theorem.
Keywords: Capacity planning; Channel coding; Decoding; Digital TV; Error probability; Manganese; Monte Carlo methods (ID#: 15-3547)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970796&isnumber=6970773
Effros, Michelle; Langberg, Michael, "Is There A Canonical Network For Network Information Theory?," Information Theory Workshop (ITW), 2014 IEEE, pp. 82-86, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970797 In recent years, work has begun to emerge demonstrating intriguing relationships between seemingly disparate information theoretic problems. For example, recent results establish powerful ties between solutions for networks of memoryless channels and networks of noiseless links (network coding networks), between network coding networks in which every internal node can code and a particular subset of network coding networks in which only a single internal node can code (index coding networks), and between multiple multicast demands on memoryless networks and multiple unicast demands on memoryless networks. While the results vary widely, together they hint at the potential for a unifying theory. In this work, we consider one possible framework for such a theory. Inspired by ideas from the field of computational complexity theory, the proposed framework generalizes definitions and techniques for reduction, completeness, and approximation to the information theoretic domain. One possible outcome from such a theory is a taxonomy of information theoretic problems where problems in the same taxonomic class share similar properties in terms of their code designs, capacities, or other forms of solution. Another potential outcome is the identification of small classes of network information theoretic problems whose solutions, were they available, would solve all information theoretic problems in a much larger class. A third potential outcome is the development of techniques by which approximate solutions for one family of network information theoretic problems can be obtained from precise or approximate solutions of another family of networks.
Keywords: Approximation methods; Complexity theory; Encoding; Indexes; Network coding; Unicast (ID#: 15-3548)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970797&isnumber=6970773
Mirghasemi, Hamed; Belfiore, Jean-Claude, "The Semantic Secrecy Rate Of The Lattice Gaussian Coding For The Gaussian Wiretap Channel," Information Theory Workshop (ITW), 2014 IEEE, pp. 112-116, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970803 In this paper, we investigate the achievable semantic secrecy rate of existing lattice coding schemes, proposed in [6], for both the mod-Λ Gaussian wiretap and the Gaussian wiretap channels. For both channels, we propose new upper bounds on the amount of leaked information which provide milder sufficient conditions to achieve semantic secrecy. These upper bounds show that the lattice coding schemes in [6] can achieve the secrecy capacity to within (1/2)ln(e/2) nats for the mod-Λ Gaussian wiretap channel and to within (1/2)(1 − ln(1 + SNR_e/(SNR_e + 1))) nats for the Gaussian wiretap channel, where SNR_e is the signal-to-noise ratio of Eve.
Keywords: Encoding; Gaussian distribution; Lattices; Security; Semantics; Upper bound; Zinc (ID#: 15-3549)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970803&isnumber=6970773
Hou, Xiaolu; Lin, Fuchun; Oggier, Frederique, "Construction and Secrecy Gain Of A Family Of 5-Modular Lattices," Information Theory Workshop (ITW), 2014 IEEE, pp. 117-121, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970804 The secrecy gain of a lattice is a lattice invariant used to characterize wiretap lattice codes for Gaussian channels. The secrecy gain has been classified for unimodular lattices up to dimension 23, and so far, only a few sparse examples are known for l-modular lattices, with l = 2, 3. We propose some constructions of 5-modular lattices via Construction A of lattices from linear codes, and study the secrecy gain of the resulting lattices.
Keywords: Educational institutions; Electronic mail; Generators; Lattices; Linear codes; Vectors; Zinc (ID#: 15-3550)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970804&isnumber=6970773
Sala, Frederic; Gabrys, Ryan; Dolecek, Lara, "Gilbert-Varshamov-like Lower Bounds For Deletion-Correcting Codes," Information Theory Workshop (ITW), 2014 IEEE, pp. 147-151, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970810 The development of good codes which are capable of correcting more than a single deletion remains an elusive task. Recent papers, such as that by Kulkarni and Kiyavash [3], instead focus on the more tractable problem of deriving upper bounds on the cardinalities of such codes. In the present work, we develop Gilbert-Varshamov-type lower bounds on the cardinalities of deletion-correcting codes. Our approach is based on the application of results from extremal graph theory. We give several bounds for the cases of binary and non-binary single- and multiple-error correcting codes. We introduce a bound that is, to the best of our knowledge, the strongest existing lower bound on the sizes of deletion-correcting codes. Our work also reveals some structural properties of the underlying Levenshtein graph.
Keywords: Binary codes; Context; Encoding; Graph theory; Indexes; Optimization; Upper bound; Extremal graph theory; Gilbert-Varshamov Bound; Insertions and deletions; Lower bounds for codes (ID#: 15-3551)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970810&isnumber=6970773
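As a hedged illustration (the paper's deletion-specific bounds are more involved): the classical Gilbert-Varshamov argument referenced above lower-bounds the size of a code by dividing the size of the whole space by the volume of a ball of the relevant radius. For the familiar binary Hamming-metric case, the counting looks like this in Python:

```python
from math import comb

def hamming_ball_volume(n: int, r: int) -> int:
    """Number of binary words within Hamming distance r of a fixed word."""
    return sum(comb(n, i) for i in range(r + 1))

def gv_lower_bound(n: int, d: int) -> int:
    """Gilbert-Varshamov: greedily picking codewords at pairwise distance
    >= d shows A(n, d) >= 2^n / V(n, d - 1) (ceiling division below)."""
    return -(-(2 ** n) // hamming_ball_volume(n, d - 1))

# Length 10, minimum distance 3: at least ceil(1024 / 56) = 19 codewords.
assert gv_lower_bound(10, 3) == 19
```

The difficulty the paper tackles is that, under deletions, these "balls" have wildly varying sizes, which is why the authors turn to extremal graph theory instead of this simple volume count.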
Xie, Yixuan; Yuan, Jinhong; Fujiwara, Yuichiro, "Quantum Synchronizable Codes From Quadratic Residue Codes And Their Supercodes," Information Theory Workshop (ITW), 2014 IEEE, pp. 172-176, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970815 Quantum synchronizable codes are quantum error-correcting codes designed to correct the effects of both quantum noise and block synchronization errors. While it is known that quantum synchronizable codes can be constructed from cyclic codes that satisfy special properties, only a few classes of cyclic codes have been proved to give promising quantum synchronizable codes. In this paper, using quadratic residue codes and their supercodes, we give a simple construction for quantum synchronizable codes whose synchronization capabilities attain the upper bound. The method is applicable to cyclic codes of prime length.
Keywords: Encoding; Error correction codes; Generators; Polynomials; Quantum mechanics; Synchronization (ID#: 15-3552)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970815&isnumber=6970773
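For readers unfamiliar with the ingredient named above: quadratic residue codes are cyclic codes of prime length p whose defining set is the set of nonzero squares modulo p. A small Python sketch of computing that set (illustrative only; the paper's synchronizable-code construction is not reproduced here):

```python
def quadratic_residues(p: int) -> set[int]:
    """Nonzero quadratic residues modulo an odd prime p; this set, of size
    (p - 1) / 2, is the defining set of the quadratic residue codes."""
    return {pow(x, 2, p) for x in range(1, p)}

# For p = 17 there are (17 - 1) / 2 = 8 residues.
assert sorted(quadratic_residues(17)) == [1, 2, 4, 8, 9, 13, 15, 16]
```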
Che, Pak Hou; Kadhe, Swanand; Bakshi, Mayank; Chan, Chung; Jaggi, Sidharth; Sprintson, Alex, "Reliable, Deniable And Hidable Communication: A Quick Survey," Information Theory Workshop (ITW), 2014 IEEE, pp. 227-231, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970826 We survey here recent work pertaining to "deniable" communication - i.e., talking without being detected. We first highlight connections to other related notions (anonymity and secrecy). We then contrast the notions of deniability and secrecy. We highlight similarities and distinctions of deniability with a variety of related notions (LPD communications, stealth, channel resolvability) extant in the literature.
Keywords: Cryptography; Noise; Reliability theory; Throughput (ID#: 15-3553)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970826&isnumber=6970773
Thangaraj, Andrew, "Coding for Wiretap Channels: Channel Resolvability And Semantic Security," Information Theory Workshop (ITW), 2014 IEEE, pp. 232-236, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970827 Wiretap channels form the most basic building block of physical-layer and information-theoretic security. Considerable research has gone into the information-theoretic, cryptographic and coding aspects of wiretap channels in the last few years. The main goal of this tutorial article is to provide a self-contained presentation of two recent results: one is a new and simplified proof for secrecy capacity using channel resolvability, and the other is the connection between semantic security and information-theoretic strong secrecy.
Keywords: Cryptography; Encoding; Semantics; Standards; Zinc (ID#: 15-3554)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970827&isnumber=6970773
Pradhan, Parth; Venkitasubramaniam, Parv, "Under the Radar Attacks In Dynamical Systems: Adversarial Privacy Utility Tradeoffs," Information Theory Workshop (ITW), 2014 IEEE, pp. 242-246, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970829 Cyber-physical systems, which integrate physical system dynamics with digital cyber infrastructure, are envisioned to transform our core infrastructural frameworks, such as the smart electricity grid, transportation networks and advanced manufacturing. This integration, however, exposes the physical system's functioning to the security vulnerabilities of cyber communication. Both scientific studies and real-world examples have demonstrated the impact of data injection attacks on state estimation mechanisms in the smart electricity grid. In this work, an abstract theoretical framework is proposed to study data injection/modification attacks on Markov-modeled dynamical systems from the perspective of an adversary. Typical data injection attacks focus on one-shot attacks by the adversary and the non-detectability of such attacks under static assumptions. In this work we study dynamic data injection attacks where the adversary is capable of modifying a temporal sequence of data and the physical controller is equipped with prior statistical knowledge about the data arrival process to detect the presence of an adversary. The goal of the adversary is to modify the arrivals to minimize a utility function of the controller while minimizing the detectability of his presence, as measured by the KL divergence between the prior and posterior distributions of the arriving data. Adversarial policies and tradeoffs between utility and detectability are characterized analytically using linearly solvable control optimization.
Keywords: Markov processes; Mathematical model; Power system dynamics ;Privacy; Process control; Smart grids; State estimation (ID#: 15-3555)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970829&isnumber=6970773
Kosut, Oliver; Kao, Li-Wei, "On Generalized Active Attacks By Causal Adversaries In Networks," Information Theory Workshop (ITW), 2014 IEEE, pp. 247-251, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970830 Active attacks are studied on noise-free graphical multicast networks. A malicious adversary may enter the network and arbitrarily corrupt transmissions. A very general model is adopted for the scope of attack: a collection of sets of edges is specified, and the adversary may control any one set of edges in this collection. The adversary is assumed to be omniscient but causal, such that the adversary is forced to decide on transmissions before knowing random choices by the honest nodes. Four main results are presented. First, a precise characterization of whether any positive rate can be achieved. Second, a simple erasure upper bound. Third, an achievable bound wherein random hashes are generated and distributed, so that nodes in the network can filter out adversarial corruption. Finally, an example network is presented that has capacity strictly between the general upper and lower bounds.
Keywords: Artificial neural networks; Decoding; Encoding; Error correction; Network coding; Upper bound; Vectors (ID#: 15-3556)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970830&isnumber=6970773
Wijewardhana, U.L.; Codreanu, M., "Sparse Bayesian Learning Approach For Streaming Signal Recovery," Information Theory Workshop (ITW), 2014 IEEE, pp. 302-306, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970841 We discuss the reconstruction of streaming signals from compressive measurements. We propose to use an algorithm based on sparse Bayesian learning to reconstruct the streaming signal over small shifting intervals. The proposed algorithm utilizes the previous estimates to improve the accuracy of the signal estimate and the speed of the recovery algorithm. Simulation results show that the proposed algorithm can achieve better signal-to-error ratios compared with the existing l1-homotopy based recovery algorithm.
Keywords: Bayes methods; Compressed sensing; Noise measurement; Signal to noise ratio; Transforms; Vectors; Compressive sensing; recursive methods; sparse Bayesian learning; streaming signals (ID#: 15-3557)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970841&isnumber=6970773
Belfiore, Jean-Claude, "Codes for Wireless Wiretap Channels," Information Theory Workshop (ITW), 2014 IEEE, pp. 307-308, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970842 In this presentation we are interested in coding for wiretap wireless channels. The first part is devoted to design criteria; the second part deals with the codes themselves. Nested lattices are the main ingredients to be used. We start with the Gaussian wiretap channel, where it is shown that theta series have to be minimized. Then we give some ideas for the case of fading wiretap channels. In the second part, we give some results that help in finding good lattice codes of moderate and high length. Part of this work was supported by FP7 project PHYLAWS (EU FP7-ICT 317562).
Keywords: Encoding; Fading; Lattices; Measurement; Noise; Vectors; Wireless communication (ID#: 15-3558)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970842&isnumber=6970773
Choo, Li-Chia; Ling, Cong, "Superposition Lattice Coding For Gaussian Broadcast Channel With Confidential Message," Information Theory Workshop (ITW), 2014 IEEE, pp. 311-315, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970844 In this paper, we propose superposition coding based on the lattice Gaussian distribution to achieve strong secrecy over the Gaussian broadcast channel with one confidential message, with a constant gap to the secrecy capacity (only for the confidential message). The proposed superposition lattice code consists of a lattice Gaussian code for the Gaussian noise and a wiretap lattice code with strong secrecy. The flatness factor is used to analyze the error probability, information leakage and achievable rates. By removing the secrecy coding, we can modify our scheme to achieve the capacity of the Gaussian broadcast channel with one common and one private message without the secrecy constraint.
Keywords: Decoding; Encoding; Error probability; Gaussian distribution; Lattices; Noise; Vectors (ID#: 15-3559)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970844&isnumber=6970773
Lu, Jinlong; Harshan, J.; Oggier, Frederique, "A USRP Implementation Of Wiretap Lattice Codes," Information Theory Workshop (ITW), 2014 IEEE, pp. 316-320, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970845 A wiretap channel models a communication channel between a legitimate sender Alice and a legitimate receiver Bob in the presence of an eavesdropper Eve. Confidentiality between Alice and Bob is obtained using wiretap codes, which exploit the difference between the channels to Bob and to Eve. This paper discusses a first implementation of wiretap lattice codes using USRP (Universal Software Radio Peripheral), which focuses on the channel between Alice and Eve. Benefits of coset encoding for Eve's confusion are observed, using different lattice codes in small dimensions, and varying the position of the eavesdropper.
Keywords: Baseband; Decoding; Encoding; Lattices; Receivers; Security; Signal to noise ratio (ID#: 15-3560)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970845&isnumber=6970773
Ng, Derrick Wing Kwan; Schober, Robert, "Max-Min Fair Wireless Energy Transfer For Secure Multiuser Communication Systems," Information Theory Workshop (ITW), 2014 IEEE, pp. 326-330, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970847 This paper considers max-min fairness for wireless energy transfer in a downlink multiuser communication system. Our resource allocation design maximizes the minimum harvested energy among multiple multiple-antenna energy harvesting receivers (potential eavesdroppers) while providing quality of service (QoS) for secure communication to multiple single-antenna information receivers. In particular, the algorithm design is formulated as a non-convex optimization problem which takes into account a minimum required signal-to-interference-plus-noise ratio (SINR) constraint at the information receivers and a constraint on the maximum tolerable channel capacity achieved by the energy harvesting receivers for a given transmit power budget. The proposed problem formulation exploits the dual use of artificial noise generation for facilitating efficient wireless energy transfer and secure communication. A semidefinite programming (SDP) relaxation approach is exploited to obtain a global optimal solution of the considered problem. Simulation results demonstrate the significant performance gain in harvested energy that is achieved by the proposed optimal scheme compared to two simple baseline schemes.
Keywords: Energy harvesting; Interference; Noise; Optimization; Receivers; Resource management; Wireless communication (ID#: 15-3561)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970847&isnumber=6970773
Xie, Jianwei; Ulukus, Sennur, "Secure Degrees Of Freedom Region Of The Gaussian Interference Channel With Secrecy Constraints," Information Theory Workshop (ITW), 2014 IEEE, pp. 361-365, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970854 The sum secure degrees of freedom (s.d.o.f.) of the K-user interference channel (IC) with secrecy constraints has recently been determined as K(K − 1)/(2K − 1) [1], [2]. In this paper, we determine the entire s.d.o.f. region of this channel model. The converse includes constraints due both to secrecy and to interference. Although the portion of the region close to the optimum sum s.d.o.f. point is governed by the upper bounds due to secrecy constraints, the other portions of the region are governed by the upper bounds due to interference constraints. Differently from the existing literature, in order to fully understand the characterization of the s.d.o.f. region of the IC, one has to study the 4-user case; the 2- and 3-user cases do not illustrate the generality of the problem. In order to prove achievability, we use the polytope structure of the converse region. The extreme points of the converse region are achieved by a (K − m)-user IC with confidential messages, m helpers, and N external eavesdroppers, for m ≥ 1 and a finite N. A byproduct of our results is that the sum s.d.o.f. is achieved only at one extreme point of the s.d.o.f. region, namely the symmetric-rate extreme point.
Keywords: Integrated circuits; Interference channels; Noise; Receivers; Transmitters; Upper bound (ID#: 15-3562)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970854&isnumber=6970773
Guang, Xuan; Lu, Jiyong; Fu, Fang-Wei, "Locality-Preserving Secure Network Coding," Information Theory Workshop (ITW), 2014 IEEE, pp. 396-400, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970861 In the paradigm of network coding, when wiretapping attacks occur, secure network coding is introduced to prevent information from leaking to adversaries. In practical network communications, the source often multicasts messages at several different rates within a session. How to handle information transmission and information security simultaneously under variable rates and a fixed security level is introduced in this paper as the variable-rate, fixed-security-level secure network coding problem. In order to solve this problem effectively, we propose the concept of locality-preserving secure linear network codes of different rates and fixed security level, which have the same local encoding kernel at each internal node. We further present an approach to construct such a family of secure linear network codes and give an algorithm for efficient implementation. This approach saves storage space at both the source node and the internal nodes, and saves network resources and time. Finally, the performance of the proposed algorithm is analyzed, including the field size and the computational and storage complexities.
Keywords: Complexity theory; Decoding; Encoding; Information rates; Kernel; Network coding; Vectors (ID#: 15-3563)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970861&isnumber=6970773
Dai, Bin; Ma, Zheng, "Feedback Enhances The Security Of Degraded Broadcast Channels With Confidential Messages And Causal Channel State Information," Information Theory Workshop (ITW), 2014 IEEE, pp.411,415, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970864 In this paper, we investigate the degraded broadcast channels with confidential messages (DBC-CM), causal channel state information (CSI), and with or without noiseless feedback. The inner and outer bounds on the capacity-equivocation region are given for the non-feedback model, and the capacity-equivocation region is determined for the feedback model. We find that by using this noiseless feedback, the achievable rate-equivocation region (inner bound on the capacity-equivocation region) of the DBC-CM with causal CSI is enhanced.
Keywords: Decoding; Joints; Random variables; Receivers; Silicon; Transmitters; Zinc; Broadcast channel; channel state information; confidential message; feedback (ID#: 15-3564)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970864&isnumber=6970773
Lang, Fei; Deng, Zhixiang; Wang, Bao-Yun, "Secure Communication Of Correlated Sources Over Broadcast Channels," Information Theory Workshop (ITW), 2014 IEEE, pp.416,420, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970865 Broadcast channels with correlated sources are considered from a joint source-channel coding perspective, where each receiver is kept in ignorance of the source intended for the other receiver. This setting can be seen as a generalization of Han-Costa's broadcast channel with correlated sources under additional secrecy constraints on both receivers. General outer and inner bounds for this reliable and secure communication are determined. The joint source-channel coding is proved to be optimal for two special cases, including the sources satisfying a certain Markov property sent over semi-deterministic broadcast channels, and arbitrary correlated sources sent over less-noisy broadcast channels.
Keywords: Decoding; Educational institutions; Encoding; Joints; Markov processes; Receivers; Reliability (ID#: 15-3565)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970865&isnumber=6970773
Benammar, Meryem; Piantanida, Pablo, "On the secrecy capacity region of the Wiretap Broadcast Channel," Information Theory Workshop (ITW), 2014 IEEE, pp.421,425, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970866 This work investigates the secrecy capacity region of the Wiretap Broadcast Channel (WBC) where an encoder communicates two private messages over a Broadcast Channel (BC) while keeping both messages secret from the eavesdropper. Our main result is the derivation of a novel outer bound and an inner bound on the secrecy capacity region of this setting. These results allow us to characterize the capacity region for three non-degraded classes of WBCs: the deterministic and the semi-deterministic WBC with a more noisy eavesdropper, and the WBC when users exhibit less noisiness order between them.
Keywords: Decoding; Encoding; Noise measurement; Receivers; Standards; Zinc (ID#: 15-3566)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970866&isnumber=6970773
Mansour, Ahmed S.; Schaefer, Rafael F.; Boche, Holger, "Secrecy Measures For Broadcast Channels With Receiver Side Information: Joint Vs Individual," Information Theory Workshop (ITW), 2014 IEEE, pp.426,430, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970867 We study the transmission of a common message and three confidential messages over a broadcast channel with two legitimate receivers and an eavesdropper. Each legitimate receiver is interested in decoding two of the three confidential messages, while having the third one as side information. In order to measure the ignorance of the eavesdropper about the confidential messages, we investigate two different secrecy criteria: joint secrecy and individual secrecy. For both criteria, we provide a general achievable rate region. We establish both the joint and individual secrecy capacity if the two legitimate receivers are less noisy than the eavesdropper. We further investigate the scenario where the eavesdropper is less noisy than the two legitimate receivers. It is known that the joint secrecy constraints cannot be fulfilled under this scenario; however, we manage to establish a non-vanishing capacity region for the individual secrecy case.
Keywords: Decoding; Encoding; Joints; Markov processes; Noise measurement; Receivers; Reliability (ID#: 15-3567)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970867&isnumber=6970773
Data, Deepesh; Dey, Bikash K.; Mishra, Manoj; Prabhakaran, Vinod M., "How to Securely Compute The Modulo-Two Sum Of Binary Sources," Information Theory Workshop (ITW), 2014 IEEE, pp.496,500, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970881 In secure multiparty computation, mutually distrusting users in a network want to collaborate to compute functions of data which is distributed among the users. The users should not learn any additional information about the data of others than what they may infer from their own data and the functions they are computing. Previous works have mostly considered the worst case context (i.e., without assuming any distribution for the data); Lee and Abbe (2014) is a notable exception. Here, we study the average case (i.e., we work with a distribution on the data) where correctness and privacy is only desired asymptotically. For concreteness and simplicity, we consider a secure version of the function computation problem of Körner and Marton (1979) where two users observe a doubly symmetric binary source with parameter p and the third user wants to compute the XOR. We show that the amount of communication and randomness resources required depends on the level of correctness desired. When zero-error and perfect privacy are required, the results of Data et al. (2014) show that it can be achieved if and only if a total rate of 1 bit is communicated between every pair of users and private randomness at the rate of 1 is used up. In contrast, we show here that, if we only want the probability of error to vanish asymptotically in blocklength, it can be achieved by a lower rate (binary entropy of p) for all the links and for private randomness; this also guarantees perfect privacy. We also show that no smaller rates are possible even if privacy is only required asymptotically.
Keywords: Data privacy; Distributed databases; Privacy; Protocols; Random variables; Vectors; Zinc (ID#: 15-3568)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970881&isnumber=6970773
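The XOR-masking idea underlying such protocols can be illustrated with a short Python sketch. This is a toy one-bit illustration of masking with shared private randomness, together with the binary entropy rate h(p) mentioned in the abstract; it is not the paper's block-coding protocol, and all names are illustrative:

```python
import secrets
from math import log2

def masked_xor_protocol(x: int, y: int) -> int:
    """Parties 1 and 2 hold bits x and y and share a private random bit r.
    Each transmits its bit masked with r; either message alone is uniformly
    random, yet the third user recovers x ^ y exactly because r cancels."""
    r = secrets.randbits(1)   # shared private randomness
    m1 = x ^ r                # party 1's message: uniform on its own
    m2 = y ^ r                # party 2's message: uniform on its own
    return m1 ^ m2            # r cancels, leaving x ^ y

def binary_entropy(p: float) -> float:
    """h(p): the per-link rate the paper shows suffices when only an
    asymptotically vanishing error probability is required (h(p) < 1
    whenever p != 1/2, i.e., strictly cheaper than the zero-error case)."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)
```

For example, at p = 0.11 the required rate h(p) is roughly 0.5 bits per source bit, about half the 1-bit rate needed for zero error.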
Wang, Yongge; Desmedt, Yvo, "Efficient Secret Sharing Schemes Achieving Optimal Information Rate," Information Theory Workshop (ITW), 2014 IEEE, pp.516,520, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970885 One of the important problems in secret sharing schemes is to establish bounds on the size of the shares to be given to participants in secret sharing schemes. The other important problem in secret sharing schemes is to reduce the computational complexity in both secret distribution phase and secret reconstruction phase. In this paper, we design efficient threshold (n, k) secret sharing schemes to achieve both of the above goals. In particular, we show that if the secret size |s| is larger than max{1 + log2 n, n(n − k)/(n − 1)}, then ideal secret sharing schemes exist. In the efficient ideal secret sharing schemes that we will construct, only XOR-operations on binary strings are required (which is the best we could achieve). These schemes will have many applications both in practice and in theory. For example, they could be used to design very efficient verifiable secret sharing schemes which will have broad applications in secure multi-party computation and could be used to design efficient privacy preserving data storage in cloud systems.
Keywords: Arrays; Cryptography; Generators; Information rates; Polynomials; Reed-Solomon codes (ID#: 15-3569)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970885&isnumber=6970773
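The XOR-only flavor of these schemes is easiest to see in the (n, n) special case, where all n shares are needed. The sketch below is that simple special case, not the paper's general ideal (n, k) threshold construction:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list[bytes]:
    """(n, n) XOR sharing: n - 1 uniformly random shares plus one share
    computed so that all n shares XOR back to the secret. Any n - 1
    shares are jointly uniform and reveal nothing."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares together; the random masks cancel."""
    return reduce(xor_bytes, shares)
```

Each share is exactly as long as the secret, so the scheme is ideal (information rate 1), and both phases use only XOR operations, matching the efficiency goal the paper targets.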
Wang, Ye; Ishwar, Prakash; Rane, Shantanu, "An Elementary Completeness Proof For Secure Two-Party Computation Primitives," Information Theory Workshop (ITW), 2014 IEEE, pp.521,525, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970886 In the secure two-party computation problem, two parties wish to compute a (possibly randomized) function of their inputs via an interactive protocol, while ensuring that neither party learns more than what can be inferred from only their own input and output. For semi-honest parties and information-theoretic security guarantees, it is well-known that, if only noise-less communication is available, only a limited set of functions can be securely computed; however, if interaction is also allowed over general communication primitives (multi-input/output channels), there are “complete” primitives that enable any function to be securely computed. The general set of complete primitives was characterized recently by Maji, Prabhakaran, and Rosulek leveraging an earlier specialized characterization by Kilian. Our contribution in this paper is a simple, self-contained, alternative derivation using elementary information-theoretic tools.
Keywords: Joints; Markov processes; Mutual information; Protocols; Random variables; Redundancy; Security (ID#: 15-3570)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970886&isnumber=6970773
Nafea, Mohamed; Yener, Aylin, "Secure Degrees Of Freedom For The MIMO Wiretap Channel With A Multiantenna Cooperative Jammer," Information Theory Workshop (ITW), 2014 IEEE, pp.626,630, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970907 A multiple antenna Gaussian wiretap channel with a multiantenna cooperative jammer (CJ) is considered and the secure degrees of freedom (s.d.o.f.), with N antennas at the sender, receiver, and eavesdropper, is derived for all possible values of the number of antennas at the cooperative jammer, K. In particular, the upper and lower bounds for the s.d.o.f. are provided for different ranges of K and shown to coincide. Gaussian signaling both for transmission and jamming is shown to be sufficient to achieve the s.d.o.f. of the channel, when the s.d.o.f. is integer-valued. By contrast, when the channel has a non-integer s.d.o.f., structured signaling and joint signal space and signal scale alignment are employed to achieve the s.d.o.f.
Keywords: Jamming; Receiving antennas; Transmitters; Upper bound; Zinc; Zirconium (ID#: 15-3571)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970907&isnumber=6970773
Liu, Shuiyin; Hong, Yi; Viterbo, Emanuele, "Unshared Secret Key Cryptography: Achieving Shannon's Ideal Secrecy And Perfect Secrecy," Information Theory Workshop (ITW), 2014 IEEE, pp.636,640, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970909 In cryptography, a shared secret key is normally mandatory to encrypt the confidential message. In this work, we propose the unshared secret key (USK) cryptosystem. Inspired by the artificial noise (AN) technique, we align a one-time pad (OTP) secret key within the null space of a multiple-input multiple-output (MIMO) channel between transmitter and legitimate receiver, so that the OTP is not needed by the legitimate receiver to decipher, while it is fully affecting the eavesdropper's ability to decipher the confidential message. We show that the USK cryptosystem guarantees Shannon's ideal secrecy and perfect secrecy, if an infinite lattice input alphabet is used.
Keywords: Cryptography; Lattices; Niobium; Noise; Receivers; Vectors (ID#: 15-3572)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970909&isnumber=6970773
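The one-time pad that the USK scheme builds on is simple enough to state in a few lines of Python. This is the classical OTP itself, not the authors' MIMO null-space alignment:

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    """One-time pad: XOR the message with a uniformly random key of the
    same length. With a fresh key per message, the ciphertext is
    statistically independent of the plaintext (Shannon's perfect secrecy)."""
    assert len(key) == len(message), "key must match message length"
    return bytes(m ^ k for m, k in zip(message, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

def new_key(n: int) -> bytes:
    """Fresh uniform key of n bytes; it must never be reused."""
    return secrets.token_bytes(n)
```

The novelty of the USK cryptosystem is that this pad never has to be shared: the channel's null space hides it from the legitimate receiver's observation while corrupting the eavesdropper's.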
International Conferences: IEEE Security and Privacy Workshop (2014) |
The 2014 IEEE Security and Privacy Workshops were held 17-18 May 2014 in San Jose, California. Workshop subjects included insider threats, language-theoretic security, cyber crime, ethics, and data usage management.
Redfield, Catherine M.S.; Date, Hiroyuki, "Gringotts: Securing Data for Digital Evidence," Security and Privacy Workshops (SPW), 2014 IEEE, pp.10, 17, 17-18 May 2014. doi: 10.1109/SPW.2014.11 As digital storage and cloud processing become more common in business infrastructure and security systems, maintaining the provable integrity of accumulated institutional data that may be required as legal evidence also increases in complexity. Since data owners may have an interest in a proposed lawsuit, it is essential that any digital evidence be guaranteed against both outside attacks and internal tampering. Since the timescale required for legal disputes is unrelated to computational and mathematical advances, evidential data integrity must be maintained even after the cryptography that originally protected it becomes obsolete. In this paper we propose Gringotts, a system where data is signed on the device that generates it, transmitted from multiple sources to a server using a novel signature scheme, and stored with its signature on a database running Evidence Record Syntax, a protocol for long-term archival systems that maintains the data integrity of the signature, even over the course of changing cryptographic practices. Our proof of concept for a small surveillance camera network had a processing (throughput) overhead of 7.5%, and a storage overhead of 6.2%.
Keywords: Cameras; Cryptography; Databases; Protocols; Receivers; Servers; Digital Evidence; Digital Signatures; Evidence Record Syntax; Long-Term Authenticity; Stream Data (ID#: 15-3445)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957278&isnumber=6957265
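The long-term tamper-evidence idea behind archival protocols such as Evidence Record Syntax can be illustrated with a minimal hash chain. This sketch shows only the chaining principle, not ERS itself or the Gringotts signature scheme:

```python
import hashlib

def chain_records(records: list[bytes]) -> list[str]:
    """Toy hash chain for an append-only evidence log: each digest commits
    to the current record and the previous digest, so altering any stored
    record invalidates every digest that follows it."""
    digests, prev = [], b""
    for rec in records:
        prev = hashlib.sha256(prev + rec).digest()
        digests.append(prev.hex())
    return digests
```

In a real archival system the chain's head would additionally be signed and periodically re-timestamped, which is how ERS-style systems survive the obsolescence of any single cryptographic algorithm.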
Iyilade, Johnson; Vassileva, Julita, "P2U: A Privacy Policy Specification Language for Secondary Data Sharing and Usage," Security and Privacy Workshops (SPW), 2014 IEEE, pp.18, 22, 17-18 May 2014. doi: 10.1109/SPW.2014.12 Within the last decade, there are growing economic and social incentives and opportunities for secondary use of data in many sectors, and strong market forces currently drive the active development of systems that aggregate user data gathered by many sources. This secondary use of data poses privacy threats due to unwanted use of data for the wrong purposes, such as discriminating against the user for employment, loans, and insurance. Traditional privacy policy languages such as the Platform for Privacy Preferences (P3P) are inadequate since they were designed long before many of these technologies were invented and basically focus on enabling user-awareness and control during primary data collection (e.g. by a website). However, with the advent of Web 2.0 and Social Networking Sites, the landscape of privacy is shifting from limiting collection of data by websites to ensuring ethical use of the data after initial collection. To meet the current challenges of privacy protection in secondary context, we propose a privacy policy language, Purpose-to-Use (P2U), aimed at enforcing privacy while enabling secondary user information sharing across applications, devices, and services on the Web.
Keywords: Context; Data privacy; Economics; Information management; Mobile communication; Organizations; Privacy; Policy Languages; Privacy; Secondary Use; Usage Control (ID#: 15-3446)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957279&isnumber=6957265
Lazouski, Aliaksandr; Mancini, Gaetano; Martinelli, Fabio; Mori, Paolo, "Architecture, Workflows, and Prototype for Stateful Data Usage Control in Cloud," Security and Privacy Workshops (SPW), 2014 IEEE, pp.23,30, 17-18 May 2014. doi: 10.1109/SPW.2014.13 This paper deals with the problem of continuous usage control of multiple copies of data objects in distributed systems. This work defines an architecture, a set of workflows, a set of policies and an implementation for the distributed enforcement. The policies, besides including access and usage rules, also specify the parties that will be involved in the decision process. Indeed, the enforcement requires collaboration of several entities because the access decision might be evaluated on one site, enforced on another, and the attributes needed for the policy evaluation might be stored in many distributed locations.
Keywords: Authorization; Concurrent computing; Data models; Distributed databases; Process control; Resource management; Attributes; Cloud System; Concurrency Control; UCON; Usage Control (ID#: 15-3447)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957280&isnumber=6957265
Wohlgemuth, Sven, "Resilience as a New Enforcement Model for IT Security Based on Usage Control," Security and Privacy Workshops (SPW), 2014 IEEE, pp.31,38, 17-18 May 2014. doi: 10.1109/SPW.2014.14 Security and privacy are not only general requirements of a society but also indispensable enablers for innovative IT infrastructure applications aiming at increased, sustainable welfare and safety of a society. A critical activity of these IT applications is spontaneous information exchange. This information exchange, however, creates inevitable, unknown dependencies between the participating IT systems, which, in turn threaten security and privacy. With the current approach to IT security, security and privacy follow changes and incidents rather than anticipating them. By sticking to a given threat model, the current approach fails to consider vulnerabilities which arise during a spontaneous information exchange. With the goal of improving security and privacy, this work proposes adapting an IT security model and its enforcement to current and most probable incidents before they result in an unacceptable risk for the participating parties or failure of IT applications. Usage control is the suitable security policy model, since it allows changes during run-time without conceptually raising additional incidents.
Keywords: Adaptation models; Adaptive systems; Availability; Information exchange; Privacy; Resilience; Security; data provenance; identity management; resilience; security and privacy; usage control (ID#: 15-3448)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957281&isnumber=6957265
Lovat, Enrico; Kelbert, Florian, "Structure Matters - A New Approach for Data Flow Tracking," Security and Privacy Workshops (SPW), 2014 IEEE, pp.39,43, 17-18 May 2014. doi: 10.1109/SPW.2014.15 Usage control (UC) is concerned with how data may or may not be used after initial access has been granted. UC requirements are expressed in terms of data (e.g. a picture, a song) which exist within a system in forms of different technical representations (containers, e.g. files, memory locations, windows). A model combining UC enforcement with data flow tracking across containers has been proposed in the literature, but it exhibits a high false positives detection rate. In this paper we propose a refined approach for data flow tracking that mitigates this over approximation problem by leveraging information about the inherent structure of the data being tracked. We propose a formal model and show some exemplary instantiations.
Keywords: Containers; Data models; Discrete Fourier transforms; Operating systems; Postal services; Security; Semantics; data flow tracking; data structure; usage control (ID#: 15-3449)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957282&isnumber=6957265
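The coarse container-level tracking model that this paper refines can be sketched in a few lines. Below is that baseline model only (a map from containers to the data items they may hold, with copy events propagating the whole set), which exhibits exactly the over-approximation the authors address; it is not their structured refinement:

```python
from collections import defaultdict

class DataFlowTracker:
    """Baseline container-level data-flow tracking: each container (file,
    memory region, window, ...) maps to the set of data items it may hold.
    A copy event propagates the source's entire set to the destination,
    which is sound but over-approximates (the false-positive problem)."""

    def __init__(self) -> None:
        self.storage: dict[str, set[str]] = defaultdict(set)

    def init_data(self, container: str, data: str) -> None:
        """Record that a data item initially resides in a container."""
        self.storage[container].add(data)

    def copy(self, src: str, dst: str) -> None:
        """A copy may transfer anything in src, so dst inherits src's set."""
        self.storage[dst] |= self.storage[src]

    def may_contain(self, container: str, data: str) -> bool:
        return data in self.storage[container]
```

Tracking structure (e.g. that only a picture's thumbnail, not the picture, was copied) is what lets the refined model prune entries from these sets.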
Naveed, Muhammad, "Hurdles for Genomic Data Usage Management," Security and Privacy Workshops (SPW), 2014 IEEE, pp.44,48, 17-18 May 2014. doi: 10.1109/SPW.2014.44 Our genome determines our appearance, gender, diseases, reaction to drugs, and much more. It not only contains information about us but also about our relatives, past generations, and future generations. This creates many policy and technology challenges to protect privacy and manage usage of genomic data. In this paper, we identify various features of genomic data that make its usage management very challenging and different from other types of data. We also describe some ideas about potential solutions and propose some recommendations for the usage of genomic data.
Keywords: Bioinformatics; Cryptography; DNA; Data privacy; Genomics; Privacy; Sequential analysis (ID#: 15-3450)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957283&isnumber=6957265
Kang, Yuan J.; Schiffman, Allan M.; Shrager, Jeff, "RAPPD: A Language and Prototype for Recipient-Accountable Private Personal Data," Security and Privacy Workshops (SPW), 2014 IEEE, pp.49,56, 17-18 May 2014. doi: 10.1109/SPW.2014.16 We often communicate private data in informal settings such as email, where we trust that the recipient shares our assumptions regarding the disposition of this data. Sometimes we informally express our desires in this regard, but there is no formal means in such settings to make our wishes explicit, nor to hold the recipient accountable. Here we describe a system and prototype implementation called Recipient-Accountable Private Personal Data, which lets the originator express his or her privacy desires regarding data transmitted in email, and provides some accountability. Our method only assumes that the recipient is reading the email online, and on an email reader that will execute HTML and JavaScript.
Keywords: Data privacy; Electronic mail; IP networks; Law; Medical services; Privacy; Prototypes; accountability; auditing; creative commons; email privacy; privacy; trust; usability; usable privacy (ID#: 15-3451)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957284&isnumber=6957265
Hanaei, Ebrahim Hamad Al; Rashid, Awais, "DF-C2M2: A Capability Maturity Model for Digital Forensics Organisations," Security and Privacy Workshops (SPW), 2014 IEEE, pp.57,60, 17-18 May 2014. doi: 10.1109/SPW.2014.17 The field of digital forensics has emerged as one of the fastest changing and most rapidly developing investigative specialisations in a wide range of criminal and civil cases. Increasingly there is a requirement from the various legal and judicial authorities throughout the world, that any digital evidence presented in criminal and civil cases should meet requirements regarding the acceptance and admissibility of digital evidence, e.g., Daubert or Frye in the US. There is also increasing expectation that digital forensics labs are accredited to ISO 17025 or the US equivalent ASCLD-Lab International requirements. On the one hand, these standards cover general requirements and are not geared specifically towards digital forensics. On the other hand, digital forensics labs are mostly left with costly piece-meal efforts in order to try and address such pressing legal and regulatory requirements. In this paper, we address these issues by proposing DF-C^2M^2, a capability maturity model that enables organisations to evaluate the maturity of their digital forensics capabilities and identify roadmaps for improving it in accordance with business or regulatory requirements. The model has been developed through consultations and interviews with digital forensics experts. The model has been evaluated by using it to assess the digital forensics capability maturity of a lab in a law enforcement agency.
Keywords: Capability maturity model; Conferences; Digital forensics; ISO standards; Law enforcement; ASCLD-Lab; Capability Maturity; Digital Forensics; ISO 17025 (ID#: 15-3452)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957285&isnumber=6957265
Hu, Xin; Wang, Ting; Stoecklin, Marc Ph.; Schales, Douglas L.; Jang, Jiyong; Sailer, Reiner, "Asset Risk Scoring in Enterprise Network with Mutually Reinforced Reputation Propagation," Security and Privacy Workshops (SPW), 2014 IEEE, pp.61,64, 17-18 May 2014. doi: 10.1109/SPW.2014.18 Cyber security attacks are becoming ever more frequent and sophisticated. Enterprises often deploy several security protection mechanisms, such as anti-virus software, intrusion detection prevention systems, and firewalls, to protect their critical assets against emerging threats. Unfortunately, these protection systems are typically "noisy", e.g., regularly generating thousands of alerts every day. Plagued by false positives and irrelevant events, it is often neither practical nor cost-effective to analyze and respond to every single alert. The main challenge faced by enterprises is to extract important information from the plethora of alerts and to infer potential risks to their critical assets. A better understanding of risks will facilitate effective resource allocation and prioritization of further investigation. In this paper, we present MUSE, a system that analyzes a large number of alerts and derives risk scores by correlating diverse entities in an enterprise network. Instead of considering a risk as an isolated and static property, MUSE models the dynamics of a risk based on the mutual reinforcement principle. We evaluate MUSE with real-world network traces and alerts from a large enterprise network, and demonstrate its efficacy in risk assessment and flexibility in incorporating a wide variety of data sets.
Keywords: Belief propagation; Bipartite graph; Data mining; Intrusion detection; Malware; Servers; Risk Scoring; mutually reinforced principles; reputation propagation (ID#: 15-3453)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957286&isnumber=6957265
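The mutual reinforcement principle that MUSE builds on can be illustrated with a HITS-style iteration on a bipartite host/alert graph: a host looks riskier if it triggers risky alert types, and an alert type looks riskier if risky hosts trigger it. The sketch below shows only this generic principle with hypothetical entity names, not the MUSE system itself:

```python
def propagate_risk(edges: list[tuple[str, str]], iters: int = 20):
    """Iterative mutual-reinforcement scoring on (host, alert-type) edges.
    Scores are alternately pushed from hosts to alert types and back,
    with host scores normalized each round so the iteration converges."""
    hosts = {h for h, _ in edges}
    alerts = {a for _, a in edges}
    h_score = {h: 1.0 for h in hosts}
    a_score = {a: 1.0 for a in alerts}
    for _ in range(iters):
        # an alert type accumulates the scores of the hosts that raise it
        a_score = {a: sum(h_score[h] for h, a2 in edges if a2 == a)
                   for a in alerts}
        # a host accumulates the scores of the alert types it raises
        h_score = {h: sum(a_score[a] for h2, a in edges if h2 == h)
                   for h in hosts}
        norm = sum(h_score.values()) or 1.0
        h_score = {h: s / norm for h, s in h_score.items()}
    return h_score, a_score
```

In this toy version a host tied to many mutually reinforcing alerts ends up with a higher relative score than one raising a single common alert, mirroring how a dynamic risk model separates important alerts from background noise.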
Faria, Rubens Alexandre De; Fonseca, Keiko V.Ono; Schneider, Bertoldo; Nguang, Sing Kiong, "Collusion and Fraud Detection on Electronic Energy Meters - A Use Case of Forensics Investigation Procedures," Security and Privacy Workshops (SPW), 2014 IEEE, pp.65,68, 17-18 May 2014. doi: 10.1109/SPW.2014.19 Smart meters (gas, electricity, water, etc.) play a fundamental role on the implementation of the Smart Grid concept. Nevertheless, the rollout of smart meters needed to achieve the foreseen benefits of the integrated network of devices is still slow. Among the reasons for the slower pace is the lack of trust on electronic devices and new kinds of frauds based on clever tampering and collusion. These facts have been challenging service providers and imposing great revenues losses. This paper presents a use case of forensics investigation procedures applied to detect electricity theft based on tampered electronic devices. The collusion fraud draw our attention for the involved amounts (losses) caused to the provider and the technique applied to hide fraud evidences.
Keywords: Electricity; Energy consumption; Microcontrollers; Radio frequency; Security; Sensors; Switches; electricity measurement fraud; electronic meter; forensics investigation procedure; tampering technique (ID#: 15-3454)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957287&isnumber=6957265
Shulman, Haya; Waidner, Michael, "Towards Forensic Analysis of Attacks with DNSSEC," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 69, 76, 17-18 May 2014. doi: 10.1109/SPW.2014.20 DNS cache poisoning is a stepping stone towards advanced (cyber) attacks, and can be used to monitor users' activities, for censorship, to distribute malware and spam, and even to subvert correctness and availability of Internet networks and services. The DNS infrastructure relies on challenge-response defences, which are deemed effective for thwarting attacks by (the common) off-path adversaries. Such defences do not suffice against stronger adversaries, e.g., man-in-the-middle (MitM). However, there seems to be little willingness to adopt systematic, cryptographic mechanisms, since stronger adversaries are not believed to be common. In this work we validate this assumption and show that it is imprecise. In particular, we demonstrate that: (1) attackers can frequently obtain MitM capabilities, and (2) even weaker attackers can subvert DNS security. Indeed, as we show, despite wide adoption of challenge-response defences, cache-poisoning attacks against DNS infrastructure are highly prevalent. We evaluate security of domain registrars and name servers, experimentally, and find vulnerabilities, which expose DNS infrastructure to cache poisoning. We review DNSSEC, the defence against DNS cache poisoning, and argue that, not only it is the most suitable mechanism for preventing cache poisoning attacks, but it is also the only proposed defence that enables a-posteriori forensic analysis of attacks. Specifically, DNSSEC provides cryptographic evidences, which can be presented to, and validated by, any third party and can be used in investigations and for detection of attacks even long after the attack took place.
Keywords: Computer crime; Cryptography; Forensics; Internet; Routing; Servers; DNS cache-poisoning; DNSSEC; cryptographic evidences; cyber attacks; digital signatures; security (ID#: 15-3455)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957288&isnumber=6957265
Iedemska, Jane; Stringhini, Gianluca; Kemmerer, Richard; Kruegel, Christopher; Vigna, Giovanni, "The Tricks of the Trade: What Makes Spam Campaigns Successful?," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 77, 83, 17-18 May 2014. doi: 10.1109/SPW.2014.21 Spam is a profitable business for cyber criminals, with the revenue of a spam campaign that can be in the order of millions of dollars. For this reason, a wealth of research has been performed on understanding how spamming botnets operate, as well as what the economic model behind spam looks like. Running a spamming botnet is a complex task: the spammer needs to manage the infected machines, the spam content being sent, and the email addresses to be targeted, among the rest. In this paper, we try to understand which factors influence the spam delivery process and what characteristics make a spam campaign successful. To this end, we analyzed the data stored on a number of command and control servers of a large spamming botnet, together with the guidelines and suggestions that the botnet creators provide to spammers to improve the performance of their botnet.
Keywords: Databases; Guidelines; Manuals; Mathematical model; Servers; Unsolicited electronic mail; Botnet; Cybercrime; Spam (ID#: 15-3456)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957289&isnumber=6957265
Sarvari, Hamed; Abozinadah, Ehab; Mbaziira, Alex; Mccoy, Damon, "Constructing and Analyzing Criminal Networks," Security and Privacy Workshops (SPW), 2014 IEEE, pp.84,91, 17-18 May 2014. doi: 10.1109/SPW.2014.22 Analysis of criminal social graph structures can enable us to gain valuable insights into how these communities are organized, such as how large-scale and centralized they currently are. While these types of analyses have been performed in the past, we wanted to explore how to construct a large scale social graph from a smaller set of leaked data that included only the criminals' email addresses. We begin our analysis by constructing a 43 thousand node social graph from one thousand publicly leaked criminals' email addresses. This is done by locating Facebook profiles that are linked to these same email addresses and scraping the public social graph from these profiles. We then perform a large scale analysis of this social graph to identify profiles of high rank criminals, criminal organizations and large scale communities of criminals. Finally, we perform a manual analysis of these profiles that results in the identification of many criminally focused public groups on Facebook. This analysis demonstrates the amount of information that can be gathered by using limited data leaks.
Keywords: Communities; Electronic mail; Facebook; Joining processes; Manuals; Organizations; analysis; community detection; criminal networks; cybercrime; social graph (ID#: 15-3457)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957290&isnumber=6957265
Grabska, Iwona; Szczypiorski, Krzysztof, "Steganography in Long Term Evolution Systems," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 92-99, 17-18 May 2014. doi: 10.1109/SPW.2014.23 This paper contains a description and analysis of a new steganographic method, called LaTEsteg, designed for LTE (Long Term Evolution) systems. LaTEsteg uses physical layer padding of packets sent over LTE networks. This method allows users to gain additional data transfer that is invisible to unauthorized parties who are unaware of the hidden communication. Three important parameters of LaTEsteg are defined and evaluated: performance, cost and security.
Keywords: Channel capacity; IP networks; Long Term Evolution; Phase shift keying; Proposals; Protocols; Throughput; 4G; LTE; Steganographic Algorithm; Steganographic Channel; Steganography (ID#: 15-3458)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957291&isnumber=6957265
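The padding-based embedding idea described in the abstract above can be sketched in a few lines. This is a minimal illustration under our own assumptions, not the authors' LaTEsteg implementation: the fixed 64-byte frame size and the length-byte layout are invented for the example.

```python
FRAME_SIZE = 64  # hypothetical fixed physical-layer frame length in bytes

def embed(payload: bytes, covert: bytes) -> bytes:
    """Hide covert bytes in the padding that fills out a fixed-size frame."""
    pad_room = FRAME_SIZE - len(payload) - 1  # one byte stores the covert length
    if len(covert) > pad_room:
        raise ValueError("covert message does not fit in the padding")
    # frame = payload | covert-length byte | covert bytes | zero padding
    return payload + bytes([len(covert)]) + covert + bytes(pad_room - len(covert))

def extract(frame: bytes, payload_len: int) -> bytes:
    """A receiver that knows the payload length recovers the hidden bytes."""
    covert_len = frame[payload_len]
    return frame[payload_len + 1 : payload_len + 1 + covert_len]
```

To an observer unaware of the convention, the frame looks like an ordinarily padded packet of constant length; only parties that know the scheme read the padding.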
Lipinski, Bartosz; Mazurczyk, Wojciech; Szczypiorski, Krzysztof, "Improving Hard Disk Contention-Based Covert Channel in Cloud Computing," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 100-107, 17-18 May 2014. doi: 10.1109/SPW.2014.24 Steganographic methods allow the covert exchange of secret data between parties aware of the procedure. The cloud computing environment is a new and emerging target for steganographers, but few solutions have been proposed so far. This paper proposes CloudSteg, a steganographic method that creates a covert channel based on hard disk contention between two cloud instances that reside on the same physical machine. Experimental results conducted using the open-source cloud environment OpenStack show that CloudSteg is able to achieve a bandwidth of about 0.1 bps, roughly 1000 times higher than the previously reported state of the art.
Keywords: Bandwidth; Cloud computing; Computational modeling; Hard disks; Robustness; Synchronization; cloud computing; covert channel; information hiding; steganography (ID#: 15-3459)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957292&isnumber=6957265
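The receiving side of a contention-based channel like the one described above can be illustrated with a toy decoder. This is our sketch, not the CloudSteg code, and the 20 ms threshold is an invented parameter: the sender signals a 1-bit by generating disk load during an agreed time slot and a 0-bit by staying idle, and the receiver classifies each slot by the I/O latency it observes.

```python
THRESHOLD_MS = 20.0  # hypothetical latency cutoff separating "loaded" from "idle" slots

def decode(slot_latencies_ms):
    """Map one measured disk latency per time slot to a covert bit.

    High latency means the co-resident sender was contending for the
    disk during that slot (bit 1); low latency means it stayed idle (bit 0).
    """
    return [1 if latency > THRESHOLD_MS else 0 for latency in slot_latencies_ms]
```

In practice, the hard parts are slot synchronization between the two instances and robustness to background load, both of which the paper's keywords point to.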
Narang, Pratik; Ray, Subhajit; Hota, Chittaranjan; Venkatakrishnan, Venkat, "PeerShark: Detecting Peer-to-Peer Botnets by Tracking Conversations," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 108-115, 17-18 May 2014. doi: 10.1109/SPW.2014.25 The decentralized nature of Peer-to-Peer (P2P) botnets makes them difficult to detect. Their distributed nature also exhibits resilience against take-down attempts. Moreover, smarter bots are stealthy in their communication patterns, and elude the standard discovery techniques which look for anomalous network or communication behavior. In this paper, we propose PeerShark, a novel methodology to detect P2P botnet traffic and differentiate it from benign P2P traffic in a network. Instead of the traditional 5-tuple 'flow-based' detection approach, we use a 2-tuple 'conversation-based' approach which is port-oblivious, protocol-oblivious and does not require Deep Packet Inspection. PeerShark could also classify different P2P applications with an accuracy of more than 95%.
Keywords: Electronic mail; Feature extraction; Firewalls (computing); IP networks; Internet; Peer-to-peer computing; Ports (Computers); botnet; machine learning; peer-to-peer (ID#: 15-3460)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957293&isnumber=6957265
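The 2-tuple "conversation" abstraction that the abstract contrasts with 5-tuple flows can be sketched roughly as follows (our illustration, not the PeerShark source): all packets between the same pair of hosts are aggregated into one conversation, regardless of port or transport protocol.

```python
from collections import defaultdict

def conversations(packets):
    """Aggregate packets into 2-tuple conversations.

    packets: iterable of (src_ip, dst_ip, nbytes) tuples.
    Returns {(ip_a, ip_b): {"packets": n, "bytes": total}}, where the key is
    direction-agnostic, so both halves of a dialogue land in one record.
    """
    convs = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, nbytes in packets:
        key = tuple(sorted((src, dst)))  # ignore direction, port, and protocol
        convs[key]["packets"] += 1
        convs[key]["bytes"] += nbytes
    return dict(convs)
```

Per-conversation statistics such as these (plus duration and inter-arrival times) would then feed a classifier that separates botnet from benign P2P traffic.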
Drew, Jake; Moore, Tyler, "Automatic Identification of Replicated Criminal Websites Using Combined Clustering," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 116-123, 17-18 May 2014. doi: 10.1109/SPW.2014.26 To be successful, cyber criminals must figure out how to scale their scams. They duplicate content on new websites, often staying one step ahead of defenders who shut down past schemes. For some scams, such as phishing and counterfeit-goods shops, the duplicated content remains nearly identical. In others, such as advanced-fee fraud and online Ponzi schemes, the criminal must alter content so that it appears different in order to evade detection by victims and law enforcement. Nevertheless, similarities often remain, in terms of the website structure or content, since making truly unique copies does not scale well. In this paper, we present a novel combined clustering method that links together replicated scam websites, even when the criminal has taken steps to hide connections. We evaluate its performance against two collected datasets of scam websites: fake-escrow services and high-yield investment programs (HYIPs). We find that our method groups similar websites together more accurately than do existing general-purpose consensus clustering methods.
Keywords: Clustering algorithms; Clustering methods; HTML; Indexes; Investment; Manuals; Sociology (ID#: 15-3461)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957294&isnumber=6957265
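For flavor, a minimal form of consensus clustering (a generic majority-vote co-association scheme, not the authors' combined method) merges two items whenever most of the input clusterings place them in the same cluster:

```python
def consensus(labelings):
    """Merge items that co-cluster in a majority of input labelings.

    labelings: list of equal-length label lists, one per base clustering.
    Returns a canonical cluster id per item, computed with union-find.
    """
    n = len(labelings[0])
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            agree = sum(1 for lab in labelings if lab[i] == lab[j])
            if agree > len(labelings) / 2:  # majority of clusterings agree
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]
```

The paper's contribution is a more accurate combination than generic schemes like this one; the sketch only shows the kind of input and output involved.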
Peersman, Claudia; Schulze, Christian; Rashid, Awais; Brennan, Margaret; Fischer, Carl, "iCOP: Automatically Identifying New Child Abuse Media in P2P Networks," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 124-131, 17-18 May 2014. doi: 10.1109/SPW.2014.27 The increasing levels of child sex abuse (CSA) media being shared in peer-to-peer (P2P) networks pose a significant challenge for law enforcement agencies. Although a number of P2P monitoring tools to detect offender activity in such networks exist, they typically rely on hash value databases of known CSA media. Such an approach cannot detect new or previously unknown media being shared. Yet identifying such new, previously unknown media is a priority for law enforcement: it can be an indicator of recent or ongoing child abuse. Furthermore, originators of such media can be hands-on abusers, and their apprehension can safeguard children from further abuse. The sheer volume of activity on P2P networks, however, makes manual detection virtually infeasible. In this paper, we present a novel approach that combines sophisticated filename and media analysis techniques to automatically flag new, previously unseen CSA media to investigators. The approach has been implemented in the iCOP toolkit. Our evaluation on real case data shows high degrees of accuracy, while hands-on trials with law enforcement officers highlight iCOP's usability and its complementarity to existing investigative workflows.
Keywords: Engines; Feature extraction; Law enforcement; Media; Skin; Streaming media; Visualization; child protection; cyber crime; image classification; paedophilia; peer-to-peer computing; text analysis (ID#: 15-3462)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957295&isnumber=6957265
Gokcen, Yasemin; Foroushani, Vahid Aghaei; Zincir-Heywood, A. Nur, "Can We Identify NAT Behavior by Analyzing Traffic Flows?," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 132-139, 17-18 May 2014. doi: 10.1109/SPW.2014.28 It is shown in the literature that network address translation devices have become a convenient way to hide the source of malicious behaviors. In this research, we explore how far we can push a machine learning (ML) approach to identify such behaviors using only network flows. We evaluate our proposed approach on different traffic data sets against passive fingerprinting approaches and show that the performance of a machine learning approach is very promising, even without using any payload (application layer) information.
Keywords: Browsers; Classification algorithms; Computers; Fingerprint recognition; IP networks; Internet; Payloads; Network address translation classification; machine learning; traffic analysis; traffic flows (ID#: 15-3463)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957296&isnumber=6957265
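To give a flavor of flow-level evidence for NAT (an invented heuristic of our own, not the paper's ML classifier): a source address whose packets carry several distinct initial TTL values is suspicious, because it suggests multiple operating systems hiding behind one IP.

```python
from collections import defaultdict

def likely_nat(flows, ttl_variety_threshold=2):
    """Flag source IPs whose flows show diverse initial TTL values.

    flows: iterable of (src_ip, initial_ttl) pairs taken from flow records.
    A single host normally uses one OS default TTL (e.g. 64 or 128), so a
    source emitting several distinct values likely aggregates many hosts.
    """
    ttls_seen = defaultdict(set)
    for src_ip, ttl in flows:
        ttls_seen[src_ip].add(ttl)
    return {ip for ip, seen in ttls_seen.items()
            if len(seen) >= ttl_variety_threshold}
```

A real flow-based classifier, as in the paper, would combine many such features and learn the decision boundary from labeled traffic rather than use a fixed threshold.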
Jaeger, Eric; Levillain, Olivier, "Mind Your Language(s): A Discussion about Languages and Security," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 140-151, 17-18 May 2014. doi: 10.1109/SPW.2014.29 Following several studies conducted by the French Network and Information Security Agency (ANSSI), this paper discusses the intrinsic security characteristics of programming languages. Through illustrations and discussions, it advocates a different vision of well-known mechanisms and is intended to provide some food for thought regarding languages and development tools.
Keywords: Cryptography; Encapsulation; Java; Software; Standards; compilation; evaluation; programming languages; security; software development; software engineering (ID#: 15-3464)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957297&isnumber=6957265
Volpano, Dennis, Security and Privacy Workshops (SPW), 2014 IEEE, pp. 152-157, 17-18 May 2014. doi: 10.1109/SPW.2014.30 A fundamental unit of computation is introduced for reactive programming called the LEGO(TM) brick. It is targeted at domains where JavaScript runs, in an attempt to allow a user to build a trustworthy reactive program on demand rather than try to analyze JavaScript. A formal definition is given for snapping bricks together based on the standard product construction for deterministic finite automata.
Keywords: Adders; Automata; Browsers; Delays; Keyboards; Mice; Programming; programming methodology (ID#: 15-3465)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957298&isnumber=6957265
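The standard product construction for deterministic finite automata that the abstract refers to can be written compactly. This is the generic textbook construction, not the paper's code; the tuple representation of a DFA is our own choice.

```python
def product(dfa_a, dfa_b):
    """Product of two DFAs over the same alphabet.

    Each DFA is (states, alphabet, delta, start, accepting), with delta a
    dict {(state, symbol): next_state}. The product's states are pairs, and
    it accepts exactly the intersection of the two languages.
    """
    states_a, alphabet, delta_a, start_a, accept_a = dfa_a
    states_b, _, delta_b, start_b, accept_b = dfa_b
    states = [(p, q) for p in states_a for q in states_b]
    delta = {((p, q), c): (delta_a[(p, c)], delta_b[(q, c)])
             for p in states_a for q in states_b for c in alphabet}
    accepting = {(p, q) for p in accept_a for q in accept_b}
    return (states, alphabet, delta, (start_a, start_b), accepting)

def accepts(dfa, word):
    _, _, delta, state, accepting = dfa
    for c in word:
        state = delta[(state, c)]
    return state in accepting
```

For example, snapping together a brick that accepts an even number of 'a's and one that accepts lengths divisible by three yields a machine accepting lengths divisible by six.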
Bangert, Julian; Zeldovich, Nickolai, "Nail: A Practical Interface Generator for Data Formats," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 158-166, 17-18 May 2014. doi: 10.1109/SPW.2014.31 We present Nail, an interface generator that allows programmers to safely parse and generate protocols defined by a Parsing-Expression-based grammar. Nail uses a richer set of parser combinators that induce an internal representation, obviating the need to write semantic actions. Nail also provides solutions for parsing common patterns, such as length and offset fields within binary formats, that are hard to process with existing parser generators.
Keywords: Data models; Generators; Grammar; Nails; Protocols; Semantics; Syntactics; Binary formats; LangSec; Offset field; Output; Parsing (ID#: 15-3470)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957299&isnumber=6957265
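A length field governing how many bytes follow is one of the "common patterns" the abstract mentions. A hypothetical combinator for it might look like the sketch below; this is our illustration of the pattern, not Nail's actual API or grammar syntax.

```python
def uint8(data: bytes, pos: int):
    """Parse one unsigned byte; return (value, new_position)."""
    if pos >= len(data):
        raise ValueError("unexpected end of input")
    return data[pos], pos + 1

def length_prefixed(data: bytes, pos: int):
    """Parse a 1-byte length field followed by exactly that many payload bytes.

    Validating the length against the remaining input is what keeps such
    fields from becoming a source of out-of-bounds reads in hand-written
    parsers.
    """
    n, pos = uint8(data, pos)
    if pos + n > len(data):
        raise ValueError("length field exceeds remaining input")
    return data[pos:pos + n], pos + n
```

A generator like Nail emits both this parsing direction and the corresponding output direction from one grammar, so the two cannot drift apart.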
Petullo, W.Michael; Fei, Wenyuan; Solworth, Jon A.; Gavlin, Pat, "Ethos' Deeply Integrated Distributed Types," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 167-180, 17-18 May 2014. doi: 10.1109/SPW.2014.32 Programming languages have long incorporated type safety, increasing their level of abstraction and thus aiding programmers. Type safety eliminates whole classes of security-sensitive bugs, replacing the tedious and error-prone search for such bugs in each application with verifying the correctness of the type system. Despite their benefits, these protections often end at the process boundary, that is, type safety holds within a program but usually not to the file system or communication with other programs. Existing operating system approaches to bridge this gap require the use of a single programming language or common language runtime. We describe the deep integration of type safety in Ethos, a clean-slate operating system which requires that all program input and output satisfy a recognizer before applications are permitted to further process it. Ethos types are multilingual and runtime-agnostic, and each has an automatically generated unique type identifier. Ethos bridges the type-safety gap between programs by (1) providing a convenient mechanism for specifying the types each program may produce or consume, (2) ensuring that each type has a single, distributed-system-wide recognizer implementation, and (3) inescapably enforcing these type constraints.
Keywords: Kernel; Protocols; Robustness; Runtime; Safety; Security; Semantics; Operating system; language-theoretic security; type system (ID#: 15-3471)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957300&isnumber=6957265
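The "automatically generated unique type identifier" idea can be approximated by hashing a canonical encoding of the type definition. This is only our illustration of the concept; Ethos' actual identifier scheme and type-definition language may differ.

```python
import hashlib
import json

def type_id(type_spec: dict) -> str:
    """Derive a stable identifier from a canonical encoding of a type.

    Sorting keys and fixing separators makes the encoding canonical, so
    semantically identical definitions written in any order or language
    binding hash to the same identifier.
    """
    canonical = json.dumps(type_spec, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because the identifier depends only on the definition, two programs that agree on a type automatically agree on its identifier, which is what lets a distributed system enforce a single recognizer per type.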
Goodspeed, Travis, "Phantom Boundaries and Cross-Layer Illusions in 802.15.4 Digital Radio," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 181-184, 17-18 May 2014. doi: 10.1109/SPW.2014.33 The classic design of protocol stacks, where each layer of the stack receives and unwraps the payload of the next layer, implies that each layer has a parser that accepts Protocol Data Units and extracts the intended Service Data Units from them. The PHY layer plays a special role, because it must create frames, i.e., original PDUs, from a stream of bits or symbols. An important property implicitly expected from these parsers is that SDUs are passed to the next layer only if the encapsulating PDUs from all previous layers were received exactly as transmitted by the sender and were syntactically correct. The Packet-in-packet attack (WOOT 2011) showed that this false assumption could be easily violated and exploited on IEEE 802.15.4 and similar PHY layers; however, it did not challenge the assumption that symbols and bytes recognized by the receiver were as transmitted by the sender. This work shows that even that assumption is wrong: in fact, a valid received frame may share no symbols with the sent one! This property is due to a particular choice of low-level chip encoding in 802.15.4, which enables the attacker to co-opt the receiver's error correction. This case study demonstrates that PHY layer logic is as susceptible to input-language manipulation attacks as other layers, or perhaps more so. Consequently, when designing protocol stacks, language-theoretic considerations must be taken into account from the very bottom of the PHY layer; no layer is too low to be considered "mere engineering."
Keywords: Automata; Error correction codes; IEEE 802.15 Standards; Noise; Protocols; Receivers; Security; 802.15.4; LangSec; Packet-in-packet (ID#: 15-3472)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957301&isnumber=6957265
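The decoding step the attack co-opts can be illustrated with a toy spreading table. The 8-chip codes below are invented for the example (real 802.15.4 symbols use 32-chip sequences): the receiver outputs the *nearest* valid symbol by Hamming distance, so sufficiently corrupted chips decode to a symbol the sender never transmitted.

```python
# Hypothetical 8-chip spreading codes for four symbols (not the real
# 802.15.4 chip table, which maps 4-bit symbols to 32-chip sequences).
CHIP_TABLE = {
    0: "11011001",
    1: "00100110",
    2: "11100010",
    3: "00011101",
}

def hamming(a: str, b: str) -> int:
    """Number of chip positions where the two sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def decode_symbol(received_chips: str) -> int:
    """Best-effort decoding: pick the symbol whose chip code is closest.

    This tolerance to chip errors is exactly the mechanism an attacker can
    steer, since it will happily 'correct' chips toward a symbol of the
    attacker's choosing.
    """
    return min(CHIP_TABLE, key=lambda s: hamming(CHIP_TABLE[s], received_chips))
```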
Graham, Robert David; Johnson, Peter C., "Finite State Machine Parsing for Internet Protocols: Faster Than You Think," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 185-190, 17-18 May 2014. doi: 10.1109/SPW.2014.34 A parser's job is to take unstructured, opaque data and convert it to a structured, semantically meaningful format. As such, parsers often operate at the border between untrusted data sources (e.g., the Internet) and the soft, chewy center of computer systems, where performance and security are paramount. A firewall, for instance, is precisely a trust-creating parser for Internet protocols, permitting valid packets to pass through and dropping or actively rejecting malformed packets. Despite the prevalence of finite state machines (FSMs) in both protocol specifications and protocol implementations, they have gained little traction in parser code for such protocols. Typical reasons for avoiding the FSM computation model claim poor performance, poor scalability, poor expressibility, and difficult or time-consuming programming. In this research report, we present our motivations for and designs of finite state machines to parse a variety of existing Internet protocols, both binary and ASCII. Our hand-written parsers explicitly optimize around L1 cache hit latency, branch misprediction penalty, and program-wide memory overhead to achieve aggressive performance and scalability targets. Our work demonstrates that such parsers are, contrary to popular belief, sufficiently expressive for meaningful protocols, sufficiently performant for high-throughput applications, and sufficiently simple to construct and maintain. We hope that, in light of other research demonstrating the security benefits of such parsers over more complex, Turing-complete code, our work serves as evidence that certain "practical" reasons for avoiding FSM-based parsers are invalid.
Keywords: Automata; Internet; Pipelines; Program processors; Protocols; Servers; Switches (ID#: 15-3473)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957302&isnumber=6957265
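The table-driven, hand-written style the authors advocate can be conveyed with a toy recognizer of our own (far simpler than their protocol parsers): each input byte drives exactly one transition-table lookup, which is what makes the model both fast and easy to reason about.

```python
# Transition table for recognizing the prefix "GET " one byte at a time.
# States: 0 = start, 1-3 = partial match, 4 = accept; anything else rejects.
TRANSITIONS = {
    (0, "G"): 1,
    (1, "E"): 2,
    (2, "T"): 3,
    (3, " "): 4,
}
ACCEPT, REJECT = 4, -1

def feed(state: int, ch: str) -> int:
    """Advance the machine by one input character."""
    return TRANSITIONS.get((state, ch), REJECT)

def is_get_request(data: str) -> bool:
    """Return True as soon as the prefix "GET " has been recognized."""
    state = 0
    for ch in data:
        state = feed(state, ch)
        if state == REJECT:
            return False
        if state == ACCEPT:
            return True
    return False  # input ended before the prefix completed
```

Because the machine consumes input byte by byte and keeps only an integer of state, the same structure streams naturally across packet boundaries, one property the paper exploits for high-throughput parsing.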
Levillain, Olivier, "Parsifal: A Pragmatic Solution to the Binary Parsing Problems," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 191-197, 17-18 May 2014. doi: 10.1109/SPW.2014.35 Parsers are pervasive basic software blocks: as soon as a program needs to communicate with another program or to read a file, a parser is involved. However, writing robust parsers can be difficult, as is revealed by the number of bugs and vulnerabilities related to programming errors in parsers. This is especially true for network analysis tools, which led the network and protocols laboratory of the French Network and Information Security Agency (ANSSI) to write custom tools. One of them, Parsifal, is a generic framework for describing parsers in OCaml, and gave us some insight into binary formats and parsers. After describing our tool, this article presents some use cases and lessons we learned about format complexity, parser robustness, and the role played by the language used.
Keywords: Containers; Density estimation robust algorithm; Internet; Protocols; Robustness; Standards; Writing; OCaml; Parsifal; binary parsers; code generation; preprocessor (ID#: 15-3474)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957303&isnumber=6957265
Bogk, Andreas; Schöpl, Marco, "The Pitfalls of Protocol Design: Attempting to Write a Formally Verified PDF Parser," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 198-203, 17-18 May 2014. doi: 10.1109/SPW.2014.36 Parsers for complex data formats generally present a big attack surface for input-driven exploitation. In practice, this has been especially true for implementations of the PDF data format, as witnessed by dozens of known vulnerabilities exploited in many real-world attacks, with the Acrobat Reader implementation being the main target. In this report, we describe our attempts to use Coq, a theorem prover based on a functional programming language making use of dependent types and the Curry-Howard isomorphism, to implement a formally verified PDF parser. We ended up implementing a subset of the PDF format and proving termination of the combinator-based parser. Noteworthy results include a dependent type that tracks the strictly monotonically decreasing length of the remaining symbols to parse, which allowed us to show termination of parser combinators. Also, difficulties in showing termination of parsing some features of the PDF format readily translated into denial-of-service attacks against existing PDF parsers: we came up with a single PDF file that made all the existing PDF implementations we could test enter an endless loop.
Keywords: Indexes; Portable document format; Privacy; Security; Software; Syntactics; Writing (ID#: 15-3475)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957304&isnumber=6957265
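The termination measure the authors prove in Coq (the remaining input strictly decreases at every step) can be mimicked with a dynamic check. This Python sketch is unrelated to their formal development; it only shows why a repetition combinator that insists on progress cannot loop forever.

```python
def star(parser):
    """Repeat `parser` until the input is exhausted, asserting progress.

    The assertion is the runtime analogue of the strictly decreasing
    termination measure: a sub-parser that consumes nothing would turn
    the loop into the kind of endless loop found in real PDF parsers.
    """
    def run(data, pos):
        results = []
        while pos < len(data):
            item, new_pos = parser(data, pos)
            assert new_pos > pos, "no progress: this parser would loop forever"
            results.append(item)
            pos = new_pos
        return results, pos
    return run

def digit(data, pos):
    """A base parser consuming exactly one decimal digit."""
    if pos < len(data) and data[pos].isdigit():
        return int(data[pos]), pos + 1
    raise ValueError("expected a digit")
```

In a dependently typed setting, the same progress fact is carried in the type of the parser rather than checked at runtime, so non-terminating compositions are rejected at compile time.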
Kompalli, Sarat, "Using Existing Hardware Services for Malware Detection," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 204-208, 17-18 May 2014. doi: 10.1109/SPW.2014.49 The paper is divided into two sections. First, we describe our experiments in using hardware-based metrics, such as those collected by the BPU and MMU, for detection of malware activity at runtime. Second, we sketch a defense-in-depth security model that combines such detection with hardware-aided proof-carrying code and input validation.
Keywords: Hardware; IP networks; Malware; Monitoring; Software; System-on-chip; data security; malware; security in hardware (ID#: 15-3476)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957305&isnumber=6957265
Vanegue, Julien, "The Weird Machines in Proof-Carrying Code," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 209-213, 17-18 May 2014. doi: 10.1109/SPW.2014.37 We review different attack vectors on Proof-Carrying Code (PCC) related to policy, memory model, machine abstraction, and formal system. We capture the notion of weird machines in PCC to formalize the shadow execution that arises in programs when their proofs do not sufficiently capture and disallow the execution of untrusted computations. We suggest a few ideas to improve existing PCC systems so they are more resilient to memory attacks.
Keywords: Abstracts; Computational modeling; Program processors; Registers; Safety; Security; Semantics; FPCC; Machines; PCC; Weird (ID#: 15-3477)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957306&isnumber=6957265
Nurse, Jason R.C.; Buckley, Oliver; Legg, Philip A.; Goldsmith, Michael; Creese, Sadie; Wright, Gordon R.T.; Whitty, Monica, "Understanding Insider Threat: A Framework for Characterising Attacks," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 214-228, 17-18 May 2014. doi: 10.1109/SPW.2014.38 The threat that insiders pose to businesses, institutions and governmental organisations continues to be of serious concern. Recent industry surveys and academic literature provide unequivocal evidence to support the significance of this threat and its prevalence. Despite this, however, there is still no unifying framework to fully characterise insider attacks and to facilitate an understanding of the problem, its many components and how they all fit together. In this paper, we focus on this challenge and put forward a grounded framework for understanding and reflecting on the threat that insiders pose. Specifically, we propose a novel conceptualisation that is heavily grounded in insider-threat case studies, existing literature and relevant psychological theory. The framework identifies several key elements within the problem space, concentrating not only on noteworthy events and indicators (technical and behavioural) of potential attacks, but also on attackers (e.g., the motivation behind malicious threats and the human factors related to unintentional ones), and on the range of attacks being witnessed. The real value of our framework is in its emphasis on bringing together and defining clearly the various aspects of insider threat, all based on real-world cases and pertinent literature. This can therefore act as a platform for general understanding of the threat, and also for reflection, modelling past attacks and looking for useful patterns.
Keywords: Companies; Context; Educational institutions; Employment; History; Psychology; Security; attack chain; case studies; insider threat; psychological indicators; technical; threat framework (ID#: 15-3478)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957307&isnumber=6957265
Kammuller, Florian; Probst, Christian W., "Combining Generated Data Models with Formal Invalidation for Insider Threat Analysis," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 229-235, 17-18 May 2014. doi: 10.1109/SPW.2014.45 In this paper we revisit the advances made on invalidation policies to explore attack possibilities in organizational models. One aspect that has so far eluded systematic analysis of insider threat is the integration of data into attack scenarios and its exploitation for analyzing the models. We draw from recent insights into the generation of insider data to complement a logic-based mechanical approach. We show how insider analysis can be traced back to the early days of security verification and the Lowe attack on NSPK. The invalidation of policies allows model-checking organizational structures to detect insider attacks. Integration of higher-order logic specification techniques allows the use of data refinement to explore attack possibilities beyond the initial system specification. We illustrate this combined invalidation technique on the classical example of the naughty lottery fairy. Data generation techniques support the automatic generation of insider attack data for research. The data generation is, however, always based on human-generated insider attack scenarios that have to be designed using the domain knowledge of counter-intelligence experts. Introducing data refinement and invalidation techniques here allows the systematic exploration of such scenarios and brings data-centric views into insider threat analysis.
Keywords: Analytical models; Computational modeling; Data models; Internet; Protocols; Public key; Insider threats; policies; formal methods (ID#: 15-3479)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957308&isnumber=6957265
Greitzer, Frank L.; Strozer, Jeremy R.; Cohen, Sholom; Moore, Andrew P.; Mundie, David; Cowley, Jennifer, "Analysis of Unintentional Insider Threats Deriving from Social Engineering Exploits," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 236-250, 17-18 May 2014. doi: 10.1109/SPW.2014.39 Organizations often suffer harm from individuals who bear no malice against them but whose actions unintentionally expose the organizations to risk: the unintentional insider threat (UIT). In this paper we examine UIT cases that derive from social engineering exploits. We report on our efforts to collect and analyze data from UIT social engineering incidents to identify possible behavioral and technical patterns and to inform future research and development of UIT mitigation strategies.
Keywords: Computers; Context; Educational institutions; Electronic mail; Organizations; Security; Taxonomy; social engineering; unintentional insider threat (ID#: 15-3480)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957309&isnumber=6957265
Bishop, Matt; Conboy, Heather M.; Phan, Huong; Simidchieva, Borislava I.; Avrunin, George S.; Clarke, Lori A.; Osterweil, Leon J.; Peisert, Sean, "Insider Threat Identification by Process Analysis," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 251-264, 17-18 May 2014. doi: 10.1109/SPW.2014.40 The insider threat is one of the most pernicious in computer security. Traditional approaches typically instrument systems with decoys or intrusion detection mechanisms to detect individuals who abuse their privileges (the quintessential "insider"). Such an attack requires that these agents have access to resources or data in order to corrupt or disclose them. In this work, we examine the application of process modeling and subsequent analyses to the insider problem. With process modeling, we first describe how a process works in formal terms. We then look at the agents who are carrying out particular tasks, perform different analyses to determine how the process can be compromised, and suggest countermeasures that can be incorporated into the process model to improve its resistance to insider attack.
Keywords: Analytical models; Drugs; Fault trees; Hazards; Logic gates; Nominations and elections; Software; data exfiltration; elections; insider threat; process modeling; sabotage (ID#: 15-3481)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957310&isnumber=6957265
Sarkar, Anandarup; Kohler, Sven; Riddle, Sean; Ludaescher, Bertram; Bishop, Matt, "Insider Attack Identification and Prevention Using a Declarative Approach," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 265-276, 17-18 May 2014. doi: 10.1109/SPW.2014.41 A process is a collection of steps, carried out using data, by either human or automated agents, to achieve a specific goal. The agents in our process are insiders; they have access to different data and annotations on data moving between the process steps. At various points in a process, they can carry out attacks on the privacy and security of the process through their interactions with different data and annotations, via the steps which they control. These attacks are sometimes difficult to identify, as the rogue steps are hidden among the majority of usual, non-malicious steps of the process. We define process models and attack models as data-flow-based directed graphs. An attack A is successful on a process P if there is a mapping relation from A to P that satisfies a number of conditions. These conditions encode the idea that an attack model needs to have a corresponding similarity match in the process model to be successful. We propose a declarative approach to vulnerability analysis. We encode the match conditions using a set of logic rules that define what a valid attack is. Then we implement an approach to generate all possible ways in which agents can carry out a valid attack A on a process P, thus informing the process modeler of vulnerabilities in P. The agents, in addition to acting by themselves, can also collude to carry out an attack. Once A is found to be successful against P, we automatically identify improvement opportunities in P and exploit them, eliminating ways in which A can be carried out against it.
The identification uses information about which steps in P are most heavily attacked, and tries to find improvement opportunities in them first, before moving on to the lesser-attacked ones. We then evaluate the improved P to check whether our improvement is successful. This cycle of process improvement and evaluation iterates until A is completely thwarted in all possible ways.
Keywords: Data models; Diamonds; Impedance matching; Nominations and elections; Process control; Robustness; Security; Declarative Programming; Process Modeling; Vulnerability Analysis (ID#: 15-3482)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957311&isnumber=6957265
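The mapping relation from an attack model A to a process model P can be illustrated as a brute-force search for an edge-preserving embedding of A's graph into P's graph. This is a toy of our own, not the authors' logic-rule encoding, and it checks only edge structure, ignoring the data and annotation conditions the paper also requires.

```python
from itertools import permutations

def attack_matches(attack_edges, process_edges, process_nodes):
    """Does the attack graph embed into the process graph?

    attack_edges / process_edges: sets of (src, dst) directed edges.
    Tries every injective mapping of attack nodes onto process nodes and
    succeeds if some mapping sends every attack edge onto a process edge.
    """
    attack_nodes = sorted({n for edge in attack_edges for n in edge})
    for targets in permutations(process_nodes, len(attack_nodes)):
        mapping = dict(zip(attack_nodes, targets))
        if all((mapping[u], mapping[v]) in process_edges
               for u, v in attack_edges):
            return True
    return False
```

The brute-force search is exponential; the paper's declarative encoding lets a logic engine perform the same kind of matching, and additionally enumerate *all* successful mappings rather than just decide existence.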
Young, William T.; Memory, Alex; Goldberg, Henry G.; Senator, Ted E., "Detecting Unknown Insider Threat Scenarios," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 277-288, 17-18 May 2014. doi: 10.1109/SPW.2014.42 This paper reports results from a set of experiments that evaluate an insider threat detection prototype on its ability to detect scenarios that have not previously been seen or contemplated by the developers of the system. We show the ability to detect a large variety of insider threat scenario instances embedded in real data with no prior knowledge of what scenarios are present or when they occur. We report results of an ensemble-based, unsupervised technique for detecting potential insider threat instances over eight months of real monitored computer usage activity augmented with independently developed, unknown but realistic, insider threat scenarios that robustly achieves results within 5% of the best individual detectors identified after the fact. We explore factors that contribute to the success of the ensemble method, such as the number and variety of unsupervised detectors and the use of prior knowledge encoded in scenario-based detectors designed for known activity patterns. We report results over the entire period of the ensemble approach and of ablation experiments that remove the scenario-based detectors.
Keywords: Computers; Detectors; Feature extraction; Monitoring; Organizations; Prototypes; Uniform resource locators; anomaly detection; experimental case study; insider threat; unsupervised ensembles (ID#: 15-3483)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957312&isnumber=6957265
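A minimal flavor of an unsupervised ensemble (generic rank aggregation, not the prototype's actual method) sums each entity's anomaly rank across detectors and flags the entity ranked most anomalous overall:

```python
def ensemble_flag(detector_scores):
    """Flag the entity the ensemble considers most anomalous.

    detector_scores: list of {entity: anomaly_score} dicts, one per
    unsupervised detector, with higher scores meaning more anomalous.
    Ranks are summed so no single detector's score scale dominates.
    """
    totals = {}
    for scores in detector_scores:
        ranked = sorted(scores, key=scores.get)  # ascending: last is most anomalous
        for rank, entity in enumerate(ranked):
            totals[entity] = totals.get(entity, 0) + rank
    return max(totals, key=totals.get)
```

Rank-based combination is one simple way an ensemble can stay close to the best individual detector without knowing in advance which detector that is, which is the property the paper measures.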
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Conferences: Workshop on Visualization for Cyber Security (VizSec 2014) Paris |
The eleventh workshop on visualization in security was held on 10 November 2014 in Paris, France.
The conference focus was to explore effective, scalable visual interfaces for security domains where visualization may provide a distinct benefit, including computer forensics, reverse engineering, insider threat detection, cryptography, privacy, preventing user-assisted attacks, compliance management, wireless security, secure coding, and penetration testing, in addition to traditional network security. The VizSec 2014 presentations are all available via the VizSec Vimeo group site at http://www.vizsec.org/vizsec2014/ and in the ACM digital library at the URLs listed.
Diane Staheli, Tamara Yu, R. Jordan Crouser, Suresh Damodaran, Kevin Nam, David O'Gwynn, Sean McKenna, Lane Harrison; Visualization Evaluation For Cyber Security: Trends And Future Directions; VizSec '14 Proceedings of the Eleventh Workshop on Visualization for Cyber Security, November 2014, Pages 49-56. Doi: 10.1145/2671491.2671492 The Visualization for Cyber Security research community (VizSec) addresses longstanding challenges in cyber security by adapting and evaluating information visualization techniques with application to the cyber security domain. This research effort has created many tools and techniques that could be applied to improve cyber security, yet the community has not yet established unified standards for evaluating these approaches to predict their operational validity. In this paper, we survey and categorize the evaluation metrics, components, and techniques that have been utilized in the past decade of VizSec research literature. We also discuss existing methodological gaps in evaluating visualization in cyber security, and suggest potential avenues for future research in order to help establish an agenda for advancing the state-of-the-art in evaluating cyber security visualizations.
Keywords: cyber security, evaluation, information visualization (ID#: 15-3572)
URL: http://doi.acm.org/10.1145/2671491.2671492
Christopher Humphries, Nicolas Prigent, Christophe Bidan, Frédéric Majorczyk; CORGI: Combination, Organization And Reconstruction Through Graphical Interactions; VizSec '14 Proceedings of the Eleventh Workshop on Visualization for Cyber Security, November 2014, Pages 57-64. Doi: 10.1145/2671491.2671494 In this article, we present CORGI, a security-oriented log visualization tool that allows security experts to visually explore and link numerous types of log files through relevant representations and global filtering. The analyst can mark values as values of interest and then use these values to pursue the exploration in other log files, allowing him to better understand events and reconstruct attack scenarios. We present the user interface and interactions that ensure these capabilities and provide two use cases based on challenges from VAST and from the Honeynet project.
Keywords: forensics, intrusion detection, visualization (ID#: 15-3573)
URL: http://doi.acm.org/10.1145/2671491.2671494
Siming Chen, Cong Guo, Xiaoru Yuan, Fabian Merkle, Hanna Schaefer, Thomas Ertl; OCEANS: Online Collaborative Explorative Analysis On Network Security; VizSec '14 Proceedings of the Eleventh Workshop on Visualization for Cyber Security, November 2014, Pages 1-8. Doi: 10.1145/2671491.2671493 Visualization and interactive analysis can help network administrators and security analysts analyze the network flow and log data. The complexity of such an analysis requires a combination of knowledge and experience from more domain experts to solve difficult problems faster and with higher reliability. We developed an online visual analysis system called OCEANS to address this topic by allowing close collaboration among security analysts to create deeper insights in detecting network events. Loading the heterogeneous data source (netflow, IPS log and host status log), OCEANS provides a multi-level visualization showing temporal overview, IP connections and detailed connections. Participants can submit their findings through the visual interface and refer to others' existing findings. Users can gain inspiration from each other and collaborate on finding subtle events and targeting multi-phase attacks. Our case study confirms that OCEANS is intuitive to use and can improve efficiency. The crowd collaboration helps the users comprehend the situation and reduce false alarms.
Keywords: collaborative visual analytics, network security, situation awareness (ID#: 15-3574)
URL: http://doi.acm.org/10.1145/2671491.2671493
Tobias Wüchner, Alexander Pretschner, Martín Ochoa; DAVAST: Data-Centric System Level Activity Visualization; VizSec '14 Proceedings of the Eleventh Workshop on Visualization for Cyber Security, November 2014, Pages 25-32. Doi: 10.1145/2671491.2671499 Host-based intrusion detection systems need to be complemented by analysis tools that help understand if malware or attackers have indeed intruded, what they have done, and what the consequences are. We present a tool that visualizes system activities as data flow graphs: nodes are operating system entities such as processes, files, and sockets; edges are data flows between the nodes. Pattern matching identifies structures that correspond to (suspected) malicious and (suspected) normal behaviors. Matches are highlighted in slices of the data flow graph. As a proof of concept, we show how email worm attacks, drive-by downloads, and data leakage are detected, visualized, and analyzed.
Keywords: (not provided) (ID#: 15-3575)
URL: http://doi.acm.org/10.1145/2671491.2671499
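The abstract's data-flow-graph approach, with OS entities as nodes, data flows as edges, and pattern matching over the structure, can be sketched roughly as follows. The email-worm pattern (a process that reads an address book and then writes to an SMTP socket) and all names are illustrative assumptions, not DAVAST's actual matcher.

```python
# Hedged sketch: data flows between OS entities, plus a matcher that
# flags one suspicious structure. The worm pattern is an assumption.

flows = [  # (source node, destination node) data flows on one host
    ("addressbook.db", "mailer.exe"),
    ("mailer.exe", "smtp-socket:25"),
    ("report.doc", "word.exe"),
]

def matches_worm_pattern(flows):
    """Flag a process that reads an address book and writes an SMTP socket."""
    readers = {dst for src, dst in flows if "addressbook" in src}
    for src, dst in flows:
        if src in readers and dst.endswith(":25"):
            return src  # the suspicious process node
    return None

print(matches_worm_pattern(flows))  # mailer.exe
```

A real system would match richer subgraph patterns and highlight the matched slice of the graph, but the core operation, structural matching over typed nodes and edges, is the same.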
J. Joseph Fowler, Thienne Johnson, Paolo Simonetto, Michael Schneider, Carlos Acedo, Stephen Kobourov, Loukas Lazos; IMap: Visualizing Network Activity Over Internet Maps; VizSec '14 Proceedings of the Eleventh Workshop on Visualization for Cyber Security, November 2014, Pages 80-87. Doi: 10.1145/2671491.2671501 We propose a novel visualization, IMap, which enables the detection of security threats by visualizing a large volume of dynamic network data. In IMap, the Internet topology at the Autonomous System (AS) level is represented by a canonical map (which resembles a geographic map of the world), and aggregated IP traffic activity is superimposed in the form of heat maps (intensity overlays). Specifically, IMap groups ASes as contiguous regions based on AS attributes (geo-location, type, rank, IP prefix space) and AS relationships. The area, boundary, and relative positions of these regions in the map do not reflect actual world geography, but are determined by the characteristics of the Internet's AS topology. To demonstrate the effectiveness of IMap, we showcase two case studies, a simulated DDoS attack and a real-world worm propagation attack.
Keywords: anomaly, map, network, security, topology visualization (ID#: 15-3576)
URL: http://doi.acm.org/10.1145/2671491.2671501
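A rough sketch of the heat-map aggregation IMap describes: per-IP traffic is rolled up by Autonomous System and normalized into intensities for the overlay. The AS mapping, traffic figures, and max-normalization below are assumptions for illustration.

```python
# Hedged sketch: aggregate IP traffic into per-AS heat intensities.
# Mapping and normalization rule are illustrative assumptions.

ip_to_as = {"10.0.0.1": "AS100", "10.0.0.2": "AS100", "10.0.1.9": "AS200"}
traffic = [("10.0.0.1", 300), ("10.0.0.2", 700), ("10.0.1.9", 250)]

def heat_by_as(traffic, ip_to_as):
    """Sum bytes per AS and scale against the busiest AS."""
    totals = {}
    for ip, byte_count in traffic:
        asn = ip_to_as[ip]
        totals[asn] = totals.get(asn, 0) + byte_count
    peak = max(totals.values())
    return {asn: total / peak for asn, total in totals.items()}

print(heat_by_as(traffic, ip_to_as))  # {'AS100': 1.0, 'AS200': 0.25}
```

In the paper these intensities are painted over canonical AS regions rather than geographic ones, so an attack such as a DDoS shows up as a hot region regardless of where its sources sit on a world map.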
Robert Gove, Joshua Saxe, Sigfried Gold, Alex Long, Giacomo Bergamo; SEEM: a Scalable Visualization For Comparing Multiple Large Sets Of Attributes For Malware Analysis; VizSec '14 Proceedings of the Eleventh Workshop on Visualization for Cyber Security, November 2014, Pages 72-79. Doi: 10.1145/2671491.2671496 Recently, the number of observed malware samples has rapidly increased, expanding the workload for malware analysts. Most of these samples are not truly unique, but are related through shared attributes. Identifying these attributes can enable analysts to reuse analysis and reduce their workload. Visualizing malware attributes as sets could enable analysts to better understand the similarities and differences between malware. However, existing set visualizations have difficulty displaying hundreds of sets with thousands of elements, and are not designed to compare different types of elements between sets, such as the imported DLLs and callback domains across malware samples. Such analysis might help analysts, for example, to understand if a group of malware samples are behaviorally different or merely changing where they send data. To support comparisons between malware samples' attributes we developed the Similarity Evidence Explorer for Malware (SEEM), a scalable visualization tool for simultaneously comparing a large corpus of malware across multiple sets of attributes (such as the sets of printable strings and function calls). SEEM's novel design breaks down malware attributes into sets of meaningful categories to compare across malware samples, and further incorporates set comparison overviews and dynamic filtering to allow SEEM to scale to hundreds of malware samples while still allowing analysts to compare thousands of attributes between samples. We demonstrate how to use SEEM by analyzing a malware sample from the Mandiant APT1 New York Times intrusion dataset. Furthermore, we describe a user study with five cyber security researchers who used SEEM to rapidly and successfully gain insight into malware after only 15 minutes of training.
Keywords: computer security, malware, sets, venn diagrams, visualization (ID#: 15-3577)
URL: http://doi.acm.org/10.1145/2671491.2671496
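The per-category set comparison SEEM performs can be approximated with Jaccard similarity over each attribute category. The measure and the sample data here are assumptions, not necessarily SEEM's exact computation, but they illustrate the abstract's example of samples that share imports yet differ in callback domains.

```python
# Hedged sketch: compare two malware samples category by category.
# Jaccard per category is an assumed stand-in for SEEM's comparison.

def jaccard(a, b):
    """Set similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

sample_a = {"dlls": {"wininet.dll", "kernel32.dll"},
            "domains": {"evil.example", "cdn.example"}}
sample_b = {"dlls": {"wininet.dll", "kernel32.dll"},
            "domains": {"other.example"}}

similarity = {cat: jaccard(sample_a[cat], sample_b[cat]) for cat in sample_a}
# Identical imports, disjoint callback domains: behaviorally alike but
# sending data elsewhere, the case the abstract calls out.
print(similarity)  # {'dlls': 1.0, 'domains': 0.0}
```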
Fabian Fischer, Daniel A. Keim; NStreamAware: Real-Time Visual Analytics For Data Streams To Enhance Situational Awareness; VizSec '14 Proceedings of the Eleventh Workshop on Visualization for Cyber Security, November 2014, Pages 65-72. Doi: 10.1145/2671491.2671495 The analysis of data streams is important in many security-related domains to gain situational awareness. To provide monitoring and visual analysis of such data streams, we propose a system, called NStreamAware, that uses modern distributed processing technologies to analyze streams using stream slices, which are presented to analysts in a web-based visual analytics application, called NVisAware. Furthermore, we visually guide the user in the feature selection process to summarize the slices to focus on the most interesting parts of the stream based on introduced expert knowledge of the analyst. We show through case studies, how the system can be used to gain situational awareness and eventually enhance network security. Furthermore, we apply the system to a social media data stream to compete in an international challenge to evaluate the applicability of our approach to other domains.
Keywords: data streams, network security, real-time processing, situational awareness, visual analytics (ID#: 15-3578)
URL: http://doi.acm.org/10.1145/2671491.2671495
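The "stream slices" idea, cutting a stream into windows and summarizing each for the analyst, might look like this in miniature. The window size and the summary fields are illustrative assumptions, not NStreamAware's implementation.

```python
# Hedged sketch: group timestamped events into fixed windows ("slices")
# and summarize each one for a visual-analytics front end.

def slice_stream(events, window=10):
    """Group (timestamp, feature) events into summarized time slices."""
    slices = {}
    for ts, feature in events:
        key = ts // window
        slices.setdefault(key, []).append(feature)
    return {k: {"count": len(v), "top": max(set(v), key=v.count)}
            for k, v in sorted(slices.items())}

events = [(1, "ssh"), (4, "ssh"), (8, "dns"), (12, "http"), (17, "http")]
print(slice_stream(events))
# slice 0: three events dominated by ssh; slice 1: two http events
```

A slice summary of this kind is what an analyst would scan for situational awareness, drilling into a raw slice only when its summary looks unusual.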
Daniel M. Best, Alex Endert, Daniel Kidwell; 7 Key Challenges for Visualization In Cyber Network Defense; VizSec '14 Proceedings of the Eleventh Workshop on Visualization for Cyber Security, November 2014, Pages 33-40. Doi: 10.1145/2671491.2671497 What does it take to be a successful visualization in cyber security? This question has been explored for some time, resulting in many potential solutions being developed and offered to the cyber security community. However, when one reflects upon the successful visualizations in this space they are left wondering where all those offerings have gone. Excel and Grep are still the kings of cyber security defense tools; there is a great opportunity to help in this domain, yet many visualizations fall short and are not utilized. In this paper we present seven challenges, informed by two user studies, to be considered when developing a visualization for cyber security purposes. Cyber security visualizations must go beyond isolated solutions and "pretty picture" visualizations in order to impact users. We provide an example prototype that addresses the challenges with a description of how they are met. Our aim is to assist in increasing utility and adoption rates for visualization capabilities in cyber security.
Keywords: cyber security, defense, visualization (ID#: 15-3579)
URL: http://doi.acm.org/10.1145/2671491.2671497
Alexander Long, Joshua Saxe, Robert Gove; Detecting Malware Samples With Similar Image Sets; VizSec '14 Proceedings of the Eleventh Workshop on Visualization for Cyber Security, November 2014, Pages 88-95. Doi: 10.1145/2671491.2671500 This paper proposes a method for identifying and visualizing similarity relationships between malware samples based on their embedded graphical assets (such as desktop icons and button skins). We argue that analyzing such relationships has practical merit for a number of reasons. For example, we find that malware desktop icons are often used to trick users into running malware programs, so identifying groups of related malware samples based on these visual features can highlight themes in the social engineering tactics of today's malware authors. Also, when malware samples share rare images, these image sharing relationships may indicate that the samples were generated or deployed by the same adversaries. To explore and evaluate this malware comparison method, the paper makes two contributions. First, we provide a scalable and intuitive method for computing similarity measurements between malware based on the visual similarity of their sets of images. Second, we give a visualization method that combines a force-directed graph layout with a set visualization technique so as to highlight visual similarity relationships in malware corpora. We evaluate the accuracy of our image set similarity comparison method against a hand curated malware relationship ground truth dataset, finding that our method performs well. We also evaluate our overall concept through a small qualitative study we conducted with three cyber security researchers. Feedback from the researchers confirmed our use cases and suggests that computer network defenders are interested in this capability.
Keywords: human computer interaction, malware, security, visualization (ID#: 15-3580)
URL: http://doi.acm.org/10.1145/2671491.2671500
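The abstract notes that shared rare images are the strongest signal of relatedness between samples. One way to capture that, an IDF-style weight on shared image hashes, is sketched below as an assumption; the paper's actual similarity measurement may differ.

```python
# Hedged sketch: weight shared images by rarity across the corpus, so
# two samples sharing a rare icon score higher than two sharing a
# ubiquitous button skin. The weighting is an illustrative assumption.
import math

corpus = {  # sample -> set of image hashes it embeds
    "m1": {"icon_pdf", "btn_ok"},
    "m2": {"icon_pdf", "btn_ok"},
    "m3": {"btn_ok"},
}

def image_weight(img):
    """Rarer images carry more weight (IDF-style)."""
    df = sum(1 for imgs in corpus.values() if img in imgs)
    return math.log(len(corpus) / df) + 1.0

def similarity(a, b):
    """Sum the weights of the images two samples share."""
    shared = corpus[a] & corpus[b]
    return sum(image_weight(i) for i in shared)

# m1 and m2 share the rarer icon, so they score higher than m1 and m3.
print(similarity("m1", "m2") > similarity("m1", "m3"))  # True
```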
Jan-Erik Stange, Marian Dörk, Johannes Landstorfer, Reto Wettach; Visual Filter: Graphical Exploration Of Network Security Log Files; VizSec '14 Proceedings of the Eleventh Workshop on Visualization for Cyber Security, November 2014, Pages 41-48. Doi: 10.1145/2671491.2671503 Network log files often need to be investigated manually for suspicious activity. The huge amount of log lines complicates maintaining an overview, navigation and quick pattern identification. We propose a system that uses an interactive visualization, a visual filter, representing the whole log in an overview, allowing to navigate and make context-preserving subselections with the visualization and in this way reducing the time and effort for security experts needed to identify patterns in the log file. This explorative interactive visualization is combined with focused querying to search for known suspicious terms that are then highlighted in the visualization and the log file itself.
Keywords: dynamic querying, exploratory search, human pattern recognition, overview and detail, visual filter (ID#: 15-3581)
URL: http://doi.acm.org/10.1145/2671491.2671503
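The combination of an overview with focused querying can be sketched as marking which log lines match known-suspicious terms, i.e., the data a visual filter would then highlight. The term list and log lines below are invented for illustration.

```python
# Hedged sketch: annotate log lines with the suspicious term they
# match, the back-end step behind the paper's highlighted overview.

SUSPICIOUS = {"failed password", "sudo", "segfault"}

def highlight(log_lines):
    """Return (line, hit) pairs; hit names the matched suspicious term."""
    marked = []
    for line in log_lines:
        hit = next((t for t in SUSPICIOUS if t in line.lower()), None)
        marked.append((line, hit))
    return marked

log = ["Jan 1 sshd: Failed password for root",
       "Jan 1 cron: job started"]
print([hit for _, hit in highlight(log)])  # ['failed password', None]
```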
Simon Walton, Eamonn Maguire, Min Chen; Multiple Queries With Conditional Attributes (QCATs) For Anomaly Detection And Visualization; VizSec '14 Proceedings of the Eleventh Workshop on Visualization for Cyber Security, November 2014, Pages 17-24. Doi: 10.1145/2671491.2671502 This paper describes a visual analytics method for visualizing the effects of multiple anomaly detection models, exploring the complex model space of a specific type of detection method, namely Query with Conditional Attributes (QCAT), and facilitating the construction of composite models using multiple QCATs. We have developed a prototype system that features a browser-based interface and a database-driven back end. We tested the system using the "Inside Threats Dataset" provided by CMU.
Keywords: QCAT, anomaly detection, information theory, model visualization, multivariate data visualization, parallel coordinates, visual analytics (ID#: 15-3582)
URL: http://doi.acm.org/10.1145/2671491.2671502
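A QCAT, as the name suggests, queries an attribute conditioned on another. A minimal reading of that idea, flagging a value that is rare given a conditioning attribute, is sketched below; the probability threshold and event schema are assumptions, not the paper's model.

```python
# Hedged sketch of a "query with conditional attributes": how likely
# is this attribute value, given a conditioning attribute?

events = [
    {"user": "u1", "device": "laptop"}, {"user": "u1", "device": "laptop"},
    {"user": "u1", "device": "laptop"}, {"user": "u1", "device": "usb"},
]

def conditional_prob(events, cond_key, cond_val, attr_key, attr_val):
    """Estimate P(attr_key = attr_val | cond_key = cond_val)."""
    matching = [e for e in events if e[cond_key] == cond_val]
    hits = sum(1 for e in matching if e[attr_key] == attr_val)
    return hits / len(matching) if matching else 0.0

# P(device=usb | user=u1) = 0.25: an unusual action for this user.
p = conditional_prob(events, "user", "u1", "device", "usb")
print(p < 0.5)  # True -> candidate anomaly under this assumed threshold
```

Composite models, per the abstract, would combine several such conditional queries; this sketch shows only the single-query building block.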
Markus Wagner, Wolfgang Aigner, Alexander Rind, Hermann Dornhackl, Konstantin Kadletz, Robert Luh, Paul Tavolato; Problem Characterization And Abstraction For Visual Analytics In Behavior-Based Malware Pattern Analysis; VizSec '14 Proceedings of the Eleventh Workshop on Visualization for Cyber Security, November 2014, Pages 9-16. Doi: 10.1145/2671491.2671498 Behavior-based analysis of emerging malware families involves finding suspicious patterns in large collections of execution traces. This activity cannot be automated for previously unknown malware families and thus malware analysts would benefit greatly from integrating visual analytics methods in their process. However existing approaches are limited to fairly static representations of data and there is no systematic characterization and abstraction of this problem domain. Therefore we performed a systematic literature study, conducted a focus group as well as semi-structured interviews with 10 malware analysts to elicit a problem abstraction along the lines of data, users, and tasks. The requirements emerging from this work can serve as basis for future design proposals to visual analytics-supported malware pattern analysis.
Keywords: evaluation, malicious software, malware analysis, problem characterization and abstraction, visual analytics (ID#: 15-3583)
URL: http://doi.acm.org/10.1145/2671491.2671498
International Conferences: IEEE World Congress on Services (2014) Alaska |
The 2014 IEEE World Congress on Services (SERVICES) was held in Anchorage, Alaska, from June 27 to July 2, 2014. The Congress included four core conferences: the IEEE International Conference on Web Services (ICWS 2014), the IEEE International Conference on Cloud Computing (CLOUD 2014), the IEEE International Conference on Services Computing (SCC 2014), and the IEEE International Conference on Mobile Services (MS 2014). It also hosted the third IEEE International Congress on Big Data (BigData 2014).
The works cited here are science of security-related.
Taherimakhsousi, N.; Muller, H.A., "Context-Based Face Recognition for Smart Web Tasking Applications," Services (SERVICES), 2014 IEEE World Congress on, pp.21,23, June 27 2014-July 2 2014. doi: 10.1109/SERVICES.2014.14 This position paper illustrates applications of a context-based face recognition system for smart web tasking. Context-based face recognition can provide a personalized service based on the recognized face and derived context information. Using selected smart applications, we show how a context-based face recognition system could help deliver personalized services.
Keywords: Internet; face recognition; ubiquitous computing; context-based face recognition system; personalized service; smart Web tasking applications; Conferences; Context; Face; Face recognition; Image recognition; Media; Mobile communication; commercial video chat; context-aware; face recognition; web-based class environment (ID#: 15-3500)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903238&isnumber=6903223
Murugesan, P.; Ray, I., "Audit Log Management in MongoDB," Services (SERVICES), 2014 IEEE World Congress on, pp.53,57, June 27 2014-July 2 2014. doi: 10.1109/SERVICES.2014.19 In the past few years, web-based applications and their data management needs have changed dramatically. Relational databases are often being replaced by other viable alternatives, such as NoSQL databases, for reasons of scalability and heterogeneity. MongoDB, a NoSQL database, is an agile database built for scalability, performance and high availability. It can be deployed in single server environment and also on complex multi-site architectures. MongoDB provides high performance for read and write operations by leveraging in-memory computing. Although researchers have motivated the need for MongoDB, not much appears in the area of log management. Efficient log management techniques are needed for various reasons including security, accountability, and improving the performance of the system. Towards this end, we analyze the different logging methods offered by MongoDB and compare them to the NIST standard. Our analysis indicates that profiling and mongosniff are useful for log management and we present a simple model that combines the two techniques.
Keywords: Internet; database management systems; MongoDB; NIST standard; NoSQL databases; Web-based applications; agile database; audit log management; complex multisite architectures; data management; log management techniques; mongosniff; single server environment; Indexes; Monitoring; NIST; Security; Servers; Audit Trail; Log Management; MongoDB; NoSQL (ID#: 15-3501)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903243&isnumber=6903223
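The paper's "simple model that combines the two techniques" suggests merging database-profiler records with mongosniff-style network captures into one audit trail. The sketch below shows a time-ordered merge of two such feeds; the field names are illustrative assumptions, not MongoDB's actual log schema.

```python
# Hedged sketch: interleave profiler records and sniffed network
# records by timestamp to form a single audit trail.
import heapq

profiler = [{"ts": 3, "op": "update", "src": "profiler"},
            {"ts": 9, "op": "find", "src": "profiler"}]
sniffed = [{"ts": 1, "op": "connect", "src": "mongosniff"},
           {"ts": 5, "op": "insert", "src": "mongosniff"}]

# Both feeds are already time-sorted, so a streaming merge suffices.
audit_trail = list(heapq.merge(profiler, sniffed, key=lambda e: e["ts"]))
print([e["op"] for e in audit_trail])  # ['connect', 'update', 'insert', 'find']
```

Keeping the `src` field on each record preserves accountability: the merged trail still shows whether an event was observed inside the database or on the wire.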
Sen, A.; Madria, S., "Off-Line Risk Assessment of Cloud Service Provider," Services (SERVICES), 2014 IEEE World Congress on, pp.58,65, June 27 2014-July 2 2014. doi: 10.1109/SERVICES.2014.20 The acceptance of cloud as a platform to migrate applications has seen a boom in the past few decades. Hosting applications on the cloud cuts down its maintenance and infrastructure costs. Nonetheless security of these applications on the cloud is one of the primary concerns which prevents complete adoption of cloud. Although cloud provides security, they do not address it in terms of application security and thus organizations cannot fully comprehend them. In this paper, we propose an off-line risk assessment framework to evaluate a cloud service provider's security from the point of view of an application to be migrated there. Once the most secure cloud service provider is determined for an application, the framework will perform a cost-benefit tradeoff analysis to estimate an optimal cloud migration strategy.
Keywords: cloud computing; security of data; cloud service provider security; cost-benefit tradeoff analysis; infrastructure costs; offline risk assessment; optimal cloud migration strategy; secure cloud service provider; Computer crime; Motion pictures; Ontologies; Organizations; Risk management; System analysis and design; cloud migration; cloud service provider; cost-benefit tradeoff analysis; risk assessment; vulnerability assessment (ID#: 15-3502)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903244&isnumber=6903223
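The framework's final cost-benefit tradeoff step could, in spirit, score each candidate provider by weighing its security assessment against its cost. Everything below, the scores, costs, and weighting, is invented for illustration; the paper's actual model is not reproduced here.

```python
# Hedged sketch: a toy cost-benefit tradeoff over candidate cloud
# providers. All figures and the weighting are assumptions.

providers = [
    {"name": "cloudA", "security_score": 0.9, "monthly_cost": 120.0},
    {"name": "cloudB", "security_score": 0.7, "monthly_cost": 60.0},
]

def utility(p, risk_weight=0.7):
    """Trade security benefit against normalized cost."""
    max_cost = max(q["monthly_cost"] for q in providers)
    cost_penalty = p["monthly_cost"] / max_cost
    return risk_weight * p["security_score"] - (1 - risk_weight) * cost_penalty

best = max(providers, key=utility)
print(best["name"])  # cloudB: slightly less secure, but much cheaper
```

Note how the weighting matters: with `risk_weight` closer to 1.0 the more secure provider would win instead, which is exactly the application-specific tradeoff the framework is meant to surface.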
llin, C.; Haney, M., "Preventing the Mistraining of Anomaly-Based IDSs through Ensemble Systems," Services (SERVICES), 2014 IEEE World Congress on, pp.66, 68, June 27 2014-July 2 2014. doi: 10.1109/SERVICES.2014.21 The security of cloud networks is heavily contingent upon their ability to detect incoming attacks. An Intrusion Detection System (IDS) monitors a network for precisely this purpose. IDSs fall into one of two categories: signature-based and anomaly-based IDSs. Whereas signature-based IDSs rely upon pre-programmed matching rules designed by security experts and are therefore limited to pre-existing attacks in their coverage, anomaly-based IDSs attempt to identify normal and abnormal traffic, generally using machine learning, and therefore hold the promise of being able to identify novel attacks. Anomaly-based IDSs can be divided into IDSs that are trained online and IDSs that are trained offline. While IDSs that are trained online allow greater flexibility, such IDSs could be trained by an adversary to allow specific attacks. This work-in-progress paper proposes a methodology for protecting against the mistraining of an IDS trained online. Two IDSs begin with identical rule sets, but one is allowed to adjust its data to include online data while the other remains static. Both systems can report anomalies, and if the online IDS attempts to let through too much that the offline IDS does not, the decision boundaries of the online IDS are adjusted as a safeguard against mistraining. An experiment for testing the approach is proposed.
Keywords: cloud computing; digital signatures; anomaly-based IDS; cloud networks; ensemble systems; intrusion detection system; security; signature-based IDS; Educational institutions; Intrusion detection; Machine learning algorithms; Training; Training data; information security; intrusion detection; machine learning algorithms (ID#: 15-3503)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903245&isnumber=6903223
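The proposed safeguard, an online-trained IDS checked against a static twin, reduces to measuring how often the online system allows what the static one flags. The divergence measure and threshold below are assumptions sketching that work-in-progress idea.

```python
# Hedged sketch: compare an online-trained IDS against a static twin
# and roll back the online model if they diverge too far.

def divergence(online_verdicts, offline_verdicts):
    """Fraction of events the online IDS allows but the static IDS flags."""
    disagreements = sum(1 for on, off in zip(online_verdicts, offline_verdicts)
                        if on == "allow" and off == "flag")
    return disagreements / len(online_verdicts)

online = ["allow", "allow", "allow", "flag"]   # possibly mistrained
offline = ["flag", "flag", "allow", "flag"]    # static rule set

d = divergence(online, offline)
if d > 0.25:  # assumed safeguard threshold
    action = "reset online decision boundaries"
else:
    action = "keep online model"
print(d, action)
```

The asymmetry is deliberate: only "online allows what offline flags" counts, since that is the direction an adversary mistraining the online IDS would push it.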
Felici, M.; Pearson, S., "Accountability, Risk, and Trust in Cloud Services: Towards an Accountability-Based Approach to Risk and Trust Governance," Services (SERVICES), 2014 IEEE World Congress on, pp.105, 112, June 27 2014-July 2 2014. doi: 10.1109/SERVICES.2014.29 In this paper we propose an approach for enhanced data protection in the cloud, based upon accountability governance. Specifically, the relationships between accountability, risk and trust are analyzed in order to suggest characteristics and means to address data governance issues involved when organizations or individuals adopt cloud computing. This analysis takes into account insights from a variety of stakeholders within cloud ecosystems obtained by running an elicitation workshop.
Keywords: cloud computing; risk management; trusted computing; accountability governance; accountability-based approach; cloud computing; cloud ecosystems; cloud services; data governance; data protection; elicitation workshop; risk; trust governance; Context; Ecosystems; Law; Organizations; Risk management; Security; Standards organizations; accountability; cloud computing; risk; trust (ID#: 15-3504)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903252&isnumber=6903223
Hale, M.; Gamble, R., "Toward Increasing Awareness of Suspicious Content through Game Play," Services (SERVICES), 2014 IEEE World Congress on, pp.113, 120, June 27 2014-July 2 2014. doi: 10.1109/SERVICES.2014.30 Phishing, elicitation, and impersonation techniques are performed using multiple forms, targeting content specific to the delivery modality, such as email, social media, and general browser communications. Education to increase awareness is one mechanism to combat phishing. Average email and internet users are less attentive to media warnings and training materials provided by employers than they are in interactive environments. In this paper, we overview a game concept that immerses users in a role play challenge where they must send email, use social media, and browse the web and determine whether content received within these modalities is trustworthy or not. The game, built as a Javascript framework, simulates phishing scams, measures trust and suspicion levels, and individualizes training for users. The game architecture employs components that facilitate dynamic content generation in each of the modalities, customize experiment design for specific assessment and training, and perform sophisticated tracking for automated analysis of user trust content assessments. We discuss the game content, the specific requirements the game must comply with, and the experiments to be conducted using the game.
Keywords: computer based training; message authentication; serious games (computing); social networking (online); unsolicited e-mail; Internet; Javascript framework; dynamic content generation; elicitation technique; email; game play; impersonation technique; phishing scams; role play challenge; social media; suspicious content; user trust content assessment; Browsers; Companies; Degradation; Electronic mail; Games; Media; Training; assessment; awareness; game; phishing; security (ID#: 15-3505)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903253&isnumber=6903223
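Measuring "trust and suspicion levels" from game play can be sketched as scoring a player's decisions against the ground truth of each round. The metrics below (detection rate and false-alarm rate) are a standard framing assumed for illustration, not the paper's instrument.

```python
# Hedged sketch: score a player's trust calibration from their
# decisions on trustworthy vs. phishing content.

rounds = [  # (content_is_phishing, player_trusted_it)
    (True, False), (True, True), (False, True), (False, True),
]

def suspicion_profile(rounds):
    """How often the player catches phish vs. distrusts legitimate content."""
    caught = sum(1 for phish, trusted in rounds if phish and not trusted)
    phish_total = sum(1 for phish, _ in rounds if phish)
    false_alarms = sum(1 for phish, trusted in rounds if not phish and not trusted)
    return {"detection_rate": caught / phish_total,
            "false_alarm_rate": false_alarms / (len(rounds) - phish_total)}

print(suspicion_profile(rounds))
# This player catches half the phishing content and never distrusts
# legitimate content: under-suspicious, so training would be adjusted.
```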
Takahashi, T.; Kannisto, J.; Harju, J.; Kanaoka, A.; Takano, Y.; Matsuo, S., "Expressing Security Requirements: Usability of Taxonomy-Based Requirement Identification Scheme," Services (SERVICES), 2014 IEEE World Congress on, pp.121,128, June 27 2014-July 2 2014. doi: 10.1109/SERVICES.2014.31 Users want to enjoy online services without sacrificing their security. Although there is a trade-off between the security of a service and its usability, the level of security required will differ depending on the user and the situation. To optimize the balance between security and usability, it can be customized for each user and each online transaction. Yet in order to do that, both users and service providers need to stipulate their security requirements. We have been working on a framework that provides security requirement classifications in multiple dimensions to help users identify and select their security requirements, and then apply these requirements to different dimensions. This paper shows how we implemented this framework and then evaluated it by conducting a user study along with our implementation. The study verifies that ordinary users without any particular technical knowledge prefer to clarify their security requirements using a taxonomy-based selection scheme (our scheme) as opposed to a free-form input scheme. It also discusses the coverage of pre-defined taxonomies and users' requirements. Through this study, we clarify the future direction of our research.
Keywords: human factors; information services; security of data; systems analysis; free-form input scheme; online services; online transaction; pre-defined taxonomies; security requirements; service providers; taxonomy-based requirement identification scheme usability; taxonomy-based selection scheme; user requirements; user study; Computers; Educational institutions; Electronic mail; Prototypes; Security; Taxonomy; Usability; security requirement; taxonomy; usability; user study (ID#: 15-3506)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903254&isnumber=6903223
Todoran, I.; Glinz, M., "Quest for Requirements: Scrutinizing Advanced Search Queries for Cloud Services with Fuzzy Galois Lattices," Services (SERVICES), 2014 IEEE World Congress on, pp.234, 241, June 27 2014-July 2 2014. doi: 10.1109/SERVICES.2014.49 In software and requirements engineering, requirements elicitation is considered an essential step towards building successful systems. Despite extensive existing research in the field of distributed requirements engineering, the topic of requirements elicitation for cloud systems remains still uncovered. Cloud challenges (e.g., heterogeneous and globally distributed users, volatile requirements, frequent change requests) cannot always be satisfied by existing methods. We present a new approach for eliciting requirements for cloud services by analyzing advanced search queries. Our approach builds fuzzy Galois lattices for the terms that compose advanced search queries, thus enabling a thorough analysis of stored search data. This can support cloud providers in observing requirements clusters and new classes of cloud services, identifying the threshold for achieving satisfied consumers with a minimal set of requirements implemented, and thus designing novel solutions, based on market trends. Moreover, the Galois lattices approach enables large-scale consumers' involvement and ensures the elicitation of real requirements unobtrusively.
Keywords: cloud computing; fuzzy set theory; query formulation; query processing; cloud services; cloud systems; fuzzy Galois lattices; requirements elicitation; requirements engineering; search data; search queries; Cloud computing; Context; Encryption; Lattices; Mobile communication; Reliability; Galois lattice; advanced search query; cloud computing; data analysis; requirements elicitation (ID#: 15-3507)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903271&isnumber=6903223
Rosa, T.A.; Donizetti Zorzo, S., "Model of Location-Sharing-Based Services with Privacy Guarantee," Services (SERVICES), 2014 IEEE World Congress on, pp.271,278, June 27 2014-July 2 2014. doi: 10.1109/SERVICES.2014.56 Mobile devices can perform many tasks, including processing complex calculations, reproducing high-quality media, and connecting to the Internet. These capabilities enable many new services that use a device's location to provide, for instance, weather forecasts or traffic monitoring. Services that use information about users' locations are called Location-Based Services (LBS). Services that additionally group users by geographical region are called Location-Sharing-Based Services (LSBS). The main feature of LSBS is that they exploit information from a group of users rather than from individuals, offering services based on the group's position. However, with these services, users are subject to several threats to their privacy. This article presents the implementation of a model of LSBS with privacy guarantees. The model is based on levels, and it guarantees not only the privacy of the group but also the privacy of each member within the group. This guarantee is achieved through homomorphic encryption and privacy techniques such as anonymity. Tests were performed to evaluate the model, and the results show that the LSBS model is viable on real devices.
Keywords: Internet; cryptography; data privacy; mobile computing; Internet; LSBS; anonymity; complex calculations; group position; group privacy; homomorphic encryption; location-sharing-based services; mobile devices; privacy guarantee; Accuracy; Data privacy; Encryption; Performance evaluation; Privacy; Reliability; Location-Based Services; Location-Sharing-Based Services; Privacy (ID#: 15-3508)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903278&isnumber=6903223
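The additive homomorphism that makes this kind of privacy-preserving aggregation possible can be sketched with a minimal Paillier cryptosystem. The paper does not specify its exact scheme, and the demo primes and coordinate values below are illustrative only and far too small to be secure:

```python
import math
import random

# Minimal Paillier sketch: demo-sized primes, NOT secure parameters.
p, q = 104729, 104723                # two primes near 10^5
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                 # valid choice because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts, so an
# untrusted server can compute the group's aggregate position while seeing
# only encrypted individual locations.
lats = [4571, 4569, 4574]            # e.g. latitudes x 100, invented values
agg = 1
for v in lats:
    agg = agg * encrypt(v) % n2
group_sum = decrypt(agg)
print(group_sum / len(lats))         # group centroid latitude / 100
```

The server never decrypts an individual ciphertext; only the holder of the private key (lam, mu) can recover the aggregate.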
Cecchinel, C.; Jimenez, M.; Mosser, S.; Riveill, M., "An Architecture to Support the Collection of Big Data in the Internet of Things," Services (SERVICES), 2014 IEEE World Congress on, pp. 442-449, June 27 - July 2, 2014. doi: 10.1109/SERVICES.2014.83 The Internet of Things (IoT) relies on physical objects interconnected with one another, creating a mesh of devices producing information. In this context, sensors surround our environment (e.g., cars, buildings, smartphones) and continuously collect data about our living environment. Thus, the IoT is a prototypical example of Big Data. The contribution of this paper is to define a software architecture supporting the collection of sensor-based data in the context of the IoT. The architecture spans from the physical dimension of sensors to the storage of data in a cloud-based system. It supports the Big Data research effort, as its instantiation supports a user in collecting data from the IoT for experimental or production purposes. The results are instantiated and validated on a project named SMARTCAMPUS, which aims to equip the SophiaTech campus with sensors to build innovative applications that support end-users.
Keywords: Big Data; Internet of Things; cloud computing; software architecture; Big Data; Internet of Things; IoT; SMARTCAMPUS; SophiaTech campus; cloud-based system; sensor-based data; software architecture; Big data; Bridges; Computer architecture; Middleware; Temperature measurement; Temperature sensors; Architecture; Data collection; Distributed Computing; Sensors; Software Engineering (ID#: 15-3509)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903302&isnumber=6903223
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Conferences: Information Hiding and Multimedia Security Workshop, 2014, Salzburg |
The ACM Information Hiding and Multimedia Security Workshop was held in Salzburg, Austria on June 11 - 13, 2014. The call for papers attracted 64 submissions from Asia, South America, the United States, and Europe. The program committee accepted 24 papers covering a variety of topics. The program included invited talks on JPEG security standardization and the EU FP7 FastPass project and several special sessions (Security and Privacy Technologies for Intelligent Energy Networks, Security and Robustness in Biometrics, Forensic and Biometric Challenges in Information Hiding and Media Security, and HEVC, H.264, and JPEG Security). The papers presented below were published by ACM. The majority of the papers were published commercially and are not available for this list. Interested persons can consult the ACM digital library to find the additional materials published by Springer and Kluwer.
Thijs Laarhoven; Capacities and Capacity-Achieving Decoders For Various Fingerprinting Games; IH&MMSec '14 Proceedings of the 2nd ACM Workshop On Information Hiding And Multimedia Security, June 2014, Pages 123-134. Doi: 10.1145/2600918.2600925 Combining an information-theoretic approach to fingerprinting with a more constructive, statistical approach, we derive new results on the fingerprinting capacities for various informed settings, as well as new log-likelihood decoders with provable code lengths that asymptotically match these capacities. The simple decoder built against the interleaving attack is further shown to achieve the simple capacity for unknown attacks, and is argued to be an improved version of the recently proposed decoder of Oosterwijk et al. With this new universal decoder, cut-offs on the bias distribution function can finally be dismissed. Besides the application of these results to fingerprinting, a direct consequence of our results to group testing is that (i) a simple decoder asymptotically requires a factor 1.44 more tests to find defectives than a joint decoder, and (ii) the simple decoder presented in this paper provably achieves this bound.
Keywords: collusion-resistance, fingerprinting, group testing, information theory, log-likelihood ratios, traitor tracing (ID#: 15-3532)
URL: http://doi.acm.org/10.1145/2600918.2600925
Tong Qiao, Cathel Zitzmann, Rémi Cogranne, Florent Retraint; Detection of JSteg Algorithm Using Hypothesis Testing Theory And A Statistical Model With Nuisance Parameters; IH&MMSec '14 Proceedings of the 2nd ACM Workshop On Information Hiding And Multimedia Security, June 2014, Pages 3-13. doi: 10.1145/2600918.2600932 This paper investigates the statistical detection of data hidden within the DCT coefficients of JPEG images using a Laplacian distribution model. The main contribution is twofold. First, this paper proposes to model the DCT coefficients using a Laplacian distribution but challenges the usual assumption that, within a sub-band, all the coefficients are independent and identically distributed (i.i.d.). In this paper it is assumed that the distribution parameters change from DCT coefficient to DCT coefficient. Second, this paper applies this model to design a statistical test, based on hypothesis testing theory, which aims at detecting data hidden within DCT coefficients with the JSteg algorithm. The proposed optimal detector carefully treats the distribution parameters as nuisance parameters. Numerical results on simulated data as well as on an image database show the relevance of the proposed model and the good performance of the resulting test.
Keywords: dct coefficients, hypothesis testing theory, optimal detection, statistical modelling, steganalysis, steganography (ID#: 15-3533)
URL: http://doi.acm.org/10.1145/2600918.2600932
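The paper's detector builds on a per-coefficient Laplacian model; as a simpler, classical illustration of statistically detecting JSteg-style LSB replacement, the sketch below runs the well-known chi-square (pairs-of-values) test on synthetic Laplacian-like coefficients. This is not the authors' optimal detector, and all data is simulated:

```python
import random
from collections import Counter

random.seed(7)

def laplacian_int(scale=4.0):
    """Crude integer sample from a symmetric Laplacian-like distribution."""
    mag = int(random.expovariate(1.0 / scale))
    return mag if random.random() < 0.5 else -mag

def jsteg_embed(coeffs, bits):
    """JSteg-style LSB replacement, skipping coefficients 0 and 1."""
    out, i = [], 0
    for c in coeffs:
        if c not in (0, 1) and i < len(bits):
            c = (c & ~1) | bits[i]
            i += 1
        out.append(c)
    return out

def chi2_pairs(coeffs):
    """Pearson statistic over value pairs (2k, 2k+1). LSB replacement
    equalizes each pair, so stego data yields a much smaller statistic."""
    h = Counter(c for c in coeffs if c not in (0, 1))
    stat = 0.0
    for v in h:
        if v % 2 == 0 and v + 1 in h:
            expected = (h[v] + h[v + 1]) / 2
            stat += (h[v] - expected) ** 2 / expected
    return stat

cover = [laplacian_int() for _ in range(50000)]
stego = jsteg_embed(cover, [random.getrandbits(1) for _ in range(40000)])
print(chi2_pairs(cover), chi2_pairs(stego))   # cover statistic >> stego statistic
```

The Laplacian decay makes adjacent histogram bins unequal in cover data; embedding flattens each (2k, 2k+1) pair, which the statistic exposes.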
Tomáš Pevný, Andrew D. Ker; Steganographic Key Leakage Through Payload Metadata; IH&MMSec '14 Proceedings of the 2nd ACM Workshop On Information Hiding And Multimedia Security, June 2014, Pages 109-114. doi: 10.1145/2600918.2600921 The only steganalysis attack which can provide absolute certainty about the presence of payload is one which finds the embedding key. In this paper we consider refined versions of the key exhaustion attack exploiting metadata such as message length or decoding matrix size, which must be stored along with the payload. We show that simple implementation errors lead to leakage of key information and powerful inference attacks; furthermore, complete absence of information leakage seems difficult to avoid. This topic has been somewhat neglected in the literature for the last ten years, but must be considered in real-world implementations.
Keywords: bayesian inference, brute-force attack, key leakage, steganographic security (ID#: 15-3534)
URL: http://doi.acm.org/10.1145/2600918.2600921
International Conferences: Software Security and Reliability (2014) San Francisco |
The 2014 Eighth International Conference on Software Security and Reliability (SERE) was held June 30 - July 2, 2014 in San Francisco, California. SERE 2014 brought together researchers and practitioners of software security and reliability and featured 26 paper presentations. The Science of Security-related papers are cited here.
Farhadi, M.R.; Fung, B.C.M.; Charland, P.; Debbabi, M., "BinClone: Detecting Code Clones in Malware," Software Security and Reliability, 2014 Eighth International Conference on, pp. 78-87, June 30 - July 2, 2014. doi: 10.1109/SERE.2014.21 To gain an in-depth understanding of the behaviour of a malware, reverse engineers have to disassemble the malware, analyze the resulting assembly code, and then archive the commented assembly code in a malware repository for future reference. In this paper, we have developed an assembly code clone detection system called BinClone to identify the code clone fragments from a collection of malware binaries with the following major contributions. First, we introduce two deterministic clone detection methods with the goals of improving the recall rate and facilitating malware analysis. Second, our methods allow malware analysts to discover both exact and inexact clones at different token normalization levels. Third, we evaluate our proposed clone detection methods on real-life malware binaries. To the best of our knowledge, this is the first work that studies the problem of assembly code clone detection for malware analysis.
Keywords: invasive software; program diagnostics; reverse engineering; BinClone; assembly code analysis; assembly code clone detection system; code clone fragment identification; commented assembly code archiving; deterministic clone detection method; inexact clone discovery; malware analysis; malware behaviour understanding; malware binaries; malware disassembly; malware repository; recall rate; reverse engineers; token normalization level; Assembly; Cloning; Detectors; Feature extraction; Malware; Registers; Vectors; Assembly Code Clone Detection; Binary Analysis; Malware Analysis; Reverse Engineering (ID#: 15-3510)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895418&isnumber=6895396
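A toy sketch of the idea behind token-normalized clone detection (not BinClone's actual algorithm; the assembly snippets and normalization classes below are invented for illustration): operands are reduced to classes such as REG/MEM/CONST so clones that differ only in register allocation or addresses still match, and fixed-size instruction windows are hashed into an index.

```python
from collections import defaultdict

REGS = {"eax", "ebx", "ecx", "edx", "esi", "edi", "ebp", "esp"}

def normalize(op):
    """Map an operand to a coarse token class."""
    op = op.strip()
    if op in REGS:
        return "REG"
    if op.startswith("["):
        return "MEM"
    return "CONST"

def tokens(listing):
    out = []
    for line in listing.strip().splitlines():
        mnem, _, rest = line.strip().partition(" ")
        ops = [normalize(o) for o in rest.split(",")] if rest else []
        out.append(mnem if not ops else mnem + " " + ",".join(ops))
    return out

def clone_index(listing, window=3):
    """Hash each window of normalized instructions into a clone index."""
    idx = defaultdict(list)
    t = tokens(listing)
    for i in range(len(t) - window + 1):
        idx[hash(tuple(t[i:i + window]))].append(i)
    return idx

a = """mov eax, [ebp+8]
add eax, 4
mov [ebp-4], eax
ret"""
b = """mov ecx, [esi+12]
add ecx, 8
mov [edi-16], ecx
ret"""
shared = set(clone_index(a)) & set(clone_index(b))
print(len(shared))   # both fragments match after normalization: an inexact clone
```

With a coarser normalization (everything to one class) more inexact clones match; with exact operands only literal copies match, mirroring the exact/inexact trade-off the paper describes.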
Zech, P.; Felderer, M.; Katt, B.; Breu, R., "Security Test Generation by Answer Set Programming," Software Security and Reliability, 2014 Eighth International Conference on, pp. 88-97, June 30 - July 2, 2014. doi: 10.1109/SERE.2014.22 Security testing is still a hard task, especially when focusing on non-functional security testing. The two main reasons are, first, a lack of the knowledge necessary for security testing, and second, the difficulty of managing the almost infinite number of negative test cases that result from potential security risks. To the best of our knowledge, the automatic incorporation of security expert knowledge, e.g., known vulnerabilities, exploits, and attacks, into the process of security testing is not well considered in the literature. Furthermore, well-known "de facto" security testing approaches, like fuzzing or penetration testing, lack systematic procedures regarding the order of execution of test cases, which renders security testing a cumbersome task. Hence, in this paper we propose a new method for generating negative security tests by logic programming, which applies a risk analysis to establish a set of negative requirements for later test generation.
Keywords: logic programming; program testing; risk analysis; safety-critical software; answer set programming; logic programming; negative requirements; negative security tests; nonfunctional security testing; risk analysis; security expert knowledge; security risks; security test generation; Logic programming; Risk analysis; Security; Semantics; Software; Testing; Unified modeling language; Answer Set Programming; Knowledge Representation; Logic Programming; Security Engineering; Security Testing; Software Testing; Test Generation (ID#: 15-3511)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895419&isnumber=6895396
Herscheid, L.; Tröger, P., "Specification of Dynamic Fault Tree Concepts with Stochastic Petri Nets," Software Security and Reliability, 2014 Eighth International Conference on, pp. 177-186, June 30 - July 2, 2014. doi: 10.1109/SERE.2014.31 Dependability modeling describes a set of approaches for analyzing the reliability of software and hardware systems. The most prominent approach is the fault tree, which hierarchically expresses the causal dependencies between basic faults and an undesired failure event. Dynamic fault trees can express sequence-dependent error propagation, which is commonly found in software systems. In this paper, we present a complete behavioral specification of well-known dynamic fault tree concepts. We provide a novel connection rule definition for all commonly accepted node types, in combination with a description of their behavioral semantics in generalized stochastic Petri nets. The two specifications together have not previously been available in the literature. Applying these specifications in fault tree generation and modeling tools can help prevent syntactic and semantic ambiguity in the generated output.
Keywords: Petri nets; fault tolerant computing; fault trees; formal specification; software reliability; stochastic processes; behavioral semantics; behavioral specification; connection rule; dependability modeling; dynamic fault tree; failure event; semantical ambiguity; sequence-dependent error propagation; software reliability; stochastic Petri nets; syntactical ambiguity; Artificial neural networks; Fault trees; Logic gates; Petri nets; Semantics; Software; Stochastic processes; Dependability Modeling; Fault tolerant systems; Fault trees; Petri nets; Software reliability (ID#: 15-3512)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895428&isnumber=6895396
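The sequence sensitivity that distinguishes dynamic fault trees from static ones can be seen in a tiny evaluation of gate semantics over failure times. This is a simplified deterministic reading, not the paper's stochastic Petri net semantics; `None` stands for an event that never occurs:

```python
# Static AND fires once all inputs have failed, whatever the order;
# the dynamic PAND (priority-AND) gate additionally requires the inputs
# to fail in left-to-right order.
def static_and(*times):
    return None if any(t is None for t in times) else max(times)

def pand(*times):
    if any(t is None for t in times):
        return None
    return times[-1] if list(times) == sorted(times) else None

t_primary, t_backup = 2.0, 5.0
print(static_and(t_primary, t_backup))   # 5.0: fires either way
print(pand(t_primary, t_backup))         # 5.0: primary failed first
print(pand(t_backup, t_primary))         # None: out-of-order, gate never fires
```

It is exactly this order dependence that a plain combinatorial fault tree cannot capture, motivating the Petri net semantics of the paper.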
Yen-Ju Liu; Chong-Kuan Chen; Cho, M.C.Y.; Shiuhpyng Shieh, "Fast Discovery of VM-Sensitive Divergence Points with Basic Block Comparison," Software Security and Reliability, 2014 Eighth International Conference on, pp. 196-205, June 30 - July 2, 2014. doi: 10.1109/SERE.2014.33 To evade VM-based malware analysis systems, VM-aware malware equipped with the ability to detect the presence of a virtual machine has appeared. To cope with this problem, detecting VM-aware malware and locating its VM-sensitive divergence points is urgently needed. In this paper, we propose a novel block-based divergence locator. In contrast to conventional instruction-based schemes, the block-based divergence locator divides a malware program into basic blocks, rather than binary instructions, and uses them as the unit of analysis. This significantly decreases the cost of behavior logging and trace comparison, as well as the size of behavior traces. Our evaluation shows that behavior logging is 23.87-39.49 times faster than in conventional schemes, and that the total number of analysis units, which largely determines the cost of trace comparison, is 11.95%-16.00% of that in conventional schemes. Consequently, VM-sensitive divergence points can be discovered more efficiently. The correctness of our divergence point discovery algorithm is also formally proved in this paper.
Keywords: invasive software; virtual machines; VM-based malware analysis systems; VM-sensitive divergence points; basic block comparison; binary instructions; block-based divergence locator; virtual machine; Emulation; Hardware; Indexes; Malware; Timing; Virtual machining; Virtualization; Malware Behavior Analysis; VM-Aware Malware; Virtual Machine (ID#: 15-3513)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895430&isnumber=6895396
Mell, P.; Harang, R.E., "Using Network Tainting to Bound the Scope of Network Ingress Attacks," Software Security and Reliability, 2014 Eighth International Conference on, pp. 206-215, June 30 - July 2, 2014. doi: 10.1109/SERE.2014.34 This research describes a novel security metric, network taint, which is related to software taint analysis. We use it here to bound the possible malicious influence of a known compromised node through monitoring and evaluating network flows. The result is a dynamically changing defense-in-depth map that shows threat level indicators gleaned from monotonically decreasing threat chains. We augment this analysis with concepts from the complex networks research area in forming dynamically changing security perimeters and measuring the cardinality of the set of threatened nodes within them. In providing this, we hope to advance network incident response activities by providing a rapid automated initial triage service that can guide and prioritize investigative activities.
Keywords: network theory (graphs); security of data; defense-in-depth map; network flow evaluation; network flow monitoring; network incident response activities; network ingress attacks; network tainting metric; security metric; security perimeters; software taint analysis; threat level indicators; Algorithm design and analysis; Complex networks; Digital signal processing; Measurement; Monitoring; Security; Software; complex networks; network tainting; scale-free; security (ID#: 15-3514)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895431&isnumber=6895396
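The monotonically decreasing threat chains can be sketched as a breadth-first walk over observed flows, with the taint level decaying per hop until it falls below a cut-off. The flow graph, decay rate, and floor below are invented for illustration:

```python
from collections import deque

# Hypothetical flow graph: an edge means a network flow was observed from
# one host to another during the monitoring window.
flows = {
    "db1": ["web1", "web2"],
    "web1": ["lb"],
    "web2": ["lb", "cache"],
    "lb": ["edge"],
    "cache": [],
    "edge": [],
}

def taint(graph, compromised, start=1.0, decay=0.5, floor=0.1):
    """Assign each reachable node the threat level of the shortest chain
    from the compromised node; chains die out below the floor."""
    levels = {compromised: start}
    queue = deque([compromised])
    while queue:
        node = queue.popleft()
        child_level = levels[node] * decay
        if child_level < floor:
            continue                     # threat chain exhausted
        for nxt in graph.get(node, []):
            if nxt not in levels:        # BFS: first visit = highest level
                levels[nxt] = child_level
                queue.append(nxt)
    return levels

levels = taint(flows, "db1")
print(levels)
```

The resulting map bounds the threatened set (here, all six hosts) and ranks them for triage by taint level.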
Howser, G.; McMillin, B., "A Modal Model of Stuxnet Attacks on Cyber-physical Systems: A Matter of Trust," Software Security and Reliability, 2014 Eighth International Conference on, pp. 225-234, June 30 - July 2, 2014. doi: 10.1109/SERE.2014.36 Multiple Security Domains Nondeducibility, MSDND, yields results even when the attack hides important information from electronic monitors and human operators. Because MSDND is based upon modal frames, it is able to analyze the event system as it progresses rather than relying on traces of the system. Not only does it provide results as the system evolves, MSDND can point out attacks designed to be missed in other security models. This work examines information flow disruption attacks such as Stuxnet and formally explains the role that implicit trust in the cyber security of a cyber physical system (CPS) plays in the success of the attack. The fact that the attack hides behind MSDND can be used to help secure the system by modifications to break MSDND and leave the attack nowhere to hide. Modal operators are defined to allow the manipulation of belief and trust states within the model. We show how the attack hides and uses the operator's trust to remain undetected. In fact, trust in the CPS is key to the success of the attack.
Keywords: security of data; trusted computing; CPS; MSDND; Stuxnet attacks; belief manipulation; cyber physical system; cyber security; cyber-physical systems; electronic monitors; event system analysis; human operators; implicit trust; information flow disruption attacks; modal frames; modal model; multiple security domains nondeducibility; security models; trust state manipulation; Analytical models; Bismuth; Cognition; Cost accounting; Monitoring; Security; Software; Stuxnet; cyber-physical systems; doxastic logic; information flow security; nondeducibility; security models (ID#: 15-3515)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895433&isnumber=6895396
Hsiao-Ying Lin; Li-Ping Tung; Lin, B.S.P., "Reliable Repair Mechanisms with Low Connection Cost for Code Based Distributed Storage Systems," Software Security and Reliability, 2014 Eighth International Conference on, pp. 235-244, June 30 - July 2, 2014. doi: 10.1109/SERE.2014.37 Erasure codes are applied in distributed storage systems for fault tolerance, with lower storage overhead than replication. Decentralized erasure codes were later proposed for decentralized or loosely organized storage systems. Repair mechanisms aim at maintaining redundancy over time so that stored data remain retrievable. Two recent repair mechanisms, Noop and Coop, are designed for decentralized erasure code based distributed storage systems to minimize connection cost in a theoretical manner. We propose a generalized repair framework that includes Noop and Coop as two extreme cases. We then investigate the trade-off between connection cost and data retrievability from an experimental perspective within our repair framework. Our results show that reasonable data retrievability is achievable with a constant connection cost that is lower than previously derived analytical values. These results are valuable references for a system manager building a reliable storage system with low connection cost.
Keywords: software fault tolerance; software maintenance; storage management; Coop repair mechanism; Noop repair mechanism; code based distributed storage system; decentralized erasure codes; erasure codes; fault-tolerance; low connection cost; reliable repair mechanism; Analytical models; Data models; Encryption; Maintenance engineering; Mathematical model; Reliability; Servers; Erasure codes; code based distributed storage systems; data retrievability; fault tolerance; regenerating codes (ID#: 15-3516)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895434&isnumber=6895396
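The connection-cost trade-off at stake can be seen in the smallest possible erasure-coded system: a single XOR parity over two data chunks. This is a toy stand-in for the decentralized codes the paper studies, with invented data:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Three storage nodes: two data chunks plus one XOR parity chunk.
data = b"secret__"
d1, d2 = data[:4], data[4:]
parity = xor_bytes(d1, d2)

# The node holding d1 fails. Repair contacts the two surviving nodes
# (connection cost = 2) and XORs their chunks to regenerate the lost one.
repaired = xor_bytes(d2, parity)
print(repaired == d1)    # True: redundancy restored, data still retrievable
```

In larger codes the repairing node can choose how many survivors to contact, which is exactly the connection cost vs. retrievability dial the paper's framework generalizes.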
International Conferences: Symposium on Resilient Control Systems (ISRCS) |
The 7th International Symposium on Resilient Control Systems (ISRCS), 2014 was held 19-21 Aug. 2014 in Denver, Colorado. This conference offered research presentations of interest to both the Science of Security and SURE projects.
Thompson, M.; Evans, N.; Kisekka, V., "Multiple OS Rotational Environment: An Implemented Moving Target Defense," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp. 1-6, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900086 Cyber-attacks continue to pose a major threat to existing critical infrastructure. Although suggestions for defensive strategies abound, Moving Target Defense (MTD) has only recently gained attention as a possible solution for mitigating cyber-attacks. The current work proposes an MTD technique that provides enhanced security through a rotation of multiple operating systems. The MTD solution developed in this research utilizes existing technology to provide a feasible dynamic defense solution that can be deployed easily in a real networking environment. In addition, the system we developed was tested extensively for effectiveness using CORE Impact Pro (CORE), Nmap, and manual penetration tests. The test results showed that platform diversity and rotation offer improved security. In addition, the likelihood of a successful attack decreased proportionally with time between rotations.
Keywords: operating systems (computers); security of data; CORE; CORE Impact Pro; MTD technique; Nmap; cyber-attacks mitigation; defensive strategies; manual penetration test; moving target defense; multiple OS rotational environment; operating systems; Availability; Fingerprint recognition; IP networks; Operating systems; Security; Servers; Testing (ID#: 15-3517)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900086&isnumber=6900080
Ostovari, P.; Jie Wu; Ying Dai, "Priority-Based Broadcasting Of Sensitive Data In Error-Prone Wireless Networks," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp. 1-6, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900087 Providing reliable transmission in wireless communication networks is an important problem which is typically addressed using feedback and acknowledgment messages. In networks where feedback is not possible, such as real-time systems, an alternative approach is to maximize the gain that the destination nodes are expected to receive. In this paper, we consider the transmission of data with different priorities and study the problem of maximizing the total gain when partial data retrieval is acceptable. We propose an optimal solution that benefits from network coding. We also consider the case of burst errors and discuss how to make our proposed method robust to this type of error. We evaluate our proposed priority-based data transmission method using both simulations and results from an implementation on a USRP testbed.
Keywords: network coding; radio data systems; radio networks; telecommunication network reliability; USRP testbed; burst errors; destination nodes; error-prone wireless communication networks; network coding; partial data retrieval; priority-based data transmission method; priority-based sensitive data broadcasting; total gain maximization problem; transmission reliability; Broadcasting; Encoding; Error analysis; Gain; Network coding; Reliability; Wireless networks; Symbol-level coding; USRP testbed; broadcasting; burst error; priority; random linear network coding; reliability; wireless networks (ID#: 15-3518)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900087&isnumber=6900080
Duff, S.; Del Guidice, K.; Flint, J.; Nam Nguyen; Kudrick, B., "The Diagnosis And Measurement Of Team Resilience In Sociotechnical Systems," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp. 1-5, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900088 This paper presents a novel approach to diagnosing and measuring team resilience in sociotechnical systems. This approach is based on a multi-level model developed to study team phenomena from a general systems perspective. We will describe a methodology that uses a concept similar to flow within the psychological literature to measure a team's response to instances of sub-optimal system function. Team resilience is determined by examining flow disruptions, which are instances of sub-optimal system performance that disrupt normal system flow, and compensatory strategies, which are behaviors enacted by the team in response to the disruption to re-establish overall system flow. Approaching teams embedded in organizations from this perspective allows diagnosis of the systemic influences that contribute most to the variance in performance across entities, identification of pervasive latent systemic failures, and the development of a tailored taxonomy of behavioral teamwork dimensions, which can then be translated into metrics to measure team resilience in many contexts or team configurations.
Keywords: psychology; team working; behavioral teamwork dimensions; compensatory strategies; flow disruptions; general systems perspective; multilevel model; normal system flow; pervasive latent systemic failure identification; psychological literature; sociotechnical systems; suboptimal system function; suboptimal system performance; systemic influence diagnosis; tailored taxonomy; team phenomena; team resilience diagnosis; team resilience measurement; team response measurement; Fluid flow measurement; Organizations; Resilience; Robots; Taxonomy; Teamwork; group flow; multi-level model; resilience; teams; teamwork measurement (ID#: 15-3519)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900088&isnumber=6900080
Atighetchi, M.; Adler, A., "A Framework For Resilient Remote Monitoring," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp. 1-8, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900090 Today's activities in cyberspace are more connected than ever before, driven by the ability to dynamically interact and share information with a changing set of partners over a wide variety of networks. To support dynamic sharing, computer systems and networks are stood up on a continuous basis to support changing mission-critical functionality. However, configuration of these systems remains a manual activity: misconfigurations stay undetected for extended periods, unneeded systems remain in place long after they are needed, and systems are not updated to include the latest protections against vulnerabilities. This creates a rich environment for targeted cyber attacks that remain undetected for weeks to months and pose a serious national security threat. To counter this threat, technologies have started to emerge that provide continuous monitoring across any network-attached device for the purpose of increasing resiliency by identifying and then mitigating targeted attacks. For these technologies to be effective, it is of utmost importance to avoid any inadvertent increase in the attack surface of the monitored system. This paper describes the security architecture of Gestalt, a next-generation cyber information management platform that aims to increase resiliency by providing ready and secure access to granular cyber event data available across a network. Gestalt's federated monitoring architecture is based on the principles of strong isolation, least-privilege policies, defense-in-depth, cryptographically strong authentication and encryption, and self-regeneration. Remote monitoring functionality is achieved through an orchestrated workflow across a distributed set of components, linked via a specialized secure communication protocol, that together enable unified access to cyber observables in a secure and resilient way.
Keywords: Web services; information management; security of data; Gestalt platform; attack identification; attack mitigation; communication protocol; computer networks; computer systems; cyber attacks; cyber observables; cyber space; granular cyber event data; mission critical functionality; national security threat; network-attached device; next-generation cyber information management platform; remote monitoring functionality; resilient remote monitoring; Bridges; Firewalls (computing); Monitoring; Protocols; Servers; XML; cyber security; federated access; middleware; semantic web (ID#: 15-3520)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900090&isnumber=6900080
Fink, G.A.; Griswold, R.L.; Beech, Z.W., "Quantifying Cyber-Resilience Against Resource-Exhaustion Attacks," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp. 1-8, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900093 Resilience in the information sciences is notoriously difficult to define, much less to measure. But in mechanical engineering, the resilience of a substance is mathematically well-defined as an area under the stress-strain curve. We combined inspiration from the mechanics of materials and axioms from queuing theory in an attempt to define resilience precisely for information systems. We first examine the meaning of resilience in linguistic and engineering terms and then translate these definitions to the information sciences. As a general assessment of our approach's fitness, we quantify how resilience may be measured in a simple queuing system. By using a very simple model we allow clear application of established theory while remaining flexible enough to apply to many other engineering contexts in information science and cyber security. We tested our definitions of resilience via simulation and analysis of networked queuing systems. We conclude with a discussion of the results and make recommendations for future work.
Keywords: queueing theory; security of data; cyber security; cyber-resilience quantification; engineering terms; information sciences; linguistic terms; mechanical engineering; networked queuing systems; queuing theory; resource-exhaustion attacks; simple queuing system; stress-strain curve; Information systems; Queueing analysis; Resilience; Servers; Strain; Stress; Resilience; cyber systems; information science; material science; strain; stress (ID#: 15-3521)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900093&isnumber=6900080
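The materials analogy in the abstract above lends itself to a toy calculation. The sketch below is our own illustration, not the authors' model: it treats resilience as the normalized area under a service-performance curve sampled across a disturbance, the information-systems analogue of area under a stress-strain curve. The trace values are invented.

```python
def resilience_area(performance, dt=1.0):
    """Resilience score in [0, 1]: trapezoidal area under a normalized
    performance curve, divided by the area an undisturbed system would
    deliver over the same window (1.0 = full service at every sample)."""
    if len(performance) < 2:
        raise ValueError("need at least two samples")
    area = sum(0.5 * (a + b) * dt for a, b in zip(performance, performance[1:]))
    return area / (dt * (len(performance) - 1))

# A queue hit by a resource-exhaustion burst: service degrades, then recovers.
trace = [1.0, 0.6, 0.4, 0.7, 1.0]
score = resilience_area(trace)  # 0.675: the system delivered ~68% of ideal service
```

A system that sheds less service during the attack, or recovers faster, scores closer to 1.0 under this definition.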
Khamis, A.; Subbaram Naidu, D., "Real-time Algorithm For Nonlinear Systems With Incomplete State Information Using Finite-Horizon Optimal Control Technique," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp.1,6, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900094 This paper discusses a novel, efficient real-time technique for finite-horizon nonlinear regulator problems with incomplete state information. This technique is based on integrating the Kalman filter algorithm and the finite-horizon differential State Dependent Riccati Equation (SDRE) technique. In this technique, the optimal control problem of the nonlinear system is solved by using the finite-horizon differential SDRE algorithm, which makes this technique effective for a wide range of operating points. A nonlinear mechanical crane example is given to show the effectiveness of the proposed technique.
Keywords: Kalman filters; Lyapunov methods; Riccati equations; nonlinear control systems; optimal control; stochastic systems; Kalman filter algorithm; SDRE technique; finite-horizon differential state dependent Riccati equation; finite-horizon nonlinear regulator; finite-horizon optimal control technique; incomplete state information; nonlinear mechanical crane; nonlinear systems; Cranes; Equations; Kalman filters; Mathematical model; Noise; Nonlinear systems; Optimal control (ID#: 15-3522)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900094&isnumber=6900080
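For readers unfamiliar with the filtering half of this combination, a single scalar Kalman predict/update cycle looks like the following. This is a generic textbook sketch, not the paper's method; the finite-horizon differential SDRE controller it is coupled with is not reproduced here, and all gains below are illustrative.

```python
def kalman_step(x, P, u, z, A=1.0, B=1.0, H=1.0, Q=0.01, R=0.1):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : prior state estimate and its variance
    u, z : control input and noisy measurement for this step
    A, B, H, Q, R : (illustrative) dynamics, input, observation,
                    process-noise, and measurement-noise parameters
    """
    # Predict: propagate the estimate through the (linearized) dynamics.
    x_pred = A * x + B * u
    P_pred = A * P * A + Q
    # Update: blend prediction and measurement using the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# With repeated measurements of a constant state, the estimate converges.
x, P = 0.0, 1.0
for _ in range(50):
    x, P = kalman_step(x, P, u=0.0, z=5.0)
```

The estimate `x` approaches the measured value 5.0 while the variance `P` settles to a small steady-state value, which is the "incomplete state information" side of the paper's real-time algorithm.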
Borges Hink, R.C.; Beaver, J.M.; Buckner, M.A.; Morris, T.; Adhikari, U.; Shengyi Pan, "Machine Learning For Power System Disturbance And Cyber-Attack Discrimination," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp.1, 8, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900095 Power system disturbances are inherently complex and can be attributed to a wide range of sources, including both natural and man-made events. Currently, the power system operators are heavily relied on to make decisions regarding the causes of experienced disturbances and the appropriate course of action as a response. In the case of cyber-attacks against a power system, human judgment is less certain since there is an overt attempt to disguise the attack and deceive the operators as to the true state of the system. To enable the human decision maker, we explore the viability of machine learning as a means for discriminating types of power system disturbances, and focus specifically on detecting cyber-attacks where deception is a core tenet of the event. We evaluate various machine learning methods as disturbance discriminators and discuss the practical implications for deploying machine learning systems as an enhancement to existing power system architectures.
Keywords: learning (artificial intelligence); power engineering computing; power system faults; security of data; cyber-attack discrimination; machine learning; power system architectures; power system disturbance; power system operators; Accuracy; Classification algorithms; Learning systems; Protocols; Relays; Smart grids; SCADA; Smart grid; cyber-attack; machine learning (ID#: 15-3523)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900095&isnumber=6900080
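As a flavor of what such a discriminator looks like in practice, here is a toy nearest-centroid classifier over two invented features. The paper evaluates several real machine learning methods on power-system measurement data; the feature names and numbers below are purely hypothetical.

```python
import math

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def nearest_centroid(sample, centroids):
    """Return the label whose class centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda lbl: math.dist(sample, centroids[lbl]))

# Hypothetical training features: (voltage stability index, relay-trip anomaly)
train = {
    "natural_fault": [[0.90, 0.10], [0.80, 0.20], [1.00, 0.15]],
    "cyber_attack":  [[0.20, 0.90], [0.30, 0.80], [0.25, 0.95]],
}
centroids = {lbl: centroid(rows) for lbl, rows in train.items()}
label = nearest_centroid([0.28, 0.85], centroids)  # -> "cyber_attack"
```

The point of the paper is precisely that deceptive attacks make such feature separation hard for human operators, which is why learned discriminators are evaluated as an enhancement.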
Feng Xie; Yong Peng; Wei Zhao; Xuefeng Han; Hui Li; Ru Zhang; Jing Zhao; Jianyi Liu, "Using Simulation Platform To Analyze Radio Modem Security in SCADA," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp.1,5, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900097 Radio modems are the most common long-range communication equipment in supervisory control and data acquisition (SCADA) systems such as water treatment plants and petrochemical factories. However, since there is a lack of security mechanisms in radio modems, many traditional cyber attacks can have an impact on data transmission via radio modems. In this paper, a simulation platform based on radio modems is built, and many attacks, e.g. communication jamming, data eavesdropping and tampering, as well as DoS attacks, are carried out on this platform to test the security of radio modems. Experimental results indicate that data transmission in SCADA systems goes wrong when facing these cyber attacks, which means that security measures should be applied to protect radio modems.
Keywords: SCADA systems; computer network security; jamming; modems; DOS attack; communication jam; cyber attacks; data eavesdropping; data transmission; long-range communication equipments; petrochemical factories; radio modem protection; radio modem security analysis; security measures; simulation platform; supervisory control-and-data acquisition systems; tamper; water treatment plants; Computer crime; Data acquisition; Data communication; Modems; Monitoring; SCADA systems; cyber attacks; radio modem; security; simulation platform (ID#: 15-3524)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900097&isnumber=6900080
Martins, G.; Bhattacharjee, A.; Dubey, A.; Koutsoukos, X.D., "Performance Evaluation Of An Authentication Mechanism In Time-Triggered Networked Control Systems," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp.1, 6, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900098 An important challenge in networked control systems is to ensure the confidentiality and integrity of the message in order to secure the communication and prevent attackers or intruders from compromising the system. However, security mechanisms may jeopardize the temporal behavior of the network data communication because of the computation and communication overhead. In this paper, we study the effect of adding Hash Based Message Authentication (HMAC) to a time-triggered networked control system. Time Triggered Architectures (TTAs) provide a deterministic and predictable timing behavior that is used to ensure safety, reliability and fault tolerance properties. The paper analyzes the computation and communication overhead of adding HMAC and the impact on the performance of the time-triggered network. Experimental validation and performance evaluation results using a TTEthernet network are also presented.
Keywords: authorisation; computer network security; local area networks; networked control systems; HMAC; TTEthernet network; authentication mechanism; communication overhead; computation overhead; fault tolerance property; hash based message authentication; message confidentiality; message integrity; network data communication; reliability property; safety property; security mechanisms; time triggered architectures; time-triggered networked control systems; timing behavior; Cryptography; Message authentication; Receivers; Switches; Synchronization; HMAC; Performance Evaluation; Secure Messages; TTEthernet; Time-Triggered Architectures (ID#: 15-3525)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900098&isnumber=6900080
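The mechanism under evaluation is easy to demonstrate in isolation. The sketch below signs and verifies a frame with HMAC-SHA256 and times the signing cost; the key, frame contents, and timing method are our own illustration, while the paper measures this overhead on an actual TTEthernet network.

```python
import hashlib
import hmac
import time

KEY = b"pre-shared-network-key"  # hypothetical key distributed out of band

def sign(frame: bytes) -> bytes:
    """32-byte authentication tag appended to each time-triggered frame."""
    return hmac.new(KEY, frame, hashlib.sha256).digest()

def verify(frame: bytes, tag: bytes) -> bool:
    """Constant-time comparison, so verification leaks no timing information."""
    return hmac.compare_digest(sign(frame), tag)

frame = b"actuator-setpoint:42"
t0 = time.perf_counter()
tag = sign(frame)
sign_cost = time.perf_counter() - t0   # per-frame compute overhead, in seconds

ok = verify(frame, tag)                # genuine frame accepted
forged = verify(b"tampered", tag)      # modified frame rejected
```

In a time-triggered network this per-frame compute cost, plus the 32 extra bytes on the wire, is exactly the overhead whose effect on deterministic timing the paper quantifies.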
Bodeau, D.; Brtis, J.; Graubart, R.; Salwen, J., "Resiliency Techniques For Systems-Of-Systems Extending And Applying The Cyber Resiliency Engineering Framework To The Space Domain," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp. 1, 6, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900099 This paper describes how resiliency techniques apply to an acknowledged system-of-systems. The Cyber Resiliency Engineering Framework is extended to apply to resilience in general, with a focus on resilience of space systems. Resiliency techniques can improve system-of-systems operations. Both opportunities and challenges are identified for resilience as an emergent property in an acknowledged system-of-systems.
Keywords: aerospace computing; security of data; cyber resiliency engineering framework; resiliency technique; space domain; system-of-systems operations; Collaboration; Dynamic scheduling; Interoperability; Monitoring; Redundancy; Resilience; Space vehicles; cyber security; resilience; system-of-systems (ID#: 15-3526)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900099&isnumber=6900080
Abbas, W.; Vorobeychik, Y.; Koutsoukos, X., "Resilient Consensus Protocol In The Presence Of Trusted Nodes," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp. 1, 7, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900100 In this paper, we propose a scheme for the resilient distributed consensus problem through a set of trusted nodes within the network. Currently, algorithms that solve the resilient consensus problem demand that networks have high connectivity to overrule the effects of adversaries, or require nodes to have access to some non-local information. In our scheme, we incorporate the notion of trusted nodes to guarantee distributed consensus despite any number of adversarial attacks, even in sparse networks. A subset of nodes, which are more secured against the attacks, constitutes the set of trusted nodes. It is shown that the network becomes resilient against any number of attacks whenever the set of trusted nodes forms a connected dominating set within the network. We also study the relationship between trusted nodes and network robustness. Simulations are presented to illustrate and compare our scheme with existing ones.
Keywords: network theory (graphs); adversarial attacks; connected dominating set; nonlocal information access; resilient consensus protocol; resilient distributed consensus problem; trusted nodes notion; Buildings; Network topology; Protocols; Resilience; Robustness; Topology; Tree graphs; Resilience; adversary; consensus; dominating set; graph robustness (ID#: 15-3527)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900100&isnumber=6900080
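The paper's sufficient condition, that the trusted nodes form a connected dominating set, is straightforward to check on a given topology. Below is a minimal sketch of that check on an invented six-node sparse network; it is our own illustration, not the authors' consensus protocol.

```python
from collections import deque

def is_connected_dominating_set(adj, trusted):
    """True iff every node is trusted or has a trusted neighbor, AND the
    trusted nodes induce a connected subgraph of the network `adj`."""
    trusted = set(trusted)
    if not trusted:
        return False
    # Domination: each untrusted node needs at least one trusted neighbor.
    for v in adj:
        if v not in trusted and trusted.isdisjoint(adj[v]):
            return False
    # Connectivity of the induced subgraph, via BFS restricted to trusted nodes.
    start = next(iter(trusted))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w in trusted and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == trusted

# A sparse tree-like network: nodes 1 and 3 dominate it and are adjacent.
adj = {0: {1}, 1: {0, 2, 3}, 2: {1}, 3: {1, 4, 5}, 4: {3}, 5: {3}}
```

Here `{1, 3}` passes both checks, so by the paper's result hardening just those two nodes suffices for resilient consensus on this topology, even though the graph itself is sparse.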
Rege, A.; Ferrese, F.; Biswas, S.; Li Bai, "Adversary Dynamics And Smart Grid Security: A Multiagent System Approach," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp.1,7, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900101 The power grid is the backbone of the infrastructures that drive the US economy and security, which makes it a prime target of cybercriminals or state-sponsored terrorists and warrants special attention for its protection. Commonly used approaches to smart grid security are usually based on various mathematical tools and ignore the human behavior component of cybercriminals. This paper introduces a new dimension to the cyberphysical system architecture, namely human behavior, and presents a modified CPS framework consisting of (a) the cyber system: SCADA control system and related protocols; (b) the physical system: power grid infrastructure; (c) the adversary: cybercriminals; and (d) the defender: system operators and engineers. Based on interviews of ethical hackers, this paper presents an adversary-centric method that uses the adversary's decision tree along with control-theoretic tools to develop defense strategies against cyberattacks on the power grid.
Keywords: SCADA systems; computer crime; decision trees; multi-agent systems; power engineering computing; power system control; power system protection; power system security; protocols; smart power grids; SCADA control system; Smart Grid protection; US economy; US security; adversary-centric method; cyberattack; cybercriminals; cyberphysical system architecture; decision tree; ethical hackers; human behavior; mathematical tools; modified CPS framework; multiagent system approach; power grid; power grid infrastructure; protocols; smart grid security; Computer crime; Control systems; Decision making; Mathematical model; Power grids; Power system dynamics; Grid security; cyber attackers; cyberphysical systems; ethical hackers; human behavior (ID#: 15-3528)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900101&isnumber=6900080
Miles, C.; Lakhotia, A.; LeDoux, C.; Newsom, A.; Notani, V., "VirusBattle: State-of-the-Art Malware Analysis For Better Cyber Threat Intelligence," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp.1,6, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900103 Discovered interrelationships among instances of malware can be used to infer connections among seemingly unconnected objects, including actors, machines, and the malware itself. However, such malware interrelationships are currently underutilized in the cyber threat intelligence arena. To fill that gap, we are developing VirusBattle, a system employing state-of-the-art malware analyses to automatically discover interrelationships among instances of malware. VirusBattle analyses mine malware interrelationships over many types of malware artifacts, including the binary, code, code semantics, dynamic behaviors, malware metadata, distribution sites and e-mails. The result is a malware interrelationships graph which can be explored automatically or interactively to infer previously unknown connections.
Keywords: computer viruses; data mining; graph theory; VirusBattle; binary; code semantics; cyber threat intelligence; distribution sites; dynamic behaviors; e-mails; malware analysis; malware artifacts; malware interrelationship mining; malware interrelationships graph; malware metadata; Computers; Data visualization; Electronic mail; Malware; Performance analysis; Semantics; Visualization (ID#: 15-3529)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900103&isnumber=6900080
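The core data structure, a graph linking samples that share artifacts, can be sketched in a few lines. The artifact strings and sample names below are invented, and VirusBattle itself mines far richer features such as code semantics and dynamic behaviors; this is only the skeleton of the idea.

```python
from collections import defaultdict
from itertools import combinations

def shared_artifact_graph(samples):
    """Adjacency map: two samples are linked whenever they share any
    artifact value (a code hash, a C2 domain, a dropper e-mail, ...)."""
    by_artifact = defaultdict(set)
    for name, artifacts in samples.items():
        for art in artifacts:
            by_artifact[art].add(name)
    edges = defaultdict(set)
    for owners in by_artifact.values():
        for a, b in combinations(sorted(owners), 2):
            edges[a].add(b)
            edges[b].add(a)
    return dict(edges)

# Hypothetical artifacts extracted from three samples:
samples = {
    "sample_a": {"hash:f00d", "c2:evil.example"},
    "sample_b": {"hash:f00d", "mail:drop@example.org"},
    "sample_c": {"mail:other@example.org"},
}
graph = shared_artifact_graph(samples)  # sample_a <-> sample_b; sample_c isolated
```

Transitive exploration of such a graph is what lets seemingly unconnected samples, machines, or actors be tied together, which is the "previously unknown connections" claim in the abstract.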
Balchanos, M.G.; Domercant, J.C.; Tran, H.T.; Mavris, D.N., "Metrics-Based Analysis And Evaluation Framework For Engineering Resilient Systems," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp.1,7, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900107 The DoD's ERS initiative calls for affordable, effective, and adaptable systems development. In support of this, a metrics-based analysis framework is introduced to address certain challenges for the design of future C2 military System-of-Systems (SoS). The interpretation of the concept of resilience, as well as a supporting threat analysis procedure for military SoS applications, has been the key driver for the evaluation of a system's ability to maintain its mission capability and health when under attack due to given threats. An agent-based C2 UAV communication network application has been developed for the demonstration of the framework. Scenario-based case studies involving communication jamming by adversary forces are introduced for the evaluation of the C2 system's response to a threat, including both degradation and recovery periods.
Keywords: autonomous aerial vehicles; command and control systems; Department of Defense; DoD ERS initiative; agent-based C2 UAV communication network; command and control systems; communication jamming; engineering resilient systems; metrics-based analysis framework; metrics-based evaluation framework; military system-of-systems; mission capability; mission health; resilience concept; unmanned aerial vehicles; Facsimile; Jamming; Resilience; Robustness (ID#: 15-3530)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900107&isnumber=6900080
Rieger, C.G., "Resilient Control Systems Practical Metrics Basis For Defining Mission Impact," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp. 1, 10, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900108 “Resilience” describes how systems operate at an acceptable level of normalcy despite disturbances or threats. In this paper we first consider the cognitive, cyber-physical interdependencies inherent in critical infrastructure systems and how resilience differs from reliability to mitigate these risks. Terminology and metrics basis are provided to integrate the cognitive, cyber-physical aspects that should be considered when defining solutions for resilience. A practical approach is taken to roll this metrics basis up to system integrity and business case metrics that establish “proper operation” and “impact.” A notional chemical processing plant is the use case for demonstrating how the system integrity metrics can be applied to establish performance, and as well, the effects on the process that roll into the business case.
Keywords: control system synthesis; business case metrics; cyber-physical interdependency; mission impact; notional chemical processing plant; resilient control systems; risk mitigation; system integrity metrics; Computer aided software engineering; Computer crime; Control systems; Degradation; Measurement; Optimization; Robustness; Metrics; adaptive capacity; adaptive insufficiency; cognitive; cyber-physical; performance; resilience; robustness; threats (ID#: 15-3531)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900108&isnumber=6900080
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Lablet Activities |
This section contains information on recent Lablet activities.
(ID#:14-3731)
Lablet Research: Resilient Architecture |
EXECUTIVE SUMMARY: Over the past year, the NSA Science of Security Lablets engaged in NSA-approved research projects addressing the hard problem of Resilient Architectures. All but one of the eleven research projects targeting this hard problem addressed other hard problems as well. CMU collaborated with Cornell; NCSU collaborated with UNCC and UVA; and UIUC collaborated with the Illinois Institute of Technology. The projects are in various stages of maturity, and several have led to publications and/or conference presentations. Summaries of the projects, highlights, and publications are presented below.
1. Geo-Temporal Characterization of Security Threats (CMU)
SUMMARY: Addresses the hard problems of Policy-Governed Secure Collaboration and Resilient Architectures; provides an empirical basis for assessment and validation of security models; provides a global model of flow of threats and associated information.
HIGHLIGHTS AND PUBLICATIONS
2. Multi-Modal Run-time Security Analysis (CMU)
SUMMARY: Hard Problems Addressed:
HIGHLIGHTS AND PUBLICATIONS
3. Security Reasoning for Distributed Systems with Uncertainty (CMU/Cornell Collaborative Proposal)
SUMMARY: Addresses the hard problems of scalability and composability and resilient architecture. We are interested in answering the question "Is my system sufficiently robust to both stochastic failures and deliberate attack?" Our methods will be helpful in designing and analyzing security policies for complicated systems.
HIGHLIGHTS AND PUBLICATIONS
4. Attack Surface and Defense-in-Depth Metrics (NCSU)
SUMMARY: Hard Problems Addressed:
HIGHLIGHTS AND PUBLICATIONS
5. Resilience Requirements, Design and Testing (NCSU, UNCC, UVA)
SUMMARY: Characterization of the attack resiliency of software must begin at its very inception; without such characterization, attack resiliency is neither properly testable nor implementable. Resilient Architectures - vulnerability avoidance, evaluation, and tolerance strategies and architectures. Security Metrics and Models - development of metrics and models for static and dynamic assessment of the resilience of software.
HIGHLIGHTS AND PUBLICATIONS
6. Smart Isolation in Large-Scale Production Computing Infrastructures (NCSU)
SUMMARY: Resilient Architectures - Our current focus is the creation and validation of a taxonomy for the study of existing isolation techniques, through which we will identify underlying principles that will lead to the design of next-generation smart isolation techniques to support resilient architectures.
HIGHLIGHTS AND PUBLICATIONS
7. Systematization of Knowledge from Intrusion Detection Models (NCSU)
SUMMARY: Security Metrics and Models - The project aims to establish common criteria for evaluating and systematizing knowledge contributed by research on intrusion detection models. Resilient Architectures - Robust intrusion detection models serve to make large systems more resilient to attack. Scalability and Composability - Intrusion detection models deal with large data sets every day, so scale is always a significant concern. Humans - A key aspect of intrusion detection is interpreting the output and acting upon it, which inherently involves humans. Furthermore, intrusion detection models are ultimately simulations of human behavior.
HIGHLIGHTS AND PUBLICATIONS
8. Understanding Effects of Norms and Policies on the Robustness, Liveness, and Resilience of Systems (NCSU)
SUMMARY: Hard Problems Addressed:
HIGHLIGHTS and PUBLICATIONS
9. Vulnerability and Resilience Prediction Models (NCSU)
SUMMARY: Hard Problems Addressed:
HIGHLIGHTS and PUBLICATIONS
10. A Hypothesis-Testing Framework for Network Security (UIUC and Illinois Institute of Technology)
SUMMARY: Addresses four hard problems:
HIGHLIGHTS and PUBLICATIONS
11. Data-Driven Security Models and Analysis (UIUC)
SUMMARY: Hard Problems Addressed:
HIGHLIGHTS AND PUBLICATIONS
This quarter we focused on broadening our knowledge base on attacks, because our investigation is based on data-driven methodologies to create the models and metrics used for monitoring, with the goal of recognizing, mitigating, and containing attacks.
Publications of Interest |
The Publications of Interest section contains bibliographical citations, abstracts if available and links on specific topics and research problems of interest to the Science of Security community.
How recent are these publications?
These bibliographies include recent scholarly research on topics which have been presented or published within the past year. Some represent updates from work presented in previous years, others are new topics.
How are topics selected?
The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness for current researchers.
How can I submit or suggest a publication?
Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.
Submissions and suggestions may be sent to: news@scienceofsecurity.net
(ID#:14-3730)
Actuator Security |
At the October Quarterly meeting of the Lablets at the University of Maryland, discussion about resiliency and composability identified the need to build secure sensors and actuators. The works cited here address the problems of actuator security and were presented or published in 2014.
Communications Security |
This collection of citations covers a range of issues in communications security from a theoretic or scientific level. Included are topics such as secure MSR regenerating codes, OS fingerprinting, jamming resilient codes, and irreducible pentanomials. These works were presented or published in 2014.
Sasidharan, B.; Kumar, P.V.; Shah, N.B.; Rashmi, K.V.; Ramachandran, K., "Optimality of the Product-Matrix Construction For Secure MSR Regenerating Codes," Communications, Control and Signal Processing (ISCCSP), 2014 6th International Symposium on, pp.10,14, 21-23 May 2014 doi: 10.1109/ISCCSP.2014.6877804 In this paper, we consider the security of exact-repair regenerating codes operating at the minimum-storage-regenerating (MSR) point. The security requirement (introduced in Shah et al.) is that no information about the stored data file must be leaked in the presence of an eavesdropper who has access to the contents of ℓ1 nodes as well as all the repair traffic entering a second disjoint set of ℓ2 nodes. We derive an upper bound on the size of a data file that can be securely stored that holds whenever ℓ2 ≤ d - k + 1. This upper bound proves the optimality of the product-matrix-based construction of secure MSR regenerating codes by Shah et al.
Keywords: encoding; matrix algebra; MSR point; data file; eavesdropper; exact repair regenerating code security; minimum storage regenerating point; product matrix; product matrix construction; repair traffic; secure MSR regenerating codes; Bandwidth; Data collection; Entropy; Maintenance engineering; Random variables; Security; Upper bound; MSR codes; Secure regenerating codes; product-matrix construction; regenerating codes (ID#: 15-3600)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6877804&isnumber=6877795
Gu, Y.; Fu, Y.; Prakash, A.; Lin, Z.; Yin, H., "Multi-Aspect, Robust, and Memory Exclusive Guest OS Fingerprinting," Cloud Computing, IEEE Transactions on vol. PP, no.99, pp. 1, 1, 11 July 2014. doi: 10.1109/TCC.2014.2338305 Precise fingerprinting of an operating system (OS) is critical to many security and forensics applications in the cloud, such as virtual machine (VM) introspection, penetration testing, guest OS administration, kernel dump analysis, and memory forensics. The existing OS fingerprinting techniques primarily inspect network packets or CPU states, and they all fall short in precision and usability. As the physical memory of a VM always exists in all these applications, in this article, we present OS-SOMMELIER+, a multi-aspect, memory-exclusive approach for precise and robust guest OS fingerprinting in the cloud. It works as follows: given a physical memory dump of a guest OS, OS-SOMMELIER+ first uses a code hash based approach from the kernel code aspect to determine the guest OS version. If the code hash approach fails, OS-SOMMELIER+ then uses a kernel data signature based approach from the kernel data aspect to determine the version. We have implemented a prototype system, and tested it with a number of Linux kernels. Our evaluation results show that the code hash approach is faster but can only fingerprint the known kernels, and the data signature approach complements the code signature approach and can fingerprint even unknown kernels.
Keywords: (not provided) (ID#: 15-3601)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853383&isnumber=6562694
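The first-stage idea, hashing kernel code pages from a memory dump and matching against a database of known versions, can be illustrated with a toy sketch. The page contents and version labels below are invented, and the real system must first locate and normalize the kernel code in the physical memory dump; this sketch skips all of that.

```python
import hashlib

def code_hash(pages):
    """Order-independent digest over a guest's kernel code pages."""
    outer = hashlib.sha256()
    for page in sorted(pages):
        outer.update(hashlib.sha256(page).digest())
    return outer.hexdigest()

def fingerprint(pages, known_versions):
    """Stage one of the approach: exact match against known kernels.
    (The paper falls back to kernel *data* signatures when this fails.)"""
    return known_versions.get(code_hash(pages), "unknown")

# Two invented 4 KiB code pages standing in for a dumped kernel image:
kernel_a_pages = [b"\x90" * 4096, b"\xcc" * 4096]
known = {code_hash(kernel_a_pages): "kernel-A (illustrative label)"}
```

Sorting the pages before hashing makes the digest insensitive to the order in which pages are recovered from the dump, a small example of the robustness concerns the paper addresses.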
Liu, Yuanyuan; Cheng, Jianping; Zhang, Li; Xing, Yuxiang; Chen, Zhiqiang; Zheng, Peng, "A Low-Cost Dual Energy CT System With Sparse Data," Tsinghua Science and Technology , vol.19, no.2, pp.184,194, April 2014. doi: 10.1109/TST.2014.6787372 Dual Energy CT (DECT) has recently gained significant research interest owing to its ability to discriminate materials, and hence is widely applied in the field of nuclear safety and security inspection. With the current technological developments, DECT can be typically realized by using two sets of detectors, one for detecting lower energy X-rays and another for detecting higher energy X-rays. This makes the imaging system expensive, limiting its practical implementation. In 2009, our group performed a preliminary study on a new low-cost system design, using only a complete data set for the lower energy level and a sparse data set for the higher energy level. This could significantly reduce the cost of the system, as it contained a much smaller number of detector elements. The reconstruction method is the key point of this system. In the present study, we further validated this system and proposed a robust method, involving three main steps: (1) estimate the missing data iteratively with TV constraints; (2) use the reconstruction from the complete lower-energy CT data set to form an initial estimate of the projection data for the higher energy level; (3) use ordered views to accelerate the computation. Numerical simulations with different numbers of detector elements have also been examined. The results obtained in this study demonstrate that 1 + 14% CT data is sufficient to provide a rather good reconstruction of both the effective atomic number and electron density distributions of the scanned object, instead of 2 sets of CT data.
Keywords: Computed tomography; Detectors; Energy states; Image reconstruction; Reconstruction algorithms; TV; X-rays; ART-TV; X-ray imaging; dual energy CT system; material discrimination; reconstruction; sparse samples (ID#: 15-3602)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6787372&isnumber=6787360
Yao, H.; Silva, D.; Jaggi, S.; Langberg, M., "Network Codes Resilient to Jamming and Eavesdropping," Networking, IEEE/ACM Transactions on, vol. PP, no.99, pp.1,1, 3 Feb 2014. doi: 10.1109/TNET.2013.2294254 We consider the problem of communicating information over a network secretly and reliably in the presence of a hidden adversary who can eavesdrop and inject malicious errors. We provide polynomial-time distributed network codes that are information-theoretically rate-optimal for this scenario, improving on the rates achievable in prior work by Ngai et al. Our main contribution shows that as long as the sum of the number of links the adversary can jam (denoted by $Z_O$) and the number of links he can eavesdrop on (denoted by $Z_I$) is less than the network capacity (denoted by $C$), i.e., $Z_O + Z_I < C$, our codes can communicate (with vanishingly small error probability) a single bit correctly and without leaking any information to the adversary. We then use this scheme as a module to design codes that allow communication at the source rate of $C - Z_O$ when there are no security requirements, and codes that allow communication at the source rate of $C - Z_O - Z_I$ while keeping the communicated message provably secret from the adversary. Interior nodes are oblivious to the presence of adversaries and perform random linear network coding; only the source and destination need to be tweaked. We also prove that the rate-region obtained is information-theoretically optimal. In proving our results, we correct an error in prior work by a subset of the authors of this paper.
Keywords: Error probability; Jamming; Network coding; Robustness; Transforms; Vectors; Achievable rates; adversary; error control; network coding; secrecy (ID#: 15-3603)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6730968&isnumber=4359146
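The rate expressions in the abstract are simple enough to encode directly. The helper below merely restates them together with the feasibility condition Z_O + Z_I < C; it is our own wrapper, with illustrative numbers, not part of the paper's coding scheme.

```python
def achievable_rates(C, Z_O, Z_I):
    """Rates from the abstract: with Z_O jammable links and Z_I eavesdroppable
    links, reliable communication reaches rate C - Z_O, and reliable *and*
    secret communication reaches C - Z_O - Z_I, provided Z_O + Z_I < C."""
    if Z_O + Z_I >= C:
        raise ValueError("scheme requires Z_O + Z_I < C")
    return {"reliable": C - Z_O, "reliable_and_secret": C - Z_O - Z_I}

# Example: capacity 10, adversary can jam 3 links and eavesdrop on 2.
rates = achievable_rates(C=10, Z_O=3, Z_I=2)
# -> {'reliable': 7, 'reliable_and_secret': 5}
```

The gap between the two rates, Z_I, is the price of secrecy: that much capacity is spent masking the message from the eavesdropped links.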
Yang, J.-S.; Chang, J.-M.; Pai, K.-J.; Chan, H.-C., "Parallel Construction of Independent Spanning Trees on Enhanced Hypercubes," Parallel and Distributed Systems, IEEE Transactions on, vol. PP, no. 99, pp.1, 1, 5 Nov 2014. doi: 10.1109/TPDS.2014.2367498 The use of multiple independent spanning trees (ISTs) for data broadcasting in networks provides a number of advantages, including the increase of fault-tolerance, bandwidth and security. Thus, the designs of multiple ISTs on several classes of networks have been widely investigated. In this paper, we give an algorithm to construct ISTs on enhanced hypercubes Q_{n,k}, which contain folded hypercubes as a subclass. Moreover, we show that these ISTs are near optimal for heights and path lengths. Let D(Q_{n,k}) denote the diameter of Q_{n,k}. If n - k is odd or n - k ∈ {2, n}, we show that all the heights of ISTs are equal to D(Q_{n,k}) + 1, and thus are optimal. Otherwise, we show that each path from a node to the root in a spanning tree has length at most D(Q_{n,k}) + 2. In particular, no more than 2.15% of nodes have the maximum path length. As a by-product, we improve the upper bound of the wide diameter (respectively, fault diameter) of Q_{n,k} from these path lengths.
Keywords: Broadcasting; Educational institutions; Electronic mail; Fault tolerance; Fault tolerant systems; Hypercubes; Vegetation; enhanced hypercubes; fault diameter; folded hypercubes; independent spanning trees; interconnection networks; wide diameter (ID#: 15-3604)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6948321&isnumber=4359390
Ben Othman, S.; Trad, A.; Youssef, H., "Security Architecture For At-Home Medical Care Using Wireless Sensor Network," Wireless Communications and Mobile Computing Conference (IWCMC), 2014 International, pp.304,309, 4-8 Aug. 2014. doi: 10.1109/IWCMC.2014.6906374 Distributed wireless sensor network technologies have become one of the major research areas in healthcare industries due to rapid maturity in improving the quality of life. A Medical Wireless Sensor Network (MWSN), via continuous monitoring of vital health parameters over a long period of time, can enable physicians to make more accurate diagnoses and provide better treatment. MWSNs provide flexibility and cost savings to patients and healthcare industries. Medical data sensors on patients produce an increasingly large volume of increasingly diverse real-time data. The transmission of this data through hospital wireless networks becomes a crucial problem, because the health information of an individual is highly sensitive. It must be kept private and secure. In this paper, we propose a security model to protect the transfer of medical data in hospitals using MWSNs. We propose Compressed Sensing + Encryption as a strategy to achieve low-energy secure data transmission in sensor networks.
Keywords: body sensor networks; compressed sensing; cryptography; health care; hospitals; patient monitoring; MWSN; at-home medical care; compressed sensing-encryption; distributed wireless sensor network technologies; healthcare industries; hospital wireless networks; low-energy secure data transmission; medical data sensors; medical wireless sensor network; security architecture; vital health parameter continuous monitoring; Encryption; Medical services; Sensors; Servers; Wireless sensor networks; Data Transmission; Encryption; Medical Wireless Sensor Network; Security (ID#: 15-3605)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906374&isnumber=6906315
Xi Xiong; Haining Fan, "GF(2^n) Bit-Parallel Squarer Using Generalised Polynomial Basis For New Class Of Irreducible Pentanomials," Electronics Letters, vol. 50, no.9, pp. 655, 657, April 24 2014. doi: 10.1049/el.2014.0006 Explicit formulae and complexities of bit-parallel GF(2^n) squarers for a new class of irreducible pentanomials x^n + x^{n-1} + x^k + x + 1, where n is odd and 1 < k < (n - 1)/2, are presented. The squarer is based on the generalised polynomial basis of GF(2^n). Its gate delay matches the best results, whereas its XOR gate complexity is n + 1, which is only about two thirds of the current best results.
Keywords: logic gates; polynomials; GF(2^n) bit-parallel squarer; XOR gate; gate delay; generalised polynomial basis; irreducible pentanomial (ID#: 15-3606)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6809279&isnumber=6809270
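Squaring in GF(2^n) is cheap in hardware because, over GF(2), (Σ a_i x^i)^2 = Σ a_i x^{2i}: a squarer only spreads the coefficient bits and then reduces modulo the field polynomial. The software sketch below illustrates that two-step structure; for concreteness it is exercised with the well-known AES field polynomial x^8 + x^4 + x^3 + x + 1 rather than the paper's pentanomial class or its generalised polynomial basis.

```python
def gf2_square(a, n, poly):
    """Square an element a of GF(2^n), with `poly` the field polynomial
    given as a bitmask that includes the x^n term (e.g. 0x11B for AES).

    Over GF(2), (sum a_i x^i)^2 = sum a_i x^(2i), so squaring merely
    spreads the bits of `a`; the result is then reduced mod `poly`.
    """
    # Spread: bit i of a -> bit 2i of s.
    s = 0
    for i in range(n):
        if (a >> i) & 1:
            s |= 1 << (2 * i)
    # Reduce the degree-(2n-2) result modulo the degree-n polynomial.
    for i in range(2 * n - 2, n - 1, -1):
        if (s >> i) & 1:
            s ^= poly << (i - n)
    return s
```

Because squaring is the Frobenius map, applying the function n times returns the original element, which makes a convenient sanity check for any irreducible polynomial plugged in.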
Mo, Y.; Sinopoli, B., "Secure Estimation in the Presence of Integrity Attacks," Automatic Control, IEEE Transactions on, vol. PP, no. 99, pp.1, 1, 21 August 2014. doi: 10.1109/TAC.2014.2350231 We consider the estimation of a scalar state based on m measurements that can be potentially manipulated by an adversary. The attacker is assumed to have full knowledge about the true value of the state to be estimated and about the value of all the measurements. However, the attacker has limited resources and can only manipulate up to l of the m measurements. The problem is formulated as a minimax optimization, where one seeks to construct an optimal estimator that minimizes the “worst-case” expected cost against all possible manipulations by the attacker. We show that if the attacker can manipulate at least half the measurements (l ≥ m/2), then the optimal worst-case estimator should ignore all measurements and be based solely on the a-priori information. We provide the explicit form of the optimal estimator when the attacker can manipulate less than half the measurements (l < m/2), which is based on m - 2l local estimators. We further prove that such an estimator can be reduced into simpler forms for two special cases, i.e., either the estimator is symmetric and monotone or m = 2l + 1. Finally we apply the proposed methodology in the case of Gaussian measurements.
Keywords: Cost function; Security; Sensors; State estimation; Vectors (ID#: 15-3607)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881627&isnumber=4601496
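The threshold in this result is intuitive: once the adversary controls at least half the measurements, corrupted readings can perfectly impersonate honest ones, and only prior information remains usable. The sketch below illustrates that dichotomy with a plain trimmed mean over the m - 2l central measurements; it is an illustration of the idea, not the paper's optimal minimax estimator.

```python
def robust_estimate(measurements, l, prior):
    """Illustrative estimator under up to l corrupted measurements out of m.

    If the adversary controls at least half the measurements (l >= m/2),
    no measurement can be trusted, so fall back to the a-priori value.
    Otherwise discard the l smallest and l largest values and average the
    m - 2l that remain, which bounds the attacker's influence: at least
    one honest value brackets every surviving corrupted one.
    """
    m = len(measurements)
    if 2 * l >= m:
        return prior
    kept = sorted(measurements)[l:m - l]
    return sum(kept) / len(kept)
```

With one corrupted sensor among four, a single wild outlier is trimmed away; with one corrupted sensor among two, the function ignores the data entirely and returns the prior.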
Yan-Xiao Liu, "Efficient t-Cheater Identifiable (k, n) Secret-Sharing Scheme for t ⩽ ⌊(k - 2)/2⌋," Information Security, IET, vol. 8, no. 1, pp.37, 41, Jan. 2014. doi: 10.1049/iet-ifs.2012.0322 In Eurocrypt 2011, Obana proposed a (k, n) secret-sharing scheme that can identify up to ⌊(k - 2)/2⌋ cheaters. The number of cheaters that this scheme can identify meets its upper bound. When the number of cheaters t satisfies t ≤ ⌊(k - 1)/3⌋, this scheme is extremely efficient since the size of share |Vi| can be written as |Vi| = |S|/ε, which almost meets its lower bound, where |S| denotes the size of the secret and ε denotes the successful cheating probability; when the number of cheaters t is close to ⌊(k - 2)/2⌋, the size of share is upper bounded by |Vi| = (n·(t + 1)·2^{3t-1}|S|)/ε. A new (k, n) secret-sharing scheme capable of identifying ⌊(k - 2)/2⌋ cheaters is presented in this study. Considering the general case that k shareholders are involved in secret reconstruction, the size of share of the proposed scheme is |Vi| = (2^{k-1}|S|)/ε, which is independent of the parameters t and n. On the other hand, the size of share in Obana's scheme can be rewritten as |Vi| = (n·(t + 1)·2^{k-1}|S|)/ε under the same condition. With respect to the size of share, the proposed scheme is more efficient than the previous one when the number of cheaters t is close to ⌊(k - 2)/2⌋.
Keywords: probability; public key cryptography; (k, n) secret-sharing scheme; Obana's scheme; cheating probability; k shareholders; secret reconstruction (ID#: 15-3608)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6687156&isnumber=6687150
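For readers unfamiliar with the primitive, a (k, n) secret-sharing scheme in the Shamir style fits in a few lines; the cheater-identification layer that distinguishes Obana's and Liu's constructions is omitted, and the prime modulus below is an arbitrary illustrative choice.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime, large enough for demo secrets

def make_shares(secret, k, n):
    """Shamir (k, n) sharing: evaluate a random degree-(k-1) polynomial
    with constant term `secret` at the points x = 1..n."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Any k of the n shares reconstruct the secret exactly; fewer than k reveal nothing, which is the information-theoretic baseline the cheater-identification schemes build on. (The modular inverse via three-argument `pow` requires Python 3.8+.)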
Ta-Yuan Liu; Mukherjee, P.; Ulukus, S.; Shih-Chun Lin; Hong, Y.-W.P., "Secure DoF of MIMO Rayleigh block fading wiretap channels with No CSI anywhere," Communications (ICC), 2014 IEEE International Conference on, pp.1959,1964, 10-14 June 2014. doi: 10.1109/ICC.2014.6883610 We consider the block Rayleigh fading multiple-input multiple-output (MIMO) wiretap channel with no prior channel state information (CSI) available at any of the terminals. The channel gains remain constant in a coherence time of T symbols, and then change to another independent realization. The transmitter, the legitimate receiver and the eavesdropper have n_t, n_r and n_e antennas, respectively. We determine the exact secure degrees of freedom (s.d.o.f.) of this system when T ≥ 2 min(n_t, n_r). We show that, in this case, the s.d.o.f. is exactly (min(n_t, n_r) - n_e)^+ · (T - min(n_t, n_r))/T. The first term can be interpreted as the eavesdropper with n_e antennas taking away n_e antennas from both the transmitter and the legitimate receiver. The second term can be interpreted as a fraction of s.d.o.f. being lost due to the lack of CSI at the legitimate receiver. In particular, the fraction lost, min(n_t, n_r)/T, can be interpreted as the fraction of channel uses dedicated to training the legitimate receiver for it to learn its own CSI. We prove that this s.d.o.f. can be achieved by employing a constant norm channel input, which can be viewed as a generalization of discrete signalling to multiple dimensions.
Keywords: MIMO communication; Rayleigh channels; radio receivers; radio transmitters; receiving antennas; telecommunication security; transmitting antennas; CSI; MIMO Rayleigh block fading wiretap channels secure DoF; antennas; channel state information; degrees of freedom; discrete signalling; multiple input multiple output; receiver; s.d.o.f; transmitter; Coherence; Fading; MIMO; Receivers; Transmitting antennas; Upper bound (ID#: 15-3609)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883610&isnumber=6883277
Yanbing Liu; Qingyun Liu; Ping Liu; Jianlong Tan; Li Guo, "A Factor-Searching-Based Multiple String Matching Algorithm For Intrusion Detection," Communications (ICC), 2014 IEEE International Conference on, pp.653, 658, 10-14 June 2014. doi: 10.1109/ICC.2014.6883393 Multiple string matching plays a fundamental role in network intrusion detection systems. Automata-based multiple string matching algorithms like AC, SBDM and SBOM are widely used in practice, but the huge memory usage of automata prevents them from being applied to a large-scale pattern set. Meanwhile, poor cache locality of huge automata degrades the matching speed of algorithms. Here we propose a space-efficient multiple string matching algorithm BVM, which makes use of bit-vectors and a succinct hash table to replace the automata used in factor-searching-based algorithms. The space complexity of the proposed algorithm is O(rm^2 + Σ_{p∈P} |p|), which is more space-efficient than that of the classic automata-based algorithms. Experiments on datasets including Snort, ClamAV, a URL blacklist and synthetic rules show that the proposed algorithm significantly reduces memory usage and still runs at a fast matching speed. Above all, BVM costs less than 0.75% of the memory usage of AC, and is capable of matching millions of patterns efficiently.
Keywords: automata theory; security of data; string matching; AC; ClamAV; SBDM; SBOM; Snort; URL blacklist; automata-based multiple string matching algorithms; bit-vector; factor searching-based algorithms; factor-searching-based multiple string matching algorithm; huge memory usage; matching speed; network intrusion detection systems; space complexity; space-efficient multiple string matching algorithm BVM; succinct hash table; synthetic rules; Arrays; Automata; Intrusion detection; Pattern matching; Time complexity; automata; intrusion detection; multiple string matching; space-efficient (ID#: 15-3610)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883393&isnumber=6883277
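The bit-vector idea can be seen in miniature in the classic Shift-And algorithm, which packs the NFA states of several patterns into one integer so each text character costs a shift, an OR and an AND. This is a far simpler relative of BVM (no succinct hash table, and the total pattern length must fit a machine word for the hardware speedup), shown here only to make the bit-parallel state update concrete; the pattern set in the test is illustrative.

```python
def build_masks(patterns):
    """Pack all patterns into one bit-vector NFA (classic Shift-And).

    Bit positions are assigned by concatenating the patterns; `initial`
    marks each pattern's first state and `final` its last. A carry across
    a pattern boundary always lands on a start bit, which `initial` sets
    every step anyway, so packing is safe."""
    masks, initial, final, starts = {}, 0, 0, []
    pos = 0
    for p in patterns:
        starts.append(pos)
        initial |= 1 << pos
        for ch in p:
            masks[ch] = masks.get(ch, 0) | (1 << pos)
            pos += 1
        final |= 1 << (pos - 1)
    return masks, initial, final, starts, patterns

def search(text, packed):
    """Report (end_index, pattern) for every occurrence in `text`."""
    masks, initial, final, starts, patterns = packed
    state, hits = 0, []
    for i, ch in enumerate(text):
        state = ((state << 1) | initial) & masks.get(ch, 0)
        found = state & final
        while found:
            bit = found & -found
            end = bit.bit_length() - 1   # map final bit back to its pattern
            for s, p in zip(starts, patterns):
                if s <= end < s + len(p):
                    hits.append((i, p))
                    break
            found ^= bit
    return hits
```

Bit j of `state` is set exactly when the last j+1 characters of the text match a prefix of the pattern owning that position, so matches of all patterns are tracked simultaneously in one integer.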
Baofeng Wu; Qingfang Jin; Zhuojun Liu; Dongdai Lin, "Constructing Boolean Functions With Potentially Optimal Algebraic Immunity Based On Additive Decompositions Of Finite Fields (Extended Abstract)," Information Theory (ISIT), 2014 IEEE International Symposium on, pp.1361,1365, June 29 2014-July 4 2014. doi: 10.1109/ISIT.2014.6875055 We propose a general approach to construct cryptographically significant Boolean functions of (r + 1)m variables based on the additive decomposition F_{2^{rm}} × F_{2^m} of the finite field F_{2^{(r+1)m}}, where r ≥ 1 is odd and m ≥ 3. A class of unbalanced functions is constructed first via this approach, which coincides with a variant of the unbalanced class of generalized Tu-Deng functions in the case r = 1. Functions belonging to this class have high algebraic degree, but their algebraic immunity does not exceed m, which is impossible to be optimal when r > 1. By modifying these unbalanced functions, we obtain a class of balanced functions which have optimal algebraic degree and high nonlinearity (shown by a lower bound we prove). These functions have optimal algebraic immunity provided a combinatorial conjecture on binary strings which generalizes the Tu-Deng conjecture is true. Computer investigations show that, at least for small numbers of variables, functions from this class also behave well against fast algebraic attacks.
Keywords: Boolean functions; combinatorial mathematics; cryptography; additive decomposition; algebraic immunity; binary strings; combinatorial conjecture; cryptographic significant Boolean functions; fast algebraic attacks; finite field; generalized Tu-Deng functions; optimal algebraic degree; unbalanced functions; Additives; Boolean functions; Cryptography; Electronic mail; FAA; Information theory; Transforms (ID#: 15-3611)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875055&isnumber=6874773
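Nonlinearity, one of the criteria such constructions optimize, can be computed from the Walsh-Hadamard spectrum of a function's truth table as 2^{n-1} - max|W_f|/2. The sketch below is the standard textbook transform, not code from the paper.

```python
def walsh_spectrum(tt):
    """In-place fast Walsh-Hadamard transform of a Boolean function given
    as a truth table (list of 0/1 values of length 2^n)."""
    w = [1 - 2 * b for b in tt]   # map 0/1 -> +1/-1
    step = 1
    while step < len(w):
        for i in range(0, len(w), 2 * step):
            for j in range(i, i + step):
                a, b = w[j], w[j + step]
                w[j], w[j + step] = a + b, a - b
        step *= 2
    return w

def nonlinearity(tt):
    """Distance to the nearest affine function: (2^n - max|W_f|) / 2."""
    w = walsh_spectrum(tt)
    return (len(tt) - max(abs(v) for v in w)) // 2
```

For two variables, AND is bent (nonlinearity 1, the maximum), while a coordinate projection is affine (nonlinearity 0), matching the spectrum-based formula.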
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Discrete and Continuous Optimization |
Discrete and continuous optimization are two mathematical approaches to problem solving. The research works cited here are primarily focused on continuous optimization. They appeared in 2014 between January and October.
Distributed Denial of Service Attacks (DDoS Attacks) |
Distributed Denial of Service Attacks continue to be among the most prolific forms of attack against information systems. According to the NSFOCUS DDoS Report for 2014 (ID#:14-1643) (available at: http://en.nsfocus.com/2014/SecurityReport_0320/165.html), DDoS attacks occur at the rate of 28 per hour. Research into methods of prevention, detection, response, and mitigation is also substantial, as the articles presented here show.
Host-based Intrusion Detection |
The research presented here on host-based intrusion detection systems addresses semantic approaches, power grid substation protection, an architecture for modular mobile IDS, and a hypervisor based system. All works cited are from 2014.
Creech, G.; Jiankun Hu, "A Semantic Approach to Host-Based Intrusion Detection Systems Using Contiguous and Discontiguous System Call Patterns," Computers, IEEE Transactions on, vol. 63, no. 4, pp.807, 819, April 2014. doi: 10.1109/TC.2013.13 Host-based anomaly intrusion detection system design is very challenging due to the notoriously high false alarm rate. This paper introduces a new host-based anomaly intrusion detection methodology using discontiguous system call patterns, in an attempt to increase detection rates whilst reducing false alarm rates. The key concept is to apply a semantic structure to kernel level system calls in order to reflect intrinsic activities hidden in high-level programming languages, which can help understand program anomaly behaviour. Excellent results were demonstrated using a variety of decision engines, evaluating the KDD98 and UNM data sets, and a new, modern data set. The ADFA Linux data set was created as part of this research using a modern operating system and contemporary hacking methods, and is now publicly available. Furthermore, the new semantic method possesses an inherent resilience to mimicry attacks, and demonstrated a high level of portability between different operating system versions.
Keywords: high level languages; operating systems (computers); security of data; KDD98 data sets; UNM data sets; contemporary hacking methods; contiguous system call patterns; discontiguous system call patterns; false alarm rates; high-level programming languages; host-based anomaly intrusion detection system design; modern operating system; program anomaly behaviour; semantic structure; Clocks; Complexity theory; Computer architecture; Cryptography; Gaussian processes; Logic gates; Registers; ADFA-LD; Intrusion detection; anomaly detection; computer security; host-based IDS; system calls (ID#: 15-3612)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6419701&isnumber=6774900
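A simplified rendering of the discontiguous-pattern idea is to treat every subsequence of a fixed-width window over the system-call trace as a feature, so gapped patterns like (open, write) are captured alongside contiguous ones. The helper below is an illustrative reduction of that idea, not the paper's semantic-dictionary algorithm, and the syscall names in the test are hypothetical.

```python
from itertools import combinations

def syscall_patterns(trace, width):
    """Extract contiguous and discontiguous patterns from a system-call
    trace: within each sliding window of `width` calls, every subsequence
    of length >= 2 becomes a (possibly gapped) pattern. The resulting set
    can feed a decision engine that scores unseen traces by how many of
    their patterns are absent from the normal-behaviour dictionary."""
    patterns = set()
    for start in range(len(trace) - width + 1):
        window = tuple(trace[start:start + width])
        for r in range(2, width + 1):
            for combo in combinations(range(width), r):
                patterns.add(tuple(window[i] for i in combo))
    return patterns
```

Note that calls which never co-occur within a window, such as the first and last calls of a long trace, produce no pattern, which is what limits the dictionary's size.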
Al-Jarrah, O.; Arafat, A., "Network Intrusion Detection System Using Attack Behavior Classification," Information and Communication Systems (ICICS), 2014 5th International Conference on, pp. 1, 6, 1-3 April 2014. doi: 10.1109/IACS.2014.6841978 Intrusion Detection Systems (IDS) have become a necessity in computer security systems because of the increase in unauthorized accesses and attacks. Intrusion detection is a major component of computer security systems, and an IDS can be classified as a Host-based Intrusion Detection System (HIDS), which protects a certain host or system, or a Network-based Intrusion Detection System (NIDS), which protects a network of hosts and systems. This paper addresses probe attacks, or reconnaissance attacks, which try to collect any possible relevant information in the network. Network probe attacks have two types: Host Sweep and Port Scan attacks. Host Sweep attacks determine the hosts that exist in the network, while Port Scan attacks determine the available services that exist in the network. This paper uses an intelligent system to maximize the recognition rate of network attacks by embedding the temporal behavior of the attacks into a TDNN neural network structure. The proposed system consists of five modules: packet capture engine, preprocessor, pattern recognition, classification, and monitoring and alert module. We have tested the system in a real environment where it has shown good capability in detecting attacks. In addition, the system has been tested using the DARPA 1998 dataset with a 100% recognition rate. In fact, our system can recognize attacks in constant time.
Keywords: computer network security; neural nets; pattern classification; HIDS; NIDS; TDNN neural network structure; alert module; attack behavior classification; computer security systems; host sweep attacks; host-based intrusion detection system; network intrusion detection system; network probe attacks; packet capture engine; pattern classification; pattern recognition; port scan attacks; preprocessor; reconnaissance attacks; unauthorized accesses; IP networks; Intrusion detection; Neural networks; Pattern recognition; Ports (Computers); Probes; Protocols; Host sweep; Intrusion Detection Systems; Network probe attack; Port scan; TDNN neural network (ID#: 15-3613)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841978&isnumber=6841931
Junho Hong; Chen-Ching Liu; Govindarasu, M., "Integrated Anomaly Detection for Cyber Security of the Substations," Smart Grid, IEEE Transactions on, vol. 5, no. 4, pp. 1643, 1653, July 2014. doi: 10.1109/TSG.2013.2294473 Cyber intrusions to substations of a power grid are a source of vulnerability since most substations are unmanned and with limited protection of the physical security. In the worst case, simultaneous intrusions into multiple substations can lead to severe cascading events, causing catastrophic power outages. In this paper, an integrated Anomaly Detection System (ADS) is proposed which contains host- and network-based anomaly detection systems for the substations, and simultaneous anomaly detection for multiple substations. Potential scenarios of simultaneous intrusions into the substations have been simulated using a substation automation testbed. The host-based anomaly detection considers temporal anomalies in the substation facilities, e.g., user-interfaces, Intelligent Electronic Devices (IEDs) and circuit breakers. The malicious behaviors of substation automation based on multicast messages, e.g., Generic Object Oriented Substation Event (GOOSE) and Sampled Measured Value (SMV), are incorporated in the proposed network-based anomaly detection. The proposed simultaneous intrusion detection method is able to identify the same type of attacks at multiple substations and their locations. The result is a new integrated tool for detection and mitigation of cyber intrusions at a single substation or multiple substations of a power grid.
Keywords: computer network security; power engineering computing; power grids; power system reliability; substation automation; ADS; GOOSE; IED; SMV; catastrophic power outages; circuit breakers; cyber intrusions; generic object oriented substation event; host-based anomaly detection systems; integrated anomaly detection system; intelligent electronic devices; malicious behaviors; multicast messages; network-based anomaly detection systems; physical security; power grid; sampled measured value; severe cascading events; simultaneous anomaly detection; simultaneous intrusion detection method; substation automation testbed; substation facilities; substations; temporal anomalies; user-interfaces; Circuit breakers; Computer security; Intrusion detection; Power grids; Substation automation; Anomaly detection; GOOSE anomaly detection; SMV anomaly detection and intrusion detection; cyber security of substations (ID#: 15-3614)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786500&isnumber=6839066
Nikolai, J.; Yong Wang, "Hypervisor-Based Cloud Intrusion Detection System," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp. 989, 993, 3-6 Feb 2014. doi: 10.1109/ICCNC.2014.6785472 Shared resources are an essential part of cloud computing. Virtualization and multi-tenancy provide a number of advantages for increasing resource utilization and for providing on-demand elasticity. However, these cloud features also raise many security concerns related to cloud computing resources. In this paper, we propose an architecture and approach for leveraging the virtualization technology at the core of cloud computing to perform intrusion detection security using hypervisor performance metrics. Through the use of virtual machine performance metrics gathered from hypervisors, such as packets transmitted/received, block device read/write requests, and CPU utilization, we demonstrate and verify that suspicious activities can be profiled without detailed knowledge of the operating system running within the virtual machines. The proposed hypervisor-based cloud intrusion detection system does not require additional software installed in virtual machines, has many advantages compared to host-based and network-based intrusion detection systems, and can complement these traditional approaches to intrusion detection.
Keywords: cloud computing; computer network security; software architecture; software metrics; virtual machines; virtualisation; CPU utilization; block device read requests; block device write requests; cloud computing resources; cloud features; hypervisor performance metrics; hypervisor-based cloud intrusion detection system; intrusion detection security; multitenancy; operating system; packet transmission; received packets; shared resource utilization; virtual machine performance metrics; virtualization; virtualization technology; Cloud computing; Computer crime; Intrusion detection; Measurement; Virtual machine monitors; Virtual machining; Cloud Computing; hypervisor; intrusion detection (ID#: 15-3615)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785472&isnumber=6785290
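A minimal version of this kind of metric profiling is a per-metric z-score against a learned baseline of hypervisor counters. The metric names and the 3-sigma threshold below are illustrative assumptions, not values taken from the paper.

```python
import statistics

def profile(baseline):
    """Learn a per-metric (mean, stdev) model from baseline samples.

    Each sample is a dict of hypervisor counters for one VM observation
    window (e.g. packets_tx, block_reads, cpu_util; the names here are
    illustrative, not the paper's feature set)."""
    keys = baseline[0].keys()
    return {k: (statistics.mean([s[k] for s in baseline]),
                statistics.pstdev([s[k] for s in baseline]))
            for k in keys}

def is_anomalous(sample, model, threshold=3.0):
    """Flag a sample if any metric deviates more than `threshold`
    standard deviations from its baseline mean."""
    for k, (mu, sigma) in model.items():
        if sigma == 0:
            if sample[k] != mu:
                return True
        elif abs(sample[k] - mu) / sigma > threshold:
            return True
    return False
```

Because the counters come from the hypervisor, nothing is installed inside the guest, which is the property the paper's architecture relies on.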
Salman, A.; Elhajj, I.H.; Chehab, A.; Kayssi, A., "DAIDS: An Architecture for Modular Mobile IDS," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp.328, 333, 13-16 May 2014. doi: 10.1109/WAINA.2014.54 The popularity of mobile devices and the enormous number of third party mobile applications in the market have naturally led to several vulnerabilities being identified and abused. This is coupled with the immaturity of intrusion detection system (IDS) technology targeting mobile devices. In this paper we propose a modular host-based IDS framework for mobile devices that uses behavior analysis to profile applications on the Android platform. Anomaly detection can then be used to categorize malicious behavior and alert users. The proposed system accommodates different detection algorithms, and is being tested at a major telecom operator in North America. This paper highlights the architecture, findings, and lessons learned.
Keywords: Android (operating system); mobile computing; mobile radio; security of data; Android platform; DAIDS; North America; anomaly detection; behavior analysis; detection algorithms; intrusion detection system; malicious behavior; mobile devices; modular mobile IDS; profile applications; telecom operator; third party mobile applications; Androids; Databases; Detectors; Humanoid robots; Intrusion detection; Malware; Monitoring; behavior profiling; dynamic analysis; intrusion detection (ID#: 15-3616)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844659&isnumber=6844560
Can, O., "Mobile Agent Based Intrusion Detection System," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp.1363, 1366, 23-25 April 2014. doi: 10.1109/SIU.2014.6830491 An intrusion detection system (IDS) inspects all inbound and outbound network activity and identifies suspicious patterns that may indicate a network or system attack from someone attempting to break into or compromise a system. In a network-based system, or NIDS, the individual packets flowing through a network are analyzed. In a host-based system, the IDS examines the activity on each individual computer or host. IDS techniques are divided into two categories: misuse detection and anomaly detection. In recent years, mobile agent based technology has been used for distributed systems, having the characteristics of mobility and autonomy. In this work we aim to combine an IDS with the mobile agent concept for a more scalable, effective, and knowledgeable system.
Keywords: mobile agents; security of data; NIDS; anomaly detection; host-based system; misuse detection; mobile agent based intrusion detection system; network activity; network-based system; suspicious patterns identification; Computers; Conferences; Informatics; Internet; Intrusion detection; Mobile agents; Signal processing; cyber attack; intrusion detection; mobile agent (ID#: 15-3617)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830491&isnumber=6830164
Sridhar, S.; Govindarasu, M., "Model-Based Attack Detection and Mitigation for Automatic Generation Control," Smart Grid, IEEE Transactions on, vol. 5, no. 2, pp. 580, 591, March 2014. doi: 10.1109/TSG.2014.2298195 Cyber systems play a critical role in improving the efficiency and reliability of power system operation and ensuring the system remains within safe operating margins. An adversary can inflict severe damage to the underlying physical system by compromising the control and monitoring applications facilitated by the cyber layer. Protection of critical assets from electronic threats has traditionally been done through conventional cyber security measures that involve host-based and network-based security technologies. However, it has been recognized that highly skilled attacks can bypass these security mechanisms to disrupt the smooth operation of control systems. There is a growing need for cyber-attack-resilient control techniques that look beyond traditional cyber defense mechanisms to detect highly skilled attacks. In this paper, we make the following contributions. We first demonstrate the impact of data integrity attacks on Automatic Generation Control (AGC) on power system frequency and electricity market operation. We propose a general framework to the application of attack resilient control to power systems as a composition of smart attack detection and mitigation. Finally, we develop a model-based anomaly detection and attack mitigation algorithm for AGC. We evaluate the detection capability of the proposed anomaly detection algorithm through simulation studies. Our results show that the algorithm is capable of detecting scaling and ramp attacks with low false positive and negative rates. The proposed model-based mitigation algorithm is also efficient in maintaining system frequency within acceptable limits during the attack period.
Keywords: data integrity; frequency control; power system control; power system reliability; power system stability; security of data; AGC; attack mitigation algorithm; attack resilient control; automatic generation control; critical assets protection; cyber layer; cyber security measures; cyber systems; cyber-attack-resilient control techniques; data integrity attacks; electricity market operation; electronic threats; host-based security technologies; model-based anomaly detection algorithm; model-based mitigation algorithm; network-based security technologies; physical system; power system frequency; power system operation reliability; ramp attacks; scaling attacks; smart attack detection; smart attack mitigation; Automatic generation control; Electricity supply industry; Frequency measurement; Generators; Power measurement; Power system stability; Anomaly detection; automatic generation control; intrusion detection systems; kernel density estimation; supervisory control and data acquisition (ID#: 15-3618)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6740883&isnumber=6740878
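Ramp attacks are the hard case for per-sample limit checks because each individual deviation stays small; a cumulative-sum (CUSUM) test over the residuals between measured and model-predicted frequency accumulates the drift until it crosses a threshold. The sketch below uses illustrative parameters and a generic two-sided CUSUM, not the paper's kernel-density-based detector.

```python
def cusum_detect(measured, predicted, drift=0.01, threshold=0.2):
    """Two-sided CUSUM over residuals between measured frequency samples
    and a model-based prediction.

    `drift` is the slack subtracted each step so benign noise does not
    accumulate; `threshold` is the alarm level. Both values here are
    illustrative, not tuned constants from the paper. Returns the index
    of the first alarmed sample, or None if no alarm is raised."""
    hi = lo = 0.0
    for i, (m, p) in enumerate(zip(measured, predicted)):
        r = m - p
        hi = max(0.0, hi + r - drift)   # accumulates positive drift (ramp up)
        lo = max(0.0, lo - r - drift)   # accumulates negative drift (ramp down)
        if hi > threshold or lo > threshold:
            return i
    return None
```

A slow upward ramp of 0.01 per sample, invisible to a per-sample check against a 0.2 limit until sample 20, is flagged much earlier because the residuals compound.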
Human Trust |
Human behavior is complex and that complexity creates a tremendous problem for cybersecurity. The works cited here address a range of human trust issues related to behaviors, deception, enticement, sentiment and other factors difficult to isolate and quantify. All appeared in 2014.
Sousa, S.; Dias, P.; Lamas, D., "A Model for Human-Computer Trust: A Key Contribution For Leveraging Trustful Interactions," Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on, pp.1, 6, 18-21 June 2014. doi: 10.1109/CISTI.2014.6876935 This article addresses trust in computer systems as a social phenomenon, which depends on the type of relationship that is established through the computer, or with other individuals. It starts by theoretically contextualizing trust, and then situates trust in the field of computer science. It then describes the proposed model, which builds on what one perceives to be trustworthy and is influenced by a number of factors such as the history of participation and the user's perceptions. It concludes by positioning the proposed model as a key contribution for leveraging trustful interactions, and by proposing that it serve as a complement to foster users' trust needs in Human-Computer Interaction and computer-mediated interactions.
Keywords: computer mediated communication; human computer interaction; computer science; computer systems; computer-mediated interactions; human-computer interaction; human-computer trust model; participation history; social phenomenon; trustful interaction leveraging; user perceptions; user trust needs; Collaboration; Computational modeling; Computers; Context; Correlation; Educational institutions; Psychology; Collaboration; Engagement; Human-computer trust; Interaction design; Participation (ID#: 15-3619)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876935&isnumber=6876860
Kounelis, I.; Baldini, G.; Neisse, R.; Steri, G.; Tallacchini, M.; Guimaraes Pereira, A., "Building Trust in the Human-Internet of Things Relationship," Technology and Society Magazine, IEEE, vol. 33, no. 4, pp. 73-80, Winter 2014. doi: 10.1109/MTS.2014.2364020 The concept of the Internet of Things (IoT) was initially proposed by Kevin Ashton in 1998 [1], where it was linked to RFID technology. More recently, the initial idea has been extended to support pervasive connectivity and the integration of the digital and physical worlds [2], encompassing virtual and physical objects, including people and places.
Keywords: Internet of things; Privacy; Security; Senior citizens; Smart homes; Trust management (ID#: 15-3620)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6969184&isnumber=6969174
Fei Hao; Geyong Min; Man Lin; Changqing Luo; Yang, L.T., "MobiFuzzyTrust: An Efficient Fuzzy Trust Inference Mechanism in Mobile Social Networks," Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no. 11, pp. 2944-2955, Nov. 2014. doi: 10.1109/TPDS.2013.309 Mobile social networks (MSNs) facilitate connections between mobile users and allow them to find other potential users who have similar interests through mobile devices, communicate with them, and benefit from their information. As MSNs are distributed public virtual social spaces, the available information may not be trustworthy to all. Therefore, mobile users are often at risk since they may not have any prior knowledge about others who are socially connected. To address this problem, trust inference plays a critical role for establishing social links between mobile users in MSNs. Taking into account the nonsemantical representation of trust between users in existing trust models for social networks, this paper proposes a new fuzzy inference mechanism, namely MobiFuzzyTrust, for inferring trust semantically from one mobile user to another that may not be directly connected in the trust graph of MSNs. First, a mobile context including an intersection of prestige of users, location, time, and social context is constructed. Second, a mobile context aware trust model is devised to evaluate the trust value between two mobile users efficiently. Finally, the fuzzy linguistic technique is used to express the trust between two mobile users and enhance the human's understanding of trust. A real-world mobile dataset is adopted to evaluate the performance of the MobiFuzzyTrust inference mechanism. The experimental results demonstrate that MobiFuzzyTrust can efficiently infer trust with a high precision.
Keywords: fuzzy reasoning; fuzzy set theory; graph theory; mobile computing; security of data; social networking (online); trusted computing; MSN; MobiFuzzyTrust inference mechanism; distributed public virtual social spaces; fuzzy linguistic technique; fuzzy trust inference mechanism; mobile context aware trust model; mobile devices; mobile social networks; mobile users; nonsemantical trust representation; real-world mobile dataset; social links; trust graph; trust models; trust value evaluation; Computational modeling; Context; Context modeling; Mobile communication; Mobile handsets; Pragmatics; Social network services; Mobile social networks; fuzzy inference; linguistic terms; mobile context; trust (ID#: 15-3621)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6684155&isnumber=6919360
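The final step of the abstract above, expressing a numeric trust value in linguistic terms, can be sketched with triangular membership functions. This is a generic fuzzy-linguistic illustration, not the paper's actual MobiFuzzyTrust model; the term names and breakpoints are invented for the example.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical linguistic terms over a [0, 1] trust scale.
TERMS = {
    "low":    (-0.01, 0.0, 0.5),
    "medium": (0.0, 0.5, 1.0),
    "high":   (0.5, 1.0, 1.01),
}

def linguistic_trust(value):
    """Return the best-matching linguistic label and all term memberships."""
    memberships = {t: tri(value, *p) for t, p in TERMS.items()}
    best = max(memberships, key=memberships.get)
    return best, memberships
```

A trust value of 0.85, for example, maps mostly to "high" with a residual "medium" membership, which is the kind of graded, human-readable output the fuzzy linguistic technique aims for.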
Dondio, P.; Longo, L., "Computing Trust as a Form of Presumptive Reasoning," Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014 IEEE/WIC/ACM International Joint Conferences on, vol. 2, pp. 274-281, 11-14 Aug. 2014. doi: 10.1109/WI-IAT.2014.108 This study describes and evaluates a novel trust model for a range of collaborative applications. The model assumes that humans routinely choose to trust their peers by relying on a few recurrent presumptions, which are domain independent and which form a recognisable trust expertise. We refer to these presumptions as trust schemes, a specialised version of Walton's argumentation schemes. Evidence is provided about the efficacy of trust schemes using a detailed experiment on an online community of 80,000 members. Results show how the proposed trust schemes are more effective in trust computation when they are combined together and when their plausibility in the selected context is considered.
Keywords: trusted computing; Walton argumentation schemes; presumptive reasoning; trust computing; trust expertise; trust model; trust schemes; Cognition; Communities; Computational modeling; Context; Fuzzy logic; Measurement; Standards; fuzzy logics; online communities; trust (ID#: 15-3622)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6927635&isnumber=6927590
Frauenstein, E.D.; von Solms, R., "Combatting Phishing: A Holistic Human Approach," Information Security for South Africa (ISSA), 2014, pp. 1-10, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950508 Phishing remains a lucrative market for cyber criminals, mostly because of the vulnerable human element. Through emails and spoofed websites, phishers exploit almost any opportunity, using major events, considerable financial awards, fake warnings and the trusted reputation of established organizations as a basis to gain their victims' trust. For many years, humans have often been referred to as the `weakest link' in protecting information. To gain their victims' trust, phishers continue to use sophisticated-looking emails and spoofed websites to trick them, and rely on their victims' lack of knowledge, lax security behavior and organizations' inadequate security measures for protecting themselves and their clients. As such, phishing security controls and vulnerabilities can arguably be classified into three main elements, namely human factors (H), organizational aspects (O) and technological controls (T). All three of these elements have the common feature of human involvement and, as such, security gaps are inevitable. Each element also functions as both security control and security vulnerability. A holistic framework for combatting phishing is required whereby the human feature in all three of these elements is enhanced by means of a security education, training and awareness programme. This paper discusses the educational factors required to form part of a holistic framework, addressing the HOT elements as well as the relationships between these elements towards combatting phishing. The development of this framework uses the principles of design science to ensure that it is developed with rigor. Furthermore, this paper reports on the verification of the framework.
Keywords: computer crime; computer science education; human factors; organisational aspects; unsolicited e-mail; HOT elements; emails; awareness programme; cyber criminals; design science principles; educational factors; fake warnings; financial awards; holistic human approach; human factors; lax security behavior; organizational aspects; phishing security controls; security education; security gaps; security training; security vulnerability; spoofed Web sites; technological controls; trusted reputation; ISO; Lead; Security; Training; COBIT; agency theory; human factors; organizational aspects; phishing; security education training and awareness; social engineering; technological controls; technology acceptance model (ID#: 15-3623)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950508&isnumber=6950479
Ing-Ray Chen; Jia Guo, "Dynamic Hierarchical Trust Management of Mobile Groups and Its Application to Misbehaving Node Detection," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp. 49-56, 13-16 May 2014. doi: 10.1109/AINA.2014.13 In military operation or emergency response situations, very frequently a commander will need to assemble and dynamically manage Community of Interest (COI) mobile groups to achieve a critical mission assigned despite failure, disconnection or compromise of COI members. We combine the designs of COI hierarchical management for scalability and reconfigurability with COI dynamic trust management for survivability and intrusion tolerance to compose a scalable, reconfigurable, and survivable COI management protocol for managing COI mission-oriented mobile groups in heterogeneous mobile environments. A COI mobile group in this environment would consist of heterogeneous mobile entities such as communication-device-carried personnel/robots and aerial or ground vehicles operated by humans exhibiting not only quality of service (QoS) characters, e.g., competence and cooperativeness, but also social behaviors, e.g., connectivity, intimacy and honesty. A COI commander or a subtask leader must measure trust with both social and QoS cognition depending on mission task characteristics and/or trustee properties to ensure successful mission execution. In this paper, we present a dynamic hierarchical trust management protocol that can learn from past experiences and adapt to changing environment conditions, e.g., increasing misbehaving node population, evolving hostility and node density, etc. to enhance agility and maximize application performance.
With trust-based misbehaving node detection as an application, we demonstrate how our proposed COI trust management protocol is resilient to node failure, disconnection and capture events, and can help maximize application performance in terms of minimizing false negatives and positives in the presence of mobile nodes exhibiting vastly distinct QoS and social behaviors.
Keywords: emergency services; military communication; mobile computing; protocols; quality of service; telecommunication security; trusted computing; COI dynamic hierarchical trust management protocol; COI mission-oriented mobile group management; aerial vehicles; agility enhancement; application performance maximization; communication-device-carried personnel; community-of-interest mobile groups; competence; connectivity; cooperativeness; emergency response situations; ground vehicles; heterogeneous mobile entities; heterogeneous mobile environments; honesty; intimacy; intrusion tolerance; military operation; misbehaving node population; node density; quality-of-service characters; robots; social behaviors; survivable COI management protocol; trust measurement; trust-based misbehaving node detection; Equations; Mathematical model; Mobile communication; Mobile computing; Peer-to-peer computing; Protocols; Quality of service; Trust management; adaptability; community of interest; intrusion detection; performance analysis; scalability (ID#: 15-3624)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838647&isnumber=6838626
Athanasiou, G.; Fengou, M.-A.; Beis, A.; Lymberopoulos, D., "A Novel Trust Evaluation Method For Ubiquitous Healthcare Based On Cloud Computational Theory," Engineering in Medicine and Biology Society (EMBC), 2014 36th Annual International Conference of the IEEE, pp. 4503-4506, 26-30 Aug. 2014. doi: 10.1109/EMBC.2014.6944624 The notion of trust is considered to be the cornerstone of the patient-psychiatrist relationship. Thus, a trustful background is a fundamental requirement for the provision of effective Ubiquitous Healthcare (UH) services. In this paper, the issue of trust evaluation of UH providers when they register in a UH environment is addressed. For that purpose a novel trust evaluation method is proposed, based on cloud theory, exploiting User Profile attributes. This theory mimics human thinking regarding trust evaluation, and captures the fuzziness and randomness of this uncertain reasoning. Two case studies are investigated through simulation in MATLAB software, in order to verify the effectiveness of this novel method.
Keywords: cloud computing; health care; trusted computing; ubiquitous computing; uncertainty handling; MATLAB software; UH environment; cloud computational theory; cloud theory; trust evaluation method; ubiquitous healthcare; uncertain reasoning; user profile attributes; Conferences; Generators; MATLAB; MIMICs; Medical services; Pragmatics; TV (ID#: 15-3625)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6944624&isnumber=6943513
Howser, G.; McMillin, B., "A Modal Model of Stuxnet Attacks on Cyber-physical Systems: A Matter of Trust," Software Security and Reliability, 2014 Eighth International Conference on, pp. 225-234, June 30-July 2, 2014. doi: 10.1109/SERE.2014.36 Multiple Security Domains Nondeducibility, MSDND, yields results even when the attack hides important information from electronic monitors and human operators. Because MSDND is based upon modal frames, it is able to analyze the event system as it progresses rather than relying on traces of the system. Not only does it provide results as the system evolves, MSDND can point out attacks designed to be missed in other security models. This work examines information flow disruption attacks such as Stuxnet and formally explains the role that implicit trust in the cyber security of a cyber physical system (CPS) plays in the success of the attack. The fact that the attack hides behind MSDND can be used to help secure the system by modifications to break MSDND and leave the attack nowhere to hide. Modal operators are defined to allow the manipulation of belief and trust states within the model. We show how the attack hides and uses the operator's trust to remain undetected. In fact, trust in the CPS is key to the success of the attack.
Keywords: security of data; trusted computing; CPS; MSDND; Stuxnet attacks; belief manipulation; cyber physical system; cyber security; cyber-physical systems; electronic monitors; event system analysis; human operators; implicit trust; information flow disruption attacks; modal frames; modal model; multiple security domains nondeducibility; security models; trust state manipulation; Analytical models; Bismuth; Cognition; Cost accounting; Monitoring; Security; Software; Stuxnet; cyber-physical systems; doxastic logic; information flow security; nondeducibility; security models (ID#: 15-3626)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895433&isnumber=6895396
Godwin, J.L.; Matthews, P., "Rapid Labelling Of SCADA Data To Extract Transparent Rules Using RIPPER," Reliability and Maintainability Symposium (RAMS), 2014 Annual, pp. 1-7, 27-30 Jan. 2014. doi: 10.1109/RAMS.2014.6798456 This paper addresses a robust methodology for developing a statistically sound, robust prognostic condition index and encapsulating this index as a series of highly accurate, transparent, human-readable rules. These rules can be used to further understand degradation phenomena and also provide transparency and trust for any underlying prognostic technique employed. A case study is presented on a wind turbine gearbox, utilising historical supervisory control and data acquisition (SCADA) data in conjunction with a physics of failure model. Training is performed without failure data, with the technique accurately identifying gearbox degradation and providing prognostic signatures up to 5 months before catastrophic failure occurred. A robust derivation of the Mahalanobis distance is employed to perform outlier analysis in the bivariate domain, enabling the rapid labelling of historical SCADA data on independent wind turbines. Following this, the RIPPER rule learner was utilised to extract transparent, human-readable rules from the labelled data. A mean classification accuracy of 95.98% against the autonomously derived condition labels was achieved on three independent test sets, with a mean kappa statistic of 93.96% reported. In total, 12 rules were extracted; following critical analysis by an independent domain expert, two thirds of the rules were deemed to be intuitive in modelling the fundamental degradation behaviour of the wind turbine gearbox.
Keywords: SCADA systems; condition monitoring; failure analysis; gears; knowledge based systems; maintenance engineering; mechanical engineering computing; wind turbines; Mahalanobis distance; RIPPER rule learner; SCADA data rapid labelling; catastrophic failure; failure model; mean kappa statistic; robust prognostic condition index; supervisory control and data acquisition; wind turbine gearbox degradation; Accuracy; Gears; Indexes; Inspection; Maintenance engineering; Robustness; Wind turbines; Condition index; Data mining; prognosis; rule extraction; wind turbine SCADA data (ID#: 15-3627)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798456&isnumber=6798433
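The labelling step described above, flagging bivariate SCADA observations whose Mahalanobis distance exceeds a threshold, can be sketched as follows. The paper employs a robust derivation of the distance; this illustration uses the classical sample mean and covariance, and the threshold value is an arbitrary assumption.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def mahalanobis_2d(point, data):
    """Mahalanobis distance of a bivariate point from a dataset, using the
    sample mean and covariance (the paper uses a robust variant; this is
    the classical form for illustration)."""
    xs = [p[0] for p in data]
    ys = [p[1] for p in data]
    mx, my = mean(xs), mean(ys)
    n = len(data) - 1
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in data) / n
    det = sxx * syy - sxy ** 2
    dx, dy = point[0] - mx, point[1] - my
    # Inverse of the 2x2 covariance applied to the deviation vector.
    d2 = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
    return math.sqrt(d2)

def label(data, threshold=2.0):
    """Tag each observation 'outlier' if its distance exceeds the threshold."""
    return ["outlier" if mahalanobis_2d(p, data) > threshold else "normal"
            for p in data]
```

Once labelled this way, the data can be fed to a rule learner such as RIPPER to extract human-readable conditions, as the paper describes.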
Yinping Yang; Falcao, H.; Delicado, N.; Ortony, A., "Reducing Mistrust in Agent-Human Negotiations," Intelligent Systems, IEEE, vol. 29, no. 2, pp. 36-43, Mar.-Apr. 2014. doi: 10.1109/MIS.2013.106 Face-to-face negotiations always benefit if the interacting individuals trust each other. But trust is also important in online interactions, even for humans interacting with a computational agent. In this article, the authors describe a behavioral experiment to determine whether, by volunteering information that it need not disclose, a software agent in a multi-issue negotiation can alleviate mistrust in human counterparts who differ in their propensities to mistrust others. Results indicated that when cynical, mistrusting humans negotiated with an agent that proactively communicated its issue priority and invited reciprocation, there were significantly more agreements and better utilities than when the agent didn't volunteer such information. Furthermore, when the agent volunteered its issue priority, the outcomes for mistrusting individuals were as good as those for trusting individuals, for whom the volunteering of issue priority conferred no advantage. These findings provide insights for designing more effective, socially intelligent agents in online negotiation settings.
Keywords: multi-agent systems; software agents; trusted computing; agent-human negotiations; computational agent; face-to-face negotiation; multi-issue negotiation; online interaction; online negotiation setting; socially intelligent agent; software agent; trusting individual; Context; Economics; Educational institutions; Instruments; Intelligent systems; Joints; Software agents; agent-human negotiation; intelligent systems; online negotiation; socially intelligent agents (ID#: 15-3628)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6636309&isnumber=6832878
Goldman, A.D.; Uluagac, A.S.; Copeland, J.A., "Cryptographically-Curated File System (CCFS): Secure, Inter-Operable, And Easily Implementable Information-Centric Networking," Local Computer Networks (LCN), 2014 IEEE 39th Conference on, pp. 142-149, 8-11 Sept. 2014. doi: 10.1109/LCN.2014.6925766 The Cryptographically-Curated File System (CCFS) proposed in this work supports the adoption of Information-Centric Networking. CCFS utilizes content names that span trust boundaries, verify integrity, tolerate disruption, authenticate content, and provide non-repudiation. Irrespective of the ability to reach an authoritative host, CCFS provides secure access by binding a chain of trust into the content name itself. Curators cryptographically bind content to a name, which is a path through a series of objects that map human-meaningful names to cryptographically strong content identifiers. CCFS serves as a network layer for storage systems, unifying currently disparate storage technologies. The power of CCFS derives from file hashes and public keys used both as a name with which to retrieve content and as a method of verifying that content. We present results from our prototype implementation. Our results show that the overhead associated with CCFS is not negligible, but also is not prohibitive.
Keywords: information networks; public key cryptography; storage management; CCFS; content authentication; cryptographically strong content identifiers; cryptographically-curated file system; file hashes; information-centric networking; integrity verification; network layer; public keys; storage systems; storage technologies; trust boundaries; File systems; Google; IP networks; Prototypes; Public key; Servers; Content Centric Networking (CCN); Cryptographically Curated File System (CCFS); Delay Tolerant Networking (DTN); Information Centric Networks (ICN); Inter-operable Heterogeneous Storage; Name Orientated Networking (NON); Self Certifying File Systems (ID#: 15-3629)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6925766&isnumber=6925725
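The core idea above, a name that is itself a cryptographically strong content identifier so that retrieval can verify what it fetched, can be illustrated with a toy content store. The hash choice (SHA-256) and the class names are assumptions for the sketch; the actual CCFS design layers curated name-to-hash mappings and public-key signatures on top of this.

```python
import hashlib

def content_name(data: bytes) -> str:
    """Name content by its SHA-256 digest, so the name itself lets any
    holder verify integrity without trusting the serving host."""
    return hashlib.sha256(data).hexdigest()

class CuratedStore:
    """Toy store: retrieval by name re-verifies the content hash."""
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        name = content_name(data)
        self._blobs[name] = data
        return name

    def get(self, name: str) -> bytes:
        data = self._blobs[name]
        if content_name(data) != name:  # tamper check on every read
            raise ValueError("content does not match its name")
        return data
```

Because the name binds the content, a compromised or disrupted host can at worst withhold data; it cannot silently substitute it.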
Ormrod, David, "The Coordination of Cyber and Kinetic Deception for Operational Effect: Attacking the C4ISR Interface," Military Communications Conference (MILCOM), 2014 IEEE, pp. 117-122, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.26 Modern military forces are enabled by networked command and control systems, which provide an important interface between the cyber environment, electronic sensors and decision makers. However these systems are vulnerable to cyber attack. A successful cyber attack could compromise data within the system, leading to incorrect information being utilized for decisions with potentially catastrophic results on the battlefield. Degrading the utility of a system or the trust a decision maker has in their virtual display may not be the most effective means of employing offensive cyber effects. The coordination of cyber and kinetic effects is proposed as the optimal strategy for neutralizing an adversary's C4ISR advantage. However, such an approach is an opportunity cost and resource intensive. The adversary's cyber dependence can be leveraged as a means of gaining tactical and operational advantage in combat, if a military force is sufficiently trained and prepared to attack the entire information network. This paper proposes a research approach intended to broaden the understanding of the relationship between command and control systems and the human decision maker, as an interface for both cyber and kinetic deception activity.
Keywords: Aircraft; Command and control systems; Decision making; Force; Kinetic theory; Sensors; Synchronization; Command and control; combat; cyber attack; deception; risk management; trust (ID#: 15-3630)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956747&isnumber=6956719
Srivastava, M., "In Sensors We Trust -- A Realistic Possibility?," Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, p. 1, 26-28 May 2014. doi: 10.1109/DCOSS.2014.65 Sensors of diverse capabilities and modalities, carried by us or deeply embedded in the physical world, have invaded our personal, social, work, and urban spaces. Our relationship with these sensors is a complicated one. On the one hand, these sensors collect rich data that are shared and disseminated, often initiated by us, with a broad array of service providers, interest groups, friends, and family. Embedded in this data is information that can be used to algorithmically construct a virtual biography of our activities, revealing intimate behaviors and lifestyle patterns. On the other hand, we and the services we use increasingly depend, directly and indirectly, on information originating from these sensors for making a variety of decisions, both routine and critical, in our lives. The quality of these decisions and our confidence in them depend directly on the quality of the sensory information and our trust in the sources. Sophisticated adversaries, benefiting from the same technology advances as the sensing systems, can manipulate sensory sources and analyze data in subtle ways to extract sensitive knowledge, cause erroneous inferences, and subvert decisions. The consequences of these compromises will only amplify as our society increasingly builds complex human-cyber-physical systems with increased reliance on sensory information and real-time decision cycles. Drawing upon examples of this two-faceted relationship with sensors in applications such as mobile health and sustainable buildings, this talk will discuss the challenges inherent in designing a sensor information flow and processing architecture that is sensitive to the concerns of both producers and consumers.
For the pervasive sensing infrastructure to be trusted by both, it must be robust to active adversaries who are deceptively extracting private information, manipulating beliefs and subverting decisions. While completely solving these challenges would require a new science of resilient, secure and trustworthy networked sensing and decision systems that would combine the hitherto separate disciplines of distributed embedded systems, network science, control theory, security, behavioral science, and game theory, this talk will provide some initial ideas. These include an approach to enabling privacy-utility trade-offs that balance the tension between the risk of information sharing to the producer and the value of information sharing to the consumer, and a method to secure systems against physical manipulation of sensed information.
Keywords: information dissemination; sensors; information sharing; processing architecture; secure systems; sensing infrastructure; sensor information flow; Architecture; Buildings; Computer architecture; Data mining; Information management; Security; Sensors (ID#: 15-3631)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846138&isnumber=6846129
Sen, Shayak; Guha, Saikat; Datta, Anupam; Rajamani, Sriram K.; Tsai, Janice; Wing, Jeannette M., "Bootstrapping Privacy Compliance in Big Data Systems," Security and Privacy (SP), 2014 IEEE Symposium on, pp. 327-342, 18-21 May 2014. doi: 10.1109/SP.2014.28 With the rapid increase in cloud services collecting and using user data to offer personalized experiences, ensuring that these services comply with their privacy policies has become a business imperative for building user trust. However, most compliance efforts in industry today rely on manual review processes and audits designed to safeguard user data, and therefore are resource intensive and lack coverage. In this paper, we present our experience building and operating a system to automate privacy policy compliance checking in Bing. Central to the design of the system are (a) Legalease, a language that allows specification of privacy policies that impose restrictions on how user data is handled, and (b) Grok, a data inventory for Map-Reduce-like big data systems that tracks how user data flows among programs. Grok maps code-level schema elements to data types in Legalease, in essence annotating existing programs with information flow types with minimal human input. Compliance checking is thus reduced to information flow analysis of big data systems. The system, bootstrapped by a small team, checks compliance daily of millions of lines of ever-changing source code written by several thousand developers.
Keywords: Advertising; Big data; Data privacy; IP networks; Lattices; Privacy; Semantics; big data; bing; compliance; information flow; policy; privacy; program analysis (ID#: 15-3632)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956573&isnumber=6956545
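The reduction described above, compliance checking as information flow analysis, can be caricatured in a few lines: a policy denies certain (data type, usage) pairs, and each program's inferred flows are checked against it. The pair names and the flat deny-set are invented for illustration; the actual Legalease language supports nested ALLOW/DENY clauses over a label lattice, and Grok does the hard work of inferring the flows.

```python
# Hypothetical policy: deny specific (data type, usage) combinations.
POLICY_DENY = {
    ("IPAddress", "Advertising"),
    ("SearchQuery", "Sharing"),
}

def check_flows(flows):
    """Return the subset of a program's inferred (data type, usage)
    flows that violate the deny-set policy."""
    return [f for f in flows if f in POLICY_DENY]
```

A daily compliance run would then amount to re-deriving each program's flows from the data inventory and reporting any non-empty violation list.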
Shila, D.M.; Venugopal, V., "Design, implementation and security analysis of Hardware Trojan Threats in FPGA," Communications (ICC), 2014 IEEE International Conference on, pp. 719-724, 10-14 June 2014. doi: 10.1109/ICC.2014.6883404 Hardware Trojan Threats (HTTs) are stealthy components embedded inside integrated circuits (ICs) with an intention to attack and cripple the IC, similar to viruses infecting the human body. Previous efforts have focused essentially on systems being compromised using HTTs and the effectiveness of physical parameters including power consumption, timing variation and utilization for detecting HTTs. We propose a novel metric for hardware Trojan detection, coined the HTT detectability metric (HDM), that uses a weighted combination of normalized physical parameters. HTTs are identified by comparing the HDM with an optimal detection threshold; if the monitored HDM exceeds the estimated optimal detection threshold, the IC will be tagged as malicious. As opposed to existing efforts, this work investigates a system model from a designer perspective in increasing the security of the device and an adversary model from an attacker perspective exposing and exploiting the vulnerabilities in the device. Using existing Trojan implementations and Trojan taxonomy as a baseline, seven HTTs were designed and implemented on an FPGA testbed; these Trojans perform a variety of threats ranging from sensitive information leaks and denial of service to defeating the Root of Trust (RoT). Security analysis on the implemented Trojans showed that existing detection techniques based on physical characteristics such as power consumption, timing variation or utilization alone do not necessarily capture the existence of HTTs, and only a maximum of 57% of the designed HTTs were detected. On the other hand, 86% of the implemented Trojans were detected with HDM.
We further carry out analytical studies to determine the optimal detection threshold that minimizes the summation of false alarm and missed detection probabilities.
Keywords: field programmable gate arrays; integrated logic circuits; invasive software; FPGA testbed; HDM; HTT detectability metric; HTT detection; ICs; RoT; Trojan taxonomy; denial of service; hardware Trojan detection technique; hardware Trojan threats; integrated circuits; missed detection probability; normalized physical parameters; optimal detection threshold; power consumption; root of trust; security analysis; sensitive information leak; summation of false alarm; timing variation; Encryption; Field programmable gate arrays; Hardware; Power demand; Timing; Trojan horses; Design; Hardware Trojans; Resiliency; Security (ID#: 15-3633)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883404&isnumber=6883277
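The HDM described in the abstract is a weighted combination of normalized physical parameters compared against a detection threshold. A minimal sketch follows; the parameter names, the normalization against a golden (Trojan-free) baseline, and the threshold value are assumptions for illustration, not the paper's exact formulation.

```python
def hdm(measurements, baseline, weights):
    """HTT detectability metric sketch: weighted sum of physical
    parameters, each normalized as a relative deviation from the
    golden-chip baseline."""
    score = 0.0
    for name, w in weights.items():
        score += w * abs(measurements[name] - baseline[name]) / baseline[name]
    return score

def is_malicious(measurements, baseline, weights, threshold):
    """Tag the IC as malicious when the HDM exceeds the threshold."""
    return hdm(measurements, baseline, weights) > threshold
```

The paper's contribution is precisely that no single parameter suffices; combining them raised detection from at most 57% to 86% of the implanted Trojans.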
Dickerson, J.P.; Kagan, V.; Subrahmanian, V.S., "Using Sentiment To Detect Bots On Twitter: Are Humans More Opinionated Than Bots?," Advances in Social Networks Analysis and Mining (ASONAM), 2014 IEEE/ACM International Conference on, pp. 620-627, 17-20 Aug. 2014. doi: 10.1109/ASONAM.2014.6921650 In many Twitter applications, developers collect only a limited sample of tweets and a local portion of the Twitter network. Given such Twitter applications with limited data, how can we classify Twitter users as either bots or humans? We develop a collection of network-, linguistic-, and application-oriented variables that could be used as possible features, and identify specific features that distinguish well between humans and bots. In particular, by analyzing a large dataset relating to the 2014 Indian election, we show that a number of sentiment-related factors are key to the identification of bots, significantly increasing the Area under the ROC Curve (AUROC). The same method may be used for other applications as well.
Keywords: social networking (online); trusted computing; AUROC; Indian election; Twitter applications; Twitter network; area under the ROC curve; bot detection; sentiment-related factors; Conferences; Nominations and elections; Principal component analysis; Semantics; Syntactics; Twitter (ID#: 15-3635)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921650&isnumber=6921526
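The reported metric, the Area under the ROC Curve, has a simple probabilistic reading: the chance that a randomly chosen bot receives a higher classifier score than a randomly chosen human, with ties counting half. A self-contained sketch of that computation, independent of the paper's actual features and classifier:

```python
def auroc(scores, labels):
    """AUROC of a 'bot score': probability that a random positive
    (bot, label 1) outranks a random negative (human, label 0),
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]  # bots
    neg = [s for s, y in zip(scores, labels) if y == 0]  # humans
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 1.0 means the score perfectly separates bots from humans; 0.5 is no better than chance, which is the baseline the sentiment features improve upon.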
Cailleux, L.; Bouabdallah, A.; Bonnin, J.-M., "A Confident Email System Based On A New Correspondence Model," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 489-492, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779010 Despite all the current controversies, the success of the email service is still valid. The ease of use of its various features contributed to its widespread adoption. In general, the email system provides all its users with the same set of features, controlled by a single monolithic policy. Such solutions are efficient but limited, because they grant no place to the concept of usage, which denotes a user's intention of communication: private, professional, administrative, official, military. The ability to efficiently send emails from mobile devices creates new interesting opportunities. We argue that the context (location, time, device, operating system, access network...) of the email sender appears as a new dimension we have to take into account to complete the picture. Context is clearly orthogonal to usage, because the same usage may require different features depending on the context. It is clear that there is no global policy meeting the requirements of all possible usages and contexts. To address this problem, we propose to define a correspondence model which, for a given usage and context, allows the derivation of a correspondence type encapsulating the exact set of required features. With this model, it becomes possible to define an advanced email system which may cope with multiple policies instead of a single monolithic one. By allowing a user to select the exact policy matching her needs, we argue that our approach reduces risk-taking, allowing the email system to slide from a trusted one to a confident one.
Keywords: electronic mail; human factors; security of data; trusted computing; confident email system; correspondence model; email sender context; email service; email system policy; mobile devices; trusted email system; Context; Context modeling; Electronic mail; Internet; Postal services; Protocols; Security; Email; confidence; correspondence; email security; policy; security; trust (ID#: 15-3636)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779010&isnumber=6778899
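The core of the proposed correspondence model, mapping a (usage, context) pair to the exact set of required features, can be sketched as a lookup with a conservative fallback. The usages, contexts, and feature names below are illustrative assumptions, not the paper's actual policy vocabulary.

```python
# Hypothetical sketch of a correspondence model: a (usage, context) pair
# selects a correspondence type, i.e. the exact set of email features
# (the policy) to enforce. All names are illustrative assumptions.

POLICIES = {
    ("professional", "office"): {"signing", "archiving"},
    ("professional", "mobile"): {"signing", "archiving", "device_attestation"},
    ("private", "mobile"): {"encryption"},
}

DEFAULT_POLICY = {"encryption", "signing"}  # conservative fallback

def correspondence_type(usage, context):
    """Derive the feature set required for a given usage and context."""
    return POLICIES.get((usage, context), DEFAULT_POLICY)
```

The point of the model is that no single monolithic policy appears anywhere: each (usage, context) pair resolves to its own feature set, and unknown combinations fall back to a conservative default.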
Biedermann, S.; Ruppenthal, T.; Katzenbeisser, S., "Data-Centric Phishing Detection Based On Transparent Virtualization Technologies," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp. 215-223, 23-24 July 2014. doi: 10.1109/PST.2014.6890942 We propose a novel phishing detection architecture based on transparent virtualization technologies and isolation of its own components. The architecture can be deployed as a security extension for virtual machines (VMs) running in the cloud. It uses fine-grained VM introspection (VMI) to extract, filter and scale a color-based fingerprint of web pages which are processed by a browser from the VM's memory. By analyzing the human perceptual similarity between the fingerprints, the architecture can reveal and mitigate phishing attacks which are based on redirection to spoofed web pages, and it can also detect “Man-in-the-Browser” (MitB) attacks. To the best of our knowledge, the architecture is the first anti-phishing solution leveraging virtualization technologies. We explain details of the design and implementation, and we show results of an evaluation with real-world data.
Keywords: Web sites; cloud computing; computer crime; online front-ends; virtual machines; virtualisation; MitB attack; VM introspection; VMI; antiphishing solution; cloud; color-based fingerprint extraction; color-based fingerprint filtering; color-based fingerprint scaling; component isolation; data-centric phishing detection; human perceptual similarity; man-in-the-browser attack; phishing attacks; spoofed Web pages; transparent virtualization technologies; virtual machines; Browsers; Computer architecture; Data mining; Detectors; Image color analysis; Malware; Web pages (ID#: 15-3637)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890942&isnumber=6890911
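The fingerprinting step described in this abstract can be illustrated with a toy version: quantize a page's pixels into coarse color buckets and compare normalized histograms by intersection. The bucket size and the similarity measure are assumptions; the paper's actual perceptual-similarity metric is more elaborate.

```python
from collections import Counter

def color_fingerprint(pixels, bits=2):
    """Quantize RGB pixels into coarse buckets and return a normalized
    color histogram -- a simple stand-in for the paper's fingerprint."""
    shift = 8 - bits
    hist = Counter((r >> shift, g >> shift, b >> shift) for r, g, b in pixels)
    total = sum(hist.values())
    return {bucket: n / total for bucket, n in hist.items()}

def similarity(fp_a, fp_b):
    """Histogram intersection in [0, 1]; 1 means identical distributions."""
    return sum(min(fp_a.get(k, 0.0), fp_b.get(k, 0.0))
               for k in set(fp_a) | set(fp_b))
```

A spoofed page that visually mimics a legitimate one would score high against the legitimate fingerprint while being served from a different origin, which is exactly the mismatch a detector of this kind flags.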
Bian Yang; Huiguang Chu; Guoqiang Li; Petrovic, S.; Busch, C., "Cloud Password Manager Using Privacy-Preserved Biometrics," Cloud Engineering (IC2E), 2014 IEEE International Conference on, pp. 505-509, 11-14 March 2014. doi: 10.1109/IC2E.2014.91 Using one password for all web services is not secure, because the leakage of that password compromises all of the web service accounts, while using independent passwords for different web services is inconvenient for the identity claimant to memorize. A password manager addresses this security-convenience dilemma by storing and retrieving multiple existing passwords using one master password. A password manager also liberates the human brain by enabling people to generate strong passwords without worrying about memorizing them. While a password manager provides a convenient and secure way to manage multiple passwords, it centralizes the password storage and shifts the risk of password leakage from distributed service providers to a piece of software or a token authenticated by a single master password. To address this reliance on one master password, biometrics can be used as a second factor for authentication by verifying the ownership of the master password. However, biometrics-based authentication raises more privacy concerns than a non-biometric password manager. In this paper we propose a cloud password manager scheme exploiting privacy-enhanced biometrics, which achieves both security and convenience in a privacy-enhanced way. The proposed scheme relies on a cloud service to synchronize all local password manager clients in an encrypted form, which makes it efficient to deploy updates and secure against untrusted cloud service providers.
Keywords: Web services; authorisation; biometrics (access control); cloud computing; data privacy; trusted computing; Web service account; biometrics-based authentication; cloud password manager; distributed service providers; local password manager client synchronization; master password based security; nonbiometric password manager; password leakage risk; password storage; privacy enhanced biometrics; privacy-preserved biometrics; token authentication; untrusted cloud service providers; Authentication; Biometrics (access control); Cryptography; Privacy; Synchronization; Web services; biometrics; cloud; password manager; privacy preservation; security (ID#: 15-3638)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903519&isnumber=6903436
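The architecture described here, one master password protecting an encrypted, cloud-synchronized vault, can be sketched with standard-library primitives. This is an illustration only: a real manager would use authenticated encryption such as AES-GCM rather than the HMAC keystream below, and all function names are assumptions.

```python
import hashlib, hmac, os

def vault_key(master_password, salt, iterations=200_000):
    """Derive the vault key from the single master password (PBKDF2-HMAC-SHA256)."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)

def _keystream(key, nonce, length):
    # Counter-mode keystream from HMAC; illustration only -- a real manager
    # would use authenticated encryption (e.g. AES-GCM) instead.
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_entry(key, plaintext):
    """Encrypt one stored password before it is synchronized to the cloud."""
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext.encode(),
                                     _keystream(key, nonce, len(plaintext))))
    return nonce, ct

def decrypt_entry(key, nonce, ct):
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct)))).decode()
```

The untrusted cloud only ever sees (nonce, ciphertext) pairs; without the master password, the synchronization service learns nothing about the stored credentials.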
Skopik, F.; Settanni, G.; Fiedler, R.; Friedberg, I., "Semi-Synthetic Data Set Generation For Security Software Evaluation," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp. 156-163, 23-24 July 2014. doi: 10.1109/PST.2014.6890935 Threats to modern ICT systems are rapidly changing these days. Organizations are no longer mainly concerned about virus infestation, but increasingly need to deal with targeted attacks. Attacks of this kind are specifically designed to stay below the radar of standard ICT security systems. As a consequence, vendors have begun to ship self-learning intrusion detection systems with sophisticated heuristic detection engines. While these approaches promise to relax the serious security situation, one of the main challenges is the proper evaluation of such systems under realistic conditions during development and before roll-out. The wide variety of configuration settings in particular makes it hard to find the optimal setup for a specific infrastructure. However, extensive testing in a live environment is not only cumbersome but usually also impacts daily business. In this paper, we therefore introduce an evaluation setup that consists of virtual components, which imitate real systems and human user interactions as closely as possible to produce the system events, network flows and logging data of complex ICT service environments. These data are a key prerequisite for the evaluation of modern intrusion detection and prevention systems. With these generated data sets, a system's detection performance can be accurately rated and tuned for very specific settings.
Keywords: data handling; security of data; ICT security systems; ICT systems; heuristic detection engines; information and communication technology systems; intrusion detection and prevention systems; security software evaluation; self-learning intrusion detection systems; semisynthetic data set generation; virus infestation; Complexity theory; Data models; Databases; Intrusion detection; Testing; Virtual machining; anomaly detection evaluation; scalable system behavior model; synthetic data set generation (ID#: 15-3639)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890935&isnumber=6890911
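A minimal flavor of semi-synthetic data generation: a toy user-behavior model emits plausible, seeded (hence reproducible) session events of the kind an IDS evaluation would consume. Action names and timing ranges are invented for illustration, not taken from the paper's system model.

```python
import random

# Toy user-behavior model: one virtual user produces a log of system events.
ACTIONS = ["login", "read_mail", "open_document", "http_request", "logout"]

def simulate_session(user, seed, max_events=8):
    """Emit one virtual user's session as (timestamp, user, action) events.
    A fixed seed makes the data set reproducible across evaluation runs."""
    rng = random.Random(seed)
    t = rng.randint(0, 3600)
    events = [(t, user, "login")]
    for _ in range(rng.randint(1, max_events)):
        t += rng.randint(1, 300)                 # think time between actions
        events.append((t, user, rng.choice(ACTIONS[1:-1])))
    events.append((t + rng.randint(1, 300), user, "logout"))
    return events
```

Reproducibility is the key design property here: the same seed yields the same event stream, so different IDS configurations can be compared against identical synthetic workloads.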
Montague, E.; Jie Xu; Chiou, E., "Shared Experiences of Technology and Trust: An Experimental Study of Physiological Compliance Between Active and Passive Users in Technology-Mediated Collaborative Encounters," Human-Machine Systems, IEEE Transactions on, vol. 44, no. 5, pp. 614-624, Oct. 2014. doi: 10.1109/THMS.2014.2325859 The aim of this study is to examine the utility of physiological compliance (PC) to understand shared experience in a multiuser technological environment involving active and passive users. Common ground is critical for effective collaboration and important for multiuser technological systems that include passive users, since this kind of user typically does not have control over the technology being used. An experiment was conducted with 48 participants who worked in two-person groups in a multitask environment under varied task and technology conditions. Indicators of PC were measured from participants' cardiovascular and electrodermal activities. The relationship between these PC indicators and collaboration outcomes, such as performance and subjective perception of the system, was explored. Results indicate that PC is related to group performance after controlling for task/technology conditions. PC is also correlated with shared perceptions of trust in technology among group members. PC is a useful tool for monitoring group processes and, thus, can be valuable for the design of collaborative systems. This study has implications for understanding effective collaboration.
Keywords: groupware; multiprogramming; physiology; trusted computing; multitask environment; multiuser technological environment; physiological compliance; shared experiences; technology-mediated collaborative encounters; trust; Atmospheric measurements; Biomedical monitoring; Joints; Monitoring; Optical wavelength conversion; Particle measurements; Reliability; Group performance; multiagent systems; passive user; physiological compliance (PC); trust in technology (ID#: 15-3640)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6837486&isnumber=6898062
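One common way to operationalize physiological compliance between two users' signal streams is a correlation coefficient; the sketch below computes Pearson's r from scratch. Treating PC as plain Pearson correlation is a simplifying assumption — the paper examines several PC indicators derived from cardiovascular and electrodermal activity.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length signal streams:
    +1 means the signals move together, -1 means they move oppositely."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

In a compliance analysis, high correlation between two group members' physiological signals would be read as shared arousal or engagement during the collaborative task.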
Algarni, A.; Yue Xu; Chan, T., "Social Engineering in Social Networking Sites: The Art of Impersonation," Services Computing (SCC), 2014 IEEE International Conference on, pp. 797-804, June 27 2014-July 2 2014. doi: 10.1109/SCC.2014.108 Social networking sites (SNSs), with their large number of users and large information base, seem to be the perfect breeding ground for exploiting the vulnerabilities of people, who are considered the weakest link in security. Deceiving, persuading, or influencing people to provide information or to perform an action that will benefit the attacker is known as "social engineering." Fraudulent and deceptive people use social engineering traps and tactics through SNSs to trick users into obeying them, accepting threats, and falling victim to various crimes such as phishing, sexual abuse, financial abuse, identity theft, and physical crime. Although organizations, researchers, and practitioners recognize the serious risks of social engineering, there is a severe lack of understanding and control of such threats. This may be partly due to the complexity of human behaviors in approaching, accepting, and failing to recognize social engineering tricks. This research aims to investigate the impact of source characteristics on users' susceptibility to social engineering victimization in SNSs, particularly Facebook. Using the grounded theory method, we develop a model that explains which source characteristics influence Facebook users to judge an attacker as credible, and how.
Keywords: computer crime; fraud; social aspects of automation; social networking (online); Facebook; SNS; attacker; deceptive people; financial abuse; fraudulent people; grounded theory method; human behavior complexity; identity theft; impersonation; large information base; phishing; physical crime; security; sexual abuse; social engineering traps; social engineering victimization; social engineering tactics; social networking sites; threats; user susceptibility; Encoding; Facebook; Interviews; Organizations; Receivers; Security; impersonation; information security management; social engineering; social networking sites; source credibility; trust management (ID#: 15-3641)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6930610&isnumber=6930500
Riveiro, M.; Lebram, M.; Warston, H., "On Visualizing Threat Evaluation Configuration Processes: A Design Proposal," Information Fusion (FUSION), 2014 17th International Conference on, pp. 1-8, 7-10 July 2014 Threat evaluation is concerned with estimating the intent, capability and opportunity of detected objects in relation to our own assets in an area of interest. Inferring whether a target is threatening, and to what degree, is far from a trivial task. Expert operators normally have at their disposal different support systems that analyze the incoming data and provide recommendations for actions. Since the ultimate responsibility lies with the operators, it is crucial that they trust and know how to configure and use these systems, as well as have a good understanding of their inner workings, strengths and limitations. To limit the negative effects of inadequate cooperation between operators and their support systems, this paper presents a design proposal that aims at making the threat evaluation process more transparent. We focus on the initialization, configuration and preparation phases of the threat evaluation process, supporting the user in analyzing the behavior of the system with respect to the relevant parameters involved in the threat estimations. To do so, we follow a known design process model and implement our suggestions in a proof-of-concept prototype that we evaluate with military expert system designers.
Keywords: estimation theory; expert systems; military computing; design process model; design proposal; expert operators; military expert system designer; proof-of-concept prototype; relevant parameter; threat estimation; threat evaluation configuration process; Data models; Estimation; Guidelines; Human computer interaction; Proposals; Prototypes; Weapons; decision-making; design; high-level information fusion; threat evaluation; transparency; visualization (ID#: 15-3642)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916152&isnumber=6915967
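The "relevant parameters" operators must understand can be made concrete with a toy threat score over the three components the abstract names (intent, capability, opportunity). The weighted-sum form and default weights are assumptions; the paper's point is precisely that such parameters should be transparent and adjustable by the operator.

```python
def threat_score(intent, capability, opportunity, weights=(0.4, 0.3, 0.3)):
    """Combine the three classic threat components (each in [0, 1]) into a
    single score. The weighted-sum form and default weights are illustrative
    assumptions, standing in for whatever the deployed system uses."""
    wi, wc, wo = weights
    if not abs(wi + wc + wo - 1.0) < 1e-9:
        raise ValueError("weights must sum to 1")
    return wi * intent + wc * capability + wo * opportunity
```

A transparency-oriented interface would expose `weights` (and the estimates feeding the three inputs) so the operator can see exactly why one target outranks another, rather than trusting an opaque ranking.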
El Masri, A.; Wechsler, H.; Likarish, P.; Kang, B.B., "Identifying Users With Application-Specific Command Streams," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp. 232-238, 23-24 July 2014. doi: 10.1109/PST.2014.6890944 This paper proposes and describes an active authentication model based on user profiles built from user-issued commands when interacting with a GUI-based application. Previous behavioral models derived from user-issued commands were limited to analyzing the user's interaction with the *Nix (Linux or Unix) command shell program. Human-computer interaction (HCI) research has explored the idea of building user profiles based on behavioral patterns when interacting with such graphical interfaces, by analyzing the user's keystroke and/or mouse dynamics. However, none has explored the idea of creating profiles by capturing users' usage characteristics when interacting with a specific application, beyond how a user strikes the keyboard or moves the mouse across the screen. We obtain and utilize a dataset of user command streams collected from working with Microsoft (MS) Word to serve as a test bed. User profiles are first built using MS Word commands, and identification takes place using machine learning algorithms. Best performance, in terms of both accuracy and Area Under the Curve (AUC) for the Receiver Operating Characteristic (ROC) curve, is reported using Random Forests (RF) and AdaBoost with random forests.
Keywords: biometrics (access control); human computer interaction; learning (artificial intelligence); message authentication; sensitivity analysis; AUC; AdaBoost; GUI-based application; MS Word commands; Microsoft; RF; ROC curve; active authentication model; application-specific command streams; area under the curve; human-computer interaction; machine learning algorithms; random forests; receiver operating characteristic; user command streams; user identification; user profiles; user-issued commands; Authentication; Biometrics (access control); Classification algorithms; Hidden Markov models; Keyboards; Mice; Radio frequency; Active Authentication; Behavioral biometrics; Intrusion Detection; Machine Learning (ID#: 15-3643)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890944&isnumber=6890911
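To make the profiling idea concrete without the paper's Random Forest machinery, here is a much simpler stand-in: represent each user as a normalized command-frequency vector and identify a session by cosine similarity to the closest enrolled profile. Command names are invented, and this nearest-profile matching is an assumption in place of the paper's classifiers.

```python
from collections import Counter
from math import sqrt

def profile(commands):
    """Build a user profile as a unit-length command-frequency vector."""
    counts = Counter(commands)
    norm = sqrt(sum(c * c for c in counts.values()))
    return {cmd: c / norm for cmd, c in counts.items()}

def cosine(p, q):
    return sum(p.get(k, 0.0) * q.get(k, 0.0) for k in p)

def identify(session, profiles):
    """Return the enrolled user whose profile is closest to the session."""
    s = profile(session)
    return max(profiles, key=lambda user: cosine(s, profiles[user]))
```

The same pipeline shape applies when the similarity step is replaced by a trained classifier: the essential idea is that application-level command habits, not keystrokes or mouse paths, carry the identifying signal.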
Saoud, Z.; Faci, N.; Maamar, Z.; Benslimane, D., "A Fuzzy Clustering-Based Credibility Model for Trust Assessment in a Service-Oriented Architecture," WETICE Conference (WETICE), 2014 IEEE 23rd International, pp. 56-61, 23-25 June 2014. doi: 10.1109/WETICE.2014.35 This paper presents a credibility model to assess trust in Web services. The model relies on consumers' ratings, whose accuracy can be questioned because of different biases. A category of consumers known as strict raters is usually excluded from the process of reaching a majority consensus. We demonstrate that this exclusion should not occur: the proposed model reduces the gap between these consumers' ratings and the current majority rating. Fuzzy clustering is used to compute consumers' credibility. To validate this model, a set of experiments is carried out.
Keywords: Web services; customer satisfaction; fuzzy set theory; human computer interaction; pattern clustering; service-oriented architecture; trusted computing; Web services; consumer credibility; consumer ratings; credibility model; fuzzy clustering; majority rating; service-oriented architecture; trust assessment; Clustering algorithms; Communities; Computational modeling; Equations; Robustness; Social network services; Web services; Credibility; Trust; Web Service (ID#: 15-3644)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6927023&isnumber=6926989
Input-Output (I/O) Systems Security |
Management of I/O devices is a critical part of the operating system. Entire I/O subsystems are devoted to their operation. These subsystems contend both with the movement towards standard interfaces for a wide range of devices, which makes it easier to add newly developed devices to existing systems, and with the development of entirely new types of devices for which existing standard interfaces can be difficult to apply. Typically, when accessing files, a security check is performed when the file is created or opened, and the check is not done again unless the file is closed and reopened. If an opened file is passed to an untrusted caller, the security system can, but is not required to, prevent the caller from accessing the file. Research into I/O security addresses the need to provide adequate security economically and at scale.
The research cited here was published or presented in the first half of 2014. I/O security topics addressed in these works include avionic systems, virtual machines, device replication, RAID arrays, hypervisor design, and cloud storage.
Muller, K.; Sigl, G.; Triquet, B.; Paulitsch, M., "On MILS I/O Sharing Targeting Avionic Systems," Dependable Computing Conference (EDCC), 2014 Tenth European, pp. 182-193, 13-16 May 2014. doi: 10.1109/EDCC.2014.35 This paper discusses strategies for I/O sharing in Multiple Independent Levels of Security (MILS) systems, mostly deployed in the special environment of avionic systems. MILS system designs are promising approaches for handling the increasing complexity of functionally integrated systems, where multiple applications run concurrently on the same hardware platform. Such integrated systems, also known as Integrated Modular Avionics (IMA) in the aviation industry, require communication with remote systems located outside of the hosting hardware platform. One possible solution is to provide each partition, the isolated runtime environment of an application, a direct interface to the communication hardware controller. Nevertheless, this approach requires a special design of the hardware itself. This paper discusses efficient system architectures for I/O sharing in the environment of high-criticality embedded systems, along with an exemplary analysis of Freescale's proprietary Data Path Acceleration Architecture (DPAA) with respect to generic hardware requirements. Based on this analysis we also discuss the development of possible architectures matching the MILS approach. Even though the analysis focuses on avionics, it is equally applicable to automotive architectures such as AUTOSAR.
Keywords: aerospace computing; avionics; embedded systems; security of data; DPAA; IMA; MILS I/O sharing; MILS system designs; AUTOSAR; aviation industry; avionic systems; communication hardware controller; Freescale proprietary data path acceleration architecture; hardware platform; high-criticality embedded systems; integrated modular avionics; multiple independent levels of security system; system architectures; Aerospace electronics; Computer architecture; Hardware; Portals; Runtime; Security; Software (ID#: 15-3724)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821104&isnumber=6821069
Aiash, M.; Mapp, G.; Gemikonakli, O., "Secure Live Virtual Machines Migration: Issues and Solutions," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp. 160-165, 13-16 May 2014. doi: 10.1109/WAINA.2014.35 In recent years, there has been a huge trend towards running network-intensive applications, such as Internet servers and cloud-based services, in virtual environments, where multiple virtual machines (VMs) running on the same machine share the machine's physical and network resources. In such an environment, the virtual machine monitor (VMM) virtualizes the machine's resources in terms of CPU, memory, storage, network and I/O devices to allow multiple operating systems running in different VMs to operate and access the network concurrently. A key feature of virtualization is live migration (LM), which allows the transfer of a virtual machine from one physical server to another without interrupting the services running in the virtual machine. Live migration facilitates workload balancing, fault tolerance, online system maintenance, consolidation of virtual machines, etc. However, live migration is still in an early stage of implementation and its security has yet to be evaluated. The security concerns around live migration are a major factor in its adoption by the IT industry. Therefore, this paper uses the X.805 security standard to investigate attacks on live virtual machine migration. The analysis highlights the main sources of threats and suggests approaches to tackle them. The paper also surveys and compares different proposals in the literature to secure live migration.
Keywords: cloud computing; security of data; virtual machines; Internet server; VMM; X.805 security standard; cloud-based service; fault tolerance; live virtual machine migration; online system maintenance; virtual machine monitor; workload balancing; Authentication; Hardware; Servers; Virtual machine monitors; Virtual machining; Virtualization (ID#: 15-3725)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844631&isnumber=6844560
Ravindran, K.; Rabby, M.; Adiththan, A., "Model-based Control Of Device Replication For Trusted Data Collection," Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), 2014 Workshop on, pp. 1-6, 14 April 2014. doi: 10.1109/MSCPES.2014.6842399 Voting among replicated data collection devices is a means to achieve dependable data delivery to the end-user in a hostile environment. Failures may occur during the data collection process, such as data corruption by malicious devices and security/bandwidth attacks on data paths. For a voting system, the QoS is characterized by how often correct data is delivered to the user in a timely manner and with low overhead. Prior works have focused on algorithm correctness issues and performance engineering of the voting protocol mechanisms. In this paper, we study methods for autonomic management of device replication in the voting system to deal with situations where the available network bandwidth fluctuates, the fault parameters change unpredictably, and the devices have battery energy constraints. We treat the voting system as a 'black box' with programmable I/O behaviors. A management module exercises macroscopic control of the voting box with situational inputs, such as application priorities, network resources, battery energy, and external threat levels.
Keywords: quality of service; security of data; trusted computing; QoS; algorithm correctness; bandwidth attack; black-box; data corruptions; device replication autonomic management; malicious devices; security attack; trusted data collection; voting protocol mechanisms; Bandwidth; Batteries; Data collection; Delays; Frequency modulation; Protocols; Quality of service; Adaptive Fault-tolerance; Attacker Modeling; Hierarchical Control; Sensor Replication; Situational Assessment (ID#: 15-3726)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842399&isnumber=6842390
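The voting core of such a system fits in a few lines: with 2f+1 replicated devices, a majority quorum tolerates up to f corrupt readings. The quorum rule below is the textbook one, not necessarily this paper's exact protocol, and timeliness/QoS control is out of scope for the sketch.

```python
from collections import Counter

def majority_vote(readings, quorum=None):
    """Deliver a value only if some reading reaches a majority quorum; with
    2f+1 replicas this tolerates f corrupt devices. Returns None when no
    quorum exists (QoS/timeliness handling is outside this sketch)."""
    if quorum is None:
        quorum = len(readings) // 2 + 1
    value, count = Counter(readings).most_common(1)[0]
    return value if count >= quorum else None
```

The management problem the paper studies then becomes: how many replicas to keep in the `readings` pool, given fluctuating bandwidth, changing fault rates, and battery budgets, so that a quorum is still reached often enough.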
Smith, S.; Woodward, C.; Liang Min; Chaoyang Jing; Del Rosso, A., "On-line Transient Stability Analysis Using High Performance Computing," Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES, pp. 1-5, 19-22 Feb. 2014. doi: 10.1109/ISGT.2014.6816438 In this paper, parallelization and high performance computing are utilized to enable ultrafast transient stability analysis that can be used in a real-time environment to quickly perform “what-if” simulations involving system dynamics phenomena. EPRI's Extended Transient Midterm Simulation Program (ETMSP) is modified and enhanced for this work. The contingency analysis is scaled for large-scale contingency analysis using Message Passing Interface (MPI) based parallelization. Simulations of thousands of contingencies on a high performance computing machine are performed, and results show that parallelization over contingencies with MPI provides good scalability and computational gains. Different ways to reduce the Input/Output (I/O) bottleneck are explored, and findings indicate that architecting a machine with a larger local disk and maintaining a local file system significantly improve the scaling results. Thread-parallelization of the sparse linear solve is also explored through use of the SuperLU_MT library.
Keywords: large-scale systems; message passing; power engineering computing; power system transient stability; real-time systems; EPRI extended transient midterm simulation program; ETMSP; MPI; SuperLU_MT library; high performance computing machine; input-output bottleneck; large-scale contingency analysis; local disk; local file system; message passing interface; on-line transient stability analysis; real-time environment; sparse linear solve; system dynamics phenomena; ultrafast transient stability analysis; Computational modeling; File systems; High performance computing; Power system stability; Stability analysis; Transient analysis; Dynamic security assessment; control center; high performance computing; parallelization; real-time simulation; transient stability (ID#: 15-3727)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816438&isnumber=6816367
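Contingency analysis parallelizes naturally because each contingency simulates independently. The sketch below shows the static block partitioning such an MPI run typically starts from (with mpi4py, each rank would then process its own chunk); the balancing rule is the usual one and is not taken from ETMSP itself.

```python
def partition(contingencies, n_ranks):
    """Static block partitioning of a contingency list across MPI ranks,
    as used for an embarrassingly parallel contingency sweep. Chunk sizes
    differ by at most one, so no rank idles much longer than the others."""
    base, extra = divmod(len(contingencies), n_ranks)
    chunks, start = [], 0
    for rank in range(n_ranks):
        size = base + (1 if rank < extra else 0)
        chunks.append(contingencies[start:start + size])
        start += size
    return chunks
```

The paper's I/O finding fits this picture: once thousands of ranks each write simulation output, the shared file system, not the computation, becomes the bottleneck, hence the benefit of larger local disks and local file systems.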
Shropshire, J., "Analysis of Monolithic and Microkernel Architectures: Towards Secure Hypervisor Design," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 5008-5017, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.615 This research focuses on hypervisor security from a holistic perspective. It centers on hypervisor architecture - the organization of the various subsystems which collectively comprise a virtualization platform. It holds that the path to a secure hypervisor begins with a big-picture focus on architecture. Unfortunately, little research has been conducted with this perspective. This study investigates the impact of monolithic and microkernel hypervisor architectures on the size and scope of the attack surface. Six architectural features are compared: management API, monitoring interface, hypercalls, interrupts, networking, and I/O. These subsystems are core hypervisor components which could be used as attack vectors. Specific examples and three leading hypervisor platforms are referenced (ESXi for the monolithic architecture; Xen and Hyper-V for the microkernel architecture). The results describe the relative strengths and vulnerabilities of both types of architectures. It is concluded that neither design is more secure, since both incorporate security tradeoffs in core processes.
Keywords: application program interfaces; security of data; virtualization; ESXi; Hyper-V; Xen; attack surface; hypercalls; hypervisor security; management API; microkernel architecture; microkernel hypervisor architectures; monitoring interface; monolithic architectures; monolithic hypervisor architectures; networking; secure hypervisor design; security tradeoffs; virtualization platform; Computer architecture; Hardware; Kernel; Monitoring; Security; Virtual machine monitors; Virtual machining; cloud computing; hypervisor security; microkernel architecture; monolithic architecture (ID#: 15-3728)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759218&isnumber=6758592
Youngjung Ahn; Yongsuk Lee; Jin-Young Choi; Gyungho Lee; Dongkyun Ahn, "Monitoring Translation Lookahead Buffers to Detect Code Injection Attacks," Computer, vol. 47, no. 7, pp. 66-72, July 2014. doi: 10.1109/MC.2013.228 By identifying memory pages that external I/O operations have modified, a proposed scheme blocks malicious injected code activation, accurately distinguishing an attack from legitimate code injection with negligible performance impact and no changes to the user application.
Keywords: buffer storage; computer crime; system monitoring; blocks malicious injected code activation; code injection attack detection; external I/O operations; legitimate code injection attack; memory pages identification; translation lookahead buffers monitoring; Decision support systems; Handheld computers; Code injection; TLB; data execution prevention; hackers; invasive software; security (ID#: 15-3729)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6560060&isnumber=6861869
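The scheme's essence — remember which pages external I/O wrote, and refuse to execute from them unless a legitimate loader clears the mark — can be modeled in a few lines. The page size, the taint-clearing API, and all method names are assumptions for illustration; the actual mechanism operates at the TLB/hardware level.

```python
class PageMonitor:
    """Toy model of I/O-taint tracking: pages written by external I/O are
    marked, and execution from a marked page is denied until a legitimate
    code-loading path explicitly approves it."""

    PAGE = 4096  # assumed page size

    def __init__(self):
        self.tainted = set()

    def io_write(self, address, length):
        """Record every page touched by an external I/O write."""
        first, last = address // self.PAGE, (address + length - 1) // self.PAGE
        self.tainted.update(range(first, last + 1))

    def approve(self, address):
        """Legitimate code loading (e.g. the OS loader) clears the taint."""
        self.tainted.discard(address // self.PAGE)

    def can_execute(self, address):
        return address // self.PAGE not in self.tainted
```

Injected shellcode necessarily arrives via some I/O path, so it lands on a tainted page and its activation is blocked, while loader-approved pages execute normally, which is how the scheme distinguishes attacks from legitimate code injection.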
Mingqiang Li; Lee, P.P.C., "Toward I/O-Efficient Protection Against Silent Data Corruptions In RAID Arrays," Mass Storage Systems and Technologies (MSST), 2014 30th Symposium on, pp. 1-12, 2-6 June 2014. doi: 10.1109/MSST.2014.6855548 Although RAID is a well-known technique to protect data against disk errors, it is vulnerable to silent data corruptions that cannot be detected by disk drives. Existing integrity protection schemes designed for RAID arrays often introduce high I/O overhead. Our key insight is that by properly designing an integrity protection scheme that adapts to the read/write characteristics of storage workloads, the I/O overhead can be significantly mitigated. In view of this, this paper presents a systematic study on I/O-efficient integrity protection against silent data corruptions in RAID arrays. We formalize an integrity checking model, and justify that a large proportion of disk reads can be checked with simpler and more I/O-efficient integrity checking mechanisms. Based on this integrity checking model, we construct two integrity protection schemes that provide complementary performance advantages for storage workloads with different user write sizes. We further propose a quantitative method for choosing between the two schemes in real-world scenarios. Our trace-driven simulation results show that with the appropriate integrity protection scheme, we can reduce the I/O overhead to below 15%.
Keywords: RAID; data integrity; input-output programs; security of data; IO-efficient integrity checking mechanisms; IO-efficient protection; RAID arrays; disk drives; disk errors; integrity checking model; integrity protection schemes; read-write characteristics; silent data corruptions; storage workloads; trace-driven simulation; user write sizes; Arrays; Data models; Disk drives; Redundancy; Systematics; Taxonomy; I/O overhead; RAID; integrity protection schemes; silent data corruptions (ID#: 15-3730)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855548&isnumber=6855532
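The integrity-checking idea behind this line of work can be illustrated with a toy sketch: keep a checksum for each block out of band and re-verify it on every read, so corruption the disk itself never reports is still caught. This is only an illustration of the general principle, not the paper's I/O-efficient scheme; the class and method names below are hypothetical.

```python
import zlib


class ChecksummedStore:
    """Toy block store keeping a per-block checksum out of band, so a
    read can detect silent corruption the disk itself never reports."""

    def __init__(self):
        self.blocks = {}
        self.checksums = {}

    def write(self, addr: int, data: bytes) -> None:
        self.blocks[addr] = data
        # CRC32 catches random bit rot; defending against adversarial
        # tampering would require a cryptographic hash instead.
        self.checksums[addr] = zlib.crc32(data)

    def read(self, addr: int) -> bytes:
        data = self.blocks[addr]
        if zlib.crc32(data) != self.checksums[addr]:
            raise IOError(f"silent corruption detected in block {addr}")
        return data
```

The paper's contribution is in deciding when such checks can be replaced by cheaper ones based on the workload's read/write pattern; the sketch only shows the baseline check itself.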
Mianxiong Dong; He Li; Ota, K.; Haojin Zhu, "HVSTO: Efficient Privacy Preserving Hybrid Storage In Cloud Data Center," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on , vol., no., pp.529,534, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849287 In cloud data center, shared storage with good management is a main structure used for the storage of virtual machines (VM). In this paper, we proposed Hybrid VM storage (HVSTO), a privacy preserving shared storage system designed for the virtual machine storage in large-scale cloud data center. Unlike traditional shared storage, HVSTO adopts a distributed structure to preserve privacy of virtual machines, which are a threat in traditional centralized structure. To improve the performance of I/O latency in this distributed structure, we use a hybrid system to combine solid state disk and distributed storage. From the evaluation of our demonstration system, HVSTO provides a scalable and sufficient throughput for the platform as a service infrastructure.
Keywords: cloud computing; computer centers; data privacy; virtual machines; virtualization; HVSTO; I/O latency performance improvement; distributed storage; distributed structure; hybrid VM storage; large-scale cloud data center; privacy preserving hybrid storage; privacy preserving shared storage system; service infrastructure; solid state disk; virtual machine storage; Conferences; Data privacy; Indexes; Security; Servers; Virtual machining; Virtualization (ID#: 15-3730)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849287&isnumber=6849127
Chiang, R.; Rajasekaran, S.; Zhang, N.; Huang, H., "Swiper: Exploiting Virtual Machine Vulnerability in Third-Party Clouds with Competition for I/O Resources," Parallel and Distributed Systems, IEEE Transactions on, vol.PP, no.99, pp.1, 1, June 2014. doi: 10.1109/TPDS.2014.2325564 The emerging paradigm of cloud computing, e.g., Amazon Elastic Compute Cloud (EC2), promises a highly flexible yet robust environment for large-scale applications. Ideally, while multiple virtual machines (VM) share the same physical resources (e.g., CPUs, caches, DRAM, and I/O devices), each application should be allocated to an independently managed VM and isolated from one another. Unfortunately, the absence of physical isolation inevitably opens doors to a number of security threats. In this paper, we demonstrate in EC2 a new type of security vulnerability caused by competition between virtual I/O workloads - i.e., by leveraging the competition for shared resources, an adversary could intentionally slow down the execution of a targeted application in a VM that shares the same hardware. In particular, we focus on I/O resources such as hard-drive throughput and/or network bandwidth - which are critical for data-intensive applications. We design and implement Swiper, a framework which uses a carefully designed workload to incur significant delays on the targeted application and VM with minimum cost (i.e., resource consumption). We conduct a comprehensive set of experiments in EC2, which clearly demonstrates that Swiper is capable of significantly slowing down various server applications while consuming a small amount of resources.
Keywords: Cloud computing; Delays; IP networks; Security; Synchronization; Throughput; Virtualization (ID#: 15-3731)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6824231&isnumber=4359390
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Integrated Security |
Cybersecurity has spent much of the past two decades largely as a “bolt-on” product added as an afterthought. To get to composability, built-in, integrated security will be a key factor. The research cited here addresses issues in integrated security technologies and was presented in 2014.
Integrity of Outsourced Databases |
The growth of distributed storage systems such as the Cloud has produced novel security problems. The works cited here address untrusted servers, generic authenticated data structures, trust extension on commodity computers, defense against frequency-based attacks in wireless networks, and other topics. These articles were presented or published in the first half of 2014.
Matteo Maffei, Giulio Malavolta, Manuel Reinert, Dominique Schröder, “Brief Announcement: Towards Security And Privacy For Outsourced Data In The Multi-Party Setting,” Proceedings of the 2014 ACM Symposium On Principles Of Distributed Computing, July 2014, Pages 144-146. doi>10.1145/2611462.2611508 Cloud storage has rapidly acquired popularity among users, constituting a seamless solution for the backup, synchronization, and sharing of large amounts of data. This technology, however, puts user data in the direct control of cloud service providers, which raises increasing security and privacy concerns related to the integrity of outsourced data, the accidental or intentional leakage of sensitive information, the profiling of user activities and so on. We present GORAM, a cryptographic system that protects the secrecy and integrity of the data outsourced to an untrusted server and guarantees the anonymity and unlinkability of consecutive accesses to such data. GORAM allows the database owner to share outsourced data with other clients, selectively granting them read and write permissions. GORAM is the first system to achieve such a wide range of security and privacy properties for outsourced storage. Technically, GORAM builds on a combination of ORAM to conceal data accesses, attribute-based encryption to rule the access to outsourced data, and zero-knowledge proofs to prove read and write permissions in a privacy-preserving manner. We implemented GORAM and conducted an experimental evaluation to demonstrate its feasibility.
Keywords: GORAM, ORAM, cloud storage, oblivious ram, privacy-enhancing technologies (ID#: 15-3732)
URL: http://dl.acm.org/citation.cfm?id=2611462.2611508&coll=DL&dl=GUIDE&CFID=404518475&CFTOKEN=44609526 or http://doi.acm.org/10.1145/2611462.2611508
Andrew Miller, Michael Hicks, Jonathan Katz, Elaine Shi, “Authenticated Data Structures, Generically,” Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, January 2014, Pages 411-423. doi>10.1145/2535838.2535851 An authenticated data structure (ADS) is a data structure whose operations can be carried out by an untrusted prover, the results of which a verifier can efficiently check as authentic. This is done by having the prover produce a compact proof that the verifier can check along with each operation's result. ADSs thus support outsourcing data maintenance and processing tasks to untrusted servers without loss of integrity. Past work on ADSs has focused on particular data structures (or limited classes of data structures), one at a time, often with support only for particular operations. This paper presents a generic method, using a simple extension to an ML-like functional programming language we call λ• (lambda-auth), with which one can program authenticated operations over any data structure defined by standard type constructors, including recursive types, sums, and products. The programmer writes the data structure largely as usual and it is compiled to code to be run by the prover and verifier. Using a formalization of λ• we prove that all well-typed λ• programs result in code that is secure under the standard cryptographic assumption of collision-resistant hash functions. We have implemented λ• as an extension to the OCaml compiler, and have used it to produce authenticated versions of many interesting data structures including binary search trees, red-black trees, skip lists, and more. Performance experiments show that our approach is efficient, giving up little compared to the hand-optimized data structures developed previously.
Keywords: authenticated data structures, cryptography, programming languages, security (ID#: 15-3733)
URL: http://dl.acm.org/citation.cfm?id=2535838.2535851&coll=DL&dl=GUIDE&CFID=404518475&CFTOKEN=44609526 or http://doi.acm.org/10.1145/2535838.2535851
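The prover/verifier pattern this paper generalizes is classically instantiated by a Merkle hash tree: the verifier holds only a root digest, and the prover ships each query result with the sibling hashes needed to recompute that root. A minimal sketch of that classic case, in Python rather than the paper's λ•/OCaml and with hypothetical function names, might look like:

```python
import hashlib


def h(data: bytes) -> bytes:
    """Collision-resistant hash (SHA-256 stands in for the paper's assumption)."""
    return hashlib.sha256(data).digest()


def build_tree(leaves):
    """Prover: build a Merkle tree; returns the levels, leaves first."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) if i + 1 < len(level) else h(level[i])
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return levels


def prove(levels, index):
    """Prover: collect the sibling hash at each level along the path to the root."""
    proof = []
    for level in levels[:-1]:
        sibling = index ^ 1
        proof.append(level[sibling] if sibling < len(level) else None)
        index //= 2
    return proof


def verify(root, leaf, index, proof):
    """Verifier: recompute the root from the leaf and the compact proof."""
    node = h(leaf)
    for sibling in proof:
        if sibling is None:          # odd node promoted without a sibling
            node = h(node)
        elif index % 2 == 0:
            node = h(node + sibling)
        else:
            node = h(sibling + node)
        index //= 2
    return node == root
```

The paper's point is that λ• derives this kind of prover/verifier pair automatically for arbitrary recursive types, rather than hand-writing it per data structure as above.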
Lifei Wei, Haojin Zhu, Zhenfu Cao, Xiaolei Dong, Weiwei Jia, Yunlu Chen, Athanasios V. Vasilakos, “Security and Privacy For Storage And Computation In Cloud Computing,” Information Sciences: an International Journal, Volume 258, February, 2014, Pages 371-386. doi>10.1016/j.ins.2013.04.028 Cloud computing emerges as a new computing paradigm that aims to provide reliable, customized and quality of service guaranteed computation environments for cloud users. Applications and databases are moved to the large centralized data centers, called cloud. Due to resource virtualization, global replication and migration, the physical absence of data and machine in the cloud, the stored data in the cloud and the computation results may not be well managed and fully trusted by the cloud users. Most of the previous work on the cloud security focuses on the storage security rather than taking the computation security into consideration together. In this paper, we propose a privacy cheating discouragement and secure computation auditing protocol, or SecCloud, which is a first protocol bridging secure storage and secure computation auditing in cloud and achieving privacy cheating discouragement by designated verifier signature, batch verification and probabilistic sampling techniques. The detailed analysis is given to obtain an optimal sampling size to minimize the cost. Another major contribution of this paper is that we build a practical secure-aware cloud computing experimental environment, or SecHDFS, as a test bed to implement SecCloud. Further experimental results have demonstrated the effectiveness and efficiency of the proposed SecCloud.
Keywords: Batch verification, Cloud computing, Designated verifier signature, Privacy-cheating discouragement, Secure computation auditing, Secure storage (ID#: 15-3734)
URL: http://dl.acm.org/citation.cfm?id=2563733.2564107&coll=DL&dl=GUIDE&CFID=404518475&CFTOKEN=44609526 or http://dx.doi.org/10.1016/j.ins.2013.04.028
Bryan Jeffery Parno, Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers, ACM Press, New York, NY, 2014. ISBN: 978-1-62705-477-5 doi>10.1145/2611399 From the preface: As society rushes to digitize sensitive information and services, it is imperative that we adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers. In other words, consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low costs. Meanwhile, attempts to build secure systems from the ground up typically abandon such goals, and hence are seldom adopted [Karger et al. 1991, Gold et al. 1984, Ames 1981]. In this book, a revised version of my doctoral dissertation, originally written while studying at Carnegie Mellon University, I argue that we can resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. We support this premise over the course of the following chapters. • Introduction. This chapter introduces the notion of bootstrapping trust from one device or service to another and gives an overview of how the subsequent chapters fit together. • Background and related work. This chapter focuses on existing techniques for bootstrapping trust in commodity computers, specifically by conveying information about a computer's current execution environment to an interested party. This would, for example, enable a user to verify that her computer is free of malware, or that a remote web server will handle her data responsibly. • Bootstrapping trust in a commodity computer. At a high level, this chapter develops techniques to allow a user to employ a small, trusted, portable device to securely learn what code is executing on her local computer.
While the problem is simply stated, finding a solution that is both secure and usable with existing hardware proves quite difficult. • On-demand secure code execution. Rather than entrusting a user's data to the mountain of buggy code likely running on her computer, in this chapter, we construct an on-demand secure execution environment which can perform security-sensitive tasks and handle private data in complete isolation from all other software (and most hardware) on the system. Meanwhile, non-security-sensitive software retains the same abundance of features and performance it enjoys today. • Using trustworthy host data in the network. Having established an environment for secure code execution on an individual computer, this chapter shows how to extend trust in this environment to network elements in a secure and efficient manner. This allows us to reexamine the design of network protocols and defenses, since we can now execute code on end hosts and trust the results within the network. • Secure code execution on untrusted hardware. Lastly, this chapter extends the user's trust one more step to encompass computations performed on a remote host (e.g., in the cloud). We design, analyze, and prove secure a protocol that allows a user to outsource arbitrary computations to commodity computers run by an untrusted remote party (or parties) who may subject the computers to both software and hardware attacks. Our protocol guarantees that the user can both verify that the results returned are indeed the correct results of the specified computations on the inputs provided, and protect the secrecy of both the inputs and outputs of the computations. These guarantees are provided in a non-interactive, asymptotically optimal (with respect to CPU and bandwidth) manner.
Thus, extending a user's trust, via software, hardware, and cryptographic techniques, allows us to provide strong security protections for both local and remote computations on sensitive data, while still preserving the performance and features of commodity computers. (ID#: 15-3735)
URL: http://dl.acm.org/citation.cfm?id=2611399&coll=DL&dl=GUIDE&CFID=404518475&CFTOKEN=44609526
Hongbo Liu, Hui Wang, Yingying Chen, Dayong Jia, “Defending against Frequency-Based Attacks on Distributed Data Storage in Wireless Networks,” ACM Transactions on Sensor Networks (TOSN), Volume 10 Issue 3, April 2014, Article No. 49. doi>10.1145/2594774 As wireless networks become more pervasive, the amount of the wireless data is rapidly increasing. One of the biggest challenges of wide adoption of distributed data storage is how to store these data securely. In this work, we study the frequency-based attack, a type of attack that is different from previously well-studied ones, that exploits additional adversary knowledge of domain values and/or their exact/approximate frequencies to crack the encrypted data. To cope with frequency-based attacks, the straightforward 1-to-1 substitution encryption functions are not sufficient. We propose a data encryption strategy based on 1-to-n substitution via dividing and emulating techniques to defend against the frequency-based attack, while enabling efficient query evaluation over encrypted data. We further develop two frameworks, incremental collection and clustered collection, which are used to defend against the global frequency-based attack when the knowledge of the global frequency in the network is not available. Built upon our basic encryption schemes, we derive two mechanisms, direct emulating and dual encryption, to handle updates on the data storage for energy-constrained sensor nodes and wireless devices. Our preliminary experiments with sensor nodes and extensive simulation results show that our data encryption strategy can achieve high security guarantee with low overhead.
Keywords: Frequency-based attack, secure distributed data storage, wireless networks (ID#: 15-3736)
URL: http://dl.acm.org/citation.cfm?id=2619982.2594774&coll=DL&dl=GUIDE&CFID=404518475&CFTOKEN=44609526 or http://doi.acm.org/10.1145/2594774
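The 1-to-n substitution idea, flattening the ciphertext histogram by splitting frequent plaintext values across several tokens, can be sketched as follows. This is only an illustration of frequency flattening, not the authors' full dividing-and-emulating scheme; a real deployment would additionally encrypt the tokens under a key, and all names below are hypothetical.

```python
import random


def build_1_to_n_mapping(frequencies, target=1):
    """Give each plaintext value enough ciphertext tokens that every token's
    expected frequency is at most `target`, so all tokens look alike."""
    mapping = {}
    token = 0
    for value, freq in frequencies.items():
        n = max(1, -(-freq // target))        # ceil(freq / target)
        mapping[value] = list(range(token, token + n))
        token += n
    return mapping


def encrypt(value, mapping, rng=random):
    # Choose one of the value's tokens uniformly at random on each write,
    # so repeated plaintexts do not produce repeated ciphertexts.
    return rng.choice(mapping[value])


def decrypt(token, mapping):
    for value, tokens in mapping.items():
        if token in tokens:
            return value
    raise KeyError(token)
```

Against an adversary who knows that, say, "a" occurs four times as often as "c", the flattened token histogram no longer betrays which tokens encode "a".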
She-I Chang, David C. Yen, I-Cheng Chang, Derek Jan, “Internal Control Framework For A Compliant ERP System,” Information and Management, Volume 51 Issue 2, March, 2014, Pages 187-205. doi>10.1016/j.im.2013.11.002 After the occurrence of numerous worldwide financial scandals, the importance of related issues such as internal control and information security has greatly increased. This study develops an internal control framework that can be applied within an enterprise resource planning (ERP) system. A literature review is first conducted to examine the necessary forms of internal control in information technology (IT) systems. The control criteria for the establishment of the internal control framework are then constructed. A case study is conducted to verify the feasibility of the established framework. This study proposes a 12-dimensional framework with 37 control items aimed at helping auditors perform effective audits by inspecting essential internal control points in ERP systems. The proposed framework allows companies to enhance IT audit efficiency and mitigates control risk. Moreover, companies that refer to this framework and consider the limitations of their own IT management can establish a more robust IT management mechanism.
Keywords: Enterprise resource planning, IT control, Internal control framework (ID#: 15-3737)
URL: http://dl.acm.org/citation.cfm?id=2592290.2592340&coll=DL&dl=GUIDE&CFID=404518475&CFTOKEN=44609526 or http://dx.doi.org/10.1016/j.im.2013.11.002
Miyoung Jang; Min Yoon; Jae-Woo Chang, "A privacy-aware query authentication index for database outsourcing," Big Data and Smart Computing (BIGCOMP), 2014 International Conference on , vol., no., pp.72,76, 15-17 Jan. 2014. doi: 10.1109/BIGCOMP.2014.6741410 Recently, cloud computing has been spotlighted as a new paradigm of database management system. In this environment, databases are outsourced and deployed on a service provider in order to reduce cost for data storage and maintenance. However, the service provider might be untrusted so that the two issues of data security, including data confidentiality and query result integrity, become major concerns for users. Existing bucket-based data authentication methods have the problem that the original spatial data distribution can be disclosed from the data authentication index due to unsophisticated data grouping strategies. In addition, the transmission overhead of the verification object is high. In this paper, we propose a privacy-aware query authentication which guarantees data confidentiality and query result integrity for users. A periodic function-based data grouping scheme is designed to privately partition a spatial database into small groups for generating a signature of each group. The group signature is used to check the correctness and completeness of outsourced data when answering a range query to users. Through performance evaluation, it is shown that the proposed method outperforms the existing method in terms of range query processing time by up to 3 times.
Keywords: cloud computing; data integrity; data privacy; database indexing; digital signatures; outsourcing; query processing; visual databases; bucket-based data authentication methods; cloud computing; cost reduction ;data confidentiality; data maintenance; data security; data storage; database management system; database outsourcing; group signature; periodic function-based data grouping scheme; privacy-aware query authentication index; query result integrity; range query answering; service provider; spatial data distribution; spatial database; unsophisticated data grouping strategy; verification object transmission overhead; Authentication; Encryption; Indexes; Query processing; Spatial databases; Data authentication index; Database outsourcing; Encrypted database; Query result integrity (ID#: 15-3738)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6741410&isnumber=6741395
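The group-signature check for range-query completeness can be illustrated with a toy sketch: the owner partitions sorted data into groups and signs each group's digest, and the client rechecks every group overlapping the query range. An HMAC digest stands in for the paper's signatures and fixed-size groups for its periodic-function grouping; all names are hypothetical.

```python
import hashlib
import hmac


def group_signatures(points, group_size, key):
    """Owner: sort the data, partition it into groups, and tag each group's
    digest before outsourcing (HMAC stands in for a real signature)."""
    pts = sorted(points)
    groups = [pts[i:i + group_size] for i in range(0, len(pts), group_size)]
    sigs = [hmac.new(key, repr(g).encode(), hashlib.sha256).digest()
            for g in groups]
    return groups, sigs


def verify_range(groups, sigs, key, low, high, result):
    """Client: verify each returned group's tag, then check the result is
    exactly the in-range points of the verified groups (completeness)."""
    expected = []
    for g, s in zip(groups, sigs):
        if hmac.new(key, repr(g).encode(), hashlib.sha256).digest() != s:
            return False                      # tampered group: correctness fails
        expected.extend(p for p in g if low <= p <= high)
    return sorted(result) == expected
```

A server that silently drops a matching point fails the completeness check, because the verified groups prove what the full in-range answer must be.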
Omote, K.; Thao, T.P., "A New Efficient and Secure POR Scheme Based on Network Coding," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on , vol., no., pp.98,105, 13-16 May 2014. doi: 10.1109/AINA.2014.17 As information increases quickly, database owners have a tendency to outsource their data to an external service provider called Cloud Computing. Using the Cloud, clients can remotely store their data without the burden of local data storage and maintenance. However, such a service provider is untrusted; therefore there are some challenges in data security: integrity, availability and confidentiality. Since integrity and availability are prerequisite conditions of the existence of a system, we mainly focus on them rather than confidentiality. To ensure integrity and availability, researchers have proposed network coding-based POR (Proof of Retrievability) schemes that enable the servers to demonstrate whether the data is retrievable or not. However, most network coding-based POR schemes are inefficient in data checking and also cannot prevent a common attack in POR: the small corruption attack. In this paper, we propose a new network coding-based POR scheme using dispersal code in order to reduce cost in the checking phase and also to prevent the small corruption attack.
Keywords: cloud computing; data communication; network coding; security of data; cloud computing; corruption attack; corruption attack prevention; cost reduction; data availability; data checking; data confidentiality; data integrity; data security; dispersal code; efficient POR scheme; local data storage; maintenance; network coding-based POR; proof of retrievability; secure POR scheme; Availability; Decoding; Encoding; Maintenance engineering; Network coding; Servers; Silicon; data availability; data integrity; network coding; proof of retrievability (ID#: 15-3739)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838653&isnumber=6838626
Yinan Jing; Ling Hu; Wei-Shinn Ku; Shahabi, C., "Authentication of k Nearest Neighbor Query on Road Networks," Knowledge and Data Engineering, IEEE Transactions on , vol.26, no.6, pp.1494,1506, June 2014. doi: 10.1109/TKDE.2013.174 Outsourcing spatial databases to the cloud provides an economical and flexible way for data owners to deliver spatial data to users of location-based services. However, in the database outsourcing paradigm, the third-party service provider is not always trustworthy, therefore, ensuring spatial query integrity is critical. In this paper, we propose an efficient road network k-nearest-neighbor query verification technique which utilizes the network Voronoi diagram and neighbors to prove the integrity of query results. Unlike previous work that verifies k-nearest-neighbor results in the Euclidean space, our approach needs to verify both the distances and the shortest paths from the query point to its kNN results on the road network. We evaluate our approach on real-world road networks together with both real and synthetic points of interest datasets. Our experiments run on Google Android mobile devices which communicate with the service provider through wireless connections. The experiment results show that our approach leads to compact verification objects (VO) and the verification algorithm on mobile devices is efficient, especially for queries with low selectivity.
Keywords: computational geometry; outsourcing; query processing; smart phones; visual databases; Euclidean space; Google Android mobile devices; Voronoi diagram; database outsourcing paradigm; k nearest neighbor query; location-based services; road network k-nearest-neighbor query verification technique; spatial databases; spatial query integrity; third-party service provider; Artificial neural networks; Authentication; Generators; Outsourcing; Roads; Spatial databases; Spatial database outsourcing; location-based services; query authentication; road networks (ID#: 15-3740)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6658750&isnumber=6824283
Wang, H., "Identity-Based Distributed Provable Data Possession in Multi-Cloud Storage," Services Computing, IEEE Transactions on, vol. PP, no.99, pp.1,1, March 2014. doi: 10.1109/TSC.2014.1 Remote data integrity checking is of crucial importance in cloud storage. It can make the clients verify whether their outsourced data is kept intact without downloading the whole data. In some application scenarios, the clients have to store their data on multi-cloud servers. At the same time, the integrity checking protocol must be efficient in order to save the verifier’s cost. From the two points, we propose a novel remote data integrity checking model: ID-DPDP (identity-based distributed provable data possession) in multi-cloud storage. The formal system model and security model are given. Based on the bilinear pairings, a concrete ID-DPDP protocol is designed. The proposed ID-DPDP protocol is provably secure under the hardness assumption of the standard CDH (computational Diffie-Hellman) problem. In addition to the structural advantage of elimination of certificate management, our ID-DPDP protocol is also efficient and flexible. Based on the client’s authorization, the proposed ID-DPDP protocol can realize private verification, delegated verification and public verification.
Keywords: Cloud computing; Computational modeling; Distributed databases; Indexes; Protocols; Security; Servers (ID#: 15-3741)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6762896&isnumber=4629387
Jinguang Han; Susilo, W.; Yi Mu, "Identity-Based Secure Distributed Data Storage Schemes," Computers, IEEE Transactions on , vol.63, no.4, pp.941,953, April 2014. doi: 10.1109/TC.2013.26 Secure distributed data storage can shift the burden of maintaining a large number of files from the owner to proxy servers. Proxy servers can convert encrypted files for the owner to encrypted files for the receiver without the necessity of knowing the content of the original files. In practice, the original files will be removed by the owner for the sake of space efficiency. Hence, the issues on confidentiality and integrity of the outsourced data must be addressed carefully. In this paper, we propose two identity-based secure distributed data storage (IBSDDS) schemes. Our schemes can capture the following properties: (1) The file owner can decide the access permission independently without the help of the private key generator (PKG); (2) For one query, a receiver can only access one file, instead of all files of the owner; (3) Our schemes are secure against the collusion attacks, namely even if the receiver can compromise the proxy servers, he cannot obtain the owner's secret key. Although the first scheme is only secure against the chosen plaintext attacks (CPA), the second scheme is secure against the chosen ciphertext attacks (CCA). To the best of our knowledge, these are the first IBSDDS schemes where an access permission is made by the owner for an exact file and collusion attacks can be protected in the standard model.
Keywords: authorization; data integrity; distributed databases; file servers; private key cryptography; storage management; CCA; CPA; IBSDDS scheme; PKG; access permission; chosen ciphertext attack; chosen plaintext attack; collusion attacks; encrypted files conversion; file access; file maintenance; identity-based secure distributed data storage scheme; outsourced data confidentiality; outsourced data integrity; private key generator; proxy server; receiver; space efficiency; Educational institutions; Encryption; Memory; Receivers; Servers; Distributed data storage; access control; identity-based system; security (ID#: 15-3742)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6463376&isnumber=6774900
Al-Anzi, F.S.; Salman, A.A.; Jacob, N.K.; Soni, J., "Towards Robust, Scalable And Secure Network Storage In Cloud Computing," Digital Information and Communication Technology and it's Applications (DICTAP), 2014 Fourth International Conference on , vol., no., pp.51,55, 6-8 May 2014. doi: 10.1109/DICTAP.2014.6821656 The term Cloud Computing is not something that appeared overnight; it may come from the time when computer systems remotely accessed applications and services. Cloud computing is a ubiquitous technology receiving huge attention in the scientific and industrial community. Cloud computing is a ubiquitous, next-generation information technology architecture which offers on-demand access to the network. It is a dynamic, virtualized, scalable and pay-per-use model over the internet. In a cloud computing environment, a cloud service provider offers a “house of resources” including applications, data, runtime, middleware, operating system, virtualization, servers, data storage and sharing, and networking, and tries to take up most of the overhead of the client. Cloud computing offers lots of benefits, but the journey of the cloud is not very easy. It has several pitfalls along the road because most of the services are outsourced to third parties with an added level of risk. Cloud computing suffers from several issues, and among the most significant are security, privacy, service availability, confidentiality, integrity, authentication, and compliance. Security is a shared responsibility of both client and service provider, and we believe security must be information centric, adaptive, proactive and built in. Cloud computing and its security are emerging study areas nowadays. In this paper, we discuss data security in the cloud at the service provider end and propose a network storage architecture for data which ensures availability, reliability, scalability and security.
Keywords: cloud computing; data integrity; data privacy; security of data; storage management; ubiquitous computing; virtualisation; Internet; adaptive security; authentication; built in security; client overhead; cloud computing environment; cloud service provider; compliance; confidentiality; data security; data sharing; data storage; information centric security; integrity; middleware; network storage architecture; networking; on-demand access; operating system; pay per use model; privacy; proactive security; remote application access; remote service access; robust scalable secure network storage; server; service availability; service outsourcing; ubiquitous next generation information technology architecture virtualization; Availability; Cloud computing; Computer architecture; Data security; Distributed databases; Servers; Cloud Computing; Data Storage; Data security; RAID (ID#: 15-3743)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821656&isnumber=6821645
Intrusion Tolerance (2014) |
This bibliography is a 2014 year in review collection. Intrusion tolerance refers to a fault-tolerant design approach to defending communications, computer and other information systems against malicious attack. Rather than detecting all anomalies, tolerant systems only identify those intrusions which lead to security failures. This collection cites publications of interest addressing new methods of building secure fault tolerant systems.
Wei Min; Keecheon Kim, "Intrusion Tolerance Mechanisms Using Redundant Nodes for Wireless Sensor Networks," Information Networking (ICOIN), 2014 International Conference on, pp. 131-135, 10-12 February 2014. doi: 10.1109/ICOIN.2014.6799679 Wireless sensor networks extend people's ability to explore, monitor, and control the physical world. They are susceptible to certain types of attacks because they are deployed in open and unprotected environments. A novel intrusion tolerance architecture is proposed in this paper, introducing an expert intrusion detection analysis system and an all-channel analyzer, and the proposed intrusion tolerance scheme is implemented. Results show that this scheme can detect data traffic and re-route it to a redundant node in the wireless network, prolong the lifetime of the network, and isolate malicious traffic introduced through compromised nodes or illegal intrusions.
Keywords: data communication; telecommunication channels; telecommunication network routing; telecommunication security; telecommunication traffic; wireless sensor networks; all-channel analyzer; data traffic detection; expert intrusion detection analysis system; intrusion tolerance architecture; intrusion tolerance mechanisms; re-route detection; redundant node; redundant nodes; wireless sensor networks; Intrusion detection; Monitoring; Protocols; Routing; Wireless networks; Wireless sensor networks; Wireless Sensor networks; intrusion tolerance; security (ID#: 15-3645)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799679&isnumber=6799467
Hemalatha, A.; Venkatesh, R., "Redundancy Management In Heterogeneous Wireless Sensor Networks," Communications and Signal Processing (ICCSP), 2014 International Conference on, pp. 1849-1853, 3-5 April 2014. doi: 10.1109/ICCSP.2014.6950165 A wireless sensor network is a special type of ad hoc network composed of a large number of sensor nodes spread over a wide geographical area. Each sensor node has wireless communication capability and sufficient intelligence for signal processing and dissemination of data to the collection center. This paper deals with redundancy management for improving network efficiency and query reliability in heterogeneous wireless sensor networks. The proposed scheme finds a reliable path using a redundancy management algorithm and detects unreliable nodes, discarding the paths through them. The redundancy management algorithm finds the reliable path based on the redundancy level and the average distance between a source node and a destination node, and analyzes the redundancy level as path and source redundancy. For finding the path from the source cluster head (CH) to the processing center, we propose intrusion tolerance in the presence of unreliable nodes. Finally, we apply the analyzed results to the redundancy management algorithm to find the reliable path, improving network efficiency and query success probability.
Keywords: ad hoc networks; probability; queueing theory; redundancy; signal processing; telecommunication network reliability; wireless sensor networks; ad hoc network; destination node; geographical area; heterogeneous wireless sensor networks; intrusion tolerance; network efficiency; query reliability; query success probability; redundancy management algorithm; signal dissemination; signal processing; source node; unreliable nodes detection; Ad hoc networks; Indexes; Quality of service; Redundancy; Tin; Wireless sensor networks; intrusion tolerance; multipath routing; reliability; wireless sensor network (ID#: 15-3646)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950165&isnumber=6949766
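The paper's redundancy-level and distance formulas are not reproduced in the abstract. The following hedged sketch illustrates only the general idea of discarding paths through unreliable nodes and choosing among the remaining candidates; the scoring rule (shortest survivor) is an assumption, not the authors' algorithm:

```python
# Hedged sketch of the path-selection idea in the abstract: discard
# candidate paths containing nodes flagged as unreliable, then prefer
# shorter paths (fewer hops, less energy) among the survivors.

def select_reliable_path(paths, unreliable):
    """Pick the shortest candidate path that avoids all unreliable nodes."""
    viable = [p for p in paths if not (set(p) & unreliable)]
    if not viable:
        return None  # all candidates compromised: raise the redundancy level
    return min(viable, key=len)

if __name__ == "__main__":
    paths = [["src", "a", "b", "dst"],
             ["src", "c", "dst"],
             ["src", "d", "e", "f", "dst"]]
    # The short path runs through the unreliable node 'c' and is discarded.
    print(select_reliable_path(paths, unreliable={"c"}))
    # -> ['src', 'a', 'b', 'dst']
```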
Ing-Ray Chen; Jia Guo, "Dynamic Hierarchical Trust Management of Mobile Groups and Its Application to Misbehaving Node Detection," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp. 49-56, 13-16 May 2014. doi: 10.1109/AINA.2014.13 In military operations or emergency response situations, a commander frequently must assemble and dynamically manage Community of Interest (COI) mobile groups to achieve a critical mission despite failure, disconnection, or compromise of COI members. We combine the designs of COI hierarchical management, for scalability and reconfigurability, with COI dynamic trust management, for survivability and intrusion tolerance, to compose a scalable, reconfigurable, and survivable COI management protocol for managing COI mission-oriented mobile groups in heterogeneous mobile environments. A COI mobile group in this environment consists of heterogeneous mobile entities, such as personnel or robots carrying communication devices and aerial or ground vehicles operated by humans, exhibiting not only quality-of-service (QoS) characteristics, e.g., competence and cooperativeness, but also social behaviors, e.g., connectivity, intimacy, and honesty. A COI commander or a subtask leader must measure trust with both social and QoS cognition, depending on mission task characteristics and/or trustee properties, to ensure successful mission execution. In this paper, we present a dynamic hierarchical trust management protocol that can learn from past experience and adapt to changing environment conditions, e.g., increasing misbehaving node population, evolving hostility, and node density, to enhance agility and maximize application performance.
With trust-based misbehaving node detection as an application, we demonstrate how our proposed COI trust management protocol is resilient to node failure, disconnection and capture events, and can help maximize application performance in terms of minimizing false negatives and positives in the presence of mobile nodes exhibiting vastly distinct QoS and social behaviors.
Keywords: emergency services; military communication; mobile computing; protocols; quality of service; telecommunication security; trusted computing; COI dynamic hierarchical trust management protocol; COI mission-oriented mobile group management; aerial vehicles; agility enhancement; application performance maximization; communication-device-carried personnel; community-of-interest mobile groups; competence; connectivity; cooperativeness; emergency response situations; ground vehicles; heterogeneous mobile entities; heterogeneous mobile environments; honesty; intimacy; intrusion tolerance; military operation; misbehaving node population; node density; quality-of-service characters; robots; social behaviors; survivable COI management protocol; trust measurement; trust-based misbehaving node detection; Equations; Mathematical model; Mobile communication; Mobile computing; Peer-to-peer computing; Protocols; Quality of service; Trust management; adaptability; community of interest; intrusion detection; performance analysis; scalability (ID#: 15-3647)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838647&isnumber=6838626
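The protocol's actual trust equations are not reproduced in the abstract. The sketch below is a hypothetical illustration of the general idea of blending QoS trust with social trust to flag misbehaving nodes; the component names, weights, and threshold are all assumptions:

```python
# Illustrative sketch (weights, components, and threshold are assumptions,
# not the paper's protocol): a leader scores each COI member by combining
# QoS trust (e.g., competence, cooperativeness) with social trust (e.g.,
# connectivity, honesty), then flags low-scoring nodes as misbehaving.

def trust_score(qos, social, w_qos=0.5):
    """Weighted blend of mean QoS trust and mean social trust, each in [0,1]."""
    q = sum(qos.values()) / len(qos)
    s = sum(social.values()) / len(social)
    return w_qos * q + (1 - w_qos) * s

def flag_misbehaving(members, threshold=0.5, w_qos=0.5):
    """Return names of members whose blended trust falls below the threshold."""
    return [name for name, (qos, social) in members.items()
            if trust_score(qos, social, w_qos) < threshold]

if __name__ == "__main__":
    members = {
        "uav1": ({"competence": 0.9, "cooperativeness": 0.8},
                 {"connectivity": 0.7, "honesty": 0.9}),
        "node7": ({"competence": 0.3, "cooperativeness": 0.2},
                  {"connectivity": 0.4, "honesty": 0.1}),
    }
    print(flag_misbehaving(members))  # -> ['node7']
```

In the paper the weight between social and QoS cognition adapts to mission characteristics; here it is a fixed parameter purely for illustration.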
Myalapalli, V.K.; Chakravarthy, A.S.N., "A Unified Model For Cherishing Privacy In Database System An Approach To Overhaul Vulnerabilities," Networks & Soft Computing (ICNSC), 2014 First International Conference on, pp. 263-266, 19-20 Aug. 2014. doi: 10.1109/CNSC.2014.6906658 Privacy is the most anticipated aspect in many perspectives especially with sensitive data and the database is being targeted incessantly for vulnerability. The database must be persistently monitored for ensuring comprehensive security. The proposed model is intended to cherish the database privacy by thwarting intrusions and inferences. The Database Static protection and Intrusion Tolerance Subsystem proposed in the architecture bolster this practice. This paper enunciates Privacy Cherished Database architecture model and how it achieves security under sundry circumstances.
Keywords: data privacy; database management systems; security of data; database static protection; database system privacy; inference thwarting; intrusion thwarting; intrusion tolerance subsystem; privacy cherished database architecture model; security; Decision support systems; Handheld computers; Database Security; Database Security Configurations; Inference Detection; Intrusion detection; security policy (ID#: 15-3648)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906658&isnumber=6906636
Wenbing Zhao, "Application-Aware Byzantine Fault Tolerance," Dependable, Autonomic and Secure Computing (DASC), 2014 IEEE 12th International Conference on, pp. 45-50, 24-27 Aug. 2014. doi: 10.1109/DASC.2014.17 Byzantine fault tolerance has been intensively studied over the past decade as a way to enhance the intrusion resilience of computer systems. However, state-machine-based Byzantine fault tolerance algorithms require deterministic application processing and sequential execution of totally ordered requests. One way of increasing the practicality of Byzantine fault tolerance is to exploit the application semantics, which we refer to as application-aware Byzantine fault tolerance. Application-aware Byzantine fault tolerance makes it possible to facilitate concurrent processing of requests, to minimize the use of Byzantine agreement, and to identify and control replica nondeterminism. In this paper, we provide an overview of recent works on application-aware Byzantine fault tolerance techniques. We elaborate the need for exploiting application semantics for Byzantine fault tolerance and the benefits of doing so, provide a classification of various approaches to application-aware Byzantine fault tolerance, and outline the mechanisms used in achieving application-aware Byzantine fault tolerance according to our classification.
Keywords: client-server systems; concurrency control; finite state machines; security of data; software fault tolerance; Byzantine agreement; application semantics; application-aware Byzantine fault tolerance; computer system intrusion resilience enhancement; deterministic application processing; replica nondeterminism; request concurrent processing; sequential execution; state-machine-based Byzantine fault tolerance algorithm; totally ordered request; Algorithm design and analysis; Fault tolerance; Fault tolerant systems; Message systems; Semantics; Servers; System recovery; Application Nondeterminism; Application Semantics; Application-Aware Byzantine Fault Tolerance; Deferred Byzantine Agreement; Dependability; Intrusion Resilience (ID#: 15-3649)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6945302&isnumber=6945641
Fonseca, J.; Seixas, N.; Vieira, M.; Madeira, H., "Analysis of Field Data on Web Security Vulnerabilities," Dependable and Secure Computing, IEEE Transactions on, vol. 11, no. 2, pp. 89-100, March-April 2014. doi: 10.1109/TDSC.2013.37 Most web applications have critical bugs (faults) affecting their security, which makes them vulnerable to attacks by hackers and organized crime. To prevent these security problems from occurring it is of utmost importance to understand the typical software faults. This paper contributes to this body of knowledge by presenting a field study on two of the most widely spread and critical web application vulnerabilities: SQL Injection and XSS. It analyzes the source code of security patches of widely used Web applications written in weak and strong typed languages. Results show that only a small subset of software fault types, affecting a restricted collection of statements, is related to security. To understand how these vulnerabilities are really exploited by hackers, this paper also presents an analysis of the source code of the scripts used to attack them. The outcomes of this study can be used to train software developers and code inspectors in the detection of such faults and are also the foundation for the research of realistic vulnerability and attack injectors that can be used to assess security mechanisms, such as intrusion detection systems, vulnerability scanners, and static code analyzers.
Keywords: Internet; SQL; security of data; software fault tolerance; source code (software); SQL injection; Web application vulnerabilities; Web security vulnerabilities; XSS; attack injectors; code inspectors; field data analysis; intrusion detection systems; realistic vulnerability; security mechanisms; security patches; software faults; source code; static code analyzers; vulnerability scanners; Awards activities; Blogs; Internet; Java; Security; Software; Internet applications; Security; languages; review and evaluation (ID#: 15-3650)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6589556&isnumber=6785951
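For readers unfamiliar with the fault class the study examines, the following hypothetical example (not taken from the paper) shows the canonical SQL injection fault and the typical one-statement security patch:

```python
# A canonical instance of the fault class studied above: building a SQL
# statement by string concatenation creates an injection point, and the
# typical security patch replaces it with a parameterized query. The
# table and function names here are illustrative.
import sqlite3

def find_user_vulnerable(conn, username):
    # FAULT: attacker-controlled input is concatenated into the statement;
    # username = "x' OR '1'='1" makes the WHERE clause match every row.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'").fetchall()

def find_user_patched(conn, username):
    # FIX: placeholder binding keeps the input as pure data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])
    evil = "x' OR '1'='1"
    print(len(find_user_vulnerable(conn, evil)))  # 2: injection succeeded
    print(len(find_user_patched(conn, evil)))     # 0: input treated as data
```

The study's finding that a small set of fault types dominates is consistent with this pattern: the vulnerable and patched versions differ in a single statement.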
Hua Chai; Wenbing Zhao, "Towards Trustworthy Complex Event Processing," Software Engineering and Service Science (ICSESS), 2014 5th IEEE International Conference on, pp. 758-761, 27-29 June 2014. doi: 10.1109/ICSESS.2014.6933677 Complex event processing has become an important technology for big data and intelligent computing because it facilitates the creation of actionable, situational knowledge from a potentially large volume of events in soft realtime. Complex event processing can be instrumental for many mission-critical applications, such as business intelligence, algorithmic stock trading, and intrusion detection. Hence, the servers that carry out complex event processing must be made trustworthy. In this paper, we present a threat analysis on complex event processing systems and describe a set of mechanisms that can be used to control various threats. By exploiting the application semantics for typical event processing operations, we are able to design lightweight mechanisms that incur minimum runtime overhead appropriate for soft realtime computing.
Keywords: Big Data; trusted computing; Big Data; actionable situational knowledge; algorithmic stock trading; application semantics; business intelligence; complex event processing; event processing operations; intelligent computing; intrusion detection; minimum runtime overhead; mission-critical applications; servers; soft realtime computing; threat analysis; trustworthy; Business; Context; Fault tolerance; Fault tolerant systems; Runtime; Servers; Synchronization; Big Data; Business Intelligence; Byzantine Fault Tolerance; Complex Event Processing; Dependable Computing; Trust (ID#: 15-3652)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933677&isnumber=6933501
Fonseca, J.; Vieira, M.; Madeira, H., "Evaluation of Web Security Mechanisms Using Vulnerability & Attack Injection," Dependable and Secure Computing, IEEE Transactions on, vol. 11, no. 5, pp. 440-453, Sept.-Oct. 2014. doi: 10.1109/TDSC.2013.45 In this paper we propose a methodology and a prototype tool to evaluate web application security mechanisms. The methodology is based on the idea that injecting realistic vulnerabilities in a web application and attacking them automatically can be used to support the assessment of existing security mechanisms and tools in custom setup scenarios. To provide true to life results, the proposed vulnerability and attack injection methodology relies on the study of a large number of vulnerabilities in real web applications. In addition to the generic methodology, the paper describes the implementation of the Vulnerability & Attack Injector Tool (VAIT) that allows the automation of the entire process. We used this tool to run a set of experiments that demonstrate the feasibility and the effectiveness of the proposed methodology. The experiments include the evaluation of coverage and false positives of an intrusion detection system for SQL Injection attacks and the assessment of the effectiveness of two top commercial web application vulnerability scanners. Results show that the injection of vulnerabilities and attacks is indeed an effective way to evaluate security mechanisms and to point out not only their weaknesses but also ways for their improvement.
Keywords: Internet; SQL; fault diagnosis; security of data; software fault tolerance; SQL Injection attacks; VAIT; Web application security mechanism evaluation; attack injection methodology; fault injection; intrusion detection system; vulnerability injection methodology; vulnerability-&-attack injector tool; Databases; Educational institutions; Input variables; Probes; Security; Software; TV; Security; fault injection; internet applications; review and evaluation (ID#: 15-3653)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6629992&isnumber=6893064
Kirsch, J.; Goose, S.; Amir, Y.; Dong Wei; Skare, P., "Survivable SCADA Via Intrusion-Tolerant Replication," Smart Grid, IEEE Transactions on, vol. 5, no. 1, pp. 60-70, Jan. 2014. doi: 10.1109/TSG.2013.2269541 Providers of critical infrastructure services strive to maintain the high availability of their SCADA systems. This paper reports on our experience designing, architecting, and evaluating the first survivable SCADA system: one that is able to ensure correct behavior with minimal performance degradation even during cyber attacks that compromise part of the system. We describe the challenges we faced when integrating modern intrusion-tolerant protocols with a conventional SCADA architecture and present the techniques we developed to overcome these challenges. The results illustrate that our survivable SCADA system not only functions correctly in the face of a cyber attack, but that it also processes in excess of 20,000 messages per second with a latency of less than 30 ms, making it suitable for even large-scale deployments managing thousands of remote terminal units.
Keywords: SCADA systems; fault tolerance; production engineering computing; security of data; SCADA architecture; cyberattacks; intrusion-tolerant protocols; intrusion-tolerant replication; performance degradation; survivable SCADA system; Clocks; Libraries; Monitoring; Protocols; SCADA systems; Servers; Synchronization; Cyberattack; SCADA systems; fault tolerance; reliability; resilience; survivability (ID#: 15-3654)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6576306&isnumber=6693741
Di Benedetto, M.D.; D'Innocenzo, A.; Smarra, F., "Fault-tolerant Control Of A Wireless HVAC Control System," Communications, Control and Signal Processing (ISCCSP), 2014 6th International Symposium on, pp. 235-238, 21-23 May 2014. doi: 10.1109/ISCCSP.2014.6877858 In this paper we address the problem of designing a fault-tolerant control scheme for an HVAC control system in which sensing and actuation data are exchanged with a centralized controller via a wireless sensor and actuator network whose communication nodes are subject to permanent failures and malicious intrusions.
Keywords: HVAC; actuators; building management systems; failure analysis; fault tolerant control; wireless sensor networks; actuators network; centralized controller; communication nodes; fault tolerant control scheme; fault-tolerant control; malicious intrusions; permanent failures; sensing and actuation data; wireless HVAC control system; wireless sensors; Atmospheric modeling; Control systems; Fault tolerance; Fault tolerant systems; Sensors; Wireless communication; Wireless sensor networks; Building automation; fault detection; wireless sensor networks (ID#: 15-3655)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6877858&isnumber=6877795
Power Grid Security |
Cyber-Physical Systems such as the power grid are complex networks linked with cyber capabilities. The complexity and potential consequences of cyber-attacks on the grid make them an important area for scientific research. The articles cited below appeared in 2014.
Software Tamper Resistance |
Software tampering and reverse engineering of code create financial concerns for software developers and open avenues for malicious code injection. The three articles cited here from 2014 address code obfuscation, AES, and fault analysis.
Yoshikawa, M.; Goto, H.; Asahi, K., "Error Value Driven Fault Analysis Attack," Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2014 15th IEEE/ACIS International Conference on, pp. 1-4, June 30-July 2, 2014. doi: 10.1109/SNPD.2014.6888689 The advanced encryption standard (AES) has been studied extensively, and breaking it by direct cryptanalysis is considered computationally infeasible. However, its vulnerability to fault analysis attacks has been pointed out in recent years. To verify the vulnerability of future electronic devices incorporating cryptographic circuits, fault analysis attacks must be thoroughly studied. The present study proposes a new fault analysis attack method that exploits the tendency of operation errors caused by glitches. The present study also verifies the validity of the proposed method through evaluation experiments on an FPGA.
Keywords: cryptography; field programmable gate arrays; AES; advanced encryption standard; cryptographic circuits; error value driven fault analysis attack method; Ciphers; Circuit faults; Encryption; Equations; Field programmable gate arrays; Standards; Error value; Fault analysis attacks; Side-channel attack; Tamper resistance (ID#: 15-3656)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6888689&isnumber=6888665
Ketenci, S.; Ulutas, G.; Ulutas, M., "Detection of Duplicated Regions In Images Using 1D-Fourier Transform," Systems, Signals and Image Processing (IWSSIP), 2014 International Conference on, pp. 171-174, 12-15 May 2014. Large numbers of digital images and videos are acquired, stored, processed, and shared nowadays. High quality imaging hardware and low cost, user friendly image editing software make digital media vulnerable to modification. One of the most popular image modification techniques is copy-move forgery. This tampering technique copies part of an image and pastes it onto another part of the same image to conceal or replicate some part of the image. Researchers have recently proposed many techniques to detect copy-move forged regions of images. These methods divide the image into overlapping blocks and extract features to determine the similarity among groups of blocks. The choice of feature extraction algorithm plays an important role in the accuracy of detection methods. Here, column averages of the 1D Fourier transform (1D-FT) of the rows are used to extract features from the overlapping blocks: blocks are transformed into the frequency domain using the 1D-FT of their rows, and the average values of the transformed columns form the feature vectors. Similarity of feature vectors indicates possible forged regions. Results show that the proposed method can detect copy-pasted regions with higher accuracy than similar works reported in the literature. The method is also more resistant to Gaussian blurring and JPEG compression attacks, as shown in the results.
Keywords: Fourier transforms; feature extraction; frequency-domain analysis; image recognition; 1D-Fourier transform; Gaussian blurring; JPEG compression attacks; copy move forged region detection; digital images; digital mediums; duplicated region detection; feature extraction algorithm; feature vector similarity; frequency domain; high quality imaging hardware; image modification techniques; overlapping blocks; tampering technique; user friendly image editing software; Authentication; Digital images; Image coding; Resistance; Copy move forgery; Fourier transform; Gaussian Blurring (ID#: 15-3657)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6837658&isnumber=6837609
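As an illustration of the feature-extraction step described above (the block size and the exact-match similarity test are assumptions; the paper's parameters and thresholding are not given in the abstract):

```python
# Sketch of the abstract's feature extraction: slide an overlapping window
# over the image, take the 1D Fourier transform of each row of the block,
# average the transform magnitudes column-wise into a feature vector, and
# flag block pairs with matching features as candidate copy-move regions.
import cmath

def dft_row(row):
    """Magnitudes of the 1D discrete Fourier transform of one block row."""
    n = len(row)
    return [abs(sum(row[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def block_feature(block):
    """Column-wise average of the 1D-FT magnitudes of the block's rows."""
    spectra = [dft_row(r) for r in block]
    return [sum(col) / len(spectra) for col in zip(*spectra)]

def find_duplicates(image, bsize=4):
    """Return groups of top-left coordinates of blocks with identical features."""
    h, w = len(image), len(image[0])
    feats = {}
    for y in range(h - bsize + 1):
        for x in range(w - bsize + 1):
            block = [row[x:x + bsize] for row in image[y:y + bsize]]
            key = tuple(round(v, 6) for v in block_feature(block))
            feats.setdefault(key, []).append((y, x))
    return [locs for locs in feats.values() if len(locs) > 1]

if __name__ == "__main__":
    # Columns 0-3 are copied to columns 8-11, simulating a copy-move forgery.
    row = [1, 2, 3, 4, 5, 6, 7, 9, 1, 2, 3, 4]
    image = [row[:] for _ in range(4)]
    print(find_duplicates(image))  # -> [[(0, 0), (0, 8)]]
```

A real implementation would use an FFT library, lexicographic sorting of the feature vectors, and a distance threshold rather than exact equality, which is what gives the method its reported robustness to blurring and JPEG compression.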
Kulkarni, A.; Metta, R., "A New Code Obfuscation Scheme for Software Protection," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp. 409-414, 7-11 April 2014. doi: 10.1109/SOSE.2014.57 IT industry loses tens of billions of dollars annually from security attacks such as tampering and malicious reverse engineering. Code obfuscation techniques counter such attacks by transforming code into patterns that resist the attacks. None of the current code obfuscation techniques satisfy all the obfuscation effectiveness criteria such as resistance to reverse engineering attacks and state space increase. To address this, we introduce new code patterns that we call nontrivial code clones and propose a new obfuscation scheme that combines nontrivial clones with existing obfuscation techniques to satisfy all the effectiveness criteria. The nontrivial code clones need to be constructed manually, thus adding to the development cost. This cost can be limited by cloning only the code fragments that need protection and by reusing the clones across projects. This makes it worthwhile considering the security risks. In this paper, we present our scheme and illustrate it with a toy example.
Keywords: computer crime; reverse engineering; software engineering; systems re-engineering; IT industry; code fragment cloning; code obfuscation scheme; code patterns; code transformation; malicious reverse engineering; nontrivial code clones; security attacks; software protection; tampering; Cloning; Complexity theory; Data processing; Licenses; Resistance; Resists; Software (ID#: 15-3658)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830939&isnumber=6825948
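The paper's nontrivial code clones are hand-crafted and not reproduced in the abstract. The sketch below illustrates a simpler, well-known member of the same obfuscation family, an opaque predicate, to show how behavior-preserving but analysis-resistant control flow can be added to protected code:

```python
# Illustrative sketch (not the paper's scheme): an opaque predicate is a
# condition that always evaluates the same way at runtime but is hard to
# resolve statically, so inserting it adds control flow -- and state space
# -- without changing observable behavior.

def opaque_true(x):
    # x*(x+1) is the product of consecutive integers, hence always even,
    # so this predicate is constantly True for any integer x; a static
    # analyzer must reason about the arithmetic to prove it.
    return (x * (x + 1)) % 2 == 0

def check_license_plain(key):
    return key == "SECRET"

def check_license_obfuscated(key, seed=12345):
    if opaque_true(seed):
        return key == "SECRET"    # the real branch, always taken
    # Dead decoy branch: never executes, but inflates the state space
    # an attacker must explore.
    return key[::-1] == "TERCES"

if __name__ == "__main__":
    for k in ("SECRET", "guess"):
        assert check_license_plain(k) == check_license_obfuscated(k)
    print("behavior preserved")
```

The paper's nontrivial clones go further: both branches are live, semantically equivalent variants, which is what defeats clone-detection-based deobfuscation.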
Theoretical Foundations for Software |
Theory work helps enhance our understanding of basic principles. Much interest has developed around the theoretical foundations of software, which have direct and indirect implications for cyber security. The research cited here appeared in 2014 and includes such topics as malware propagation and mutant measurements.
Shigen Shen; Hongjie Li; Risheng Han; Vasilakos, A.V.; Yihan Wang; Qiying Cao, "Differential Game-Based Strategies for Preventing Malware Propagation in Wireless Sensor Networks," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 11, pp. 1962-1973, Nov. 2014. doi: 10.1109/TIFS.2014.2359333 Wireless sensor networks (WSNs) are prone to propagating malware because of special characteristics of sensor nodes. Considering the fact that sensor nodes periodically enter sleep mode to save energy, we develop traditional epidemic theory and construct a malware propagation model consisting of seven states. We formulate differential equations to represent the dynamics between states. We view the decision-making problem between system and malware as an optimal control problem; therefore, we formulate a malware-defense differential game in which the system can dynamically choose its strategies to minimize the overall cost whereas the malware intelligently varies its strategies over time to maximize this cost. We prove the existence of the saddle-point in the game. Further, we attain optimal dynamic strategies for the system and malware, which are bang-bang controls that can be conveniently operated and are suitable for sensor nodes. Experiments identify factors that influence the propagation of malware. We also determine that optimal dynamic strategies can reduce the overall cost to a certain extent and can suppress the malware propagation. These results support a theoretical foundation to limit malware in WSNs.
Keywords: bang-bang control; differential games; invasive software; telecommunication control; telecommunication security; wireless sensor networks; WSN; bang-bang controls; decision-making problem; differential equations; differential game-based strategy; malware propagation model; malware propagation prevention; malware-defense differential game; optimal control problem; optimal dynamic strategy; overall cost minimization; saddle-point; sensor node characteristics; sleep mode; traditional epidemic theory; wireless sensor networks; Control systems; Games; Grippers; Malware; Silicon; Wireless sensor networks; Differential game; Malware propagation; epidemic theory; wireless sensor networks (ID#: 15-3659)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6905838&isnumber=6912034
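The paper's seven-state model and game-theoretic controls are not given in the abstract. As a hedged illustration of the underlying epidemic dynamics, a basic three-state (SIR-style) model integrated with Euler steps already shows the effect the defense strategies optimize: raising the patching/recovery rate suppresses the infected peak.

```python
# Illustrative three-state epidemic ODE (the paper's model has seven states
# and time-varying game strategies; this simplification only shows the kind
# of dynamics involved). `beta` is the infection rate, `gamma` the
# recovery/patching rate; states are fractions of susceptible, infected,
# and recovered nodes.

def simulate(beta, gamma, s0=0.99, i0=0.01, dt=0.01, steps=5000):
    """Integrate the SIR equations with Euler steps; return the peak infected fraction."""
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        peak = max(peak, i)
    return peak

if __name__ == "__main__":
    lazy = simulate(beta=0.5, gamma=0.1)    # weak defense: large outbreak
    active = simulate(beta=0.5, gamma=0.4)  # aggressive patching: small outbreak
    print(lazy > active)  # True: a higher recovery rate lowers the peak
```

The paper's contribution is choosing such control parameters optimally over time (bang-bang controls) against a strategic adversary, rather than fixing them in advance as done here.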
Baraldi, A.; Boschetti, L.; Humber, M.L., "Probability Sampling Protocol for Thematic and Spatial Quality Assessment of Classification Maps Generated From Spaceborne/Airborne Very High Resolution Images," Geoscience and Remote Sensing, IEEE Transactions on, vol. 52, no. 1, pp. 701-760, Jan. 2014. doi: 10.1109/TGRS.2013.2243739 To deliver sample estimates provided with the necessary probability foundation to permit generalization from the sample data subset to the whole target population being sampled, probability sampling strategies are required to satisfy three necessary (but not sufficient) conditions: 1) All inclusion probabilities must be greater than zero in the target population to be sampled. If some sampling units have an inclusion probability of zero, then a map accuracy assessment does not represent the entire target region depicted in the map to be assessed. 2) The inclusion probabilities must be: a) knowable for nonsampled units and b) known for those units selected in the sample: since the inclusion probability determines the weight attached to each sampling unit in the accuracy estimation formulas, if the inclusion probabilities are unknown, so are the estimation weights. This original work presents a novel (to the best of these authors' knowledge, the first) probability sampling protocol for quality assessment and comparison of thematic maps generated from spaceborne/airborne very high resolution images, where: 1) an original Categorical Variable Pair Similarity Index (proposed in two different formulations) is estimated as a fuzzy degree of match between a reference and a test semantic vocabulary, which may not coincide, and 2) both symbolic pixel-based thematic quality indicators (TQIs) and sub-symbolic object-based spatial quality indicators (SQIs) are estimated with a degree of uncertainty in measurement in compliance with the well-known Quality Assurance Framework for Earth Observation (QA4EO) guidelines.
Like a decision-tree, any protocol (guidelines for best practice) comprises a set of rules, equivalent to structural knowledge, and an order of presentation of the rule set, known as procedural knowledge. The combination of these two levels of knowledge makes an original protocol worth more than the sum of its parts. The several degrees of novelty of the proposed probability sampling protocol are highlighted in this paper, at the levels of understanding of both structural and procedural knowledge, in comparison with related multi-disciplinary works selected from the existing literature. In the experimental session, the proposed protocol is tested for accuracy validation of preliminary classification maps automatically generated by the Satellite Image Automatic Mapper (SIAM™) software product from two WorldView-2 images and one QuickBird-2 image provided by DigitalGlobe for testing purposes. In these experiments, collected TQIs and SQIs are statistically valid, statistically significant, consistent across maps, and in agreement with theoretical expectations, visual (qualitative) evidence and quantitative quality indexes of operativeness (OQIs) claimed for SIAM™ by related papers. As a subsidiary conclusion, the statistically consistent and statistically significant accuracy validation of the SIAM™ pre-classification maps proposed in this contribution, together with OQIs claimed for SIAM™ by related works, make the operational (automatic, accurate, near real-time, robust, scalable) SIAM™ software product eligible for opening up new inter-disciplinary research and market opportunities in accordance with the visionary goal of the Global Earth Observation System of Systems initiative and the QA4EO international guidelines.
Keywords: decision trees; geographic information systems; geophysical image processing; image classification; measurement uncertainty; probability; quality assurance; remote sensing; sampling methods; DigitalGlobe; Global Earth Observation System of Systems; QA4EO international guidelines; Quality Assurance Framework for Earth Observation guidelines; QuickBird-2 image; SIAM preclassification maps; Satellite Image Automatic Mapper; WorldView-2 images; categorical variable pair similarity index; decision-tree; inclusion probability; measurement uncertainty; probability sampling protocol; procedural knowledge; quality assessment; spaceborne/airborne very high resolution images; structural knowledge; subsymbolic object-based spatial quality indicators; symbolic pixel-based thematic quality indicators; thematic maps; Accuracy; Earth; Estimation; Guidelines; Indexes; Protocols; Spatial resolution; Contingency matrix; error matrix; land cover change (LCC) detection; land cover classification; maps comparison; nonprobability sampling; ontology; overlapping area matrix (OAMTRX); probability sampling; quality indicator of operativeness (OQI); spatial quality indicator (SQI); taxonomy; thematic quality indicator (TQI) (ID#: 15-3660)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6479283&isnumber=6675822
Cardoso, L.S.; Massouri, A.; Guillon, B.; Ferrand, P.; Hutu, F.; Villemaud, G.; Risset, T.; Gorce, J.-M., "CorteXlab: A Facility For Testing Cognitive Radio Networks In A Reproducible Environment," Cognitive Radio Oriented Wireless Networks and Communications (CROWNCOM), 2014 9th International Conference on, pp. 503-507, 2-4 June 2014. While many theoretical and simulation works have highlighted the potential gains of cognitive radio, several technical issues still need to be evaluated from an experimental point of view. Deploying complex heterogeneous system scenarios is tedious, time consuming and hardly reproducible. To address this problem, we have developed a new experimental facility, called CorteXlab, that allows complex multi-node cognitive radio scenarios to be easily deployed and tested by anyone in the world. Our objective is not to design new software defined radio (SDR) nodes, but rather to provide comprehensive access to a large set of high performance SDR nodes. The CorteXlab facility offers a 167 m² electromagnetically (EM) shielded room and integrates a set of 24 universal software radio peripherals (USRPs) from National Instruments, 18 PicoSDR nodes from Nutaq and 42 IoT-Lab wireless sensor nodes from Hikob. CorteXlab is built upon the foundations of the SensLAB testbed and is based on the free and open-source toolkit GNU Radio. Automation in scenario deployment, experiment start, stop and results collection is performed by an experiment controller, called Minus. CorteXlab is in its final stages of development and is already capable of running test scenarios. In this contribution, we show that CorteXlab is able to easily cope with the usual issues faced by other testbeds, providing a reproducible experiment environment for CR experimentation.
Keywords: Internet of Things; cognitive radio; controllers; electromagnetic shielding; software radio; testing; wireless sensor networks; CorteXlab facility; Hikob; IoT-Lab wireless sensor nodes; Minus; National Instruments; Nutaq; PicoSDR nodes; SDR nodes; SensLAB; cognitive radio networks; complex heterogeneous system scenarios; complex multinode cognitive radio scenarios; controller; electromagnetically shielded room; open-source toolkit GNU Radio; reproducible environment; software defined radio; testing facility; universal software radio peripherals; Cognitive radio; Field programmable gate arrays; Interference; MIMO; Orbits; Wireless sensor networks (ID#: 15-3661)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849736&isnumber=6849647
Chang, Lichen, "Convergence of Physical System And Cyber System Modeling Methods For Aviation Cyber Physical Control System," Information and Automation (ICIA), 2014 IEEE International Conference on, pp. 542-547, 28-30 July 2014. doi: 10.1109/ICInfA.2014.6932714 Recent attention to aviation cyber physical systems (ACPS) is driven by the need for seamless integration of the design disciplines that dominate physical world and cyber world convergence. System convergence is a big obstacle to good aviation cyber-physical system (ACPS) design, owing to the lack of an adequate scientific theoretical foundation for the subject. The absence of a good understanding of the science of aviation system convergence is not due to neglect, but rather due to its difficulty. Most complex aviation system builders have abandoned any science or engineering discipline for system convergence; they simply treat it as a management problem. Aviation system convergence is almost totally absent from software engineering and engineering curricula. Hence, system convergence is particularly challenging in ACPS, where fundamentally different physical and computational design concerns intersect. In this paper, we propose an integrated approach to handle system convergence of aviation cyber physical systems based on multiple dimensions, views, paradigms and tools. This model-integrated development approach addresses the development needs of cyber physical systems through the pervasive use of models: the physical world and cyber world can be specified and modeled together, converged entirely, and their models integrated seamlessly. The effectiveness of the approach is illustrated by means of one practical case study: specifying and modeling aircraft systems.
In this paper, we specify and model aviation cyber-physical systems by integrating Modelica, ModelicaML and the Architecture Analysis & Design Language (AADL): the physical world is modeled with Modelica and ModelicaML, and the cyber part with AADL and ModelicaML.
Keywords: Aerospace control; Aircraft; Analytical models; Atmospheric modeling; Convergence; Mathematical model; Unified modeling language; AADL; Aviation Cyber Physical System; Dynamic Continuous Features; Modelica; Modelicaml; Spatial-Temporal Features (ID#: 15-3662)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6932714&isnumber=6932615
Hummel, M., "State-of-the-Art: A Systematic Literature Review on Agile Information Systems Development," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 4712-4721, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.579 Principles of agile information systems development (ISD) have attracted the interest of practice as well as research. The goal of this literature review is to validate, update and extend previous reviews in terms of the general state of research on agile ISD. Besides including categories such as the employed research methods and data collection techniques, the importance of theory is highlighted by evaluating the theoretical foundations and contributions of former studies. Since agile ISD is rooted in the IS as well as software engineering discipline, important outlets of both disciplines are included in the search process, resulting in 482 investigated papers. The findings show that quantitative studies and the theoretical underpinnings of agile ISD are lacking. Extreme Programming is still the most researched agile ISD method, and more efforts on Scrum are needed. In consequence, multiple research gaps that need further research attention are identified.
Keywords: software prototyping; Scrum; agile ISD; agile information systems development; data collection techniques; extreme programming; software engineering discipline; Abstracts; Data collection; Interviews; Programming; Systematics; Testing (ID#: 15-3663)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759181&isnumber=6758592
Ammann, P.; Delamaro, M.E.; Offutt, J., "Establishing Theoretical Minimal Sets of Mutants," Software Testing, Verification and Validation (ICST), 2014 IEEE Seventh International Conference on, pp. 21-30, March 31-April 4, 2014. doi: 10.1109/ICST.2014.13 Mutation analysis generates tests that distinguish variations, or mutants, of an artifact from the original. Mutation analysis is widely considered to be a powerful approach to testing, and hence is often used to evaluate other test criteria in terms of mutation score, which is the fraction of mutants that are killed by a test set. But mutation analysis is also known to provide large numbers of redundant mutants, and these mutants can inflate the mutation score. While mutation approaches broadly characterized as reduced mutation try to eliminate redundant mutants, the literature lacks a theoretical result that articulates just how many mutants are needed in any given situation. Hence, there is, at present, no way to characterize the contribution of, for example, a particular approach to reduced mutation with respect to any theoretical minimal set of mutants. This paper's contribution is to provide such a theoretical foundation for mutant set minimization. The central theoretical result of the paper shows how to efficiently minimize mutant sets with respect to a set of test cases. We evaluate our method with a widely-used benchmark.
Keywords: minimisation; program testing; set theory; mutant set minimization; mutation analysis; mutation score; redundant mutants; test cases; Benchmark testing; Computational modeling; Context; Electronic mail; Heuristic algorithms; Minimization; Mutation testing; dynamic subsumption; minimal mutant sets (ID#: 15-3664)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823862&isnumber=6823846
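The core idea of minimizing a mutant set against a fixed test set can be made concrete with a toy sketch (an illustration of dynamic subsumption, not the paper's algorithm): a mutant is redundant when another mutant's set of killing tests is a proper subset of its own, or an exact duplicate, since killing the subsuming mutant guarantees killing it.

```python
# Toy minimization of a mutant set with respect to a test set.
# kill_matrix maps mutant name -> set of tests that kill it.

def minimize_mutants(kill_matrix):
    """Return one representative per minimal (non-subsumed) kill set."""
    # Drop equivalent mutants (killed by no test): they carry no signal.
    alive = {m: frozenset(ts) for m, ts in kill_matrix.items() if ts}
    minimal, seen = [], set()
    for m, ts in sorted(alive.items()):
        if ts in seen:
            continue  # duplicate kill set: already represented
        # m is subsumed if some mutant's kill set is a proper subset of ts
        if any(other < ts for other in alive.values()):
            continue
        seen.add(ts)
        minimal.append(m)
    return minimal

kill = {
    "m1": {"t1", "t2"},   # subsumed by m2 ({t1} is a proper subset)
    "m2": {"t1"},
    "m3": {"t1"},         # same kill set as m2: redundant
    "m4": {"t3"},
    "m5": set(),          # equivalent mutant, never killed
}
print(minimize_mutants(kill))  # ['m2', 'm4']
```

Any test set that kills m2 and m4 necessarily kills m1 and m3 as well, which is exactly why redundant mutants inflate the mutation score.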
Achouri, A.; Hlaoui, Y.B.; Jemni Ben Ayed, L., "Institution Theory for Services Oriented Applications," Computer Software and Applications Conference Workshops (COMPSACW), 2014 IEEE 38th International, pp. 516-521, 21-25 July 2014. doi: 10.1109/COMPSACW.2014.86 In the present paper, we present our approach for the transformation of workflow applications based on institution theory. The workflow application is modeled with the UML Activity Diagram (UML AD). Then, for formal verification purposes, the graphical model is translated into an Event-B specification. Institution theory is used at two levels. First, we define a local semantics for UML AD and the Event-B specification using a categorical description of each. Second, we define an institution comorphism to link the two defined institutions. The theoretical foundations of our approach can be studied in a single mathematical framework thanks to the use of institution theory. The resulting Event-B specification, after applying the transformation approach, is used for the formal verification of functional properties and the verification of the absence of problems such as deadlock. Additionally, with the institution comorphism, we define the semantic correctness and coherence of the model transformation.
Keywords: Unified Modeling Language; diagrams; formal specification; formal verification; programming language semantics; software engineering; UML AD; UML activity diagram; event-B specification; formal verification; graphical model; institution comorphism; institution theory; local semantic; semantic correctness; service oriented applications; workflow applications; Context; Grammar; Manganese; Semantics; Syntactics; System recovery; Unified modeling language; Event-B; Formal semantics; Institution theory; Model transformation; UML Activity Diagram (ID#: 15-3665)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903182&isnumber=6903069
Lerchner, H.; Stary, C., "An Open S-BPM Runtime Environment Based on Abstract State Machines," Business Informatics (CBI), 2014 IEEE 16th Conference on, vol. 1, pp. 54-61, 14-17 July 2014. doi: 10.1109/CBI.2014.24 The paradigm shift from traditional BPM to subject-oriented BPM (S-BPM) rests on identifying independently acting subjects. As such, they can perform arbitrary actions on arbitrary objects. Abstract State Machines (ASMs) work on a similar basis. Exploring their capabilities with respect to representing and executing S-BPM models strengthens the theoretical foundations of S-BPM, and thus the validity of S-BPM tools. Moreover, it enables coherent intertwining of business process modeling with the execution of S-BPM representations. In this contribution we introduce the framework and roadmap for exploring the ASM approach in the context of S-BPM. We also report the major result, namely the implementation of an executable workflow engine with an Abstract State Machine interpreter based on an existing abstract interpreter model for S-BPM (applying the ASM refinement concept). This workflow engine serves as a baseline and reference implementation for further language and processing developments, such as simulation tools, as it has been developed within the Open-S-BPM initiative.
Keywords: business data processing; finite state machines; program interpreters; workflow management software; ASM approach; Open S-BPM runtime environment; S-BPM model; S-BPM tools; abstract interpreter model; abstract state machine interpreter; business process modeling; executable workflow engine; subject-oriented BPM; Abstracts; Analytical models; Business; Engines; Mathematical model; Semantics; Abstract State Machine; CoreASM; Open-S-BPM; Subject-oriented Business Process Management; workflow engine (ID#: 15-3670)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6904137&isnumber=6904121
Poberezhskiy, Y.S.; Poberezhskiy, G.Y., "Impact of the Sampling Theorem Interpretations on Digitization and Reconstruction in SDRs and CRs," Aerospace Conference, 2014 IEEE, pp. 1-20, 1-8 March 2014. doi: 10.1109/AERO.2014.6836423 Sampling and reconstruction (S&R) are used in virtually all areas of science and technology. The classical sampling theorem is a theoretical foundation of S&R. However, for a long time, only sampling rates and ways of the sampled signals representation were derived from it. The fact that the design of S&R circuits (SCs and RCs) is based on a certain interpretation of the sampling theorem was mostly forgotten. The traditional interpretation of this theorem was selected at the time of the theorem introduction because it offered the only feasible way of S&R realization then. At that time, its drawbacks did not manifest themselves. By now, this interpretation has largely exhausted its potential and inhibits future progress in the field. This tutorial expands the theoretical foundation of S&R. It shows that the traditional interpretation, which is indirect, can be replaced by the direct one or by various combinations of the direct and indirect interpretations that enable development of novel SCs and RCs (NSCs and NRCs) with advanced properties. The tutorial explains the basic principles of the NSCs and NRCs design, their advantages, as well as theoretical problems and practical challenges of their realization. The influence of the NSCs and NRCs on the architectures of SDRs and CRs is also discussed.
Keywords: analogue-digital conversion; cognitive radio; signal reconstruction; signal representation; signal sampling; software radio; CR; NRC design; NSC design; S&R circuits; SDR; cognitive radio; sampled signal representation; sampling and reconstruction; sampling rates; sampling theorem interpretation; software defined radio; Band-pass filters; Bandwidth; Barium; Baseband; Digital signal processing; Equations; Interference (ID#: 15-3671)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6836423&isnumber=6836156
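The classical sampling theorem the tutorial builds on states that a signal bandlimited to B can be rebuilt from uniform samples taken at rate fs ≥ 2B. A pure-Python sketch of ideal sinc interpolation (the traditional, indirect interpretation; not taken from the paper) makes this concrete:

```python
# Ideal sinc reconstruction of a bandlimited signal from its samples.
import math

def sinc_reconstruct(samples, fs, t):
    """x(t) = sum_n x[n] * sinc(fs*t - n), the ideal interpolator."""
    out = 0.0
    for n, xn in enumerate(samples):
        u = fs * t - n
        out += xn * (1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u))
    return out

f, fs = 3.0, 20.0  # 3 Hz tone, sampled at 20 Hz (well above 2B = 6 Hz)
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(400)]
t = 5.123          # an off-grid instant near the middle of the record
approx = sinc_reconstruct(samples, fs, t)
print(abs(approx - math.sin(2 * math.pi * f * t)) < 0.05)  # True
```

The small residual error comes from truncating the (infinite) sinc series to a finite record, one of the practical issues any S&R circuit design must confront.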
Chen, Qingyi; Kang, Hongwei; Zhou, Hua; Sun, Xingping; Shen, Yong; Jin, YunZhi; Yin, Jun, "Research on Cloud Computing Complex Adaptive Agent," Service Systems and Service Management (ICSSSM), 2014 11th International Conference on, pp. 1-4, 25-27 June 2014. doi: 10.1109/ICSSSM.2014.6943342 It has gradually been realized in the industry that the increasing complexity of cloud computing, arising from the interaction of technology, business, society and the like, cannot simply be solved by research on information technology alone; it should instead be explained and researched from a systematic and scientific perspective on the basis of the theory and methods of complex adaptive systems (CAS). Addressing basic problems in the CAS theoretical framework, this article studies the definition of the active adaptive agents constituting a cloud computing system, and proposes a service-agent concept and basic model through commonality abstraction at two basic levels, cloud computing technology and business, thus laying a foundation for further research on cloud computing complexity as well as for multi-agent-based cloud computing environment simulation.
Keywords: Adaptation models; Adaptive systems; Business; Cloud computing; Complexity theory; Computational modeling; Economics; cloud computing; complex adaptive system; service agent (ID#: 15-3672)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6943342&isnumber=6874015
Time Frequency Analysis |
A search for articles combining research in time frequency analysis and security produced nearly 3500 results. The works cited here only scratch the surface. They appear to have useful implications for the science of security.
Koga, H.; Honjo, S., "A Secret Sharing Scheme Based On A Systematic Reed-Solomon Code And Analysis Of Its Security For A General Class Of Sources," Information Theory (ISIT), 2014 IEEE International Symposium on, pp. 1351-1355, June 29-July 4, 2014. doi: 10.1109/ISIT.2014.6875053 In this paper we investigate a secret sharing scheme based on a shortened systematic Reed-Solomon code. In the scheme, L secrets S1, S2, ..., SL and n shares X1, X2, ..., Xn satisfy certain n - k + L linear equations. Security of such a ramp secret sharing scheme is analyzed in detail. We prove that this scheme realizes a (k, n)-threshold scheme for the case of L = 1 and a ramp (k, L, n)-threshold scheme for the case of 2 ≤ L ≤ k - 1 under a certain assumption on S1, S2, ..., SL.
Keywords: Reed-Solomon codes; telecommunication security; linear equations; ramp secret sharing scheme; shortened systematic Reed-Solomon code; Cryptography; Equations; Probability distribution; Random variables; Reed-Solomon codes (ID#: 15-3673)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875053&isnumber=6874773
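For the L = 1 case, a (k, n)-threshold scheme of this flavor can be sketched with Shamir-style sharing over a prime field, which corresponds to evaluating a systematic Reed-Solomon codeword; this is an illustration of the threshold property, not the paper's exact ramp construction.

```python
# Minimal (k, n)-threshold secret sharing sketch over GF(P).
import random

P = 2**31 - 1  # a Mersenne prime defining the field GF(P)

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the degree-(k-1) polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
print(recover(shares[:3]) == 123456789)   # True: any 3 of 5 suffice
print(recover(shares[2:]) == 123456789)   # True
```

Fewer than k shares leave the secret information-theoretically undetermined, which is the security property the ramp generalization relaxes gradually for 2 ≤ L ≤ k - 1.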
Liu, Y.; Hatzinakos, D., "Earprint: Transient Evoked Otoacoustic Emission for Biometrics," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 12, pp. 2291-2301, Dec. 2014. doi: 10.1109/TIFS.2014.2361205 Biometrics is attracting increasing attention in privacy and security concerned issues, such as access control and remote financial transactions. However, advanced forgery and spoofing techniques are threatening the reliability of conventional biometric modalities. This has been motivating our investigation of a novel yet promising modality, transient evoked otoacoustic emission (TEOAE), which is an acoustic response generated from the cochlea after a click stimulus. Unlike conventional modalities that are easily accessible or captured, TEOAE is naturally immune to replay and falsification attacks as a physiological outcome of the human auditory system. In this paper, we resort to wavelet analysis to derive the time-frequency representation of such a nonstationary signal, which reveals individual uniqueness and long-term reproducibility. A machine learning technique, linear discriminant analysis, is subsequently utilized to reduce intrasubject variability and further capture intersubject differentiation features. Considering practical application, we also introduce a complete framework of the biometric system in both verification and identification modes. Comparative experiments on a TEOAE data set of biometric setting show the merits of the proposed method. Performance is further improved with fusion of information from both ears.
Keywords: Auditory system; Biometrics (access control); Ear; Feature extraction; Probes; Time-frequency analysis; Vectors; robust biometric modality; transient evoked otoacoustic emission; biometric fusion; linear discriminant analysis; time-frequency analysis (ID#: 15-3674)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6914592&isnumber=6953163
Guang Hua; Goh, J.; Thing, V.L.L., "A Dynamic Matching Algorithm for Audio Timestamp Identification Using the ENF Criterion," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 7, pp. 1045-1055, July 2014. doi: 10.1109/TIFS.2014.2321228 The electric network frequency (ENF) criterion is a recently developed technique for audio timestamp identification, which involves the matching between extracted ENF signal and reference data. For nearly a decade, conventional matching criterion has been based on the minimum mean squared error (MMSE) or maximum correlation coefficient. However, the corresponding performance is highly limited by low signal-to-noise ratio, short recording durations, frequency resolution problems, and so on. This paper presents a threshold-based dynamic matching algorithm (DMA), which is capable of autocorrecting the noise affected frequency estimates. The threshold is chosen according to the frequency resolution determined by the short-time Fourier transform (STFT) window size. A penalty coefficient is introduced to monitor the autocorrection process and finally determine the estimated timestamp. It is then shown that the DMA generalizes the conventional MMSE method. By considering the mainlobe width in the STFT caused by limited frequency resolution, the DMA achieves improved identification accuracy and robustness against higher levels of noise and the offset problem. Synthetic performance analysis and practical experimental results are provided to illustrate the advantages of the DMA.
Keywords: Fourier transforms; audio recording; correlation methods; frequency estimation; mean square error methods; ENF criterion; MMSE; STFT; audio timestamp identification; autocorrection process; dynamic matching; electric network frequency criterion; extracted ENF signal; frequency estimates; frequency resolution problems; maximum correlation coefficient; minimum mean squared error; reference data; short recording durations; short-time Fourier transform; signal-to-noise ratio; window size; Correlation; Estimation; Frequency estimation; Signal resolution; Signal to noise ratio; Time-frequency analysis; Electric network frequency (ENF); audio authentication; audio forensics; timestamp identification (ID#: 15-3675)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6808537&isnumber=6819111
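The conventional MMSE baseline that the DMA generalizes is easy to sketch: slide the ENF signal extracted from the recording along the reference ENF series and return the offset minimizing the mean squared error. The data below are hypothetical 50 Hz grid frequency samples, one per second, purely for illustration.

```python
# MMSE timestamp search: find where the extracted ENF segment best
# matches the reference ENF series.

def mmse_timestamp(extracted, reference):
    """Return (best_offset, best_mse) of `extracted` within `reference`."""
    n = len(extracted)
    best = (None, float("inf"))
    for off in range(len(reference) - n + 1):
        mse = sum((extracted[t] - reference[off + t]) ** 2
                  for t in range(n)) / n
        if mse < best[1]:
            best = (off, mse)
    return best

# Hypothetical reference ENF (Hz) logged by the power utility.
reference = [50.00, 50.01, 49.99, 50.02, 50.05, 50.03, 49.98, 50.00]
extracted = [50.02, 50.05, 50.03]  # clip whose true start is offset 3
offset, mse = mmse_timestamp(extracted, reference)
print(offset)  # 3
```

With noisy estimates or frequency-resolution artifacts this raw MSE criterion degrades quickly, which is precisely the limitation the threshold-based autocorrection of the DMA targets.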
Sousa, J.; Vilela, J.P., "A Characterization Of Uncoordinated Frequency Hopping For Wireless Secrecy," Wireless and Mobile Networking Conference (WMNC), 2014 7th IFIP, pp. 1-4, 20-22 May 2014. doi: 10.1109/WMNC.2014.6878885 We characterize the secrecy level of communication under Uncoordinated Frequency Hopping, a spread spectrum scheme where a transmitter and a receiver randomly hop through a set of frequencies with the goal of deceiving an adversary. In our work, the goal of the legitimate parties is to land on a given frequency without the adversary eavesdroppers doing so, therefore being able to communicate securely in that period, that may be used for secret-key exchange. We also consider the effect on secrecy of the availability of friendly jammers that can be used to obstruct eavesdroppers by causing them interference. Our results show that tuning the number of frequencies and adding friendly jammers are effective countermeasures against eavesdroppers.
Keywords: cryptography; jamming; radio receivers; radio transmitters; spread spectrum communication; telecommunication security; communication secrecy level; interference; secret-key exchange; spread spectrum scheme; uncoordinated frequency hopping characterization; wireless secrecy; wireless transmissions; Interference; Jamming; Security; Spread spectrum communication; Throughput; Time-frequency analysis; Wireless communication (ID#: 15-3676)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6878885&isnumber=6878843
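A back-of-the-envelope model conveys the trade-off (this is an illustration, not the paper's analysis): with N frequencies, an uncoordinated transmitter and receiver land on the same frequency with probability 1/N, and E independent eavesdroppers all miss that frequency with probability (1 - 1/N)^E, so a "secret slot" occurs with probability (1/N)(1 - 1/N)^E. A Monte Carlo check:

```python
# Monte Carlo estimate of the secret-slot probability under
# uncoordinated frequency hopping with uniform, independent hops.
import random

def secret_slot_prob(N, E, trials=200_000):
    hits = 0
    for _ in range(trials):
        tx, rx = random.randrange(N), random.randrange(N)
        eaves = [random.randrange(N) for _ in range(E)]
        if tx == rx and tx not in eaves:
            hits += 1
    return hits / trials

N, E = 8, 2
analytic = (1 / N) * (1 - 1 / N) ** E
print(abs(secret_slot_prob(N, E) - analytic) < 0.01)  # True
```

The closed form shows the tension the paper explores: more frequencies hurt the eavesdroppers (the (1 - 1/N)^E factor grows) but also make the legitimate rendezvous rarer (the 1/N factor shrinks), so N must be tuned rather than simply maximized.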
Esquef, P.A.A.; Apolinario, J.A.; Biscainho, L.W.P., "Edit Detection in Speech Recordings via Instantaneous Electric Network Frequency Variations," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 12, pp. 2314-2326, Dec. 2014. doi: 10.1109/TIFS.2014.2363524 In this paper, an edit detection method for forensic audio analysis is proposed. It develops and improves a previous method through changes in the signal processing chain and a novel detection criterion. As with the original method, electrical network frequency (ENF) analysis is central to the novel edit detector, for it allows monitoring anomalous variations of the ENF related to audio edit events. Working in unsupervised manner, the edit detector compares the extent of ENF variations, centered at its nominal frequency, with a variable threshold that defines the upper limit for normal variations observed in unedited signals. The ENF variations caused by edits in the signal are likely to exceed the threshold providing a mechanism for their detection. The proposed method is evaluated in both qualitative and quantitative terms via two distinct annotated databases. Results are reported for originally noisy database signals as well as versions of them further degraded under controlled conditions. A comparative performance evaluation, in terms of equal error rate (EER) detection, reveals that, for one of the tested databases, an improvement from 7% to 4% EER is achieved, respectively, from the original to the new edit detection method. When the signals are amplitude clipped or corrupted by broadband background noise, the performance figures of the novel method follow the same profile of those of the original method.
Keywords: Databases; Estimation; Forensics; Frequency estimation; Noise; Noise measurement; Time-frequency analysis; Acoustical signal processing; edit detection; instantaneous frequency; spectral analysis; voice activity detection (ID#: 15-3677)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6926817&isnumber=6953163
Pukkawanna, S.; Hazeyama, H.; Kadobayashi, Y.; Yamaguchi, S., "Investigating the Utility Of S-Transform For Detecting Denial-Of-Service And Probe Attacks," Information Networking (ICOIN), 2014 International Conference on, pp. 282-287, 10-12 Feb. 2014. doi: 10.1109/ICOIN.2014.6799482 Denial-of-Service (DoS) and probe attacks are growing more modern and sophisticated in order to evade detection by Intrusion Detection Systems (IDSs) and to increase the potent threat to the availability of network services. Detecting these attacks is quite tough for network operators using misuse-based IDSs because they need to see through attackers and upgrade their IDSs by adding new accurate attack signatures. In this paper, we propose a novel signal and image processing-based method for detecting network probe and DoS attacks in which prior knowledge of attacks is not required. The method uses a time-frequency representation technique called the S-transform, which is an extension of the Wavelet Transform, to reveal abnormal frequency components caused by attacks in a traffic signal (e.g., a time-series of the number of packets). First, the S-transform converts the traffic signal to a two-dimensional image which describes the time-frequency behavior of the traffic signal. The frequencies that behave abnormally are discovered as abnormal regions in the image. Second, Otsu's method is used to detect the abnormal regions and identify the time that attacks occur. We evaluated the effectiveness of the proposed method with several network probe and DoS attacks such as port scans, packet flooding attacks, and a low-intensity DoS attack. The results clearly indicated that the method is effective for detecting the probe and DoS attack streams which were generated over the real-world Internet.
Keywords: Internet; computer network security; telecommunication traffic; time-frequency analysis; wavelet transforms; DoS attacks; IDS; Internet; Otsu method; S-transform; accurate attack signatures; denial-of-service detection; frequency components; image processing method; intrusion detection systems; probe attacks; signal processing method; time-frequency representation technique; traffic signal; two-dimensional image; wavelet transform; Computer crime; Internet; Ports (Computers); Probes; Time-frequency analysis; Wavelet transforms (ID#: 15-3678)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799482&isnumber=6799467
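The second step of the pipeline, Otsu's method, is a standard image-segmentation technique: pick the gray-level threshold that maximizes between-class variance of the histogram, separating "abnormal" (attack) regions from the background. A self-contained sketch (toy data, not the paper's traffic images):

```python
# Otsu's method: choose the threshold maximizing between-class variance.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]              # background pixel count
        if w0 == 0:
            continue
        w1 = total - w0            # foreground pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0            # background mean gray level
        mu1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy "image": quiet background near 10, attack energy near 200.
pixels = [10, 12, 9, 11, 10, 13, 200, 198, 205, 201]
t = otsu_threshold(pixels)
print(9 < t < 198)  # True: the threshold separates the two modes
```

Because the threshold is derived from the data itself, no prior attack signatures are needed, which is the selling point of the paper's unsupervised approach.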
Rahayu, T.M.; Sang-Gon Lee; Hoon-Jae Lee, "Security Analysis Of Secure Data Aggregation Protocols In Wireless Sensor Networks," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 471-474, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779005 In order to conserve wireless sensor network (WSN) lifetime, data aggregation is applied. Some researchers consider the importance of security and propose secure data aggregation protocols. The essence of those secure approaches is to ensure that the aggregators aggregate the data in an appropriate and secure way. In this paper we describe the ESPDA (Energy-efficient and Secure Pattern-based Data Aggregation) and SRDA (Secure Reference-based Data Aggregation) protocols, which work on cluster-based WSNs, and present a deep security analysis that differs from those previously published.
Keywords: protocols; telecommunication security; wireless sensor networks; ESPDA protocol; SRDA protocol; WSN lifetime; cluster-based WSN; deep security analysis; energy-efficient and secure pattern-based data aggregation protocol; secure reference-based data aggregation protocol; wireless sensor network lifetime; Authentication; Cryptography; Energy efficiency; Peer-to-peer computing; Protocols; Wireless sensor networks; Data aggregation protocol; ESPDA; SRDA; WSN; secure data aggregation protocol (ID#: 15-3679)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779005&isnumber=6778899
Rezvani, M.; Ignjatovic, A.; Bertino, E.; Jha, S., "Provenance-Aware Security Risk Analysis For Hosts And Network Flows," Network Operations and Management Symposium (NOMS), 2014 IEEE, pp. 1-8, 5-9 May 2014. doi: 10.1109/NOMS.2014.6838250 Detection of high-risk network flows and high-risk hosts is becoming ever more important and more challenging. In order to selectively apply deep packet inspection (DPI), one has to isolate in real time high-risk network activities within a huge number of monitored network flows. To help address this problem, we propose an iterative methodology for a simultaneous assessment of risk scores for both hosts and network flows. The proposed approach measures the risk scores of hosts and flows in an interdependent manner; thus, the risk score of a flow influences the risk score of its source and destination hosts, and the risk score of a host is in turn evaluated by taking into account the risk scores of flows initiated by or terminated at the host. Our experimental results show that such an approach is not only effective in detecting high-risk hosts and flows but, when deployed in high-throughput networks, is also more efficient than PageRank-based algorithms.
Keywords: computer network security; risk analysis; deep packet inspection; high risk hosts; high risk network flows; provenance aware security risk analysis; risk score; Computational modeling; Educational institutions; Iterative methods; Monitoring; Ports (Computers); Risk management; Security (ID#: 15-3680)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838250&isnumber=6838210
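The interdependent scoring loop described in the abstract above can be sketched as a small fixed-point iteration. The structure below is our own illustrative simplification (the weighting and normalization choices are assumptions, not the authors' algorithm): a flow's risk is its intrinsic suspicion scaled by the risk of its endpoint hosts, and a host's risk is the mean risk of the flows it touches.

```python
def score_risks(flows, base_risk, iters=50):
    """Jointly score hosts and flows by fixed-point iteration.

    flows: list of (src_host, dst_host) pairs.
    base_risk: dict mapping flow index -> intrinsic suspicion score.
    """
    hosts = {h for f in flows for h in f}
    host_risk = {h: 1.0 for h in hosts}
    flow_risk = [0.0] * len(flows)
    for _ in range(iters):
        # A flow's risk: intrinsic suspicion weighted by its endpoints' risk.
        flow_risk = [base_risk.get(i, 0.0) * (host_risk[s] + host_risk[d]) / 2
                     for i, (s, d) in enumerate(flows)]
        # A host's risk: mean risk of the flows it initiates or terminates.
        for h in hosts:
            touching = [flow_risk[i] for i, (s, d) in enumerate(flows) if h in (s, d)]
            host_risk[h] = sum(touching) / len(touching)
        # Normalize so scores stay bounded across iterations.
        m = max(host_risk.values()) or 1.0
        host_risk = {h: r / m for h, r in host_risk.items()}
    return host_risk, flow_risk

# Toy network: flow 0 between hosts "a" and "b" looks intrinsically suspicious.
flows = [("a", "b"), ("a", "c"), ("b", "c")]
host_risk, flow_risk = score_risks(flows, {0: 1.0, 1: 0.1, 2: 0.1})
```

Hosts touching the suspicious flow end up with the highest scores, mirroring the mutual reinforcement between host and flow scores that the abstract describes.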
Zahid, A.; Masood, R.; Shibli, M.A., "Security of Sharded NoSQL Databases: A Comparative Analysis," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp. 1, 8, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861323 NoSQL databases are easy to scale out because of their flexible schema and support for BASE (Basically Available, Soft State and Eventually Consistent) properties. The process of scaling out in most of these databases is supported by sharding, which is considered the key feature in providing faster reads and writes to the database. However, securing the data sharded over various servers is a challenging problem, because the data are processed in a distributed manner and transmitted over an unsecured network. Although extensive research has been performed on NoSQL sharding mechanisms, no specific criterion has been defined to analyze the security of sharded architectures. This paper proposes an assessment criterion comprising various security features for the analysis of sharded NoSQL databases. It presents a detailed view of the security features offered by NoSQL databases and analyzes them with respect to the proposed assessment criteria. The presented analysis helps various organizations in the selection of an appropriate and reliable database in accordance with their preferences and security requirements.
Keywords: SQL; security of data; BASE; NoSQL sharding mechanisms; assessment criterion; security features; sharded NoSQL databases; Access control; Authentication; Distributed databases; Encryption; Servers; Comparative Analysis; Data and Applications Security; Database Security; NoSQL; Sharding (ID#: 15-3681)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861323&isnumber=6861314
Hongzhen Du; Qiaoyan Wen, "Security Analysis Of Two Certificateless Short Signature Schemes," Information Security, IET, vol. 8, no.4, pp.230, 233, July 2014. doi: 10.1049/iet-ifs.2013.0080 Certificateless public key cryptography (CL-PKC) combines the advantage of both traditional PKC and identity-based cryptography (IBC) as it eliminates the certificate management problem in traditional PKC and resolves the key escrow problem in IBC. Recently, Choi et al. and Tso et al. proposed two different efficient CL short signature schemes and claimed that the two schemes are secure against super adversaries and satisfy the strongest security. In this study, the authors show that both Choi et al.'s scheme and Tso et al.'s scheme are insecure against the strong adversaries who can replace users' public keys and have access to the signing oracle under the replaced public keys.
Keywords: digital signatures; public key cryptography; CL short signature schemes; CL-PKC; IBC; certificate management problem; certificateless public key cryptography; certificateless short signature schemes; identity-based cryptography; key escrow problem; security analysis; user public keys (ID#: 15-3682)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842408&isnumber=6842405
Wenli Liu; Xiaolong Zheng; Tao Wang; Hui Wang, "Collaboration Pattern and Topic Analysis on Intelligence and Security Informatics Research," Intelligent Systems, IEEE, vol.29, no. 3, pp. 39, 46, May-June 2014. doi: 10.1109/MIS.2012.106 In this article, researcher collaboration patterns and research topics on Intelligence and Security Informatics (ISI) are investigated using social network analysis approaches. The collaboration networks exhibit scale-free property and small-world effect. From these networks, the authors obtain the key researchers, institutions, and three important topics.
Keywords: groupware; security of data; social networking (online);collaboration pattern; intelligence and security informatics research; scale-free property; small-world effect; social network analysis approach; topic analysis; Collaboration; Computer security; Informatics; Intelligent systems; Network security; Social network services; Terrorism; ISI; Intelligence and Security Informatics; intelligent systems; social network analysis; topic analysis (ID#: 15-3683)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6357170&isnumber=6871688
Sasidharan, B.; Kumar, P.V.; Shah, N.B.; Rashmi, K.V.; Ramachandran, K., "Optimality of the Product-Matrix Construction For Secure MSR Regenerating Codes," Communications, Control and Signal Processing (ISCCSP), 2014 6th International Symposium on, pp. 10, 14, 21-23 May 2014. doi: 10.1109/ISCCSP.2014.6877804 In this paper, we consider the security of exact-repair regenerating codes operating at the minimum-storage-regenerating (MSR) point. The security requirement (introduced in Shah et al.) is that no information about the stored data file must be leaked in the presence of an eavesdropper who has access to the contents of ℓ1 nodes as well as all the repair traffic entering a second disjoint set of ℓ2 nodes. We derive an upper bound on the size of a data file that can be securely stored that holds whenever ℓ2 ≤ d - k + 1. This upper bound proves the optimality of the product-matrix-based construction of secure MSR regenerating codes by Shah et al.
Keywords: encoding; matrix algebra; MSR point; data file; eavesdropper; exact repair regenerating code security; minimum storage regenerating point; product matrix; product matrix construction; repair traffic; secure MSR regenerating codes; Bandwidth; Data collection; Entropy; Maintenance engineering; Random variables; Security; Upper bound; MSR codes; Secure regenerating codes; product-matrix construction; regenerating codes (ID#: 15-3684)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6877804&isnumber=6877795
Mirmohseni, M.; Papadimitratos, P., "Scaling Laws For Secrecy Capacity In Cooperative Wireless Networks," INFOCOM, 2014 Proceedings IEEE, pp.1527, 1535, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6848088 We investigate large wireless networks subject to security constraints. In contrast to point-to-point, interference-limited communications considered in prior works, we propose active cooperative relaying based schemes. We consider a network with n_l legitimate nodes and n_e eavesdroppers, and path loss exponent α ≥ 2. As long as n_e^2 (log(n_e))^γ = o(n_l) holds for some positive γ, we show one can obtain unbounded secure aggregate rate. This means zero-cost secure communication, given a fixed total power constraint for the entire network. We achieve this result with (i) the source using a Wyner randomized encoder and a serial (multi-stage) block Markov scheme, to cooperate with the relays, and (ii) the relays acting as a virtual multi-antenna to apply beamforming against the eavesdroppers. Our simpler parallel (two-stage) relaying scheme can achieve the same unbounded secure aggregate rate when n_e^(α/2+1) (log(n_e))^(γ+δ(α/2+1)) = o(n_l) holds, for some positive γ, δ.
Keywords: Markov processes; array signal processing; cooperative communication; interference (signal); relay networks (telecommunication); telecommunication security; Wyner randomized encoder; active cooperative relaying; beamforming ;cooperative wireless networks; interference limited communications; parallel relaying scheme; path loss exponent; scaling laws; secrecy capacity; secure communication; serial block Markov scheme; Aggregates; Array signal processing; Encoding; Relays; Tin; Transmitters; Wireless networks (ID#: 15-3685)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848088&isnumber=6847911
Yao, H.; Silva, D.; Jaggi, S.; Langberg, M., "Network Codes Resilient to Jamming and Eavesdropping," Networking, IEEE/ACM Transactions on, vol. PP, no.99, pp. 1, 1, February 2014. doi: 10.1109/TNET.2013.2294254 We consider the problem of communicating information over a network secretly and reliably in the presence of a hidden adversary who can eavesdrop and inject malicious errors. We provide polynomial-time distributed network codes that are information-theoretically rate-optimal for this scenario, improving on the rates achievable in prior work by Ngai et al. Our main contribution shows that as long as the sum of the number of links the adversary can jam (denoted by Z_O) and the number of links he can eavesdrop on (denoted by Z_I) is less than the network capacity (denoted by C), i.e., Z_O + Z_I < C, our codes can communicate (with vanishingly small error probability) a single bit correctly and without leaking any information to the adversary. We then use this scheme as a module to design codes that allow communication at the source rate of C - Z_O when there are no security requirements, and codes that allow communication at the source rate of C - Z_O - Z_I while keeping the communicated message provably secret from the adversary. Interior nodes are oblivious to the presence of adversaries and perform random linear network coding; only the source and destination need to be tweaked. We also prove that the rate-region obtained is information-theoretically optimal. In proving our results, we correct an error in prior work by a subset of the authors in this paper.
Keywords: Error probability; Jamming; Network coding; Robustness; Transforms; Vectors; Achievable rates; adversary; error control; network coding; secrecy (ID#: 15-3686)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6730968&isnumber=4359146
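The information-theoretic secrecy goal in the abstract above (an adversary who observes too few links learns nothing) can be illustrated, in a deliberately simplified form that is not the authors' network code, by XOR secret sharing: a message split into k shares sent over disjoint links is recoverable only from all k shares, while any proper subset of shares is uniformly random.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(message: bytes, k: int):
    """Split message into k shares; any proper subset is uniformly random."""
    shares = [secrets.token_bytes(len(message)) for _ in range(k - 1)]
    last = message
    for s in shares:
        last = xor_bytes(last, s)  # last share makes the XOR of all shares = message
    return shares + [last]

def combine(shares):
    """Recover the message by XORing all k shares together."""
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out
```

An eavesdropper tapping fewer than k of the links sees only independent uniform bytes, which is the same flavor of unconditional secrecy guarantee the paper proves for its rate-optimal codes.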
Pasolini, G.; Dardari, D., "Secret Key Generation In Correlated Multi-Dimensional Gaussian Channels," Communications (ICC), 2014 IEEE International Conference on, pp.2171,2177, 10-14 June 2014. doi: 10.1109/ICC.2014.6883645 Wireless channel reciprocity can be successfully exploited as a common source of randomness for the generation of a secret key by two legitimate users willing to achieve confidential communications over a public channel. This paper presents an analytical framework to investigate the theoretical limits of secret-key generation when wireless multi-dimensional Gaussian channels are used as source of randomness. The intrinsic secrecy content of wide-sense stationary wireless channels in frequency, time and spatial domains is derived through asymptotic analysis as the number of observations in a given domain tends to infinity. Some significant case studies are presented where single and multiple antenna eavesdroppers are considered. In the numerical results, the role of signal-to-noise ratio, spatial correlation, frequency and time selectivity is investigated.
Keywords: Gaussian channels; antenna arrays; frequency-domain analysis; public key cryptography; radio networks; telecommunication security; time-domain analysis; wireless channels; analytical framework; asymptotic analysis; confidential communications; correlated multidimensional Gaussian channels; frequency domains; intrinsic secrecy content; multiple antenna eavesdroppers; public channel; secret key generation; signal-to-noise ratio; spatial correlation; spatial domains; time domains; time selectivity; wide-sense stationary wireless channels; wireless channel reciprocity; wireless networks; Communication system security; Covariance matrices; Security; Signal to noise ratio; Time-frequency analysis; Wireless communication (ID#: 15-3687)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883645&isnumber=6883277
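The reciprocity mechanism behind the abstract above can be shown with a toy quantizer (our own illustration; the paper's contribution is an analytical framework, not simulation code): both parties observe the same fading channel plus independent estimation noise and threshold-quantize their observations into key bits, leaving the residual mismatches for an information-reconciliation step.

```python
import random

def quantize(samples, threshold=0.0):
    """1-bit quantization: one key bit per channel observation."""
    return [1 if s > threshold else 0 for s in samples]

random.seed(7)
n = 256
channel = [random.gauss(0, 1) for _ in range(n)]       # common source of randomness
alice = [h + random.gauss(0, 0.05) for h in channel]   # Alice's noisy estimate
bob = [h + random.gauss(0, 0.05) for h in channel]     # Bob's reciprocal estimate

key_a, key_b = quantize(alice), quantize(bob)
agreement = sum(a == b for a, b in zip(key_a, key_b)) / n
```

Bits disagree only when the channel gain falls near the threshold, so the agreement rate is high whenever the estimation noise is small relative to the channel variation, which is the regime in which the secret-key rate results apply.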
Guoyuan Lin; Danru Wang; Yuyu Bie; Min Lei, "MTBAC: A Mutual Trust Based Access Control Model In Cloud Computing," Communications, China, vol.11, no.4, pp.154, 162, April 2014. doi: 10.1109/CC.2014.6827577 As a new computing mode, cloud computing can provide users with virtualized and scalable web services, which, however, face serious security challenges. Access control is one of the most important measures to ensure the security of cloud computing, but applying traditional access control models directly to the cloud cannot resolve the uncertainty and vulnerability caused by the open conditions of cloud computing. In a cloud computing environment, data security can be effectively guaranteed during interactions between users and the cloud only when the security and reliability of both interacting parties are ensured. Therefore, building a mutual trust relationship between users and the cloud platform is the key to implementing new kinds of access control methods in the cloud computing environment. Combining with Trust Management (TM), a mutual trust based access control (MTBAC) model is proposed in this paper. The MTBAC model takes both the user's behavior trust and the cloud service node's credibility into consideration. Trust relationships between users and cloud service nodes are established by a mutual trust mechanism. Security problems of access control are solved by implementing the MTBAC model in the cloud computing environment. Simulation experiments show that the MTBAC model can guarantee secure interaction between users and cloud service nodes.
Keywords: Web services; authorisation; cloud computing; virtualisation; MTBAC model; cloud computing environment; cloud computing security; cloud service node credibility; data security; mutual trust based access control model; mutual trust mechanism; mutual trust relationship; open conditions; scalable Web services; trust management; user behavior trust; virtualized Web services; Computational modeling; Reliability; Time-frequency analysis; MTBAC; access control; cloud computing; mutual trust mechanism; trust model (ID#: 15-3688)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827577&isnumber=6827540
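A minimal sketch of the mutual-trust gating idea in the abstract above (the trust formula, decay factor, and threshold are our assumptions for illustration, not the MTBAC model's actual equations): access proceeds only when both the user's behavior trust and the service node's credibility clear a threshold, with recent behavior weighted more heavily than old behavior.

```python
def behavior_trust(history, decay=0.8):
    """Exponentially weighted trust from interaction outcomes
    (1 = compliant, 0 = violation); recent behavior counts more."""
    trust, total, weight = 0.0, 0.0, 1.0
    for outcome in reversed(history):
        trust += weight * outcome
        total += weight
        weight *= decay
    return trust / total if total else 0.5  # no history: neutral trust

def grant_access(user_history, node_credibility, threshold=0.6):
    """Mutual check: both sides must be sufficiently trusted."""
    return behavior_trust(user_history) >= threshold and node_credibility >= threshold
```

A user whose recent interactions were violations is denied even if older behavior was good, and a trustworthy user is still denied access to a low-credibility node, which is the two-sided property the model aims at.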
Chen, L.M.; Hsiao, S.-W.; Chen, M.C.; Liao, W., "Slow-Paced Persistent Network Attacks Analysis and Detection Using Spectrum Analysis," Systems Journal, IEEE, vol. PP, no. 99, pp.1, 12, September 2014. doi: 10.1109/JSYST.2014.2348567 A slow-paced persistent attack, such as a slow worm or bot, can bewilder detection systems by slowing down its activity. Detecting such attacks based on traditional anomaly detection techniques may yield high false alarm rates. In this paper, we frame our problem as detecting slow-paced persistent attacks from a time series obtained from a network trace. We focus on time series spectrum analysis to identify peculiar spectral patterns that may represent the occurrence of a persistent activity in the time domain. We propose a method to adaptively detect slow-paced persistent attacks in a time series and evaluate the proposed method by conducting experiments using both synthesized traffic and real-world traffic. The results show that the proposed method is capable of detecting slow-paced persistent attacks even in a noisy environment mixed with legitimate traffic.
Keywords: Discrete Fourier transforms; Grippers; Spectral analysis; Time series analysis; Time-domain analysis; Time-frequency analysis; Network security; persistent activity; slow-paced attack; spectrum analysis; time series (ID#: 15-3689)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906240&isnumber=4357939
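The spectral intuition behind the abstract above is easy to demonstrate: a persistent low-rate activity repeating every P samples concentrates energy at multiples of frequency 1/P, where bursty legitimate traffic does not. The sketch below (ours, using a naive DFT rather than the paper's adaptive method) recovers the periodicity of a synthetic slow attack buried in noise.

```python
import math
import random

def dft_magnitudes(x):
    """Naive DFT magnitude spectrum (O(n^2), fine for short traces)."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]  # drop the DC component
    mags = []
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags  # mags[k-1] is the energy at k cycles per trace

# Synthetic trace: noisy legitimate traffic plus a slow probe every 16 samples.
random.seed(1)
n, period = 128, 16
trace = [random.random() for _ in range(n)]
for t in range(0, n, period):
    trace[t] += 3.0  # persistent low-rate activity

mags = dft_magnitudes(trace)
peak_k = 1 + mags.index(max(mags))  # strongest bin: a multiple of n/period
```

The periodic pulse train puts identical spikes at harmonics k = 8, 16, 24, ..., far above the noise floor, which is the kind of "peculiar spectral pattern" the detector looks for.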
Kun Wen; Jiahai Yang; Fengjuan Cheng; Chenxi Li; Ziyu Wang; Hui Yin, "Two-stage Detection Algorithm For RoQ Attack Based On Localized Periodicity Analysis Of Traffic Anomaly," Computer Communication and Networks (ICCCN), 2014 23rd International Conference on, pp.1,6, 4-7 Aug. 2014. doi: 10.1109/ICCCN.2014.6911829 Reduction of Quality (RoQ) attack is a stealthy denial of service attack. It can decrease or inhibit normal TCP flows in the network. Victims find the attack hard to perceive, since during the attack the final network throughput decreases rather than increases. The attack is therefore strongly hidden and difficult for existing detection systems to detect. Based on the principle of time-frequency analysis, we propose a two-stage detection algorithm which combines anomaly detection with misuse detection. In the first stage, we try to detect the potential anomaly by analyzing network traffic through the wavelet multiresolution analysis method. According to different time-domain characteristics, we locate the abrupt change points. In the second stage, we further analyze the local traffic around the abrupt change point. We extract the potential attack characteristics by autocorrelation analysis. By the two-stage detection, we can ultimately confirm whether the network is affected by the attack. Results of simulations and real network experiments demonstrate that our algorithm can detect RoQ attacks with high accuracy and high efficiency.
Keywords: computer network security; time-frequency analysis; RoQ attack; anomaly detection; autocorrelation analysis; denial of service attack; detection algorithm; detection systems; inhibit normal TCP flows; localized periodicity analysis; network traffic; reduction of quality; time-frequency analysis; traffic anomaly; wavelet multiresolution analysis method; Algorithm design and analysis; Computer crime; Correlation; Detection algorithms; Multiresolution analysis; RoQ attack; anomaly detection; misuse detection; network security; wavelet analysis (ID#: 15-3690)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911829&isnumber=6911704
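The second stage described above, confirming an attack by the periodicity of the local traffic around a change point, can be sketched with plain autocorrelation (our illustration; the paper pairs this with wavelet-based change-point location in the first stage):

```python
def autocorrelation(x, max_lag):
    """Normalized autocorrelation; a strong peak at lag L indicates period L."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    var = sum(v * v for v in x) or 1.0
    return [sum(x[t] * x[t + lag] for t in range(n - lag)) / var
            for lag in range(1, max_lag + 1)]

# Local traffic window around a suspected change point: a pulse every 10
# samples, as a RoQ attacker periodically injecting short bursts would produce.
window = [5.0 if t % 10 == 0 else 1.0 for t in range(200)]
ac = autocorrelation(window, 30)
best_lag = 1 + ac.index(max(ac))  # lag of the strongest correlation peak
```

A pronounced autocorrelation peak at the burst period distinguishes a periodic RoQ pattern from aperiodic legitimate traffic, completing the misuse-detection stage.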
Yanbing Liu; Qingyun Liu; Ping Liu; Jianlong Tan; Li Guo, "A Factor-Searching-Based Multiple String Matching Algorithm For Intrusion Detection," Communications (ICC), 2014 IEEE International Conference on, pp. 653, 658, 10-14 June 2014. doi: 10.1109/ICC.2014.6883393 Multiple string matching plays a fundamental role in network intrusion detection systems. Automata-based multiple string matching algorithms like AC, SBDM and SBOM are widely used in practice, but the huge memory usage of automata prevents them from being applied to a large-scale pattern set. Meanwhile, poor cache locality of huge automata degrades the matching speed of algorithms. Here we propose a space-efficient multiple string matching algorithm BVM, which makes use of bit-vectors and a succinct hash table to replace the automata used in factor-searching-based algorithms. The space complexity of the proposed algorithm is O(rm² + Σ_{p∈P} |p|), which is more space-efficient than the classic automata-based algorithms. Experiments on datasets including Snort, ClamAV, URL blacklist and synthetic rules show that the proposed algorithm significantly reduces memory usage and still runs at a fast matching speed. Above all, BVM costs less than 0.75% of the memory usage of AC, and is capable of matching millions of patterns efficiently.
Keywords: automata theory; security of data; string matching; AC; ClamAV; SBDM; SBOM; Snort; URL blacklist; automata-based multiple string matching algorithms; bit-vector; factor searching-based algorithms; factor-searching-based multiple string matching algorithm; huge memory usage; matching speed; network intrusion detection systems; space complexity; space-efficient multiple string matching algorithm BVM; succinct hash table; synthetic rules; Arrays; Automata; Intrusion detection; Pattern matching; Time complexity; automata; intrusion detection; multiple string matching; space-efficient (ID#: 15-3691)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883393&isnumber=6883277
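The core bit-vector trick behind such algorithms can be shown with a multi-pattern Shift-And matcher (a simplified illustration of bit-parallel matching, not the BVM algorithm itself, which additionally uses succinct hash tables and factor search): all patterns are packed into one machine word, and a single shift-and-mask per input character advances every pattern automaton simultaneously.

```python
def build_masks(patterns):
    """Bit-parallel Shift-And tables for a set of patterns packed in one word."""
    masks, accept, init, pos = {}, 0, 0, 0
    for p in patterns:
        init |= 1 << pos                   # start bit of this pattern
        for i, ch in enumerate(p):
            masks[ch] = masks.get(ch, 0) | (1 << (pos + i))
        accept |= 1 << (pos + len(p) - 1)  # end bit of this pattern
        pos += len(p)
    return masks, accept, init

def search(text, patterns):
    """Return positions in text where some pattern ends."""
    masks, accept, init = build_masks(patterns)
    state, hits = 0, []
    for j, ch in enumerate(text):
        # One shift and two bitwise ops advance all pattern automata at once.
        state = ((state << 1) | init) & masks.get(ch, 0)
        if state & accept:
            hits.append(j)
    return hits
```

Each active bit marks a live prefix of some pattern, so the per-character work is constant regardless of how many patterns are packed into the word, which is the space/speed trade-off the abstract exploits at much larger scale.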
Van Vaerenbergh, S.; González, O.; Vía, J.; Santamaría, I., "Physical Layer Authentication Based On Channel Response Tracking Using Gaussian Processes," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp.2410,2414, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854032 Physical-layer authentication techniques exploit the unique properties of the wireless medium to enhance traditional higher-level authentication procedures. We propose to reduce the higher-level authentication overhead by using a state-of-the-art multi-target tracking technique based on Gaussian processes. The proposed technique has the additional advantage that it is capable of automatically learning the dynamics of the trusted user's channel response and the time-frequency fingerprint of intruders. Numerical simulations show very low intrusion rates, and an experimental validation using a wireless test bed with programmable radios demonstrates the technique's effectiveness.
Keywords: Gaussian processes; fingerprint identification; security of data; target tracking; telecommunication security; time-frequency analysis; wireless channels; Gaussian process; automatic learning; channel response tracking; higher level authentication overhead; higher level authentication procedure; intruder; multitarget tracking technique; numerical simulation; physical layer authentication; programmable radio; time-frequency fingerprint; trusted user channel response; wireless medium; wireless test bed; Authentication; Channel estimation; Communication system security; Gaussian processes; Time-frequency analysis; Trajectory; Wireless communication; Gaussian processes; multi-target tracking; physical-layer authentication; wireless communications (ID#: 15-3692)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854032&isnumber=6853544
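A much-simplified stand-in for the tracker in the abstract above (exponential smoothing in place of the paper's Gaussian-process machinery; all names, values, and thresholds are illustrative assumptions) conveys the mechanism: the trusted user's channel response evolves smoothly over time, so a frame whose response jumps far from the tracked estimate is attributed to an intruder with a different channel fingerprint.

```python
class ChannelAuthenticator:
    """Track the trusted user's channel response; flag large deviations."""

    def __init__(self, first_response, alpha=0.3, threshold=1.0):
        self.estimate = list(first_response)  # per-subcarrier channel gains
        self.alpha = alpha                    # smoothing rate of the track
        self.threshold = threshold            # max tolerated deviation

    def authenticate(self, response):
        dist = max(abs(e - r) for e, r in zip(self.estimate, response))
        if dist > self.threshold:
            return False  # likely intruder; do not pollute the track
        # Accepted frame: update the tracked channel estimate.
        self.estimate = [(1 - self.alpha) * e + self.alpha * r
                         for e, r in zip(self.estimate, response)]
        return True
```

The Gaussian-process version in the paper additionally learns the channel dynamics and the intruders' time-frequency fingerprints automatically, rather than relying on fixed smoothing and threshold parameters as this sketch does.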
Baofeng Wu; Qingfang Jin; Zhuojun Liu; Dongdai Lin, "Constructing Boolean Functions With Potentially Optimal Algebraic Immunity Based On Additive Decompositions Of Finite Fields (Extended Abstract)," Information Theory (ISIT), 2014 IEEE International Symposium on, pp.1361,1365, June 29 2014-July 4 2014. doi: 10.1109/ISIT.2014.6875055 We propose a general approach to construct cryptographically significant Boolean functions of (r + 1)m variables based on the additive decomposition F_{2^{rm}} × F_{2^m} of the finite field F_{2^{(r+1)m}}, where r ≥ 1 is odd and m ≥ 3. A class of unbalanced functions is constructed first via this approach, which coincides with a variant of the unbalanced class of generalized Tu-Deng functions in the case r = 1. Functions belonging to this class have high algebraic degree, but their algebraic immunity does not exceed m, which is impossible to be optimal when r > 1. By modifying these unbalanced functions, we obtain a class of balanced functions which have optimal algebraic degree and high nonlinearity (shown by a lower bound we prove). These functions have optimal algebraic immunity provided a combinatorial conjecture on binary strings which generalizes the Tu-Deng conjecture is true. Computer investigations show that, at least for small values of the number of variables, functions from this class also behave well against fast algebraic attacks.
Keywords: Boolean functions; combinatorial mathematics; cryptography; additive decomposition; algebraic immunity; binary strings; combinatorial conjecture; cryptographic significant Boolean functions; fast algebraic attacks ;finite field; generalized Tu-Deng functions; optimal algebraic degree; unbalanced functions; Additives; Boolean functions; Cryptography; Electronic mail; FAA; Information theory; Transforms (ID#: 15-3693)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875055&isnumber=6874773
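Two of the criteria optimized in the abstract above, balancedness and nonlinearity, are standard to check via the Walsh-Hadamard transform: a function is balanced iff its Walsh coefficient at zero vanishes, and its nonlinearity is 2^(n-1) minus half the largest absolute Walsh coefficient. The sketch below shows these generic checks on a small function (our illustration, not the paper's construction).

```python
def walsh_spectrum(truth_table):
    """Fast Walsh-Hadamard transform of f in +/-1 form; length must be 2^n."""
    w = [1 - 2 * b for b in truth_table]  # map 0/1 -> +1/-1
    h = 1
    while h < len(w):
        for i in range(0, len(w), h * 2):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def nonlinearity(truth_table):
    """Distance to the nearest affine function: 2^(n-1) - max|W_f| / 2."""
    n = len(truth_table).bit_length() - 1
    return 2 ** (n - 1) - max(abs(v) for v in walsh_spectrum(truth_table)) // 2

# 3-variable majority function: balanced, with nonlinearity 2.
maj = [(bin(x).count("1") >= 2) * 1 for x in range(8)]
```

Algebraic immunity, the paper's main target, needs a separate rank computation over the annihilator space and is not covered by this transform-based check.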
Luowei Zhou; Sucheng Liu; Weiguo Lu; Shuchang Hu, "Quasi-Steady-State Large-Signal Modelling Of DC–DC Switching Converter: Justification And Application For Varying Operating Conditions," Power Electronics, IET, vol.7, no.10, pp.2455, 2464, October 2014. doi: 10.1049/iet-pel.2013.0487 Quasi-steady-state (QSS) large-signal models are often taken for granted in the analysis and design of DC-DC switching converters, particularly for varying operating conditions. In this study, the premise for the QSS is justified quantitatively for the first time. Based on the QSS, the DC-DC switching converter under varying operating conditions is reduced to the linear time varying systems model. Thereafter, the QSS concept is applied to analysis of frequency-domain properties of the DC-DC switching converters by using three-dimensional Bode plots, which is then utilised in the optimisation of the controller parameters for wide variations of input voltage and load resistance. An experimental prototype of an average-current-mode-controlled boost DC-DC converter is built to verify the analysis and design by both frequency-domain and time-domain measurements.
Keywords: Bode diagrams; DC-DC power convertors; electric current control; electric resistance; linear systems; optimisation; switching convertors; time-frequency analysis; time-varying systems; 3D Bode plots; DC-DC switching converter; QSS; controller parameter optimisation; current mode controlled boost DC-DC converter; frequency-domain measurement; linear time varying systems model; load resistance variation; operating conditions variation; quasi steady-state large signal modelling; time-domain measurement (ID#: 15-3694)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6919980&isnumber=6919884
Trust and Trustworthiness |
Trust is created in information security through cryptography to assure the identity of external parties. The works cited here have a strong emphasis on Bayesian methods and cloud environments. In addition, the new ISO/IEC/IEEE standard for secure device identification has been released: "ISO/IEC/IEEE International Standard for Information technology -- Telecommunications and information exchange between systems -- Local and metropolitan area networks -- Part 1AR: Secure device identity," ISO/IEC/IEEE 8802-1AR:2014(E), pp. 1-82, Feb. 15, 2014. It is available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6739984&isnumber=6739983
Trusted Platform Modules (TPMs) |
Trusted Platform Module (TPM) is a computer chip that can securely store artifacts used to authenticate a network or platform. These artifacts can include passwords, certificates, or encryption keys. A TPM can also be used to store platform measurements that help ensure that the platform remains trustworthy. Interest in TPMs is growing due to their potential for solving hard problems in security, such as composability and cyber-physical system security and resilience. The works cited here are from 2014.
Akram, R.N.; Markantonakis, K.; Mayes, K., "Trusted Platform Module for Smart Cards," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814058 Near Field Communication (NFC)-based mobile phone services offer a lifeline to the under-appreciated multiapplication smart card initiative. The initiative could effectively replace heavy wallets full of smart cards for mundane tasks. However, the issue of the deployment model still lingers on. Possible approaches include, but are not restricted to, the User Centric Smart card Ownership Model (UCOM), GlobalPlatform Consumer Centric Model, and Trusted Service Manager (TSM). In addition, multiapplication smart card architecture can be a GlobalPlatform Trusted Execution Environment (TEE) and/or User Centric Tamper-Resistant Device (UCTD), which provide cross-device security and privacy preservation platforms to their users. In the multiapplication smart card environment, there might not be a prior off-card trusted relationship between a smart card and an application provider. Therefore, as a possible solution to overcome the absence of prior trusted relationships, this paper proposes the concept of Trusted Platform Module (TPM) for smart cards (embedded devices) that can act as a point of reference for establishing the necessary trust between the device and an application provider, and among applications.
Keywords: data privacy; mobile handsets; near-field communication; smart cards; TEE ;Trusted Execution Environment; UCOM; UCTD; User Centric Tamper-Resistant Device; application provider; cross-device security; deployment model; embedded devices; global platform consumer centric model; multiapplication smart card initiative; near field communication-based mobile phone services; off-card trusted relationship; privacy preservation platforms; trusted platform module; trusted service manager; user centric smart card ownership model; Computational modeling; Computer architecture; Hardware; Mobile communication; Runtime; Security; Smart cards (ID#: 15-3713)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814058&isnumber=6813963
Das, S.; Wei Zhang; Yang Liu, "Reconfigurable Dynamic Trusted Platform Module for Control Flow Checking," VLSI (ISVLSI), 2014 IEEE Computer Society Annual Symposium on, pp.166,171, 9-11 July 2014. doi: 10.1109/ISVLSI.2014.84 Trusted Platform Module (TPM) has gained popularity in computing systems as a hardware security approach. TPM provides boot-time security by verifying platform integrity, including hardware and software. However, once the software is loaded, TPM can no longer protect the software execution. In this work, we propose a dynamic TPM design, which performs control flow checking to protect the program from runtime attacks. The control flow checker is integrated at the commit stage of the processor pipeline. The control flow of the program is verified to defend against attacks such as stack smashing using buffer overflow and code reuse. We implement the proposed dynamic TPM design in FPGA to achieve high performance, low cost, and flexibility for easy functionality upgrades. In our design, neither the source code nor the Instruction Set Architecture (ISA) needs to be changed. The benchmark simulations demonstrate less than 1% performance penalty on the processor, and effective software protection from the attacks.
Keywords: field programmable gate arrays; formal verification; security of data; trusted computing; FPGA; buffer overflow; code reuse; control flow checking; dynamic TPM design; instruction set architecture; processor pipeline; reconfigurable dynamic trusted platform module; runtime attacks; stack smashing; Benchmark testing; Computer architecture; Field programmable gate arrays; Pipelines; Runtime; Security; Software; Control Flow Checking; Dynamic TPM; Reconfigurable Architecture; Runtime Security (ID#: 15-3714)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903354&isnumber=6903314
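The return-address half of such control-flow checking can be modeled with a shadow stack (our own software simplification of the commit-stage hardware checker described in the abstract): calls record the expected return target, and any return that disagrees with the shadow copy, as after a stack-smashing buffer overflow, traps.

```python
class ShadowStackChecker:
    """On call, push the return address; on return, verify against the copy."""

    def __init__(self):
        self._shadow = []

    def on_call(self, return_addr):
        # Record the architecturally correct return target at call time.
        self._shadow.append(return_addr)

    def on_ret(self, target_addr):
        # A return whose target disagrees with the shadow copy is a violation,
        # e.g. a return address overwritten by a buffer overflow.
        if not self._shadow or self._shadow.pop() != target_addr:
            raise RuntimeError("control-flow violation: unexpected return target")
```

The paper's checker additionally validates forward edges (indirect calls and jumps) against the program's legal control-flow graph, which a shadow stack alone does not cover.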
Oberle, A.; Larbig, P.; Kuntze, N.; Rudolph, C., "Integrity Based Relationships and Trustworthy Communication between Network Participants," Communications (ICC), 2014 IEEE International Conference on, pp. 610, 615, 10-14 June 2014. doi: 10.1109/ICC.2014.6883386 Establishing trust relationships between network participants by having them prove their operating system's integrity via a Trusted Platform Module (TPM) provides interesting approaches for securing local networks at a higher level. In the introduced approach on OSI layer 2, attacks carried out by already authenticated and participating nodes (insider threats) can be detected and prevented. Forbidden activities and manipulations in hardware and software, such as executing unknown binaries, loading additional kernel modules or even inserting unauthorized USB devices, are detected and result in an autonomous reaction of each network participant. The provided trust establishment and authentication protocol operates independently from upper protocol layers and is optimized for resource constrained machines. Well known concepts of backbone architectures can maintain the chain of trust between different kinds of network types. Each endpoint, forwarding and processing unit monitors the internal network independently and reports misbehaviors autonomously to a central instance in or outside of the trusted network.
Keywords: computer network security; cryptographic protocols; trusted computing; OSI layer 2; authenticated node; authentication protocol; insider threat; integrity based relationship; network participants; operating system integrity; participating node; trust establishment; trusted platform module; trustworthy communication; Authentication; Encryption; Payloads; Protocols; Servers; Unicast; Cyber-physical systems; Security; authentication; industrial networks; integrity; protocol design; trust (ID#: 15-3715)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883386&isnumber=6883277
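The integrity proof underlying this and the other TPM papers in this section rests on the "extend" operation: each loaded component is hashed into a Platform Configuration Register so the final value commits to the entire load sequence. A minimal sketch (component names are invented; a real TPM holds the PCR in hardware):

```python
import hashlib

# Sketch of TPM-style integrity measurement. Each loaded component is
# folded into a register with the extend operation
#   PCR_new = SHA-256(PCR_old || SHA-256(component))
# so the final PCR value depends on every component and on their order.
# A verifier holding the expected value detects any unknown binary or
# kernel module, as in the insider-threat scenario above.

def extend(pcr: bytes, component: bytes) -> bytes:
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_boot(components):
    pcr = b"\x00" * 32  # PCR is zeroed at platform reset
    for c in components:
        pcr = extend(pcr, c)
    return pcr

good_boot = [b"bootloader-v2", b"kernel-3.14", b"module-netfilter"]
expected = measure_boot(good_boot)

# An attacker loading one extra, unauthorized module changes the PCR:
tampered = measure_boot(good_boot + [b"module-rootkit"])
print("integrity OK:", measure_boot(good_boot) == expected)   # True
print("tamper detected:", tampered != expected)               # True
```

Because extend is a one-way chain, a compromised node cannot "un-measure" a loaded rootkit to restore the expected value.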
Abd Aziz, N.; Udzir, N.I.; Mahmod, R., "Performance Analysis For Extended TLS With Mutual Attestation For Platform Integrity Assurance," Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2014 IEEE 4th Annual International Conference on, pp.13,18, 4-7 June 2014. doi: 10.1109/CYBER.2014.6917428 A web service is a web-based application accessed over the Internet, commonly deployed using web browsers and web servers. However, the security of web services is a major concern, since it was not widely studied or integrated at the design stage of the web service standards; security measures are add-on modules rather than well-defined solutions within the standards. Various web services security solutions have therefore been defined to protect interactions over a network. Remote attestation is an authentication technique proposed by the Trusted Computing Group (TCG) which enables verification of the trusted environment of platforms and assures that the reported information is accurate. To incorporate this method into the web services framework and guarantee the trustworthiness and security of web-based applications, a new framework called TrustWeb is proposed. The TrustWeb framework integrates remote attestation into the SSL/TLS protocol to provide integrity information about the endpoint platforms involved. The framework enhances the TLS protocol with a mutual attestation mechanism, which helps address the weaknesses of transferring sensitive computations and offers a practical way to solve the remote trust issue in client-server environments. In this paper, we describe the work of designing and building a framework prototype in which the attestation mechanism is integrated into the Mozilla Firefox browser and the Apache web server. We also evaluate the framework to show the improvement in efficiency.
Keywords: Web services; protocols; trusted computing; Apache Web server; Internet connectivity; Mozilla Firefox browser; SSL-TLS protocol; Web browsers; Web servers; Web service security; Web-based application ;client-server environment; endpoint platforms; extended TLS; mutual attestation mechanism; platform integrity assurance; remote attestation; trusted computing group; trustworthiness; Browsers; Principal component analysis; Protocols; Security; Web servers (ID#: 15-3716)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6917428&isnumber=6917419
Chen Chen, Himanshu Raj, Stefan Saroiu, Alec Wolman, "cTPM: A Cloud TPM for Cross-Device Trusted Applications," NSDI'14 Proceedings of the 11th USENIX Conference on Networked Systems Design and Implementation, April 2014, Pages 187-201. (no doi given) Current Trusted Platform Modules (TPMs) are ill-suited for cross-device scenarios in trusted mobile applications because they hinder the seamless sharing of data across multiple devices. This paper presents cTPM, an extension of the TPM's design that adds an additional root key to the TPM and shares that root key with the cloud. As a result, the cloud can create and share TPM-protected keys and data across multiple devices owned by one user. Further, the additional key lets the cTPM allocate cloud-backed remote storage so that each TPM can benefit from a trusted real-time clock and high-performance, non-volatile storage. This paper shows that cTPM is practical, versatile, and easily applicable to trusted mobile applications. Our simple change to the TPM specification is viable because its fundamental concepts - a primary root key and off-chip, NV storage - are already found in the current specification, TPM 2.0. By avoiding a clean-slate redesign, we sidestep the difficult challenge of re-verifying the security properties of a new TPM design. We demonstrate cTPM's versatility with two case studies: extending Pasture with additional functionality, and reimplementing TrInc without the need for extra hardware.
Keywords: (not provided) (ID#: 15-3717)
URL: http://dl.acm.org/citation.cfm?id=2616448.2616466
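The core of cTPM's shared-root-key idea is that the cloud and the device's TPM can each derive identical purpose-specific keys from the shared root, with no key material crossing the network. A hedged sketch of that derivation (the HMAC-based KDF and the label strings below are stand-ins for illustration, not the paper's exact TPM 2.0 key-derivation scheme):

```python
import hmac, hashlib

# Both the cloud and the device TPM hold the same provisioned root secret,
# so either side can derive the same child key for a given purpose label.

def derive_key(root_key: bytes, label: str) -> bytes:
    return hmac.new(root_key, label.encode(), hashlib.sha256).digest()

cloud_root = tpm_root = b"shared-32-byte-root-secret......"  # provisioned once

# Cloud derives a storage key for device A; device A's TPM derives the
# same key locally -- the key itself is never transmitted.
k_cloud = derive_key(cloud_root, "storage/deviceA")
k_tpm   = derive_key(tpm_root,   "storage/deviceA")
print(k_cloud == k_tpm)  # True: both sides agree without exchanging the key
```

Distinct labels yield independent keys, which is what lets the cloud mint and share TPM-protected keys per device and per application.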
Vijay Varadharajan, Udaya Tupakula, Counteracting Security Attacks In Virtual Machines In The Cloud Using Property Based Attestation, Journal of Network and Computer Applications, Volume 40, April, 2014, Pages 31-45. Doi: 10.1016/j.jnca.2013.08.002 Cloud computing technologies are receiving a great deal of attention. Furthermore, most hardware devices, such as PCs and mobile phones, increasingly have a trusted component called a Trusted Platform Module embedded in them, which helps to measure the state of the platform and hence reason about its trust. Recently, attestation techniques such as binary attestation and property-based attestation have been proposed based on the TPM. In this paper, we propose a novel trust enhanced security model for cloud services that helps to detect and prevent security attacks in cloud infrastructures using trusted attestation techniques. We consider a cloud architecture where different services are hosted on virtualized systems on the cloud by multiple cloud customers (multi-tenants). We consider an attacker model and various attack scenarios for such hosted services in the cloud. Our trust enhanced security model enables the cloud service provider to certify certain security properties of the tenant virtual machines and the services running on them. These properties are then used to detect and minimise attacks between the cloud tenants running virtual machines on the infrastructure and its customers, as well as to increase the assurance of the tenant virtual machine transactions. If there is a variation in the behaviour of the tenant virtual machine from the certified properties, the model allows us to dynamically isolate the tenant virtual machine or even terminate the malicious services on a fine-granular basis. The paper describes the design and implementation of the proposed model and discusses how it deals with the different attack scenarios.
We also show that our model is beneficial for the cloud service providers, cloud customers running tenant virtual machines as well as the customers using the services provided by these tenant virtual machines.
Keywords: Cloud, Malware, Rootkits, TPM attestation, Trusted computing, Virtual machine monitors, Zero day attacks (ID#: 15-3718)
URL: http://www.sciencedirect.com/science/article/pii/S1084804513001768
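The certify-then-enforce loop described in this abstract can be made concrete with a toy sketch: the provider records a set of certified properties per tenant VM, and any runtime deviation triggers isolation. The VM identifier and property names below are invented for illustration; the paper's model operates on attested measurements, not on Python dictionaries.

```python
# Toy sketch of property-based attestation enforcement: compare properties
# observed at runtime against the provider-certified set, and isolate the
# tenant VM (on a fine-grained, per-VM basis) when they deviate.

certified = {
    "vm-42": {"firewall": "enabled", "kernel": "3.14-hardened",
              "selinux": "enforcing"},
}

def attest(vm_id, observed):
    """Return the set of properties that deviate from the certified ones."""
    return {k for k, v in certified[vm_id].items() if observed.get(k) != v}

def enforce(vm_id, observed):
    violations = attest(vm_id, observed)
    return ("isolate" if violations else "allow"), violations

# A healthy VM passes; one flipped property (e.g. malware disabling the
# firewall) is detected and the VM is isolated.
print(enforce("vm-42", dict(certified["vm-42"])))
print(enforce("vm-42", {**certified["vm-42"], "firewall": "disabled"}))
```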
Y. Seifi, S. Suriadi, E. Foo, C. Boyd, Security Properties Analysis In A TPM-Based Protocol, International Journal of Security and Networks, Volume 9 Issue 2, April 2014, Pages 85-103. Doi: 10.1504/IJSN.2014.060742 Security protocols are designed to provide security goals, which they achieve using cryptographic primitives such as key agreement or hash functions. Security analysis tools are used to verify whether a security protocol achieves its goals. The properties analysed by special-purpose tools are predefined ones such as secrecy (confidentiality), authentication, or non-repudiation; systems with security requirements may, however, also have user-defined goals. Analysis of such properties is possible with general-purpose analysis tools such as coloured Petri nets (CPN). This research analyses two security properties defined in a protocol based on the Trusted Platform Module (TPM). The analysed protocol, proposed by Delaune, uses TPM capabilities and secrets to open only one of two submitted secrets to a recipient.
Keywords: (not provided) (ID#: 15-3719)
URL: http://www.inderscience.com/offer.php?id=60742
Danan Thilakanathan, Shiping Chen, Surya Nepal, Rafael A. Calvo, Dongxi Liu, John Zic, CLOUD '14 Proceedings of the 2014 IEEE International Conference on Cloud Computing, June 2014, Pages 224-231. Doi: 10.1109/CLOUD.2014.39 The trend towards Cloud computing infrastructure has increased the need for new methods that allow data owners to share their data with others securely, taking into account the needs of multiple stakeholders. The data owner should be able to share confidential data while delegating much of the burden of access control management to the Cloud and trusted enterprises. The lack of such methods to enhance privacy and security may hinder the growth of cloud computing. In particular, there is a growing need to better manage security keys for data shared in the Cloud. BYOD provides a first step towards enabling secure and efficient key management; however, the data owner cannot guarantee that the data consumer's device itself is secure. Furthermore, in current methods the data owner cannot efficiently revoke a particular data consumer or group. In this paper, we address these issues by incorporating a hardware-based Trusted Platform Module (TPM) mechanism called the Trusted Extension Device (TED), together with our security model and protocol, to provide stronger privacy of data compared to software-based security protocols. We demonstrate the concept of using TED for stronger protection and management of cryptographic keys, and show how our secure data sharing protocol allows a data owner (e.g., an author) to securely store data via untrusted Cloud services. Our work prevents keys from being stolen by outsiders and/or dishonest authorized consumers, making it particularly attractive for implementation in real-world scenarios.
Keywords: Cloud Computing, Security, Privacy, Data sharing, Access control, TPM, BYOD, Key management (ID#: 15-3720)
URL: http://dx.doi.org/10.1109/CLOUD.2014.39
Rommel García, Ignacio Algredo-Badillo, Miguel Morales-Sandoval, Claudia Feregrino-Uribe, René Cumplido, A Compact FPGA-Based Processor for the Secure Hash Algorithm SHA-256, Computers and Electrical Engineering, Volume 40 Issue 1, January, 2014, Pages 194-202. Doi: 10.1016/j.compeleceng.2013.11.014 This work reports an efficient and compact FPGA processor for the SHA-256 algorithm. The novel processor architecture is based on a custom datapath that exploits the reuse of modules, having as its main component a 4-input Arithmetic-Logic Unit not previously reported. This ALU is designed as a result of studying the types of operations in the SHA algorithm, their execution sequence and the associated dataflow. The processor hardware architecture was modeled in VHDL and implemented in FPGAs. The results obtained from the implementation in a Virtex-5 device demonstrate that the proposed design uses fewer resources while achieving higher performance and efficiency, outperforming previous approaches in the literature focused on compact designs, saving around 60% of FPGA slices with increased throughput (Mbps) and efficiency (Mbps/Slice). The proposed SHA processor is well suited for applications like Wi-Fi, TMP (Trusted Mobile Platform), and MTM (Mobile Trusted Module), where the data transfer speed is around 50 Mbps.
Keywords: (not provided) (ID#: 15-3721)
URL: http://www.sciencedirect.com/science/article/pii/S0045790613002966
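The operation mix the paper's 4-input ALU must support is easiest to see from a reference implementation of the SHA-256 compression function (FIPS 180-4): 32-bit rotates and shifts, XOR, the Ch/Maj selection functions, and chains of modulo-2^32 additions. The pure-Python sketch below is for exposition, checked against Python's own hashlib; it says nothing about the paper's datapath itself.

```python
import hashlib, struct

K = [  # SHA-256 round constants (FIPS 180-4)
    0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1,
    0x923f82a4, 0xab1c5ed5, 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
    0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174, 0xe49b69c1, 0xefbe4786,
    0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
    0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147,
    0x06ca6351, 0x14292967, 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13,
    0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85, 0xa2bfe8a1, 0xa81a664b,
    0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
    0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a,
    0x5b9cca4f, 0x682e6ff3, 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208,
    0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2,
]

def rotr(x, n):
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

def sha256(msg: bytes) -> bytes:
    h = [0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
         0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19]
    bitlen = len(msg) * 8  # pad: 0x80, zeros, 64-bit big-endian length
    msg += b"\x80" + b"\x00" * ((55 - len(msg)) % 64) + struct.pack(">Q", bitlen)
    for i in range(0, len(msg), 64):
        w = list(struct.unpack(">16I", msg[i:i + 64]))
        for t in range(16, 64):  # message schedule expansion
            s0 = rotr(w[t-15], 7) ^ rotr(w[t-15], 18) ^ (w[t-15] >> 3)
            s1 = rotr(w[t-2], 17) ^ rotr(w[t-2], 19) ^ (w[t-2] >> 10)
            w.append((w[t-16] + s0 + w[t-7] + s1) & 0xFFFFFFFF)
        a, b, c, d, e, f, g, hh = h
        for t in range(64):  # round: Sigma, Ch, Maj and 4-way additions
            t1 = (hh + (rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25))
                  + ((e & f) ^ (~e & g)) + K[t] + w[t]) & 0xFFFFFFFF
            t2 = ((rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22))
                  + ((a & b) ^ (a & c) ^ (b & c))) & 0xFFFFFFFF
            hh, g, f, e = g, f, e, (d + t1) & 0xFFFFFFFF
            d, c, b, a = c, b, a, (t1 + t2) & 0xFFFFFFFF
        h = [(x + y) & 0xFFFFFFFF for x, y in zip(h, (a, b, c, d, e, f, g, hh))]
    return b"".join(struct.pack(">I", x) for x in h)

print(sha256(b"abc").hex() == hashlib.sha256(b"abc").hexdigest())  # True
```

The t1 computation sums four 32-bit operands per round, which is exactly the kind of recurring pattern that motivates a 4-input adder/ALU in a compact datapath.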
Bryan Jeffery Parno, Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers, (book) Association for Computing Machinery and Morgan & Claypool New York, NY, June 2014. ISBN = 978-1-62705-477-5 As society rushes to digitize sensitive information and services, it is imperative that we adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers. In other words, consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low costs. Meanwhile, attempts to build secure systems from the ground up typically abandon such goals, and hence are seldom adopted [Karger et al. 1991, Gold et al. 1984, Ames 1981]. In this book, a revised version of my doctoral dissertation, originally written while studying at Carnegie Mellon University, I argue that we can resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. We support this premise over the course of the following chapters.
URL: http://dl.acm.org/citation.cfm?id=2611399
Akshay Dua, Nirupama Bulusu, Wu-Chang Feng, Wen Hu, Combating Software and Sybil Attacks to Data Integrity in Crowd-Sourced Embedded Systems , ACM Transactions on Embedded Computing Systems (TECS), Volume 13 Issue 5s, September 2014, Article No. 154. Doi: 10.1145/2629338 Crowd-sourced mobile embedded systems allow people to contribute sensor data, for critical applications, including transportation, emergency response and eHealth. Data integrity becomes imperative as malicious participants can launch software and Sybil attacks modifying the sensing platform and data. To address these attacks, we develop (1) a Trusted Sensing Peripheral (TSP) enabling collection of high-integrity raw or aggregated data, and participation in applications requiring additional modalities; and (2) a Secure Tasking and Aggregation Protocol (STAP) enabling aggregation of TSP trusted readings by untrusted intermediaries, while efficiently detecting fabricators. Evaluations demonstrate that TSP and STAP are practical and energy-efficient.
Keywords: Trust, critical systems, crowd-sourced sensing, data integrity, embedded systems, mobile computing, security (ID#: 15-3723)
URL: http://dl.acm.org/citation.cfm?doid=2660459.2629338
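The data-integrity guarantee a Trusted Sensing Peripheral provides can be sketched with a per-reading keyed MAC: the peripheral holds a key and tags each reading, so an untrusted aggregator can relay readings while the verifier detects any fabricated value. This is a naive stand-in, not the actual STAP protocol (which is more efficient than tagging every reading); the key and reading values are illustrative.

```python
import hmac, hashlib

# A trusted peripheral signs each sensor reading with a device key; the
# verifier recomputes the tag and rejects anything a software- or
# Sybil-attacking intermediary fabricated.

KEY = b"tsp-device-key"

def tag(reading: bytes) -> bytes:
    return hmac.new(KEY, reading, hashlib.sha256).digest()

def verify(readings_with_tags):
    return all(hmac.compare_digest(tag(r), t) for r, t in readings_with_tags)

honest = [(b"temp=21.5", tag(b"temp=21.5")), (b"temp=21.7", tag(b"temp=21.7"))]
forged = honest + [(b"temp=99.9", b"\x00" * 32)]  # fabricated reading

print(verify(honest))  # True
print(verify(forged))  # False: fabrication detected
```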
Video Surveillance |
Video surveillance is a fast growing area of public security. With it has come policy issues related to privacy. Technical issues and opportunities have also arisen, including the potential to use advanced methods to provide positive identification, abnormal behaviors in crowds, intruder detection, and information fusion with other data. The research presented here came from multiple conferences and publications and was offered in 2014.
Xiaochun Cao; Na Liu; Ling Du; Chao Li, "Preserving Privacy For Video Surveillance Via Visual Cryptography," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp.607,610, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889315 Video surveillance widely installed in public areas poses a significant threat to privacy. This paper proposes a new privacy preserving method via the Generalized Random-Grid based Visual Cryptography Scheme (GRG-based VCS). We first separate the foreground from the background for each video frame. These foreground pixels contain the most important information that needs to be protected. Every foreground area is encrypted into two shares based on GRG-based VCS. One share is taken as the foreground, and the other is embedded into another frame with random selection. The content of the foreground can only be recovered when the two shares are brought together. The performance evaluation on several surveillance scenarios demonstrates that our proposed method can effectively protect sensitive privacy information in surveillance videos.
Keywords: cryptography; data protection; video surveillance; GRG-based VCS; foreground pixels; generalized random-grid based visual cryptography scheme; performance evaluation; random selection; sensitive privacy information preservation method; video frame; video surveillance; Cameras; Cryptography; PSNR; Privacy; Video surveillance; Visualization; Random-Grid; Video surveillance; privacy protection; visual cryptography (ID#: 15-3584)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889315&isnumber=6889177
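The random-grid share construction behind schemes like the one above is simple to sketch for the (2,2) binary case: each pixel of the foreground becomes one bit in each of two shares, each share alone is uniformly random, and stacking (bitwise OR) makes every black source pixel fully black. This is a generic random-grid VCS sketch, not the paper's generalized scheme.

```python
import random

# (2,2) random-grid visual cryptography on a binary image (1 = black).
# White pixel: both shares get the same random bit.
# Black pixel: the shares get complementary bits.

def encrypt(image):
    share1, share2 = [], []
    for px in image:
        r = random.randint(0, 1)
        share1.append(r)
        share2.append(r if px == 0 else 1 - r)
    return share1, share2

def stack(s1, s2):
    # Physically superimposing transparencies acts like bitwise OR.
    return [a | b for a, b in zip(s1, s2)]

foreground = [1, 0, 1, 1, 0, 0, 1, 0]   # sensitive foreground pixels
s1, s2 = encrypt(foreground)
recovered = stack(s1, s2)

# Every black source pixel is black in the stack; white pixels come out
# black only half the time -- the usual contrast loss of visual cryptography.
print(all(recovered[i] == 1 for i, p in enumerate(foreground) if p == 1))
```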
Yoohwan Kim; Juyeon Jo; Shrestha, S., "A Server-Based Real-Time Privacy Protection Scheme Against Video Surveillance By Unmanned Aerial Systems," Unmanned Aircraft Systems (ICUAS), 2014 International Conference on, pp.684,691, 27-30 May 2014. doi: 10.1109/ICUAS.2014.6842313 Unmanned Aerial Systems (UAS) have recently raised great privacy concerns. A practical method to protect privacy is needed for adopting UAS in civilian airspace. This paper examines privacy policies, filtering strategies and existing techniques, then proposes a novel method based on an encrypted video stream and cloud-based privacy servers. In this scheme, all video surveillance images are initially encrypted, then delivered to a privacy server. The privacy server decrypts the video using the key shared with the camera, and filters the image according to the privacy policy specified for the surveyed region. The sanitized video is delivered to the surveillance operator or anyone on the Internet who is authorized. In a larger system composed of multiple cameras and multiple privacy servers, the keys can be distributed using the Kerberos protocol. With this method the privacy policy can be changed on demand in real time, and there is no need for a costly on-board processing unit. By utilizing cloud-based servers, advanced image processing algorithms and new filtering algorithms can be applied immediately without upgrading the camera software. This method is cost-efficient and promotes video sharing among multiple subscribers, and thus can spur wide adoption.
Keywords: Internet; data privacy; video coding; video surveillance; Internet; Kerberos protocol; UAS; camera software; civilian airspace; cloud-based privacy servers; cloud-based servers; encrypted video stream; filtering algorithms; filtering strategies; image processing algorithms; multiple privacy servers; on-board processing unit; privacy policy; sanitized video; server-based real-time privacy protection scheme; surveillance operator; unmanned aerial systems; video sharing; video surveillance images; Cameras; Cryptography; Filtering; Privacy; Servers; Streaming media; Surveillance; Key Distribution; Privacy; Unmanned Aerial Systems; Video Surveillance (ID#: 15-3585)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842313&isnumber=6842225
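The camera-to-privacy-server pipeline described above can be sketched end to end: the camera encrypts each frame with a key shared with the server, the server decrypts, blanks the region the policy marks private, and forwards the sanitized frame. The SHA-256-in-counter-mode XOR keystream below is a toy stand-in for a real cipher such as AES, and the frame/region shapes are invented for illustration.

```python
import hashlib

def keystream(key: bytes, nonce: int, n: int) -> bytes:
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce.to_bytes(8, "big")
                              + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_cipher(key, nonce, data):
    # XOR with the keystream; the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

KEY = b"camera+server shared key"
frame = bytes(range(16))            # a tiny 4x4 "frame", one byte per pixel

ciphertext = xor_cipher(KEY, nonce=7, data=frame)        # camera side
decrypted = xor_cipher(KEY, nonce=7, data=ciphertext)    # privacy server
sanitized = bytearray(decrypted)
for p in (5, 6, 9, 10):            # blank the 2x2 private region per policy
    sanitized[p] = 0

print(decrypted == frame)          # True: server recovers the frame
print(bytes(sanitized) != frame)   # True: operator never sees masked pixels
```

Because filtering happens at the server, the policy (which pixels to mask) can change on demand without touching the camera, which is the scheme's main selling point.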
Hassan, M.M.; Hossain, M.A.; Al-Qurishi, M., "Cloud-based Mobile IPTV Terminal for Video Surveillance," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp.876, 880, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779086 Surveillance video streams monitoring is an important task that the surveillance operators usually carry out. The distribution of video surveillance facilities over multiple premises and the mobility of surveillance users requires that they are able to view surveillance video seamlessly from their mobile devices. In order to satisfy this requirement, we propose a cloud-based IPTV (Internet Protocol Television) solution that leverages the power of cloud infrastructure and the benefits of IPTV technology to seamlessly deliver surveillance video content on different client devices anytime and anywhere. The proposed mechanism also supports user-controlled frame rate adjustment of video streams and sharing of these streams with other users. In this paper, we describe the overall approach of this idea, address and identify key technical challenges for its practical implementation. In addition, initial experimental results were presented to justify the viability of the proposed cloud-based IPTV surveillance framework over the traditional IPTV surveillance approach.
Keywords: IPTV; cloud computing; mobile television; video surveillance Internet protocol television ;cloud-based mobile IPTV terminal; mobile devices; surveillance operators; surveillance video streams monitoring; video surveillance facilities distribution; Cameras ;IPTV; Mobile communication; Servers; Streaming media; Video surveillance; IPTV; Video surveillance; cloud computing; mobile terminal (ID#: 15-3586)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779086&isnumber=6778899
Gorur, P.; Amrutur, B., "Skip Decision and Reference Frame Selection for Low-Complexity H.264/AVC Surveillance Video Coding," Circuits and Systems for Video Technology, IEEE Transactions on, vol.24, no.7, pp.1156,1169, July 2014. doi: 10.1109/TCSVT.2014.2319611 H.264/advanced video coding surveillance video encoders use the Skip mode specified by the standard to reduce bandwidth. They also use multiple frames as reference for motion-compensated prediction. In this paper, we propose two techniques to reduce the bandwidth and computational cost of static camera surveillance video encoders without affecting detection and recognition performance. A spatial sampler is proposed to sample pixels that are segmented using a Gaussian mixture model. Modified weight updates are derived for the parameters of the mixture model to reduce floating point computations. A storage pattern of the parameters in memory is also modified to improve cache performance. Skip selection is performed using the segmentation results of the sampled pixels. The second contribution is a low computational cost algorithm to choose the reference frames. The proposed reference frame selection algorithm reduces the cost of coding uncovered background regions. We also study the number of reference frames required to achieve good coding efficiency. Distortion over foreground pixels is measured to quantify the performance of the proposed techniques. Experimental results show bit rate savings of up to 94.5% over methods proposed in literature on video surveillance data sets. The proposed techniques also provide up to 74.5% reduction in compression complexity without increasing the distortion over the foreground regions in the video sequence.
Keywords: Gaussian processes; cameras; data compression; distortion; motion compensation; video codecs; video coding; video surveillance; Gaussian mixture model;H.264/advanced video coding surveillance video encoders; bit rate savings; coding uncovered background regions; compression complexity; detection performance; distortion; floating point computations; foreground pixels; low-complexity H.264/AVC surveillance video coding; mixture model; motion-compensated prediction; multiple frames; recognition performance; reference frame selection; reference frame selection algorithm; skip decision; static camera surveillance video encoders; video sequence; video surveillance data sets; Cameras; Encoding; Motion detection; Motion segmentation; Streaming media; Surveillance; Video coding; Cache optimization; H.264/advanced video coding (AVC); motion detection; reference frame selection; skip decision; video surveillance (ID#: 15-3587)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805578&isnumber=6846390
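The per-pixel Gaussian-mixture segmentation that drives the skip decision can be sketched with the classic Stauffer-Grimson update: each sampled pixel keeps a few (weight, mean, variance) modes, a value matching a dominant mode is background, otherwise foreground. The paper's contribution is a modified, low-cost (fixed-point-friendly) variant of these updates; the sketch below is the standard floating-point form, with illustrative parameter values.

```python
ALPHA = 0.05  # learning rate

def update(modes, x):
    """modes: list of [weight, mean, var]. Returns True if x is background."""
    for m in modes:
        w, mu, var = m
        if (x - mu) ** 2 <= 6.25 * var:          # match within 2.5 sigma
            m[0] = w + ALPHA * (1 - w)           # grow matched weight
            m[1] = mu + ALPHA * (x - mu)         # drift mean toward x
            m[2] = var + ALPHA * ((x - mu) ** 2 - var)
            for other in modes:
                if other is not m:
                    other[0] *= 1 - ALPHA        # decay unmatched weights
            return m[0] > 0.5                    # background if dominant
    # No match: replace the weakest mode with a new, low-weight one.
    modes.sort(key=lambda m: m[0])
    modes[0] = [ALPHA, float(x), 100.0]
    return False

modes = [[1.0, 128.0, 20.0], [0.0, 0.0, 100.0], [0.0, 64.0, 100.0]]
for _ in range(50):
    update(modes, 128)                           # static background frames
print(update(modes, 128), update(modes, 30))     # True False
```

A pixel classified as background at every sampled location is what lets the encoder safely choose Skip mode for the enclosing macroblock.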
Xianguo Zhang; Tiejun Huang; Yonghong Tian; Wen Gao, "Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding," Image Processing, IEEE Transactions on, vol.23, no.2, pp.769,784, Feb. 2014. doi: 10.1109/TIP.2013.2294549 The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with only slightly higher encoding complexity.
Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
Keywords: data compression; video coding; video surveillance; AVC; BDP; BMAP method; BRP; MPEG-4 advanced video coding; background difference prediction; background pixels; background prediction efficiency; background reference prediction; background-modeling-based adaptive prediction method; encoding complexity; exponential growth; foreground coding performance; foreground prediction efficiency; foreground-background-hybrid blocks; high-efficiency surveillance video coding technology; surveillance video compression ratio; Complexity theory; Decoding; Encoding; Image coding; Object oriented modeling; Surveillance; video coding; Surveillance video; background difference; background modeling; background reference; block classification (ID#: 15-3588)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6680670&isnumber=6685907
Chun-Rong Huang; Chung, P.-C.J.; Di-Kai Yang; Hsing-Cheng Chen; Guan-Jie Huang, "Maximum a Posteriori Probability Estimation for Online Surveillance Video Synopsis," Circuits and Systems for Video Technology, IEEE Transactions on, vol.24, no.8, pp.1417,1429, Aug. 2014. doi: 10.1109/TCSVT.2014.2308603 To reduce human efforts in browsing long surveillance videos, synopsis videos are proposed. Traditional synopsis video generation applying optimization on video tubes is very time consuming and infeasible for real-time online generation. This dilemma significantly reduces the feasibility of synopsis video generation in practical situations. To solve this problem, the synopsis video generation problem is formulated as a maximum a posteriori probability (MAP) estimation problem in this paper, where the positions and appearing frames of video objects are chronologically rearranged in real time without the need to know their complete trajectories. Moreover, a synopsis table is employed with MAP estimation to decide the temporal locations of the incoming foreground objects in the synopsis video without needing an optimization procedure. As a result, the computational complexity of the proposed video synopsis generation method can be significantly reduced. Furthermore, as it does not require prescreening the entire video, this approach can be applied on online streaming videos.
Keywords: maximum likelihood estimation; video signal processing; video streaming; video surveillance; MAP estimation problem; computational complexity reduction; human effort reduction; long surveillance video browsing; maximum-a-posteriori probability estimation problem; online streaming videos; online surveillance video synopsis; synopsis table; synopsis video generation problem; video summarization; video tubes; Estimation; Indexes; Optimization; Predictive models; Real-time systems; Streaming media; Surveillance; Maximum a posteriori (MAP) estimation; video summarization; video surveillance; video synopsis (ID#: 15-3589)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748870&isnumber=6869080
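The synopsis-table idea above replaces global optimization with incremental bookkeeping: each arriving object track is greedily shifted to the earliest start frame at which it does not collide with tracks already placed. The sketch below simplifies the collision test to frame-level overlap of pixel sets (the paper's MAP formulation weighs collisions probabilistically rather than forbidding them outright); all track data are synthetic.

```python
# Greedy online placement of object tracks into a fixed-length synopsis.

def place(table, track, synopsis_len):
    """track: list of per-frame pixel regions (frozensets). Returns the
    chosen start frame and records the track in the synopsis table."""
    duration = len(track)
    for start in range(synopsis_len - duration + 1):
        if all(not (track[i] & table[start + i]) for i in range(duration)):
            for i in range(duration):        # commit: mark pixels occupied
                table[start + i] |= track[i]
            return start
    raise RuntimeError("synopsis full")

SYNOPSIS_LEN = 6
table = [set() for _ in range(SYNOPSIS_LEN)]  # occupied pixels per frame

# Two tracks covering the same pixels: the second is shifted past the first.
walker_a = [frozenset({(10, 20), (11, 20)})] * 3
walker_b = [frozenset({(10, 20)})] * 2
print(place(table, walker_a, SYNOPSIS_LEN))  # 0
print(place(table, walker_b, SYNOPSIS_LEN))  # 3
```

Because each placement only consults the running table, no track needs its complete trajectory known in advance, which is what makes online, streaming synopsis generation feasible.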
Hong Jiang; Songqing Zhao; Zuowei Shen; Wei Deng; Wilford, P.A.; Haimi-Cohen, R., "Surveillance Video Analysis Using Compressive Sensing With Low Latency," Bell Labs Technical Journal, vol.18, no.4, pp.63,74, March 2014. doi: 10.1002/bltj.21646 We propose a method for analysis of surveillance video by using low rank and sparse decomposition (LRSD) with low latency combined with compressive sensing to segment the background and extract moving objects in a surveillance video. Video is acquired by compressive measurements, and the measurements are used to analyze the video by a low rank and sparse decomposition of a matrix. The low rank component represents the background, and the sparse component, which is obtained in a tight wavelet frame domain, is used to identify moving objects in the surveillance video. An important feature of the proposed low latency method is that the decomposition can be performed with a small number of video frames, which reduces latency in the reconstruction and makes it possible for real time processing of surveillance video. The low latency method is both justified theoretically and validated experimentally.
Keywords: compressed sensing; image motion analysis; image segmentation; video surveillance; wavelet transforms; LRSD; background segmentation; compressive sensing; low latency method; low rank and sparse decomposition; surveillance video analysis; video frames; wavelet frame domain; Matrix decompoistion; Object recognition; Sparse decomposition; Sparse matrices; Streaming media; Surveillance; Video communication; Wavelet domain (ID#: 15-3590)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6770348&isnumber=6770341
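The intuition behind the low-rank + sparse split can be shown with a deliberately crude stand-in: for a static camera, the "low-rank" background is well approximated per pixel by the temporal median over a window of frames, and moving objects appear as large-magnitude residuals (the "sparse" part). The paper operates on compressive measurements with wavelet-frame sparsity; this sketch works directly on pixels with synthetic data.

```python
# Median-background stand-in for low rank + sparse decomposition.

def median(vals):
    s = sorted(vals)
    return s[len(s) // 2]

def split(frames, thresh=30):
    """frames: list of equal-length pixel lists. Returns (background,
    per-frame foreground masks)."""
    npix = len(frames[0])
    bg = [median([f[p] for f in frames]) for p in range(npix)]
    masks = [[abs(f[p] - bg[p]) > thresh for p in range(npix)] for f in frames]
    return bg, masks

# Synthetic stream: a 5-pixel row with static background 100 and a bright
# object moving one pixel per frame.
frames = []
for t in range(5):
    row = [100] * 5
    row[t] = 255          # moving object
    frames.append(row)

bg, masks = split(frames)
print(bg)                  # [100, 100, 100, 100, 100]
print(masks[2])            # [False, False, True, False, False]
```

The median works here because the object occupies each pixel in only a minority of frames, which is the same sparsity assumption the LRSD formulation exploits with far fewer frames.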
Rasheed, N.; Khan, S.A.; Khalid, A., "Tracking and Abnormal Behavior Detection in Video Surveillance Using Optical Flow and Neural Networks," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp.61,66, 13-16 May 2014. doi: 10.1109/WAINA.2014.18 An abnormal behavior detection algorithm for surveillance is required to correctly identify targets as being in normal or chaotic movement. A model is developed here for this purpose. The uniqueness of this algorithm is the use of a foreground detection with Gaussian mixture (FGMM) model before passing the video frames to an optical flow model using the Lucas-Kanade approach. Information on the horizontal and vertical displacements and directions associated with each pixel of the object of interest is extracted. These features are then fed to a feed forward neural network for classification and simulation. The study is conducted on real time videos and some synthesized videos. The accuracy of the method has been calculated using the performance parameters for neural networks. In comparison with plain optical flow, this model obtains improved results without noise. Classes are correctly identified with an overall performance of 3.4e-02 and an error percentage of 2.5.
Keywords: Gaussian processes; feature selection; feedforward neural nets; image sequences; mixture models; object detection; video surveillance; FGMM model; Lucas-Kanade approach; abnormal behavior detection; chaotic movement; feed forward neural network; foreground detection with Gaussian mixture model; neural networks; normal movement; optical flow; real time videos; synthesized videos; targets identification; video frames; video surveillance; Adaptive optics; Computer vision; Image motion analysis; Neural networks; Optical computing; Optical imaging; Streaming media; Foreground Detection; Gaussian Mixture Models; Neural Network; Optical Flow; Video Surveillance (ID#: 15-3591)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844614&isnumber=6844560
Hammoud, R.I.; Sahin, C.S.; Blasch, E.P.; Rhodes, B.J., "Multi-source Multi-modal Activity Recognition in Aerial Video Surveillance," Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, pp.237,244, 23-28 June 2014. doi: 10.1109/CVPRW.2014.44 Recognizing activities in wide aerial/overhead imagery remains a challenging problem due in part to low-resolution video and cluttered scenes with a large number of moving objects. In the context of this research, we deal with two un-synchronized data sources collected in real-world operating scenarios: full-motion videos (FMV) and analyst call-outs (ACO) in the form of chat messages (voice-to-text) made by a human watching the streamed FMV from an aerial platform. We present a multi-source multi-modal activity/event recognition system for surveillance applications, consisting of: (1) detecting and tracking multiple dynamic targets from a moving platform, (2) representing FMV target tracks and chat messages as graphs of attributes, (3) associating FMV tracks and chat messages using a probabilistic graph-based matching approach, and (4) detecting spatial-temporal activity boundaries. We also present an activity pattern learning framework which uses the multi-source associated data as training to index a large archive of FMV videos. Finally, we describe a multi-intelligence user interface for querying an index of activities of interest (AOIs) by movement type and geo-location, and for playing-back a summary of associated text (ACO) and activity video segments of targets-of-interest (TOIs) (in both pixel and geo-coordinates). Such tools help the end-user to quickly search, browse, and prepare mission reports from multi-source data.
Keywords: image matching; image motion analysis; image representation; indexing; learning (artificial intelligence);object detection; query processing; target tracking; user interfaces; video streaming; video surveillance; ACO; FMV streaming; FMV target track representation; FMV videos; activities of interest; activity pattern learning framework; activity video segments; aerial imagery; aerial video surveillance; analyst call-outs; associated text; full-motion video; geolocation; index query; multi-intelligence user interface; multiple dynamic target detection; multiple dynamic target tracking; multisource associated data; multisource multimodal activity recognition; multisource multimodal event recognition; overhead imagery; probabilistic graph-based matching approach; spatial-temporal activity boundary detection; targets-of-interest; unsynchronized data sources; voice-to-text chat messages; Pattern recognition; Radar tracking; Semantics; Streaming media; Target tracking; Vehicles; FMV exploitation; MINER; activity recognition; chat and video fusion; event recognition; fusion; graph matching; graph representation; surveillance (ID#: 15-3592)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6909989&isnumber=6909944
Woon Cho; Abidi, M.A.; Kyungwon Jeong; Nahyun Kim; Seungwon Lee; Joonki Paik; Gwang-Gook Lee, "Object Retrieval Using Scene Normalized Human Model For Video Surveillance System," Consumer Electronics (ISCE 2014), The 18th IEEE International Symposium on, pp.1,2, 22-25 June 2014. doi: 10.1109/ISCE.2014.6884439 This paper presents a human model-based feature extraction method for a video surveillance retrieval system. The proposed method extracts, from a normalized scene, object features such as height, speed, and representative color using a simple human model based on multiple ellipses. Experimental results show that the proposed system can effectively track the moving routes of people such as a missing child, an absconder, or a suspect after events.
Keywords: feature extraction; image retrieval; object tracking; feature extraction; multiple ellipse human model; object retrieval; scene normalized human model; video surveillance retrieval system; Cameras; Databases; Feature extraction ;Image color analysis; Shape; Video surveillance; human model; retrieval system; scene calibration; surveillance system (ID#: 15-3593)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884439&isnumber=6884278
Ma Juan; Hu Rongchun; Li Jian, "A Fast Human Detection Algorithm Of Video Surveillance In Emergencies," Control and Decision Conference (2014 CCDC), The 26th Chinese, pp.1500,1504, May 31 2014-June 2 2014. doi: 10.1109/CCDC.2014.6852404 This paper proposes a fast human detection algorithm for video surveillance in emergencies. First, background subtraction based on a single-Gaussian model is combined with frame subtraction to obtain a target mask, which is optimized by Gaussian filtering and dilation. Interest points on the head are then obtained from the target mask and edge detection. Finally, by detecting these points, the head can be tracked and the number of people counted from the frequency of moving targets at the same place. Simulation results show that the algorithm detects moving objects quickly and accurately.
Keywords: Gaussian processes; edge detection; object detection; video surveillance; Gaussian filter; background subtraction; dilation; edge detection; emergencies; frame subtraction; human detection algorithm; moving target; single Guassian model; target mask; video surveillance; Conferences; Detection algorithms; Educational institutions; Electronic mail; Estimation; IEEE Computer Society; Image edge detection; background subtraction; edge tracking of head; frame subtraction; target mask (ID#: 15-3594)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852404&isnumber=6852105
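The pipeline described above (a per-pixel single-Gaussian background model combined with frame subtraction to form a target mask) can be sketched in a few lines. The learning rate, thresholds, and flat one-dimensional pixel lists below are illustrative simplifications for the sketch, not parameters from the paper:

```python
# Sketch: per-pixel single-Gaussian background model plus frame differencing.
# Pixel values are floats in [0, 255]; alpha and k are illustrative choices.

class SingleGaussianBackground:
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = [float(p) for p in first_frame]
        self.var = [25.0] * len(first_frame)   # initial variance guess
        self.alpha = alpha                      # background learning rate
        self.k = k                              # foreground threshold in std-devs

    def apply(self, frame):
        """Return a 0/1 foreground mask and update the background model."""
        mask = []
        for i, x in enumerate(frame):
            d = x - self.mean[i]
            std = self.var[i] ** 0.5
            fg = 1 if abs(d) > self.k * std else 0
            mask.append(fg)
            if not fg:  # adapt the model only where the pixel looks like background
                self.mean[i] += self.alpha * d
                self.var[i] += self.alpha * (d * d - self.var[i])
        return mask

def frame_difference(prev, cur, thresh=15):
    """Second cue from the abstract: absolute frame-to-frame difference."""
    return [1 if abs(a - b) > thresh else 0 for a, b in zip(prev, cur)]

def combine_masks(m1, m2):
    """Target mask: pixels flagged by either cue (logical OR)."""
    return [a | b for a, b in zip(m1, m2)]
```

The Gaussian filtering and dilation steps that the paper applies afterward are standard morphological post-processing and are omitted here.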
Harish, Palagati; Subhashini, R.; Priya, K., "Intruder Detection By Extracting Semantic Content From Surveillance Videos," Green Computing Communication and Electrical Engineering (ICGCCEE), 2014 International Conference on, pp.1,5, 6-8 March 2014. doi: 10.1109/ICGCCEE.2014.6922469 Surveillance cameras are in use everywhere, yet the videos and images they capture are typically stored without further processing. Many methods have been proposed for tracking and detecting objects in videos, but what is needed is the meaningful, or semantic, content of those videos, and human activity recognition is quite complex. The proposed Semantic Content Extraction (SCE) method identifies the objects and events present in a video, providing a useful methodology for intruder detection systems that captures the behavior and activities performed by an intruder. Constructing an ontology enhances the spatial and temporal relations between the extracted objects and features. The proposed system thus provides an effective way to detect intruders, thieves, and other malpractice.
Keywords: Cameras; Feature extraction; Ontologies; Semantics; Video surveillance; Videos; Human activity recognition; Ontology; Semantic content; Spatial and Temporal Relations (ID#: 15-3595)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6922469&isnumber=6920919
Wang, S.; Orwell, J.; Hunter, G., "Evaluation of Bayesian and Dempster-Shafer Approaches to Fusion of Video Surveillance Information," Information Fusion (FUSION), 2014 17th International Conference on, pp. 1, 7, 7-10 July 2014. (no doi provided) This paper presents the application of fusion methods to a visual surveillance scenario. The range of relevant features for re-identifying vehicles is discussed, along with the methods for fusing probabilistic estimates derived from those features. In particular, two statistical parametric fusion methods are considered: Bayesian Networks and the Dempster-Shafer approach. The main contribution of this paper is the development of a metric to allow direct comparison of the benefits of the two methods. This is achieved by generalising the Kelly betting strategy to accommodate a variable total stake for each sample, subject to a fixed expected (mean) stake. This metric provides a method to quantify the extra information provided by the Dempster-Shafer method, in comparison to a Bayesian Fusion approach.
Keywords: Accuracy; Bayes methods; Color; Mathematical model; Shape; Uncertainty; Vehicles; Bayesian; Dempster-Shafer; evaluation; fusion ;vehicle (ID#: 15-3596)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916172&isnumber=6915967
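As a minimal illustration of the second fusion method the paper evaluates, here is the standard Dempster rule of combination for two mass functions, with focal elements represented as frozensets of hypotheses (e.g. candidate vehicle identities). This is textbook Dempster-Shafer, not the paper's implementation:

```python
# Dempster's rule of combination for two belief mass functions.
# m1, m2: dicts mapping frozenset focal elements to mass values summing to 1.

def dempster_combine(m1, m2):
    """Combine two mass functions, normalising out the conflicting mass."""
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc   # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    norm = 1.0 - conflict
    return {a: v / norm for a, v in combined.items()}
```

The Bayesian alternative the paper compares against would instead multiply likelihoods over singleton hypotheses and renormalise; the extra expressiveness of Dempster-Shafer lies in the non-singleton focal elements such as `{A, B}` below.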
Yueguo Zhang; Lili Dong; Shenghong Li; Jianhua Li, "Abnormal Crowd Behavior Detection Using Interest Points," Broadband Multimedia Systems and Broadcasting (BMSB), 2014 IEEE International Symposium on, pp.1,4, 25-27 June 2014. doi: 10.1109/BMSB.2014.6873527 Abnormal crowd behavior detection is an important research issue in video processing and computer vision. In this paper we introduce a novel method to detect abnormal crowd behaviors in video surveillance based on interest points. A complex network-based algorithm is used to detect interest points and extract the global texture features in scenarios. The performance of the proposed method is evaluated on publicly available datasets. We present a detailed analysis of the characteristics of the crowd behavior in different density crowd scenes. The analysis of crowd behavior features and simulation results are also demonstrated to illustrate the effectiveness of our proposed method.
Keywords: behavioural sciences computing; complex networks; computer vision; feature extraction ;image texture; object detection; video signal processing; video surveillance; abnormal crowd behavior detection; complex network-based algorithm; computer vision; crowd behavior feature analysis; global texture feature extraction; interest point detection; video processing; video surveillance; Broadband communication; Broadcasting; Complex networks; Computer vision; Feature extraction; Multimedia systems; Video surveillance; Crowd Behavior; Video Surveillance; Video processing (ID#: 15-3597)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6873527&isnumber=6873457
Lu Wang; Yung, N.H.C.; Lisheng Xu, "Multiple-Human Tracking by Iterative Data Association and Detection Update," Intelligent Transportation Systems, IEEE Transactions on, vol. 15, no. 5, pp.1886,1899, Oct. 2014. doi: 10.1109/TITS.2014.2303196 Multiple-object tracking is an important task in automated video surveillance. In this paper, we present a multiple-human-tracking approach that takes the single-frame human detection results as input and associates them to form trajectories while improving the original detection results by making use of reliable temporal information in a closed-loop manner. It works by first forming tracklets, from which reliable temporal information is extracted, and then refining the detection responses inside the tracklets, which also improves the accuracy of tracklets' quantities. After this, local conservative tracklet association is performed and reliable temporal information is propagated across tracklets so that more detection responses can be refined. The global tracklet association is done last to resolve association ambiguities. Experimental results show that the proposed approach improves both the association and detection results. Comparison with several state-of-the-art approaches demonstrates the effectiveness of the proposed approach.
Keywords: feature extraction; intelligent transportation systems; iterative methods ;object tracking; sensor fusion; video surveillance; automated video surveillance; detection responses; human detection results; intelligent transportation systems; iterative data association; multiple-human tracking; temporal information extraction ;tracklet association; Accuracy; Computational modeling; Data mining; Reliability; Solid modeling; Tracking; Trajectory; Data association; detection update; multiple-human tracking; video surveillance (ID#: 15-3598)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6750747&isnumber=6910343
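The paper's association is global, iterative, and closed-loop; as a loudly simplified sketch of the underlying data-association step, the following greedily matches per-frame detections to existing tracks by nearest distance under a gating threshold, with unmatched detections spawning new tracks. All names and parameters are illustrative:

```python
# Greedy gated nearest-neighbour association of detections to tracks.

def associate(tracks, detections, gate=50.0):
    """tracks: {track_id: (x, y)} last positions; detections: list of (x, y).
    Returns the updated tracks and the (track_id, detection_index) pairs made."""
    pairs = []
    free = set(range(len(detections)))
    used_tracks = set()
    # all track/detection pairs, cheapest (closest) first
    candidates = sorted(
        (((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5, tid, i)
        for tid, (tx, ty) in tracks.items()
        for i, (dx, dy) in enumerate(detections)
    )
    for dist, tid, i in candidates:
        if dist > gate:          # remaining pairs are even farther apart
            break
        if tid in used_tracks or i not in free:
            continue
        pairs.append((tid, i))
        used_tracks.add(tid)
        free.discard(i)
        tracks[tid] = detections[i]
    next_id = max(tracks, default=-1) + 1
    for i in sorted(free):       # unmatched detections start new tracklets
        tracks[next_id] = detections[i]
        next_id += 1
    return tracks, pairs
```

The paper's contribution sits on top of such a step: refining the detections inside tracklets and propagating reliable temporal information before the final global association.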
Shuai Yi; Xiaogang Wang, "Profiling Stationary Crowd Groups," Multimedia and Expo (ICME), 2014 IEEE International Conference on, pp. 1, 6, 14-18 July 2014. doi: 10.1109/ICME.2014.6890138 Detecting stationary crowd groups and analyzing their behaviors have important applications in crowd video surveillance, but have rarely been studied. The contributions of this paper are in two aspects. First, a stationary crowd detection algorithm is proposed to estimate the stationary time of foreground pixels. It employs spatial-temporal filtering and motion filtering in order to be robust to noise caused by occlusions and crowd clutters. Second, in order to characterize the emergence and dispersal processes of stationary crowds and their behaviors during the stationary periods, three attributes are proposed for quantitative analysis. These attributes are recognized with a set of proposed crowd descriptors which extract visual features from the results of stationary crowd detection. The effectiveness of the proposed algorithms is shown through experiments on a benchmark dataset.
Keywords: feature extraction; filtering theory; image motion analysis; object detection; video signal processing; video surveillance; crowd descriptors; crowd video surveillance; foreground pixel; motion filtering; quantitative analysis; spatial-temporal filtering; stationary crowd detection algorithm; stationary crowd group detection; stationary crowd groups profiling; visual feature extraction; Color; Estimation; Filtering; Indexes; Noise; Tracking; Trajectory; Stationary crowd detection; crowd video surveillance; stationary crowd analysis (ID#: 15-3599)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890138&isnumber=6890121
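The first contribution above, estimating how long each foreground pixel has been stationary while staying robust to brief occlusions, can be sketched with a per-pixel counter and a small gap tolerance. The real method uses spatial-temporal and motion filtering; this tolerance-based counter is a stand-in for illustration only:

```python
# Per-pixel stationary-time estimation with a simple temporal filter:
# a pixel's counter grows while it stays foreground, and short background
# gaps (occlusions, clutter) up to `tolerance` frames do not reset it.

class StationaryTimeEstimator:
    def __init__(self, n_pixels, tolerance=2):
        self.stationary = [0] * n_pixels   # frames each pixel has been foreground
        self.gap = [0] * n_pixels          # consecutive background frames seen
        self.tolerance = tolerance

    def update(self, fg_mask):
        """Consume one 0/1 foreground mask; return current stationary times."""
        for i, fg in enumerate(fg_mask):
            if fg:
                self.stationary[i] += 1
                self.gap[i] = 0
            else:
                self.gap[i] += 1
                if self.gap[i] > self.tolerance:   # gap too long: truly gone
                    self.stationary[i] = 0
        return self.stationary
```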
Wireless Mesh Network Security |
With more than 70 protocols vying for preeminence over wireless mesh networks, the security problem is magnified. The research cited here was presented in 2014 and covers smart grid, specific protocols, 4-way handshaking, and fuzzy protocols.
Saavedra Benitez, Y.I.; Ben-Othman, J.; Claude, J.-P., "Performance Evaluation of Security Mechanisms in RAOLSR Protocol for Wireless Mesh Networks," Communications (ICC), 2014 IEEE International Conference on, pp. 1808, 1812, 10-14 June 2014. doi: 10.1109/ICC.2014.6883585 In this paper, we propose the IBE-RAOLSR and ECDSA-RAOLSR protocols for WMNs (Wireless Mesh Networks), contributing to secure routing protocols. We have implemented the IBE (Identity Based Encryption) and ECDSA (Elliptic Curve Digital Signature Algorithm) methods to secure messages in RAOLSR (Radio Aware Optimized Link State Routing), namely the TC (Topology Control) and Hello messages, and compare the ECDSA-based and IBE-based RAOLSR protocols. This study shows the great benefit of the IBE technique in securing the RAOLSR protocol for WMNs. Extensive ns-3 (Network Simulator-3) simulations show that IBE-RAOLSR outperforms ECDSA-RAOLSR in terms of overhead and delay, providing a greater level of security with light overhead.
Keywords: cryptography; routing protocols; telecommunication control; telecommunication network topology; wireless mesh networks; ECDSA-RAOLSR protocols; IBE-RAOLSR protocols; WMN; elliptic curve digital signature algorithm; hello messages; identity based encryption; network simulator-3 simulations; radio aware optimized link state routing; routing protocols; security mechanisms ;topology control; wireless mesh networks; Delays; Digital signatures; IEEE 802.11 Standards; Routing; Routing protocols; IBE; Identity Based Encryption; Radio Aware Optimized Link State Routing; Routing Protocol; Security; Wireless Mesh Networks (ID#: 15-3695)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883585&isnumber=6883277
Tsado, Y.; Lund, D.; Gamage, K., "Resilient Wireless Communication Networking For Smart Grid BAN," Energy Conference (ENERGYCON), 2014 IEEE International, pp.846, 851, 13-16 May 2014. doi: 10.1109/ENERGYCON.2014.6850524 The concept of Smart grid technology sets greater demands for reliability and resilience on communications infrastructure. Wireless communication is a promising alternative for distribution level, Home Area Network (HAN), smart metering and even the backbone networks that connect smart grid applications to control centers. In this paper, the reliability and resilience of smart grid communication network is analyzed using the IEEE 802.11 communication technology in both infrastructure single hop and mesh multiple-hop topologies for smart meters in a Building Area Network (BAN). Performance of end to end delay and Round Trip Time (RTT) of an infrastructure mode smart meter network for Demand Response (DR) function is presented. Hybrid deployment of these network topologies is also suggested to provide resilience and redundancy in the network during network failure or when security of the network is circumvented. This recommendation can also be deployed in other areas of the grid where wireless technologies are used. DR communication from consumer premises is used to show the performance of an infrastructure mode smart metering network.
Keywords: home automation; home networks; redundancy; sensor placement; smart meters; smart power grids; telecommunication network reliability; telecommunication network topology; telecommunication security; wireless LAN; DR communication; IEEE 802.11 communication technology; RTT; backbone networks; building area network; control center; demand response function; distribution level; end to end delay; home area network; hybrid deployment; infrastructure mode smart meter network; infrastructure single hop topology; mesh multiple hop topology; network failure; network reliability; network security; redundancy; resilient wireless communication networking; round trip time; smart grid BAN; wireless technology; IEEE 802.11 Standards; Network topology; Resilience; Smart grids; Smart meters; Wireless communication; Infrastructure mode; Multi-hop mesh network ;Resilience; Single-hop network (ID#: 15-3696)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850524&isnumber=6850389
Ghatak, Sumitro; Bose, Sagar; Roy, Siuli, "Intelligent Wall Mounted Wireless Fencing System Using Wireless Sensor Actuator Network," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp.1,5, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921795 This paper presents the relative merits of IR and microwave sensor technology, and their combination with wireless cameras, for a wall-mounted wireless intrusion detection system, and explains the phases by which intrusion information is collected and sent over a wireless mesh network to a central control station for analysis and processing. Protected zones today face numerous security threats, such as trespassing and damage to important equipment, and unwanted intrusion is a growing problem that calls for technology that detects intrusion accurately. Most organizations protect their zones conventionally, by constructing high walls, wire or power fencing, or employing guards for manual observation; for large areas, manually observing the perimeter is not a viable option. To solve this problem, the authors developed a wall-mounted wireless fencing system, studying how the different units could collaborate and how the data collected from them could be processed by software developed for the project. The intrusion detection system constitutes an important field of application for IR- and microwave-based wireless sensor networks.
A state-of-the-art wall-mounted wireless intrusion detection system detects intrusion automatically through a multi-level detection mechanism (IR, microwave, active RFID, and camera) and generates multi-level alerts (buzzer, images, segment illumination, SMS, e-mail) to notify security officers and owners, while illuminating the particular segment where the intrusion occurred. This enables the authority to identify the area of the incident at once and act quickly. IR-based perimeter protection is a proven technology, but an IR-based system alone is not fool-proof, since IR may fail in foggy or dusty weather and generate false alarms; the proposed system therefore combines it with microwave-based intrusion detection, which works satisfactorily in fog. Another significant component is camera-based intrusion detection, which some industries require in order to capture snapshots of the affected location the instant an intrusion happens. The intrusion data are transmitted wirelessly to the control station via multi-hop routing (using active RFID or the IEEE 802.15.4 protocol). The control station receives intrusion information in real time, analyzes the data with the intrusion software, and sends SMS alerts to the predefined numbers of the respective authority through a GSM modem attached to the control station engine.
Keywords: Communication system security; Intrusion detection; Monitoring; Software; Wireless communication; Wireless sensor networks; IEEE 802.15.4;IR Transceiver Module; Wireless Sensor Network (ID#: 15-3697)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921795&isnumber=6921705
Junguo Liao; Mingyan Wang, "A New Dynamic Updating Key Strategy Based On EMSA In Wireless Mesh Networks," Information and Communications Technologies (ICT 2014), 2014 International Conference on, pp.1,5, 15-17 May 2014. doi: 10.1049/cp.2014.0635 In the Efficient Mesh Security Association (EMSA) security protocols, the key updating strategy is an effective means of ensuring communication security. In the existing strategy of periodic automatic key updating, the PTK (Pairwise Transit Key) is regenerated each time through a complex 4-way handshake, so the more frequently the PTK is updated, the greater the impact on network throughput and delay. On this basis, we propose a new dynamic key updating strategy to ensure both the security and the performance of wireless mesh networks. In the new strategy, the mesh point (MP) and mesh authenticator (MA) negotiate a random function during initial authentication and use the PTK generated by the 4-way handshake as the initial seed. When the PTK update cycle arrives, both sides generate the new key with the random function, without performing another complex 4-way handshake. Performance analysis shows that, compared with existing strategies, the proposed dynamic key updating strategy improves network delay and throughput.
Keywords: EMSA; MESH network; key update; security protocol (ID#: 15-3698)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913688&isnumber=6913610
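The core idea, after one initial 4-way handshake yields a PTK, both sides derive each subsequent key locally from the previous one with an agreed pseudorandom function, can be sketched as follows. The paper has MP and MA negotiate the function; using HMAC-SHA256 and this particular label format is purely an illustrative assumption:

```python
# Hedged sketch: a deterministic key schedule seeded by the handshake PTK,
# so both MP and MA compute identical per-cycle keys without re-handshaking.

import hmac
import hashlib

def next_key(current_key: bytes, cycle: int) -> bytes:
    """Derive the key for the next update cycle from the current one."""
    label = b"ptk-update|" + cycle.to_bytes(4, "big")   # hypothetical label
    return hmac.new(current_key, label, hashlib.sha256).digest()

def key_schedule(initial_ptk: bytes, cycles: int):
    """Keys for `cycles` update periods, computable independently by each side."""
    keys, k = [], initial_ptk
    for c in range(cycles):
        k = next_key(k, c)
        keys.append(k)
    return keys
```

Because each key is an HMAC of the previous one, an attacker who captures one cycle's key still cannot recover the seed, and neither side ever transmits the handshake messages the 4-way exchange would require.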
Mor, V.; Kumar, H., "Energy Efficient Techniques in Wireless Mesh Network," Engineering and Computational Sciences (RAECS), 2014 Recent Advances in, pp.1, 6, 6-8 March 2014. doi: 10.1109/RAECS.2014.6799561 Wireless Mesh Network (WMN) is a promising wireless network architecture with the potential to provide last-mile connectivity. Considerable research has been carried out on WMN issues such as design, performance, and security. With growing interest in WMNs and the use of smart devices running bandwidth-hungry applications, WMNs must be designed with the objective of energy-efficient communication. The goal of this paper is to summarize the importance of energy efficiency in WMNs and to review various techniques for achieving energy-efficient solutions.
Keywords: energy conservation; wireless mesh networks; WMN; bandwidth hungry applications; energy efficient techniques; smart devices; wireless mesh network; wireless network architecture; Energy efficiency; IEEE 802.11 Standards; Logic gates; Routing; Throughput; Wireless communication; Wireless mesh networks; energy aware techniques; energy efficient network; evolution (ID#: 15-3699)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799561&isnumber=6799496
Szott, S., "Selfish Insider Attacks in IEEE 802.11s Wireless Mesh Networks," Communications Magazine, IEEE, vol.52, no.6, pp.227, 233, June 2014. doi: 10.1109/MCOM.2014.6829968 The IEEE 802.11s amendment for wireless mesh networks does not provide incentives for stations to cooperate and is particularly vulnerable to selfish insider attacks, in which a legitimate network participant hopes to increase its QoS at the expense of others. In this tutorial we describe the attacks that can be executed against 802.11s networks, analyzing existing attacks and identifying new ones. We also discuss possible countermeasures and detection methods, and attempt to quantify the threat of the attacks to determine which of the 802.11s vulnerabilities need to be secured with the highest priority.
Keywords: telecommunication security; wireless LAN; wireless mesh networks; IEEE 802.11s wireless mesh networks; selfish insider attacks; Ad hoc networks; IEEE 802.11 Standards; Logic gates; Protocols; Quality of service; Routing; Wireless mesh networks (ID#: 15-3700)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6829968&isnumber=6829933
El Masri, A.; Sardouk, A.; Khoukhi, L.; Merghem-Boulahia, L.; Gaiti, D., "Multimedia Support in Wireless Mesh Networks Using Interval Type-2 Fuzzy Logic System," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814034 Wireless mesh networks (WMNs) are attracting more and more real time applications. This kind of applications is constrained in terms of Quality of Service (QoS). Existing works in this area are mostly designed for mobile ad hoc networks, which, unlike WMNs, are mainly sensitive to energy and mobility. However, WMNs have their specific characteristics (e.g. static routers and heavy traffic load), which require dedicated QoS protocols. This paper proposes a novel traffic regulation scheme for multimedia support in WMNs. The proposed scheme aims to regulate the traffic sending rate according to the network state, based on the buffer evolution at mesh routers and on the priority of each traffic type. By monitoring the buffer evolution at mesh routers, our scheme is able to predict possible congestion, or QoS violation, early enough before their occurrence; each flow is then regulated according to its priority and to its QoS requirements. The idea behind the proposed scheme is to maintain lightly loaded buffers in order to minimize the queuing delays, as well as, to avoid congestion. Moreover, the regulation process is made smoothly in order to ensure the continuity of real time and interactive services. We use the interval type-2 fuzzy logic system (IT2 FLS), known by its adequacy to uncertain environments, to make suitable regulation decisions. The performance of our scheme is proved through extensive simulations in different network and traffic load scales.
Keywords: fuzzy control; protocols; quality of service; queueing theory; telecommunication congestion control; telecommunication traffic; wireless mesh networks; QoS requirements; QoS violation; buffer evolution; dedicated QoS protocols; heavy traffic load; interval type-2 fuzzy logic system; lightly loaded buffers; mesh routers; mobile ad hoc networks; multimedia support; network state; quality of service; queuing delays; regulation process; static routers; traffic load scale; traffic regulation scheme; traffic sending rate; traffic type; wireless mesh networks; Ad hoc networks; Delays ;Load management; Quality of service; Real-time systems; Throughput; Wireless communication (ID#: 15-3701)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814034&isnumber=6813963
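The regulation idea above, mapping buffer state at a mesh router to a sending-rate adjustment through fuzzy rules, can be illustrated with a type-1 sketch. The paper uses an interval type-2 FLS, which additionally models uncertainty in the membership functions themselves; the triangular memberships and rule outputs below are invented for illustration and are a loud simplification:

```python
# Type-1 fuzzy sketch of buffer-driven rate regulation: low occupancy lets the
# flow speed up, medium holds the rate, high backs off to avoid congestion.

def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rate_factor(occupancy):
    """occupancy in [0, 1] -> multiplier applied to the current send rate."""
    low = tri(occupancy, -0.5, 0.0, 0.5)
    med = tri(occupancy, 0.2, 0.5, 0.8)
    high = tri(occupancy, 0.5, 1.0, 1.5)
    # rule outputs (illustrative): speed up, hold, back off
    weights = [(low, 1.2), (med, 1.0), (high, 0.6)]
    num = sum(w * o for w, o in weights)
    den = sum(w for w, _ in weights)
    return num / den if den else 1.0   # weighted-average defuzzification
```

An interval type-2 system would replace each `tri` with an upper and lower membership and defuzzify over the resulting interval, which is what makes it better suited to the uncertain traffic environments the paper targets.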
Bin Hu; Gharavi, H., "Smart Grid Mesh Network Security Using Dynamic Key Distribution With Merkle Tree 4-Way Handshaking," Smart Grid, IEEE Transactions on, vol.5, no.2, pp.550,558, March 2014. doi: 10.1109/TSG.2013.2277963 Distributed mesh sensor networks provide cost-effective communications for deployment in various smart grid domains, such as home area networks (HAN), neighborhood area networks (NAN), and substation/plant-generation local area networks. This paper introduces a dynamically updating key distribution strategy to enhance mesh network security against cyber attack. The scheme has been applied to two security protocols known as simultaneous authentication of equals (SAE) and efficient mesh security association (EMSA). Since both protocols utilize 4-way handshaking, we propose a Merkle-tree based handshaking scheme, which is capable of improving the resiliency of the network in a situation where an intruder carries a denial of service attack. Finally, by developing a denial of service attack model, we can then evaluate the security of the proposed schemes against cyber attack, as well as network performance in terms of delay and overhead.
Keywords: computer network performance evaluation; computer network security; cryptographic protocols; home networks; smart power grids; substations; trees (mathematics);wireless LAN; wireless mesh networks; wireless sensor networks; EMSA; HAN;IEEE 802.11s;Merkle tree 4-way handshaking scheme; NAN; SAE; WLAN; cost-effective communications; cyber attack; denial-of-service attack model; distributed mesh sensor networks; dynamic key distribution strategy updating; efficient mesh security association; home area networks; neighborhood area networks; network performance; network resiliency improvement; plant-generation local area networks; security protocols; simultaneous authentication-of-equals; smart grid mesh network security enhancement; substation local area networks; wireless local area networks; Authentication; Computer crime; Logic gates; Mesh networks; Protocols; Smart grids; EMSA; IEEE 802.11s;SAE;security attacks; security protocols; smart grid; wireless mesh networks (ID#: 15-3702)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6599007&isnumber=6740878
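The primitive underlying the Merkle-tree handshake hardening above is that a verifier holding only the tree root can authenticate a single leaf from a logarithmic-size proof, so a flood of bogus handshake messages can be rejected cheaply. A minimal, generic construction (not the paper's protocol messages) looks like this:

```python
# Minimal Merkle tree over a list of byte-string leaves, with proof
# generation and verification against the root hash.

import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [_h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf `index` up to the root."""
    level = [_h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1                 # sibling is the paired node
        proof.append((level[sib], sib < index))
        index //= 2
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return proof

def verify(leaf, proof, root):
    node = _h(leaf)
    for sibling, sibling_is_left in proof:
        node = _h(sibling + node) if sibling_is_left else _h(node + sibling)
    return node == root
```

In the handshake setting, committing to the exchanged values under a single root lets a station discard forged messages after one hash comparison, which is what improves resiliency under the paper's denial-of-service model.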
Ping Yi; Ting Zhu; Qingquan Zhang; Yue Wu; Jianhua Li, "A Denial Of Service Attack In Advanced Metering Infrastructure Network," Communications (ICC), 2014 IEEE International Conference on, pp.1029, 1034, 10-14 June 2014. doi: 10.1109/ICC.2014.6883456 Advanced Metering Infrastructure (AMI) is the core component of a smart grid and exhibits a highly complex network configuration. AMI shares information about consumption, outages, and electricity rates reliably and efficiently through bidirectional communication between smart meters and utilities. However, the numerous smart meters connected through mesh networks open new opportunities for attackers to interfere with communications, compromise utility assets, or steal customers' private information. In this paper, we present a new DoS attack, called the puppet attack, which can cause denial of service in an AMI network. The intruder selects any normal node as a puppet node and sends attack packets to it; upon receiving them, the puppet node falls under the attacker's control and floods further packets so as to exhaust the network communication bandwidth and node energy. Simulation results show that the puppet attack is serious: the packet delivery rate drops to 10%-20%.
Keywords: power engineering computing; power system measurement; radio telemetry; security of data; smart meters; smart power grids; wireless mesh networks; DoS attack; advanced metering infrastructure network; denial of service attack; mesh network; puppet attack; smart meter; smart power grid; Computer crime; Electricity; Floods; Routing protocols; Smart meters; Wireless mesh networks (ID#: 15-3703)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883456&isnumber=6883277
do Carmo, Rodrigo; Hoffmann, Justus; Willert, Volker; Hollick, Matthias, "Making Active-Probing-Based Network Intrusion Detection in Wireless Multihop Networks Practical: A Bayesian Inference Approach To Probe Selection," Local Computer Networks (LCN), 2014 IEEE 39th Conference on, pp. 345-353, 8-11 Sept. 2014. doi: 10.1109/LCN.2014.6925790 Practical intrusion detection in Wireless Multihop Networks (WMNs) is a hard challenge. The distributed nature of the network makes centralized intrusion detection difficult, while resource constraints of the nodes and the characteristics of the wireless medium often render decentralized, node-based approaches impractical. We demonstrate that an active-probing-based network intrusion detection system (AP-NIDS) is practical for WMNs. The key contribution of this paper is to optimize the active probing process: we introduce a general Bayesian model and design a probe selection algorithm that reduces the number of probes while maximizing the insights gathered by the AP-NIDS. We validate our model by means of testbed experimentation. We integrate it into our open-source AP-NIDS DogoIDS and run it in an indoor wireless mesh testbed utilizing the IEEE 802.11s protocol. For the example of a selective packet dropping attack, we develop the detection states for our Bayes model and show its feasibility. We demonstrate that our approach does not need to execute the complete set of probes, yet we obtain good detection rates.
Keywords: Bayes methods; Equations; Intrusion detection; Probes; Spread spectrum communication; Testing; Wireless communication; Bayes inference; Intrusion Detection; Security; Wireless Multihop Networks (ID#: 15-3704)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6925790&isnumber=6925725
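The sequential-Bayes idea behind the probe selection above — update a belief after each probe outcome and stop as soon as the belief is decisive, so the complete set of probes need not be executed — can be sketched in a few lines of Python. This is an illustrative toy model based only on the abstract, not the authors' DogoIDS implementation; the probe outcomes and likelihood values are hypothetical:

```python
def posterior(prior, p_obs_given_attack, p_obs_given_normal):
    """One Bayes update of the belief that the probed node is malicious."""
    joint_attack = p_obs_given_attack * prior
    return joint_attack / (joint_attack + p_obs_given_normal * (1 - prior))

def run_probes(probe_outcomes, prior=0.5, threshold=0.95):
    """Run probes in sequence, updating the attack belief after each observed
    outcome, and stop early once the belief is decisive in either direction.

    probe_outcomes: list of (dropped, p_drop_if_attack, p_drop_if_normal).
    Returns (final belief, number of probes actually executed).
    """
    belief = prior
    used = 0
    for dropped, p_attack, p_normal in probe_outcomes:
        used += 1
        if dropped:  # probe packet was dropped by the probed node
            belief = posterior(belief, p_attack, p_normal)
        else:        # probe packet was forwarded normally
            belief = posterior(belief, 1 - p_attack, 1 - p_normal)
        if belief >= threshold or belief <= 1 - threshold:
            break  # decisive: skip the remaining probes
    return belief, used
```

With five identical "dropped" observations available, the belief already crosses 0.95 after two probes, illustrating why the full probe set need not run.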
Soderi, S.; Dainelli, G.; Iinatti, J.; Hamalainen, M., "Signal Fingerprinting In Cognitive Wireless Networks," Cognitive Radio Oriented Wireless Networks and Communications (CROWNCOM), 2014 9th International Conference on, pp. 266-270, 2-4 June 2014. Future wireless communications are made up of different wireless technologies. In such a scenario, cognitive and cooperative principles create a promising framework for the interaction of these systems. The opportunistic behavior of cognitive radio (CR) provides an efficient use of radio spectrum and makes wireless network setup easier. However, more and more frequently, CR features are exploited by malicious attacks, e.g., denial-of-service (DoS). This paper introduces active radio frequency fingerprinting (RFF) with a dual application scenario. CRs could encapsulate common-control-channel (CCC) information in an existing channel using active RFF, avoiding any additional or dedicated link. On the other hand, a node inside a network could use the same technique to exchange a public key during the setup of secure communication. Results indicate that active RFF is a valuable technique for the cognitive radio manager (CRM) framework, facilitating data exchange between CRs without any dedicated channel or additional radio resources.
Keywords: cognitive radio; cryptographic protocols; public key cryptography; telecommunication security; telecommunication signalling; wireless mesh networks; CRM; DoS; RFF; active radiofrequency fingerprinting; cognitive radio manager framework; cognitive wireless networks; common-control-channel information; denial-of-service attacks; malicious attacks; public key; signal fingerprinting; Amplitude shift keying; Demodulation; Protocols; Security; Signal to noise ratio; Spread spectrum communication; Wireless communication; Cognitive; Fingerprinting; Security; Wireless (ID#: 15-3705)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849697&isnumber=6849647
Lichtblau, B.; Dittrich, A., "Probabilistic Breadth-First Search - A Method for Evaluation of Network-Wide Broadcast Protocols," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp. 1-6, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814046 In Wireless Mesh Networks (WMNs), Network-Wide Broadcasts (NWBs) are a fundamental operation, required by routing and other mechanisms that distribute information to all nodes in the network. However, due to the characteristics of wireless communication, NWBs are generally problematic. Optimizing them is thus a prime target when improving the overall performance and dependability of WMNs. Most existing optimizations neglect the real nature of WMNs and are based on simple graph models, which provide optimistic assumptions of NWB dissemination. On the other hand, models that fully consider the complex propagation characteristics of NWBs quickly become unsolvable due to their complexity. In this paper, we present the Monte Carlo method Probabilistic Breadth-First Search (PBFS) to approximate the reachability of NWB protocols. PBFS simulates individual NWBs on graphs with probabilistic edge weights, which reflect the link qualities of individual wireless links in the WMN, and estimates reachability over a configurable number of simulated runs. This approach is not only more efficient than existing ones, but also provides additional information, such as the distribution of path lengths. Furthermore, it is easily extensible to NWB schemes other than flooding. The applicability of PBFS is validated both theoretically and empirically, in the latter case by comparing reachability as calculated by PBFS and as measured in a real-world WMN. Validation shows that PBFS quickly converges to the theoretically correct value and approximates the behavior of real-life testbeds very well. The feasibility of PBFS to support research on NWB optimizations or higher-level protocols that employ NWBs is demonstrated in two use cases.
Keywords: Monte Carlo methods; graph theory; routing protocols; search problems; wireless mesh networks; Monte Carlo method; NWB dissemination; NWB optimizations; NWB protocols; WMN; complex propagation characteristics; link qualities; network-wide broadcast protocols; network-wide broadcasts; path lengths; probabilistic breadth-first search; probabilistic edge weights; simple graph models; wireless communication; wireless links; wireless mesh networks; Approximation methods; Complexity theory; Mathematical model; Optimization; Probabilistic logic; Protocols; Wireless communication (ID#: 15-3706)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814046&isnumber=6813963
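The core Monte Carlo estimation described in the abstract above — repeatedly simulating a flooding broadcast over a graph whose edges succeed with per-link probabilities, then averaging the fraction of nodes reached — can be sketched as follows. This is an illustrative sketch reconstructed from the abstract, not the authors' PBFS implementation; the graph representation and function name are assumptions:

```python
import random
from collections import deque

def pbfs_reachability(links, source, num_nodes, runs=1000, rng=None):
    """Estimate network-wide broadcast reachability by Monte Carlo simulation.

    links: dict mapping a directed link (u, v) to its delivery probability,
           reflecting the quality of that wireless link.
    Each run performs a breadth-first flood from `source` in which every link
    succeeds independently with its probability; the estimate is the mean
    fraction of nodes reached over all simulated runs.
    """
    rng = rng or random.Random(42)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(runs):
        reached = {source}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for (a, b), p in links.items():
                if a == u and b not in reached and rng.random() < p:
                    reached.add(b)
                    queue.append(b)
        total += len(reached) / num_nodes
    return total / runs
```

On a three-node chain with perfect links the estimate is exactly 1.0; with dead links only the source is ever reached, giving 1/3. Extending each run to record path lengths, or substituting an NWB scheme other than flooding, only changes the per-run simulation step.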
do Carmo, R.; Hollick, M., "Analyzing Active Probing For Practical Intrusion Detection in Wireless Multihop Networks," Wireless On-demand Network Systems and Services (WONS), 2014 11th Annual Conference on, pp. 77-80, 2-4 April 2014. doi: 10.1109/WONS.2014.6814725 Practical intrusion detection in Wireless Multihop Networks (WMNs) is a hard challenge. It has been shown that an active-probing-based network intrusion detection system (AP-NIDS) is practical for WMNs. However, understanding its interworking with real networks is still an unexplored challenge. In this paper, we investigate this in practice. We identify the general functional parameters that can be controlled, and by means of extensive experimentation, we tune these parameters and analyze the trade-offs between them, aiming at reducing false positives, overhead, and detection time. The traces we collected help us to understand when and why the active probing fails, and let us present countermeasures to prevent it.
Keywords: frequency hop communication; security of data; wireless mesh networks; active-probing-based network intrusion detection system; wireless mesh network; wireless multihop networks; Ad hoc networks; Communication system security; Intrusion detection; Routing protocols; Testing; Wireless communication; Wireless sensor networks (ID#: 15-3708)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814725&isnumber=6814711
Bhatia, R.K.; Bodade, V., "Defining The Framework For Wireless-AMI Security In Smart Grid," Green Computing Communication and Electrical Engineering (ICGCCEE), 2014 International Conference on, pp. 1-5, 6-8 March 2014. doi: 10.1109/ICGCCEE.2014.6921383 In the smart grid, critical data such as monitoring data, usage data, state estimation, and billing data are regularly exchanged among its elements. If the security of such a system is violated, the result is massive losses and damages; compromising on the security aspect of such a system is as good as committing suicide. Thus, in this paper, we propose a security mechanism for the Advanced Metering Infrastructure of the smart grid, formed as a mesh-Zigbee topology. This security mechanism involves PKI-based digital certificate authentication and an intrusion detection system to protect the AMI from internal and external security attacks.
Keywords: Zigbee; computer network security; metering; power engineering computing; power system protection; public key cryptography; smart power grids; wireless mesh networks; PKI based digital certificate authentication; external security attack; internal security attack; intrusion detection system; public key infrastructure; smart grid advanced metering infrastructure; wireless AMI security; wireless mesh Zigbee network topology; Authentication; Intrusion detection; Smart grids; Smart meters; Wireless communication; Zigbee; AMI (Advanced Metering Infrastructure); PKI; Security; WMN(Wireless Mesh Network) (ID#: 15-3709)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921383&isnumber=6920919
de Alwis, Chamitha; Arachchi, H.Kodikara; Fernando, Anil; Pourazad, Mahsa, "Content And Network-Aware Multicast Over Wireless Networks," Heterogeneous Networking for Quality, Reliability, Security and Robustness (QShine), 2014 10th International Conference on, pp. 122-128, 18-20 Aug. 2014. doi: 10.1109/QSHINE.2014.6928670 This paper proposes content- and network-aware redundancy allocation algorithms for channel coding and network coding to optimally deliver data and video multicast services over error-prone wireless mesh networks. Each network node allocates redundancies for channel coding and network coding taking into account the content properties, channel bandwidth, and channel status to improve the end-to-end performance of data and video multicast applications. For data multicast applications, redundancies are allocated at each network node in such a way that the total amount of redundant bits transmitted is minimised. For video multicast applications, redundancies are allocated considering the priority of video packets such that the probability of delivering high-priority video packets is increased. This not only ensures the continuous playback of a video but also increases the received video quality. Simulation results for bandwidth-sensitive data multicast applications exhibit up to a 10× reduction in the required amount of redundant bits compared to reference schemes to achieve a 100% packet delivery ratio. Similarly, for delay-sensitive video multicast applications, simulation results exhibit up to 3.5 dB PSNR gains in the received video quality.
Keywords: Bandwidth; Channel coding; Delays; Network coding; Receivers; Redundancy; Streaming media; content and network-aware redundancy allocation; network coding; wireless mesh networks (ID#: 15-3710)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6928670&isnumber=6928645
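The priority-aware allocation idea in the abstract above can be illustrated with a simple sketch: under independent packet losses, a packet transmitted with k redundant copies is delivered with probability 1 - loss_rate^(k+1), so the copies needed to hit a delivery target follow directly, and granting them in priority order protects high-priority video packets first. This is a toy model for illustration only, not the paper's actual channel-coding/network-coding algorithm:

```python
import math

def allocate_redundancy(priorities, loss_rate, target, budget):
    """Grant redundant copies to packets in descending priority order.

    Assuming independent losses, a packet sent with k extra copies is
    delivered with probability 1 - loss_rate**(k + 1), so it needs
    ceil(log(1 - target) / log(loss_rate)) - 1 copies to reach `target`.
    Copies are granted highest-priority-first until the budget runs out.
    Returns {packet index: copies granted}.
    """
    need = math.ceil(math.log(1 - target) / math.log(loss_rate)) - 1
    allocation = {}
    for idx in sorted(range(len(priorities)), key=lambda i: -priorities[i]):
        grant = min(need, budget)
        allocation[idx] = grant
        budget -= grant
    return allocation
```

For example, with a 30% loss rate and a 99% delivery target, each packet needs 3 redundant copies (4 transmissions give 1 - 0.3^4 ≈ 0.992); with a budget of 5 copies across three packets, the highest-priority packet is fully protected and the leftovers go to the next one.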
Avallone, S.; Di Stasi, G., "WiMesh: A Tool for the Performance Evaluation of Multi-Radio Wireless Mesh Networks," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp. 1-5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814062 In this paper we present WiMesh, a software tool we developed during the last ten years of research conducted in the field of multi-radio wireless mesh networks. WiMesh serves two main purposes: (i) to run different algorithms for the assignment of channels, transmission rate, and power to the available network radios; (ii) to automatically set up and run ns-3 simulations based on the network configuration returned by such algorithms. WiMesh basically consists of three libraries and three corresponding utilities that allow researchers to conduct experiments easily. All such utilities accept as input an XML configuration file where a number of options can be specified. WiMesh is freely available to the research community, with the purpose of easing the development of new algorithms and the verification of their performance.
Keywords: XML; performance evaluation; telecommunication channels; telecommunication computing; wireless mesh networks; WiMesh; XML configuration; channel assignment; multiradio wireless mesh networks; ns-3 simulations; performance evaluation; research community; software tool; Channel allocation; Libraries; Network topology; Throughput; Topology; Wireless mesh networks; XML (ID#: 15-3711)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814062&isnumber=6813963
Arieta, F.; Barabasz, L.T.; Santos, A.; Nogueira, M., "Mitigating Flooding Attacks on Mobility in Infrastructure-Based Vehicular Networks," Latin America Transactions, IEEE (Revista IEEE America Latina), vol.12, no.3, pp. 475-483, May 2014. doi: 10.1109/TLA.2014.6827876 Infrastructure-based vehicular networks can be applied in different social contexts, such as health care, transportation, and entertainment. They can easily take advantage of the benefits provided by wireless mesh networks (WMNs) for mobility, since WMNs essentially support the technological convergence and resilience required for the effective operation of services and applications. However, infrastructure-based vehicular networks are prone to attacks such as ARP packet flooding that compromise mobility management and users' network access. Hence, this work proposes MIRF, a secure mobility scheme based on reputation and filtering to mitigate flooding attacks on mobility management. The efficiency of the MIRF scheme has been evaluated by simulations considering urban scenarios with and without attacks. Analyses show that it significantly improves the packet delivery ratio in scenarios with attacks, mitigating their intentional negative effects, for instance through the reduction of malicious ARP requests. Furthermore, improvements were observed in the number of handoffs in scenarios under attack, with handoffs completing faster than in scenarios without the scheme.
Keywords: mobility management (mobile radio); telecommunication security; wireless mesh networks; ARP packets flooding; MIRF; WMN; filtering; flooding attacks mitigation; handoffs; infrastructure-based vehicular networks; malicious ARP requests; mobility management; negative effects; network access; packet delivery ratio; secure mobility scheme; technological convergence; wireless mesh networks; Filtering; Floods; IP networks; Internet; Mobile radio mobility management; Monitoring; Flooding Attacks; Mobility; Security; Vehicular Networks (ID#: 15-3712)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827876&isnumber=6827455
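The reputation-and-filtering approach described above can be illustrated with a minimal sketch: track how many ARP requests each node sends per observation window, penalize the reputation of nodes exceeding a rate threshold, and filter nodes whose reputation falls below a cutoff. This is a simplified illustration in the spirit of the abstract, not the MIRF scheme itself; the class name, thresholds, and penalty values are hypothetical:

```python
from collections import defaultdict

class ReputationFilter:
    """Toy reputation-based filter against ARP flooding.

    Every node starts with full reputation (1.0). Sending more ARP requests
    than `threshold` in one observation window costs `penalty` reputation;
    nodes whose reputation drops below `cutoff` have their requests filtered.
    """

    def __init__(self, threshold=10, penalty=0.2, cutoff=0.5):
        self.threshold = threshold
        self.penalty = penalty
        self.cutoff = cutoff
        self.reputation = defaultdict(lambda: 1.0)

    def observe_window(self, node, arp_requests):
        """Update a node's reputation from one window of observed traffic."""
        if arp_requests > self.threshold:
            self.reputation[node] = max(0.0, self.reputation[node] - self.penalty)

    def is_filtered(self, node):
        """Drop this node's ARP requests once its reputation is too low."""
        return self.reputation[node] < self.cutoff
```

A node flooding 50 requests per window loses reputation each window and is filtered after three windows, while a node sending a handful of requests keeps full reputation and normal network access.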
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Upcoming Events |
Mark your calendars!
This section features a wide variety of upcoming security-related conferences, workshops, symposiums, competitions, and events happening in the United States and the world. This list also includes several past events with links to proceedings or summaries of the actual activities.
Note: The events may also be found on the SoS Calendar, located by clicking the 'Calendar' tab on the left-hand navigation bar.
(ID#:15-3732)
TCC 2015 (12th Theory of Cryptography Conference), 23-25 March, Warsaw, Poland. See http://www.iacr.org/workshops/tcc2015/
PKC 2015 (IACR International Conference on Practice and Theory of Public-Key Cryptography), 30 March-1 April, Gaithersburg, MD. See http://www.iacr.org/workshops/pkc2015/
EUROCRYPT 2015 (34th Annual International Conference on the Theory and Applications of Cryptographic Techniques), 26-30 April, Sofia, Bulgaria. See https://www.cosic.esat.kuleuven.be/eurocrypt_2015/
CSF 2015 (28th IEEE Computer Security Foundations Symposium), 13-17 July, Verona, Italy. See http://csf2015.di.univr.it/
SECRYPT 2015 (12th International Conference on Security and Cryptography), 20-22 July, Alsace, France. See http://secrypt.icete.org/
PODC 2015 (ACM Symposium on Principles of Distributed Computing), 21-23 July, San Sebastián, Spain. See http://www.podc.org/
CRYPTO 2015 (35th Annual Cryptology Conference), 16-20 August, Santa Barbara, CA. See http://www.iacr.org/conferences/crypto2015/
CCS 2015 (22nd ACM Conference on Computer and Communications Security), 12-16 October, Denver, CO. See www.sigsac.org/ccs/CCS2015/
FOCS 2015 (56th Annual IEEE Symposium on Foundations of Computer Science), 18-20 October, Berkeley, CA. See http://www.cs.cmu.edu/~venkatg/FOCS-2015-cfp.html
ASIACRYPT 2015 (21st Annual International Conference on the Theory and Application of Cryptology and Information Security), 29 November-3 December, Auckland, New Zealand. See https://www.math.auckland.ac.nz/~sgal018/AC2015/index.html