Science of Security (SoS) Newsletter (2015 - Issue 7)
Each issue of the SoS Newsletter highlights achievements in current research, as conducted by various global members of the Science of Security (SoS) community. All presented materials are open-source and may link to the original work or web page for the respective program. The SoS Newsletter aims to showcase the wealth of exciting work going on in the security community and hopes to serve as a portal between colleagues, research projects, and opportunities.
Please feel free to click on any section of the Newsletter below to go to the corresponding subsection:
Publications of Interest
The Publications of Interest section provides available abstracts and links for suggested academic and industry literature discussing specific topics and research problems in the field of SoS. Please check back regularly for new information, or sign up for the CPSVO-SoS Mailing List.
Table of Contents
Science of Security (SoS) Newsletter (2015 - Issue 7)
(ID#:15-6149)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
In The News
This section features topical, current news items of interest to the international security community. These articles and highlights are selected from various popular science and security magazines, newspapers, and online sources.
US News
"Russian Attackers Hack Pentagon", InfoSecurity Magazine, 07 August 2015. [Online]. On around July 25th, the Pentagon was forced to shut down the server for its Joint Chiefs of Staff unclassified email system after an attack by Russian attackers. It is not known for sure whether or not the attack, which resulted in the leak of "large quantities of data", was authorized by the Russian Government or not. (ID#: 15-50423) See http://www.infosecurity-magazine.com/news/russian-attackers-hack-pentagon/
"Chinese Hackers May Have Burrowed Into Airlines", Tech News World, 11 August 2015. [Online]. Travel reservations processor Sabre confimed that it suffered a breach in systems containing sensitive data on as many as a billion passengers. United Airlines, which shares some network infrastructure with Sabre, is still recovering from an incident last month that was speculated to have been an attack, leading government officials to believe that China-based hackers are targeting travel infrastructure. (ID#: 15-50424) See http://www.technewsworld.com/story/82365.html
"Terracotta VPN, the Chinese VPN Service as Hacking Platform", Cyber Defense Magazine, 06 August 2015. [Online]. RSA Security reports that Chinese virtual private network provider Terracotta VPN use brute-force attacks and Trojans on vulnerable Windows servers to provide infrastructure for launching cyber attacks. Terracotta uses these compromised servers to offer a service that allows hackers to launch cyber attacks from seemingly legitimate and respected IP addresses. (ID#: 15-50425) See http://www.cyberdefensemagazine.com/terracotta-vpn-the-chinese-vpn-service-as-hacking-platform/
"Planned Parenthood reports second website hack in a week", Reuters, 30 July 2015. [Online]. Following controversy over alleged sale of illegal fetal tissue, Planned Parenthood has announced that it's websites were hit with a large DDoS attack that prompted the organization to keep its websites offline for the day. The day before, Planned Parenthood announced an attack against its information systems, possibly resulting in the compromise of employee's personal information. (ID#: 15-50422) Seehttp://www.reuters.com/article/2015/07/30/us-usa-plannedparenthood-cyberattack-idUSKCN0Q409120150730
"Ashley Madison attack prompts spam link deluge", BBC, 31 July 2015. [Online]. Infidelity website Ashley Madison suffered a breach in which attackers claimed to have personal information from 37 million accounts, claiming that they would release it if the website was not shut down. The hackers have not yet released the data, prompting spammers to release fake links to the non-extant data. Many of these links, according to a BBC investigation, lead victims to fake data, scam pages, and malware. (ID#: 15-50411) See http://www.bbc.com/news/technology-33731183
"Windows 10 Will Use Virtualization For Extra Security", Information Week, 22 July 2015. [Online]. The highly anticipated Windows 10 operating system has many new features that are being marketed to consumers, but one over-looked advancement that doesn't appeal to the non-tech-savvy is security. Microsoft claims to have taken a fundamentally new approach to security; new features like virtualization place critical operating system components in their own containers, making them inaccessible to hackers. (ID#: 15-50417) See http://www.informationweek.com/software/operating-systems/windows-10-will-use-virtualization-for-extra-security/a/d-id/1321415
International News
"Russian Hackers Threw Molotov Cocktails At Cybersecurity Firm That Exposed ATM Malware", International Business Times, 30 September 2015. [Online]. A group of Russian hackers, known mainly for targetting ATMs, were recently exposed by Dr. Web, a Russian cyber security company. Now Dr. Web is claiming that the hackers have retalitated, but not by any sort of cyber attack. The accused hackers reportedly hurled molotov cocktails at Dr. Web's office in St. Petersburg on at least two seperate occasions.
See: http://www.ibtimes.com/russian-hackers-threw-molotov-cocktails-cybersecurity-firm-exposed-atm-malware-2121091
"Expert believes cybersecurity often focuses on the wrong threats", Tulsa World, 30 September 2015. [Online]. Cyber security expert Peter Warren Singer claimed that cyber security as an industry is focusing on the wrong problems. He believes that the real problem is piracy, even calling it the "biggest theft ever seen in the world." Singer went on to explain that he feels the excitement around cyber security causes government to act without fully analyzing certain situations.
See: http://www.tulsaworld.com/business/technology/expert-believes-cybersecurity-often-focuses-on-the-wrong-threats/article_04bdbafe-ba06-560a-9818-541c1b338b7c.html
"KeyRaider Malware Busts iPhone Jailbreakers", Tech News World, 03 September 2015. [Online]. Malicious software, now being called KeyRaider, has affected a multitude of jailbroken iPhone users. The malware infiltrated the phones through the third-party app store, Cydia. Reports claim that the malware has stolen up to 225,000 active Apple accounts, certificates, and even receipts.
See: http://www.technewsworld.com/story/82450.html
"Cybersecurity bill could 'sweep away' internet users' privacy, agency warns", The Guardian, 3 August 2015. [Online]. A new revision of the Cybersecurity Information Sharing Act bill will be voted on by the Senate. The bill allows companies with large amounts of information to share it with the appropriate government agencies, who can then share the information as they see fit. The bill has turned a lot of attention to companies such as Google and Facebook who possess large amounts of user's data and online habits. (ID#: 15-60044)
See: http://www.theguardian.com/world/2015/aug/03/cisa-homeland-security-privacy-data-internet
"Hacking Victim JPMorgan Chasing Cybersecurity Fixes", Investors, 4 August 2015. [Online]. Last year, JP Morgan Chase suffered a cyber attack that compromised the contact information of roughly 76 million customers. Although no accounts or social security numbers were taken, the company is planning on taking measures to prevent another major attack. The bank says that theire cyber security budget will be increased from $250 million to $500 million in order to improve upon their analytics, testing and coverage. (ID#: 15- 60043)
See: http://news.investors.com/business/080415-764935-jpmorgan-chase-to-double-cybersecurity-spending.htm
"Hackers Remotely Kill a Jeep on the Highway - With Me in it", Wired, 21 July 2015. [Online]. Charlie Miller and Chris Valase successfully hacked in to a Jeep Cherokee from a remote computer, all while the car was being driven miles away. The two were able to take full control of nearly everything from the windshield wipers and air conditioning to the steering wheel itself. They plan on releasing some of their findings at Black Hat in Las Vegas in August. (ID#: 15-60042)
See: http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/
"The Dinosaurs of Cybersecurity Are Planes, Power Grids and Hospitals", Tech Crunch, 10 July 2015. [Online]. One of the most prominent risks in cybersecurity comes in the form of infrastructure and things like airplanes and hospitals. As these systems are compromised, patches are developed to remedy the problem. However patches are slow to roll out and take a great deal of time to develop. By the time patches are complete, often, the damage has already been done. (ID#: 15-60040)
See: http://techcrunch.com/2015/07/10/the-dinosaurs-of-cybersecurity-are-planes-power-grids-and-hospitals/
"Microsoft is Reportedly Planning to Buy an Israeli Cyber Security Firm for $320 Million", Business Insider, 20 July 2015. [Online]. A new report shows that Microsoft has a deal in place to purchase the Israeli cybersecurity company, Adallom. Adallom is expected to become Microsoft's cyber security center for the entirety of Israel. Adallom was founded in 2012 and has since grown to 80 employees. (ID#: 15-60041)
See: http://www.businessinsider.com/r-microsoft-to-buy-israeli-cyber-security-firm-adallom-report-2015-7
"Baby Monitors Riddled with Security Holes", Tech News World, 02 September 2015. [Online]. Rapid7 released a report detailing their study of several major brands of baby monitors recently. The report stated that many top brands are littered with vulnerabilities. One top consultant for the group said that many of the security flaws would allow the video and audio from the monitors to be watched anywhere.
See: http://www.technewsworld.com/story/82449.html
(ID#: 15-6150)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Security Related Conferences
The following pages provide highlights of Science of Security related research presented at a number of international conferences.
(ID#: 15-6151)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Conferences: Conference on Information Science and Control Engineering (ICISCE) Shanghai, China
The 2nd International Conference on Information Science and Control Engineering (ICISCE) was held in Shanghai, China on 24-26 April 2015. While the conference covered a wide range of topics in computing and control systems, the works cited here focused specifically on security topics likely to be of interest to the Science of Security community.
Zheng-Qi Kang; Ke-Wei Lv, "New Results on the Hardness of ElGamal and RSA Bits Based on Binary Expansions," Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 336-340, 24-26 April 2015. doi:10.1109/ICISCE.2015.81
Abstract: In 2004, González Vasco et al. extended the area of application of algorithms for the hidden number problem. Using this extension and relations among the bits in the binary fraction expansion of (x mod p)/p, we present a probabilistic algorithm for some trapdoor functions that recovers a hidden message when given an imperfect oracle that predicts the most significant bits of the hidden message. We show that computing the most significant bit of a message encrypted with the ElGamal encryption function is as hard as computing the entire plaintext, and the same holds for RSA.
Keywords: public key cryptography; ElGamal bits; ElGamal encryption function; RSA bits; binary expansions; imperfect oracle; probabilistic algorithm; trapdoor functions; Monte Carlo methods; Polynomials; Prediction algorithms; Probabilistic logic; Public key; ElGamal; Hidden Number Problem; Most Significant Bit; RSA (ID#: 15-6277)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120621&isnumber=7120439
Kai Guo; Pengyan Shen; Mingzhong Xiao; Quanqing Xu, "UBackup-II: A QoS-Aware Anonymous Multi-cloud Storage System," Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 522-527, 24-26 April 2015. doi:10.1109/ICISCE.2015.122
Abstract: We present UBackup-II, an anonymous storage overlay network based on personal multi-cloud storage, with flexible QoS awareness. We reform the original Tor protocol by extending the command set and adding a tail part to the Tor cell, which enables coordination among proxy servers while preserving anonymity. Thus, users can upload and download files secretly under the cover of several proxy servers. Moreover, users can develop a personalized QoS policy that leads to different hidden access patterns according to their own QoS requirements. We present the design of UBackup-II in detail, analyze its security policy, and show how different QoS policies work through a simulation experiment.
Keywords: cloud computing; file servers; protocols; quality of service; security of data; storage management; QoS-aware anonymous multicloud storage system; Tor cell; Tor protocol; UBackup-II; anonymous storage overlay network; hidden access patterns; personal multicloud storage; personalized QoS policy; proxy servers; security policy; Cloud computing; Cryptography; Protocols; Quality of service; Servers; Writing; Personal Cloud Storage; Privacy Preserving; QoS (ID#: 15-6278)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120662&isnumber=7120439
Xiaoqi Ma, "Managing Identities in Cloud Computing Environments," Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 290, 292, 24-26 April 2015. doi:10.1109/ICISCE.2015.71
Abstract: As cloud computing becomes a hot spot of research, the security issues of clouds raise concern and attention from the academic research community. A key area of cloud security is managing users' identities, which is fundamental and important to other aspects of cloud computing. A number of identity management frameworks and systems are introduced and analysed. Issues remaining in them are discussed, and potential solutions and countermeasures are proposed.
Keywords: cloud computing; security of data; academic research community; cloud computing environments; cloud security; Authentication; Cloud computing; Computational modeling; Computer architecture; Identity management systems; Servers; Cloud computing; identity management; security (ID#: 15-6279)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120611&isnumber=7120439
Yuangang Yao; Jin Yi; Yanzhao Liu; Xianghui Zhao; Chenghao Sun, "Query Processing Based on Associated Semantic Context Inference," Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 395-399, 24-26 April 2015. doi:10.1109/ICISCE.2015.93
Abstract: Context-based query processing methods are used to capture the user intents behind query inputs. General context models are not flexible or explicable enough for inference, because they are either static or implicit. This paper improves on current context models and proposes a novel query processing approach based on associated semantic context inference. In our approach, the formally defined context is explicit, which makes it convenient to explore potential information during query processing. Furthermore, the context is dynamically constructed and further modified according to specific query tasks, which ensures the precision of context inference. For given query inputs, the approach builds concrete context models and refines queries based on semantic context inference. Finally, queries are translated into SPARQL for the query engine. Experiments show that the proposed approach can further improve query intent understanding to guarantee precision and recall in retrieval.
Keywords: SQL; inference mechanisms; query processing; SPARQL; context-based query processing methods; dynamically constructed context; explicit formal defined context; information retrieval; precision value; query engine; query inputs; query intent improvement; query refining; query tasks; recall value; semantic context inference; user intent capture; Biological system modeling; Context; Context modeling; Knowledge engineering; Query processing; Semantic Web; Semantics; Context inference; Query processing; Semantic context (ID#: 15-6280)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120633&isnumber=7120439
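For readers unfamiliar with the final translation step the abstract mentions, the sketch below shows what a context-refined query might look like once rendered as SPARQL and run against a toy knowledge base with rdflib. The ontology, names, and query are hypothetical illustrations, not the paper's own.

```python
from rdflib import Graph

# Toy knowledge base standing in for the paper's semantic context store.
g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:alice ex:worksOn ex:cloudSecurity .
ex:bob   ex:worksOn ex:queryProcessing .
ex:cloudSecurity ex:topicOf ex:SoS .
""", format="turtle")

# A context-refined query ("people working on topics within SoS")
# rendered as the SPARQL that refined queries are translated into.
q = """
PREFIX ex: <http://example.org/>
SELECT ?person WHERE {
    ?person ex:worksOn ?topic .
    ?topic  ex:topicOf ex:SoS .
}
"""
for row in g.query(q):
    print(row.person)   # -> http://example.org/alice
```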
Guifen Zhao; Ying Li; Liping Du; Xin Zhao, "Asynchronous Challenge-Response Authentication Solution Based on Smart Card in Cloud Environment," Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 156-159, 24-26 April 2015. doi:10.1109/ICISCE.2015.42
Abstract: In order to achieve secure authentication, an asynchronous challenge-response authentication solution is proposed. An SD key, encryption cards, or an encryption machine provide the encryption service. A hash function, a symmetric algorithm, and a combined secret key method are adopted during authentication. Authentication security is guaranteed by the properties of the hash function, the combined secret key method, and the one-time authentication token generation method. Random numbers, a one-time combined secret key, and a one-time token are generated on the basis of the smart card, encryption cards, and cryptographic techniques, which avoids guessing attacks. Moreover, replay attacks are avoided because of the time factor. The authentication solution is applicable to cloud application systems to realize multi-factor authentication and enhance the security of authentication.
Keywords: cloud computing; message authentication; private key cryptography; smart cards; SD key; asynchronous challenge-response authentication solution; authentication security; cloud application systems; combined secret key method; cryptographic technique; encryption cards; encryption machine; encryption service; hash function; multifactor authentication; one-time authentication token generation method; one-time combined secret key; random number generation; replay attack; smart card; symmetric algorithm; time factor; Authentication; Encryption; Servers; Smart cards; Time factors; One-time password; asynchronous challenge-response authentication; multi-factor authentication; smart card (ID#: 15-6281)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120582&isnumber=7120439
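The general pattern the abstract describes — a fresh challenge, a one-time key combining a stored secret with a time factor, and a non-replayable token — can be sketched as follows. This is a minimal illustration under assumed primitives (SHA-256 and HMAC), not the authors' exact scheme.

```python
import hashlib, hmac, os, time

def one_time_key(card_secret: bytes, timestamp: int) -> bytes:
    # "Combined secret key": mix the long-term secret with the time factor.
    return hashlib.sha256(card_secret + timestamp.to_bytes(8, "big")).digest()

def make_token(card_secret: bytes, challenge: bytes, timestamp: int) -> bytes:
    # One-time token: HMAC of the server's challenge under the one-time key.
    key = one_time_key(card_secret, timestamp)
    return hmac.new(key, challenge, hashlib.sha256).digest()

# --- server side ---
secret = os.urandom(32)            # provisioned on the smart card
challenge = os.urandom(16)         # fresh random challenge per login
now = int(time.time()) // 30       # 30-second window defeats replay

# --- client (smart card) side ---
token = make_token(secret, challenge, now)

# --- server verification ---
expected = make_token(secret, challenge, now)
print("authenticated:", hmac.compare_digest(token, expected))
```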
Jinglong Zuo; Delong Cui; Yunfeng Gong; Mei Liu, "A Novel Image Encryption Algorithm Based on Lifting-Based Wavelet Transform," Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 33-36, 24-26 April 2015. doi:10.1109/ICISCE.2015.16
Abstract: In order to trade off between the computational effects and computational cost of present image encryption algorithms, a novel image encryption algorithm based on the lifting-based wavelet transform is proposed in this paper. The image encryption process includes three steps: first, the original image is divided into blocks, which are transformed by the lifting-based wavelet; second, the wavelet domain coefficients are encrypted by a random mask generated from the user key; and finally, Arnold scrambling is employed to encrypt the coefficients. The security of the proposed scheme depends on the levels of the wavelet transform, the user key, and the number of Arnold scrambling iterations. Theoretical analysis and experimental results demonstrate that the algorithm performs favourably.
Keywords: cryptography; image processing; random processes; wavelet transforms; Arnold scrambling; computational cost; computational effects; image encryption algorithm; lifting-based wavelet transform; random mask; user key; wavelet domain coefficients; Correlation; Encryption; Entropy; Filter banks; Wavelet transforms; block-based transformation; fractional Fourier transform; image encryption; information security; random phase mask (ID#: 15-6282)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120556&isnumber=7120439
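Of the three steps, the Arnold scrambling stage is the easiest to illustrate. The sketch below applies the standard Arnold cat map to a square block; the block contents and iteration count are toy values, and in the paper's pipeline the input would be a lifting-wavelet subband already masked with a key-derived random mask.

```python
import numpy as np

def arnold_scramble(block: np.ndarray, times: int) -> np.ndarray:
    # Arnold cat map on an N x N block: (x, y) -> (x + y, x + 2y) mod N.
    # The iteration count (part of the key) controls the scrambling depth.
    n = block.shape[0]
    out = block
    for _ in range(times):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

# Toy 4x4 "coefficient block"; real blocks would hold masked wavelet
# coefficients rather than sequential integers.
block = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(arnold_scramble(block, times=3))
```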
Min Yu; Chao Liu; Xinliang Qiu; Shuang Zhao; Kunying Liu; Bo Hu, "Modeling and Analysis of Information Theft Trojan Based on Stochastic Game Nets," Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 318-322, 24-26 April 2015. doi:10.1109/ICISCE.2015.77
Abstract: In this paper, we model information theft Trojans based on Stochastic Game Nets (SGN), a novel modelling method that is well suited to describing multi-role game problems and has been applied in many fields of networks with interactive behaviors. Combining SGN with the practical problem, we present an algorithm for solving the equilibrium strategy in order to compute the SGN model. Finally, we analyse the model with indicators such as the probability of a successful theft and the average time of a successful theft. The results of the paper can also offer some guidance for users.
Keywords: invasive software; probability; stochastic games; SGN; information theft Trojan analysis; information theft Trojan modeling; interactive behaviors; multirole game problem; stochastic game nets; Analytical models; Games; Monitoring; Ports (Computers); Stochastic processes; Trojan horses; Information Theft Trojan; Nash Equilibrium; Security Analysis; Stochastic Game Nets (ID#: 15-6283)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120617&isnumber=7120439
Liu Yong-lei, "Defense of WPA/WPA2-PSK Brute Forcer," Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 185, 188, 24-26 April 2015. doi:10.1109/ICISCE.2015.48
Abstract: With the appearance of high-speed WPA/WPA2-PSK brute forcers, the security of WLANs faces serious threats. Attackers can acquire the PSK easily and thereby decrypt all traffic. To solve this problem, a series of defence schemes is proposed, covering both passive and active brute forcers. The schemes adopt active jamming and wireless packet injection. A theoretical analysis is then carried out and implementation methods are given. Finally, conclusions are drawn.
Keywords: computer network security; jamming; phase shift keying; telecommunication traffic; wireless LAN; WLAN security; WPA-WPA2-PSK brute forcer defense; active jammer; traffic decryption; wireless packet injection; Jamming; Microwave integrated circuits; Monitoring; Phase shift keying; Protocols; Wireless LAN; Wireless communication; PSK; WLAN; WPA; brute forcer (ID#: 15-6285)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120588&isnumber=7120439
Yangqing Zhu; Jun Zuo, "Research on Data Security Access Model of Cloud Computing Platform," Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 424-428, 24-26 April 2015. doi:10.1109/ICISCE.2015.99
Abstract: Cloud computing is a new Internet application mode characterized by very large scale, virtualization, high reliability, versatility, and low cost. Cloud computing technologies can dynamically manage millions of computer resources and assign them on demand to users worldwide, and they appear set to completely change the old Internet application mode. Since data is stored on a remote cloud computing platform, new challenges to information security arise: disclosure of data, hacker attacks, and Trojans and viruses seriously threaten user data security. A strict information security scheme must be established before users can adopt cloud computing technologies. Starting from USB-key-based user authentication, attribute-based access control, and data detection, the data security access of the cloud computing platform is studied in order to provide a secure solution for the user.
Keywords: authorisation; cloud computing; computer viruses; public key cryptography; virtualisation; Internet application mode; Trojans; USB key user authentication; access control; computer viruses; data detection; data disclosure; data security access model; data storage; dynamic computer resource management; hacker attacks; information security; remote cloud computing platform; strict-information security scheme; virtualization; Authentication; Certification; Cloud computing; Public key; Servers; Universal Serial Bus; Authentication; Cloud Computing; Model; Public Key Infrastructure; USB Key (ID#: 15-6286)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120639&isnumber=7120439
Ji-Li Luo; Meng-Jun Li; Jiang Jiang; Han-Lin You; Yin-Ye Li; Fang-Zhou Chen, "Combat Capability Assessment Approach of Strategic Missile Systems Based on Evidential Reasoning," Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 665-669, 24-26 April 2015. doi:10.1109/ICISCE.2015.153
Abstract: Combat capability assessment of strategic missile systems is an important component of national security strategic decision-making. In view of the drawbacks of current system modelling methods, assessment indicators, and assessment approaches, a model of the combat system based on operation loops is constructed. According to the system model and weapon properties, this paper proposes system assessment indicators, calculates their weight values, and devises the assignments of the indicators based on evidential reasoning and an assessment algorithm for systematic combat capability. The approach is shown to be effective through examples of typical equipment systems in the US and Russian Strategic Missile Forces.
Keywords: decision making; inference mechanisms; military aircraft; military computing; missiles; national security; Russia Strategic Missile Force; US Strategic Missile Force; combat capability assessment approach; evidential reasoning; national security strategic decision-making; operation loops; strategic missile systems; system assessment indicators; systematic combat capability; Cognition; Force; Missiles; Modeling; Peer-to-peer computing; Reliability; Strategic Missile Systems; combat capability assessment; evidential reasoning; operation loops (ID#: 15-6287)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120693&isnumber=7120439
Shi-Wei Zhao; Ze-Wen Cao; Wen-Sen Liu, "OSIA: Open Source Intelligence Analysis System Based on Cloud Computing and Domestic Platform," Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 371-375, 24-26 April 2015. doi:10.1109/ICISCE.2015.89
Abstract: Information safety is significant for state security, especially for intelligence services. An OSIA (open source intelligence analysis) system based on cloud computing and a domestic platform is designed and implemented in this paper. For the sake of the security and utility of OSIA, all of the middleware and the OS involved are compatible with domestic software. The OSIA system concentrates on analyzing open source text intelligence and adopts a self-designed distributed crawler system, so that a closed circle is formed from intelligence acquisition to analysis and push service. This paper also illustrates some typical anti-terrorist applications, such as "organizational member discovery" based on the Stanford parser and a clustering algorithm, and "member relation exhibition" based on a parallelized PageRank algorithm. Experimental results show that the OSIA system is suitable for large-scale textual intelligence analysis.
Keywords: cloud computing; data mining; grammars; middleware; parallel algorithms; public domain software; security of data; text analysis; OS; OSIA system; Stanford parser; antiterrorist; cloud computing; cluster algorithm; domestic platform; domestic software; information safety; intelligence acquisition; intelligence service; large scale textual intelligence analysis; member relation exhibition; middleware; open source intelligence analysis system; open source text intelligence; organizational member discovery; paralleled PageRank algorithm; push service; self-designed distributed crawler system; Algorithm design and analysis; Artificial intelligence; Crawlers; Operating systems; Security; Servers; Text mining; cloud computing; domestic platform; intelligence analysis system; text mining (ID#: 15-6288)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120629&isnumber=7120439
Lei-fei Xuan; Pei-fei Wu, "The Optimization and Implementation of Iptables Rules Set on Linux," Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 988-991, 24-26 April 2015. doi:10.1109/ICISCE.2015.223
Abstract: A firewall, as a mechanism of compulsory access control between networks or systems, is an important means of ensuring network security. A firewall can be a very simple filter or a carefully targeted gateway, but the principle is the same: monitoring and filtering all the information exchanged between internal and external networks. Linux, an open source operating system, is famous for its stability and security. netfilter/iptables is a powerful firewall system based on Linux. This paper first analyses the working principle of iptables, then introduces the iptables rule set, and finally proposes an effective algorithm, implemented on Linux, to optimize the rule set. In the implementation section, some key code of the algorithm is given.
Keywords: Linux; authorisation; firewalls; public domain software; Linux system; compulsory access control mechanism; external networks; firewall system; information exchange; internal networks; iptables rules set implementation; iptables rules set optimization; key code; netfilter; network security; open source operating system; Control engineering; Information science; firewall; iptables; linux; optimization; rules set (ID#: 15-6289)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120763&isnumber=7120439
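A common optimization of this kind moves frequently matched rules earlier in the chain while preserving the relative order of rules whose matches overlap with different actions, so filtering semantics are unchanged. The sketch below illustrates that idea on a hypothetical rule model, not the paper's actual algorithm; real rules and hit counts would come from `iptables -nvL --line-numbers`, and the overlap test here is deliberately conservative.

```python
def conflicts(a, b):
    # Keep relative order if the rules may match the same packets with
    # different actions. Rules with no shared match fields are treated
    # as overlapping (conservative, never breaks semantics).
    overlap = all(a["match"].get(k) == v
                  for k, v in b["match"].items() if k in a["match"])
    return overlap and a["action"] != b["action"]

def optimize(rules):
    # Bubble frequently hit rules toward the front (fewer comparisons
    # per packet) without reordering conflicting pairs.
    rules = list(rules)
    changed = True
    while changed:
        changed = False
        for i in range(len(rules) - 1):
            a, b = rules[i], rules[i + 1]
            if b["hits"] > a["hits"] and not conflicts(a, b):
                rules[i], rules[i + 1] = b, a
                changed = True
    return rules

rules = [
    {"match": {"dport": 22}, "action": "ACCEPT", "hits": 10},
    {"match": {"dport": 80}, "action": "ACCEPT", "hits": 9000},
    {"match": {"src": "10.0.0.0/8"}, "action": "DROP", "hits": 500},
]
for r in optimize(rules):
    print(r)   # the port-80 rule moves first; the DROP rule keeps its place
```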
Rong-Tsu Wang; Chin-Tsu Chen, "Framework Building and Application of the Performance Evaluation in Marine Logistics Information Platform in Taiwan," Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 245-249, 24-26 April 2015. doi:10.1109/ICISCE.2015.61
Abstract: This paper establishes a systematic instrument for evaluating the performance of marine information systems. The Analytic Network Process (ANP) was introduced for determining the relative importance of a set of interdependent criteria of concern to the stakeholders (shipper/consignee, customs broker, forwarder, and container yard). Three major information platforms (MTNet, TradeVan, and Nice Shipping) in Taiwan were evaluated according to the criteria derived from ANP. Results show that the performance of a marine information system can be divided into three constructs, namely Safety and Technology (3 items), Service (3 items), and Charge (3 items). Safety and Technology is the most important construct of marine information system evaluation, whereas Charge is the least important. This study gives insights into improving the performance of existing marine information systems and serves as a useful reference for future freight information platforms.
Keywords: analytic hierarchy process; containerisation; information systems; logistics data processing; marine engineering; ANP; MTNet; NiceShipping; Taiwan; TradeVan; analytic network process; charge construct; consignee; container yard; customer broker; forwarder; freight information platform; interdependent criteria; marine information systems; marine logistics information platform; performance evaluation; safety-and-technology construct; service construct; shipper; systematic instrument; Decision making; Information systems; Performance evaluation; Safety; Security; Supply chains; Transportation; Analytic Network Process; Logistics Information Platform; Marine; Performance (ID#: 15-6290)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120601&isnumber=7120439
Min Chen; Jie Xue, "Optimized Context Quantization for I-Ary Source," Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, pp. 367-370, 24-26 April 2015. doi:10.1109/ICISCE.2015.88
Abstract: In this paper, optimal context quantization for an I-ary source is presented. By considering correlations among the values of source symbols, the conditional probability distributions are first sorted by the values of their conditions. Dynamic programming is then used to implement the context quantization, with the description length of the context model used as the judgment parameter. Based on the criterion that neighbouring conditional probability distributions can be merged, our algorithm finds the optimal structure with minimum description length, so optimal context quantization results can be achieved. Experimental results indicate that the proposed algorithm achieves results similar to other adaptive context quantization algorithms with reasonable computational complexity.
Keywords: computational complexity; data compression; dynamic programming; image coding; probability; I-ary source; computational complexity; dynamic programming; neighbourhood conditional probability distribution; optimized context quantization; source symbol; Context; Context modeling; Dynamic programming; Heuristic algorithms; Image coding; Probability distribution; Quantization (signal); Context Quantization; Description Length; Dynamic Programming (ID#: 15-6291)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120628&isnumber=7120439
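The segment-partition dynamic program at the core of such methods is compact enough to sketch. Below is a simplified version under an assumed cost: the code length of a merged context is approximated by its empirical-entropy code length, whereas the paper's criterion is the description length of the full context model. The toy counts are hypothetical.

```python
import math

def code_length(counts):
    # Approximate code length of a merged context: n * H(empirical).
    n = sum(counts)
    return sum(-c * math.log2(c / n) for c in counts if c) if n else 0.0

def merge(dists, i, j):
    # Merge the symbol counts of condition-sorted distributions i..j-1.
    return [sum(d[s] for d in dists[i:j]) for s in range(len(dists[0]))]

def quantize(dists, k):
    # best[m][j] = min total code length of the first j dists in m groups;
    # only contiguous groups are allowed because dists are pre-sorted.
    n = len(dists)
    cost = [[code_length(merge(dists, i, j)) for j in range(n + 1)]
            for i in range(n + 1)]
    best = [[math.inf] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(m - 1, j):
                c = best[m - 1][i] + cost[i][j]
                if c < best[m][j]:
                    best[m][j], cut[m][j] = c, i
    bounds, j = [], n                      # recover segment boundaries
    for m in range(k, 0, -1):
        bounds.append((cut[m][j], j)); j = cut[m][j]
    return best[k][n], bounds[::-1]

# Toy binary-source example: four contexts' symbol counts, sorted by P(1).
dists = [[9, 1], [8, 2], [3, 7], [2, 8]]
print(quantize(dists, k=2))  # expect a cut between the 2nd and 3rd contexts
```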
Patel, Subhash Chandra; Singh, Ravi Shankar; Jaiswal, Sumit, "Secure and Privacy Enhanced Authentication Framework for Cloud Computing," Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, pp. 1631-1634, 26-27 Feb. 2015. doi:10.1109/ECS.2015.7124863
Abstract: Cloud computing is a revolution in information technology. Cloud consumers outsource their sensitive data and personal information to the cloud provider's servers, which are not within the same trusted domain as the data owner, so the most challenging issues arising in the cloud are data security, user privacy, and access control. In this paper we propose a method to achieve fine-grained security with a combined approach of PGP and Kerberos in cloud computing. The proposed method provides authentication, confidentiality, integrity, and privacy features to cloud service providers and cloud users.
Keywords: Access control; Authentication; Cloud computing; Cryptography; Privacy; Servers; Cloud computing; Kerberos; Pretty Good Privacy; access control; authentication; privacy; security (ID#: 15-6292)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124863&isnumber=7124722
Kulkarni, S.A.; Patil, S.B., "A Robust Encryption Method for Speech Data Hiding in Digital Images for Optimized Security," Pervasive Computing (ICPC), 2015 International Conference on, pp. 1-5, 8-10 Jan. 2015. doi:10.1109/PERVASIVE.2015.7087134
Abstract: Steganography is the art of hiding information in a host signal. It is very important to hide the secret data efficiently, as many attacks are made on data communication. The host signal can be a still image, speech, or video, and the message signal hidden in the host signal can be text, an image, or an audio signal. Cryptography is used to lock the secret message in the cover file, so that the secret message cannot be understood unless the decryption key is available. It is concerned with constructing and analyzing various methods that overcome the influence of third parties. Modern cryptography draws on disciplines such as mathematics, computer science, and electrical engineering. In this paper a symmetric key is developed which consists of a reshuffling and secret arrangement of secret-signal data bits within the cover-signal data bits. The authors perform the encryption process at the bit level of the secret speech signal, achieving greater encryption strength before the data is hidden inside the cover image. The encryption algorithm applied with the embedding method yields a robust and secure method for data hiding.
Keywords: cryptography; image coding; speech coding; steganography; cover image; cryptography concept; data communication; decryption key; digital images; embedding method; host signal; optimized security; robust encryption method; secret signal data bit reshuffling; secret signal data bit secret arrangement; speech data hiding; steganography; symmetric key; Encryption; Noise; Receivers; Robustness; Speech; Transmitters; Cover signal; Cryptography; Encryption; Secret key; Secret signal (ID#: 15-6293)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7087134&isnumber=7086957
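The core idea — reshuffle the secret bit stream with a symmetric key, then embed it in the cover image — can be sketched as follows. This is a generic key-permuted LSB embedding under assumed parameters, not the authors' exact algorithm.

```python
import numpy as np

def embed(cover: np.ndarray, secret_bits: np.ndarray, key: int) -> np.ndarray:
    # Reshuffle the secret bits with a key-seeded permutation (the
    # "symmetric key" idea), then hide them in the cover's LSBs.
    rng = np.random.default_rng(key)
    perm = rng.permutation(secret_bits.size)
    stego = cover.flatten().copy()
    n = secret_bits.size
    stego[:n] = (stego[:n] & ~np.uint8(1)) | secret_bits[perm]
    return stego.reshape(cover.shape)

def extract(stego: np.ndarray, n_bits: int, key: int) -> np.ndarray:
    rng = np.random.default_rng(key)
    perm = rng.permutation(n_bits)
    shuffled = stego.flatten()[:n_bits] & 1
    bits = np.empty(n_bits, dtype=np.uint8)
    bits[perm] = shuffled          # invert the key-seeded permutation
    return bits

cover = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
secret = np.random.default_rng(1).integers(0, 2, 16, dtype=np.uint8)  # speech bits
stego = embed(cover, secret, key=1234)
assert (extract(stego, 16, key=1234) == secret).all()
print("recovered OK")
```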
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Conferences: Cyber and Information Security Research Oak Ridge, Tennessee
The 10th Annual Cyber and Information Security Research (CISR) Conference was held at Oak Ridge, Tennessee on April 7-9, 2015. The conference themes focused on Resilience: theory, practice, and tools for rapidly resuming critical functionality following a cyber disruption, or maintaining critical functionality during an ongoing attack; Situational Awareness (SA): tools and practice for providing SA for cyber defenders; Moving Target Defense: methods and tools for creating asymmetric uncertainty that favors defenders over attackers, or that increase the potential cost for attackers; and Cyber Physical Security: methods for protecting both national critical infrastructure and local embedded systems. The papers cited here were recovered on September 2, 2015.
Robert K. Abercrombie, Frederick T. Sheldon, Bob G. Schlicher. “Risk and Vulnerability Assessment Using Cybernomic Computational Models: Tailored for Industrial Control Systems." CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 18. Doi: 10.1145/2746266.2746284
Abstract: In cybersecurity, there are many influencing economic factors to weigh. This paper considers the defender-practitioner stakeholder points-of-view that involve cost combined with development and deployment considerations. Some examples include the cost of countermeasures, training and maintenance as well as the lost opportunity cost and actual damages associated with a compromise. The return on investment (ROI) from countermeasures comes from saved impact costs (i.e., losses from violating availability, integrity, confidentiality or privacy requirements). A measured approach that informs cybersecurity practice is pursued toward maximizing ROI. To this end for example, ranking threats based on their potential impact focuses security mitigation and control investments on the highest value assets, which represent the greatest potential losses. The traditional approach uses risk exposure (calculated by multiplying risk probability by impact). To address this issue in terms of security economics, we introduce the notion of Cybernomics. Cybernomics considers the cost/benefits to the attacker/defender to estimate risk exposure. As the first step, we discuss the likelihood that a threat will emerge, whether it can be thwarted, and, if not, what the cost will be (losses both tangible and intangible). This impact assessment can provide key information for ranking cybersecurity threats and managing risk.
Keywords: Availability, Dependability, Integrity, Security Measures/Metrics, Security Requirements, Threats and Vulnerabilities (ID#: 15-6439)
URL: http://doi.acm.org/10.1145/2746266.2746284
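As a toy illustration of the traditional risk-exposure calculation the abstract starts from (the Cybernomics extension adds attacker cost/benefit on top of this), consider the following; all figures are invented.

```python
# Illustrative only: rank threats by traditional risk exposure
# (probability x impact), then compute a countermeasure's return on
# investment as saved exposure relative to its cost.

threats = {
    # name: (annual probability, impact in dollars) -- hypothetical
    "ransomware on control network": (0.30, 2_000_000),
    "insider data theft":            (0.05, 5_000_000),
    "phishing credential loss":      (0.60,   200_000),
}

for name, (p, impact) in sorted(threats.items(),
                                key=lambda t: t[1][0] * t[1][1],
                                reverse=True):
    print(f"{name}: exposure = ${p * impact:,.0f}/yr")

# A countermeasure costing $150k that halves the ransomware probability:
cost = 150_000
saved = (0.30 - 0.15) * 2_000_000            # reduction in expected loss
print(f"ROI = {(saved - cost) / cost:.0%}")  # (benefit - cost) / cost
```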
Dan Du, Lu Yu, Richard R. Brooks. "Semantic Similarity Detection for Data Leak Prevention." CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 4. Doi: 10.1145/2746266.2746270
Abstract: To counter data breaches, we introduce a new data leak prevention (DLP) approach. Unlike regular expression methods, our approach extracts a small number of critical semantic features and requires a small training set. Existing tools concentrate mostly on data format, where most defense and industry applications would be better served by monitoring the semantics of information in the enterprise. We demonstrate our approach by comparing its performance with other state-of-the-art methods, such as latent dirichlet allocation (LDA) and support vector machine (SVM). The experimental results suggest that the proposed approach has superior accuracy in terms of detection rate and false-positive (FP) rate.
Keywords: DLP, LDA, SVM, semantic similarity (ID#: 15-6440)
URL: http://doi.acm.org/10.1145/2746266.2746270
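For context, the LDA-plus-SVM style of baseline the paper compares against can be assembled in a few lines with scikit-learn. The corpus and labels below are toy stand-ins, and this sketches the baseline approach, not the paper's proposed method.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# Tiny toy corpus standing in for enterprise documents.
docs = [
    "quarterly earnings forecast confidential merger target",
    "merger negotiation confidential valuation draft",
    "cafeteria menu monday pizza salad",
    "holiday party schedule rsvp friday",
]
labels = [1, 1, 0, 0]   # 1 = sensitive, 0 = public

# Documents -> word counts -> LDA topic mixtures -> SVM decision.
clf = make_pipeline(
    CountVectorizer(),
    LatentDirichletAllocation(n_components=2, random_state=0),
    SVC(kernel="linear"),
)
clf.fit(docs, labels)
print(clf.predict(["confidential merger valuation memo",
                   "friday pizza party schedule"]))
```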
Susan M. Bridges, Ken Keiser, Nathan Sissom, Sara J. Graves. “Cyber Security for Additive Manufacturing.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 14. Doi: 10.1145/2746266.2746280
Abstract: This paper describes the cyber security implications of additive manufacturing (also known as 3-D printing). Three-D printing has the potential to revolutionize manufacturing, and there is substantial concern for the security of the storage, transfer, and execution of 3-D models across digital networks and systems. While rapidly gaining in popularity and adoption by many entities, additive manufacturing is still in its infancy. Supporting the broadest possible applications, the technology will demand the ability to demonstrate secure processes from idea through design, prototyping, production, and delivery. As with other technologies in the information revolution, additive manufacturing technology is at risk of outpacing a competent security infrastructure, so research and solutions need to be tackled in concert with the 3-D boom.
Keywords: 3-D Printing, Additive Manufacturing, Cybersecurity (ID#: 15-6441)
URL: http://doi.acm.org/10.1145/2746266.2746280
Ryan Grandgenett, William Mahoney, Robin Gandhi. “Authentication Bypass and Remote Escalated I/O Command Attacks.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 2. Doi: 10.1145/2746266.2746268
Abstract: The Common Industrial Protocol (CIP) is a widely used Open DeviceNet Vendors Association (ODVA) standard [14]. CIP is an application-level protocol for communication between components in an industrial control setting such as a Supervisory Control And Data Acquisition (SCADA) environment. We present exploits for authentication and privileged I/O in a CIP implementation. In particular, Allen Bradley's implementation of CIP communications between its programming software and Programmable Logic Controllers (PLCs) is the target of our exploits. Allen Bradley's RSLogix 5000 software supports programming and centralized monitoring of Programmable Logic Controllers (PLCs) from a desktop computer. In our test bed, ControlLogix EtherNet/IP Web Server Module (1756-EWEB) allows the PLC Module (5573-Logix) to be programmed, monitored and controlled by RSLogix 5000 over an Ethernet LAN. Our vulnerability discovery process included examination of CIP network traffic and reverse engineering the RSLogix 5000 software. Our findings have led to the discovery of several vulnerabilities in the protocol, including denial-of-service attacks, but more significantly and recently the creation of an authentication bypass and remote escalated privileged I/O command exploit. The exploit abuses RSLogix 5000's use of hard-coded credentials for outbound communication with other SCADA components. This paper provides a first public disclosure of the vulnerability, exploit development process, and results.
Keywords: Control Systems, EtherNet/IP, Remote Code Execution, SCADA (ID#: 15-6442)
URL: http://doi.acm.org/10.1145/2746266.2746268
Suzanna Schmeelk, Junfeng Yang, Alfred Aho. “Android Malware Static Analysis Techniques.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 5. Doi: 10.1145/2746266.2746271
Abstract: During 2014, Business Insider announced that there are over a billion users of Android worldwide. Government officials are also trending towards acquiring Android mobile devices. Google's application architecture is already ubiquitous and will keep expanding. The beauty of an application-based architecture is the flexibility, interoperability and customizability it provides users. This same flexibility, however, also allows and attracts malware development. This paper provides a horizontal research analysis of techniques used for Android application malware analysis. The paper explores techniques used by Android malware static analysis methodologies. It examines the key analysis efforts, such as examining applications for permission leakage and privacy concerns. The paper concludes with a discussion of some gaps in current malware static analysis research.
Keywords: Android Application Security, Cyber Security, Java, Malware Analysis, Static Analysis (ID#: 15-6443)
URL: http://doi.acm.org/10.1145/2746266.2746271
Mark Pleszkoch, Rick Linger. “Controlling Combinatorial Complexity in Software and Malware Behavior Computation.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 15. Doi: 10.1145/2746266.2746281
Abstract: Virtually all software is out of intellectual control in that no one knows its full behavior. Software Behavior Computation (SBC) is a new technology for understanding everything software does. SBC applies the mathematics of denotational semantics implemented by function composition in Functional Trace Tables (FTTs) to compute the behavior of programs, expressed as disjoint cases of conditional concurrent assignments. In some circumstances, combinatorial explosions in the number of cases can occur when calculating the behavior of sequences of multiple branching structures. This paper describes computational methods that avoid combinatorial explosions. The predicates that control branching structures such as if-then-elses can be organized into three categories: 1) Independent, resulting in no behavior case explosion, 2) Coordinated, resulting in two behavior cases, or 3) Goal-oriented, with potential exponential growth in the number of cases. Traditional FTT-based behavior computation can be augmented by two additional computational methods, namely, Single-Value Function Abstractions (SVFAs) and, introduced in this paper, Relational Trace Tables (RTTs). These methods can be applied to the three predicate categories to avoid combinatorial growth in behavior cases while maintaining mathematical correctness.
Keywords: Hyperion system, Software behavior computation, malware (ID#: 15-6444)
URL: http://doi.acm.org/10.1145/2746266.2746281
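To see where the explosion comes from, the toy below composes two branching structures represented as disjoint (guard, state-update) cases; naive composition yields the cross product of cases. This only illustrates the problem the paper's SVFA/RTT methods are designed to avoid, and the guards and updates are invented.

```python
from itertools import product

# Each branching structure contributes disjoint (guard, update) cases;
# at runtime exactly one guard per structure holds.
branch1 = [("x > 0",  lambda s: {**s, "y": s["x"]}),
           ("x <= 0", lambda s: {**s, "y": -s["x"]})]
branch2 = [("y > 5",  lambda s: {**s, "z": 1}),
           ("y <= 5", lambda s: {**s, "z": 0})]

# Sequential composition: every pairing of cases, i.e. 2 x 2 = 4.
composed = [(f"{g1} then {g2}", lambda s, a=f1, b=f2: b(a(s)))
            for (g1, f1), (g2, f2) in product(branch1, branch2)]

print("cases after composition:", len(composed))
state = {"x": 7}
for guard, f in composed:          # enumerate all cases for illustration
    print(guard, "->", f(state))
```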
Xingsi Zhong, Paranietharan Arunagirinathan, Afshin Ahmadi, Richard Brooks, Ganesh Kumar Venayagamoorthy. “Side-Channels in Electric Power Synchrophasor Network Data Traffic.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 3. Doi: 10.1145/2746266.2746269
Abstract: The deployment of synchrophasor devices such as Phasor Measurement Units (PMUs) in an electric power grid enhances real-time monitoring, analysis and control of grid operations. PMU information is sensitive, and any missing or incorrect PMU data could lead to grid failure and/or damage. Therefore, it is important to use encrypted communication channels to avoid any cyber attack. However, encrypted communication channels are vulnerable to side-channel attacks. In this study, side-channel attacks using packet sizes and/or inter-packet timing delays differentiate the stream of packets from any given PMU within an encrypted tunnel. This is investigated under different experimental settings. Also, virtual private network vulnerabilities due to side-channel analysis are discussed.
Keywords: Cyber-attacks, cybersecurity, grid operation data, hidden Markov model, phasor measurement units, power system, side-channel analysis (ID#: 15-6445)
URL: http://doi.acm.org/10.1145/2746266.2746269
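A minimal version of this kind of traffic analysis — fingerprinting flows by packet-size histograms and inter-packet timing statistics, then matching an observed flow to the nearest known fingerprint — might look like the following. The paper also considers hidden Markov models; this nearest-fingerprint sketch and both PMU profiles are synthetic.

```python
import numpy as np

def flow_features(sizes, gaps):
    # Fingerprint a flow by its packet-size histogram plus timing stats;
    # encryption hides payloads but not these observables.
    hist, _ = np.histogram(sizes, bins=8, range=(0, 1600), density=True)
    return np.concatenate([hist, [np.mean(gaps), np.std(gaps)]])

rng = np.random.default_rng(0)
# Two hypothetical PMUs inside one encrypted tunnel: PMU A sends small
# frames at 60 Hz, PMU B larger frames at 30 Hz.
pmu_a = flow_features(rng.normal(120, 5, 500),  rng.normal(1/60, 1e-3, 500))
pmu_b = flow_features(rng.normal(300, 10, 500), rng.normal(1/30, 1e-3, 500))

# Attribute an observed flow to the nearest known fingerprint.
observed = flow_features(rng.normal(120, 5, 200), rng.normal(1/60, 1e-3, 200))
dists = {name: np.linalg.norm(observed - fp)
         for name, fp in {"PMU A": pmu_a, "PMU B": pmu_b}.items()}
print("attributed to:", min(dists, key=dists.get))
```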
Zoleikha Abdollahi Biron, Pierluigi Pisu, Baisravan HomChaudhuri. “Observer Design Based Cyber Security for Cyber Physical Systems.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 6. Doi: 10.1145/2746266.2746272
Abstract: In this paper, an observer based cyber-attack detection and estimation methodology for cyber physical systems is presented. The cyber-attack is considered to influence the physical part of the cyber physical system that compromises human safety. The cyber-attacks are considered to affect the sensors and the actuators in the sub-systems as well as the software programs of the control systems in the cyber physical system. The whole system is modeled as a hybrid system to incorporate the discrete and continuous part of the cyber physical system and a sliding mode based observer is designed for the detection of these cyber-attacks. For simulation purposes, this paper considers different cyber-attacks on the battery sub-system of modern automobiles and the simulation results of attack detection are presented in the paper.
Keywords: Cyber Physical System, Cyber Security, In-vehicle Network, Sliding Mode Observer (ID#: 15-6446)
URL: http://doi.acm.org/10.1145/2746266.2746272
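The residual-based detection logic can be sketched with a plain Luenberger observer standing in for the paper's sliding-mode observer on a hybrid model; the dynamics, gain, threshold, and attack below are invented.

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # hypothetical plant dynamics
C = np.array([[1.0, 0.0]])               # one measured output
L = np.array([[0.5], [0.2]])             # observer gain (A - LC stable)

x = np.array([1.0, 0.5])                 # true state
xhat = x.copy()                          # observer assumed converged
threshold = 0.2                          # residual alarm level

for k in range(60):
    y = C @ x
    if k >= 30:
        y = y + 1.0                      # sensor spoofing attack begins
    residual = y - C @ xhat              # measurement minus prediction
    if abs(residual[0]) > threshold:
        print(f"step {k}: attack flagged, residual = {residual[0]:.2f}")
        break
    x = A @ x
    xhat = A @ xhat + (L @ residual).ravel()
```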
Yu Fu, Benafsh Husain, Richard R. Brooks. “Analysis of Botnet Counter-Counter-Measures.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 9. Doi: 10.1145/2746266.2746275
Abstract: Botnets evolve quickly to outwit police and security researchers. Since they first appeared in 1993, there have been significant botnet countermeasures. Unfortunately, countermeasures, especially takedown operations, are not particularly effective. They destroy research honeypots and stimulate botmasters to find creative ways to hide. Botnet reactions to countermeasures are more effective than the countermeasures themselves. Also, botnets are no longer confined to PCs; the Android and iOS platforms are increasingly attractive targets. This paper focuses on recent countermeasures against botnets and the counter-countermeasures of botmasters. We look at side effects of botnet takedowns for insight into botnet countermeasures. Then, botnet counter-countermeasures against two-factor authentication (2FA) on the Android and iOS platforms are discussed. Representative botnet-in-the-mobile (BITM) implementations against 2FA are compared, and a theoretical iOS-based botnet against 2FA is described. Botnet counter-countermeasures against keyloggers are also discussed. More attention needs to be paid to botnet issues.
Keywords: 2FA, Android, Botnet, iOS, keyloggers, takedown (ID#: 15-6447)
URL: http://doi.acm.org/10.1145/2746266.2746275
Michael Iannacone, Shawn Bohn, Grant Nakamura, John Gerth, Kelly Huffer, Robert Bridges, Erik Ferragut, John Goodall. “Developing an Ontology for Cyber Security Knowledge Graphs.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 12. Doi: 10.1145/2746266.2746278
Abstract: In this paper we describe an ontology developed for a cyber security knowledge graph database. This is intended to provide an organized schema that incorporates information from a large variety of structured and unstructured data sources, and includes all relevant concepts within the domain. We compare the resulting ontology with previous efforts, discuss its strengths and limitations, and describe areas for future work.
Keywords: cyber security, information extraction, ontology architecture, security automation (ID#: 15-6448)
URL: http://doi.acm.org/10.1145/2746266.2746278
Christopher Robinson-Mallett, Sebastian Hansack. “A Model of an Automotive Security Concept Phase.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 16. Doi: 10.1145/2746266.2746282
Abstract: The introduction of wireless interfaces into cars raises new security-related risks to the vehicle and passengers. Vulnerabilities of the vehicle electronics to remote attacks through internet connections have been demonstrated recently. The introduction of industrial-scale processes, methods and tools for the development and quality assurance of appropriate security-controls into vehicle electronics is an essential task for system providers and vehicle manufacturers to cope with security hazards. In this contribution a process model for security analysis tasks during automotive systems development is presented. The proposed model is explained using the vulnerabilities in a vehicle's remote unlock function recently published by Spaar.
Keywords: Analysis, Process, Requirements, Security (ID#: 15-6449)
URL: http://doi.acm.org/10.1145/2746266.2746282
Paul Carsten, Todd R. Andel, Mark Yampolskiy, Jeffrey T. McDonald. “In-Vehicle Networks: Attacks, Vulnerabilities, and Proposed Solutions.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 1. Doi: 10.1145/2746266.2746267
Abstract: Vehicles made within the past years have gradually become more and more complex. As a result, the embedded computer systems that monitor and control these systems have also grown in size and complexity. Unfortunately, the technology that protects them from external attackers has not improved at a similar rate. In this paper we discuss the vulnerabilities of modern in-vehicle networks, focusing on the Controller Area Network (CAN) communications protocol as a primary attack vector. We discuss the vulnerabilities of CAN, the types of attacks that can be used against it, and some of the solutions that have been proposed to overcome these attacks.
Keywords: Automotive Vulnerabilities, CAN bus, In-Vehicle Networks (ID#: 15-6450)
URL: http://doi.acm.org/10.1145/2746266.2746267
Hani Alturkostani, Anup Chitrakar, Robert Rinker, Axel Krings. “On the Design of Jamming-Aware Safety Applications in VANETs.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 7. Doi: 10.1145/2746266.2746273
Abstract: Connected vehicles communicate either with each other or with the fixed infrastructure using Dedicated Short Range Communication (DSRC). The communication is used by DSRC safety applications, such as forward collision warning, which are intended to reduce accidents. Since these safety applications operate in a critical infrastructure, reliability of the applications is essential. This research considers jamming as the source of a malicious act that could significantly affect reliability. Previous research has discussed jamming detection and prevention in the context of wireless networks in general, but little focus has been placed on Vehicular Ad Hoc Networks (VANETs), which have unique characteristics. Other research has discussed jamming detection in VANETs; however, it is not aligned with current DSRC standards. We propose a new jamming-aware algorithm for DSRC safety application design in VANETs that increases reliability using jamming detection and consequent fail-safe behavior, without any alteration of existing protocols and standards. The impact of deceptive jamming on data rates and the impact of the jammer's data rate were studied using actual field measurements. Finally, we show the operation of the jamming-aware algorithm using field data.
Keywords: DSRC, Jammer Detection, Jamming, VANET (ID#: 15-6451)
URL: http://doi.acm.org/10.1145/2746266.2746273
Lu Yu, Juan Deng, Richard R. Brooks, Seok Bae Yun. “Automobile ECU Design to Avoid Data Tampering.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 10. Doi: 10.1145/2746266.2746276
Abstract: Modern embedded vehicle systems are based on network architectures. Vulnerabilities from in-vehicle communications are significant. Privacy and security measures are required for vehicular Electronic Control Units (ECUs). We present a security vulnerability analysis, which shows that the vulnerability mainly lies in the ubiquitous on-board diagnostics II (OBD-II) interface and the memory configuration within the ECU. Countermeasures using obfuscation and encryption techniques are introduced to protect ECUs from data sniffing and code tampering. A security scheme of deploying lures that look like ECU vulnerabilities to deceive lurking intruders into installing rootkits is proposed. We show that the interactions between the attacker and the system can be modeled as a Markov decision process (MDP).
Keywords: ECU, MDP, vehicular cyber security (ID#: 15-6452)
URL: http://doi.acm.org/10.1145/2746266.2746276
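The closing claim — that the attacker/system interplay forms a Markov decision process — can be made concrete with standard value iteration. The following Python sketch solves a toy attacker MDP; the states, actions, transition probabilities and rewards are hypothetical illustrations, not the authors' model.

    # Toy attacker/defender MDP solved by value iteration (illustrative only).
    STATES = ["probing", "lured", "rootkit_installed", "detected"]
    ACTIONS = ["sniff_obd", "install_rootkit"]

    # P[s][a] -> list of (next_state, probability); terminal states omitted.
    P = {
        "probing": {
            "sniff_obd":       [("lured", 0.6), ("probing", 0.4)],
            "install_rootkit": [("detected", 0.8), ("probing", 0.2)],
        },
        "lured": {
            "sniff_obd":       [("lured", 1.0)],
            "install_rootkit": [("rootkit_installed", 0.3), ("detected", 0.7)],
        },
    }
    R = {"probing": {"sniff_obd": 1.0, "install_rootkit": -2.0},
         "lured":   {"sniff_obd": 0.5, "install_rootkit": 5.0}}
    GAMMA = 0.9  # discount factor

    def value_iteration(iters=200):
        V = {s: 0.0 for s in STATES}  # terminal states keep value 0
        for _ in range(iters):
            for s in P:
                V[s] = max(R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])
                           for a in ACTIONS)
        return V

    print(value_iteration())  # expected attacker payoff per state

Solving such a model tells the defender which lure placements make the attacker's optimal policy least rewarding.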
Jarilyn M. Hernández, Aaron Ferber, Stacy Prowell, Lee Hively. “Phase-Space Detection of Cyber Events.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 13. Doi: 10.1145/2746266.2746279
Abstract: Energy Delivery Systems (EDS) are a network of processes that produce, transfer and distribute energy. EDS are increasingly dependent on networked computing assets, as are many Industrial Control Systems. Consequently, cyber-attacks pose a real and pertinent threat, as evidenced by Stuxnet, Shamoon and Dragonfly. Hence, there is a critical need for novel methods to detect, prevent, and mitigate effects of such attacks. To detect cyber-attacks in EDS, we developed a framework for gathering and analyzing timing data that involves establishing a baseline execution profile and then capturing the effect of perturbations in the state from injecting various malware. The data analysis was based on nonlinear dynamics and graph theory to improve detection of anomalous events in cyber applications. The goal was the extraction of changing dynamics or anomalous activity in the underlying computer system. Takens' theorem in nonlinear dynamics allows reconstruction of topologically invariant, time-delay-embedding states from the computer data in a sufficiently high-dimensional space. The resultant dynamical states were nodes, and the state-to-state transitions were links in a mathematical graph. Alternatively, sequential tabulation of executing instructions provides the nodes with corresponding instruction-to-instruction links. Graph theorems guarantee graph-invariant measures to quantify the dynamical changes in the running applications. Results showed a successful detection of cyber events.
Keywords: Energy Delivery Systems, cyber anomaly detection, cyber-attacks, graph theory, malware, phase-space analysis, rootkits (ID#: 15-6453)
URL: http://doi.acm.org/10.1145/2746266.2746279
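The pipeline the abstract describes — time-delay embedding, symbolization, and a state-transition graph — is easy to prototype. A minimal NumPy sketch follows; the embedding dimension, delay, and bin count are arbitrary choices, not the authors' settings.

    import numpy as np

    def delay_embed(x, dim=3, tau=2):
        """Takens-style time-delay embedding: each row is one reconstructed state."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

    def transition_graph(states, bins=8):
        """Quantize embedded states into symbols; consecutive symbols form edges."""
        lo, hi = states.min(0), states.max(0)
        sym = ((states - lo) / (hi - lo + 1e-12) * (bins - 1)).astype(int)
        labels = [tuple(row) for row in sym]
        return set(labels), set(zip(labels[:-1], labels[1:]))

    # Stand-in for a timing trace; compare node/edge counts of a baseline run
    # against a perturbed run to flag anomalous dynamics.
    x = np.sin(np.linspace(0, 40, 2000)) + 0.05 * np.random.randn(2000)
    nodes, edges = transition_graph(delay_embed(x))
    print(len(nodes), "nodes,", len(edges), "edges")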
Mohammad Ashraf Hossain Sadi, Mohd. Hassan Ali, Dipankar Dasgupta, Robert K. Abercrombie. “OPNET/Simulink Based Testbed for Disturbance Detection in the Smart Grid.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 17. Doi: 10.1145/2746266.2746283
Abstract: The important backbone of the Smart Grid is the cyber/information infrastructure, which is primarily used to communicate with different grid components. The smart grid is a complex cyber-physical system containing numerous and varied sources, devices, controllers and loads, and it is therefore vulnerable to grid-related disturbances. For such a dynamic system, disturbance and intrusion detection is a paramount issue. This paper presents a Simulink and OPNET based co-simulated platform to carry out a cyber-intrusion in a cyber-network for modern power systems and the smart grid. The IEEE 30-bus power system model is used to demonstrate the effectiveness of the simulated testbed. The experiments were performed by disturbing the circuit breakers' reclosing time through a cyber-attack. Several disturbance situations in the test system are considered, and the results indicate the effectiveness of the proposed co-simulated scheme.
Keywords: Cyber-attacks, Simulation Testbed, Smart Grid security (ID#: 15-6454)
URL: http://doi.acm.org/10.1145/2746266.2746283
Jaewon Yang, Xiuwen Liu, Shamik Bose. “Preventing Cyber-induced Irreversible Physical Damage to Cyber-Physical Systems.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 8. Doi: 10.1145/2746266.2746274
Abstract: Ever since the discovery of the Stuxnet malware, there have been widespread concerns about disasters via cyber-induced physical damage on critical infrastructures. Cyber-physical systems (CPS) integrate computation and physical processes to optimize resource usage and system performance; critical infrastructure systems are prime examples. The inherent security weaknesses of computerized systems and increased connectivity could allow attackers to alter the systems' behavior and cause irreversible physical damage, or, even worse, cyber-induced disasters. However, existing security measures were mostly developed for cyber-only systems and they cannot be effectively applied to CPS directly. Thus, new approaches to preventing cyber-physical system disasters are essential. We recognize very different characteristics of cyber and physical components in CPS, where cyber components are flexible with large attack surfaces while physical components are inflexible and relatively simple with very small attack surfaces. This research focuses on the points where cyber and physical components interact. Securing cyber-physical interfaces will complete a layer-based defense strategy in the "Defense in Depth Framework". In this paper we propose Trusted Security Modules as a systematic solution to provide a guarantee of preventing cyber-induced physical damage even when operating systems and controllers are compromised. TSMs will be placed at the interface between cyber and physical components by adapting existing integrity-enforcing mechanisms such as Trusted Platform Module, Control-Flow Integrity, and Data-Flow Integrity.
Keywords: Cyber-induced physical damage, Trusted Security Module (ID#: 15-6455)
URL: http://doi.acm.org/10.1145/2746266.2746274
Corinne L. Jones, Robert A. Bridges, Kelly M. T. Huffer, John R. Goodall. “Towards a Relation Extraction Framework for Cyber-Security Concepts.” CISR '15 Proceedings of the 10th Annual Cyber and Information Security Research Conference, April 2015, Article No. 11. Doi: 10.1145/2746266.2746277
Abstract: In order to assist security analysts in obtaining information pertaining to their network, such as novel vulnerabilities, exploits, or patches, information retrieval methods tailored to the security domain are needed. As labeled text data is scarce and expensive, we follow developments in semi-supervised Natural Language Processing and implement a bootstrapping algorithm for extracting security entities and their relationships from text. The algorithm requires little input data, specifically, a few relations or patterns (heuristics for identifying relations), and incorporates an active learning component which queries the user on the most important decisions to prevent drifting from the desired relations. Preliminary testing on a small corpus shows promising results, obtaining precision of .82.
Keywords: active learning, bootstrapping, cyber security, information extraction, natural language processing, relation extraction (ID#: 15-6456)
URL: http://doi.acm.org/10.1145/2746266.2746277
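A bare-bones version of such a bootstrapping loop, assuming hypothetical seed patterns, a toy three-sentence corpus, and a stubbed-out user confirmation in place of the paper's active-learning component:

    import re

    seed_patterns = [r"(?P<a>[\w.-]+) exploits (?P<b>CVE-\d{4}-\d+)",
                     r"(?P<a>[\w.-]+) targets (?P<b>CVE-\d{4}-\d+)"]
    corpus = ["Dridex exploits CVE-2014-1761 in older Word versions.",
              "Angler targets CVE-2015-0313 as well.",
              "Angler exploits CVE-2015-0311 in the wild."]

    def confirm(pattern):
        """Active-learning stub: the real system asks the user to vet candidates."""
        return True

    relations, patterns = set(), list(seed_patterns)
    for _ in range(2):  # bootstrapping rounds
        for sent in corpus:                        # harvest pairs with patterns
            for pat in patterns:
                m = re.search(pat, sent)
                if m:
                    relations.add((m.group("a"), m.group("b")))
        for a, b in list(relations):               # induce candidate patterns
            for sent in corpus:
                if a in sent and b in sent:
                    cand = (re.escape(sent)
                            .replace(re.escape(a), r"(?P<a>[\w.-]+)", 1)
                            .replace(re.escape(b), r"(?P<b>CVE-\d{4}-\d+)", 1))
                    if cand not in patterns and confirm(cand):
                        patterns.append(cand)
    print(relations)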
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Conferences: Electronic Crime Research (eCrime) 2015 Barcelona, Spain |
The 2015 Anti-Phishing Working Group (APWG) Symposium on Electronic Crime Research was held 26-29 May in Barcelona, Spain. The conference focused on a range of topics, many of interest to the Science of Security community. Citations were recovered in July 2015. The conference web site is available at: https://apwg.org/apwg-events/ecrime2015/
Zheng Dong; Kapadia, A.; Blythe, J.; Camp, L.J., "Beyond The Lock Icon: Real-Time Detection Of Phishing Websites Using Public Key Certificates," Electronic Crime Research (eCrime), 2015 APWG Symposium on, pp. 1, 12, 26-29 May 2015. doi: 10.1109/ECRIME.2015.7120795
Abstract: We propose a machine-learning approach to detect phishing websites using features from their X.509 public key certificates. We show that its efficacy extends beyond HTTPS-enabled sites. Our solution enables immediate local identification of phishing sites. As such, this serves as an important complement to the existing server-based anti-phishing mechanisms, which predominantly use blacklists. Blacklisting suffers from several inherent drawbacks in terms of correctness, timeliness, and completeness. Due to the potentially significant lag prior to site blacklisting, there is a window of opportunity for attackers. Other local client-side phishing detection approaches also exist, but primarily rely on page content or URLs, which are arguably easier to manipulate by attackers. We illustrate that our certificate-based approach greatly increases the difficulty of masquerading undetected for phishers, with single millisecond delays for users. We further show that this approach works not only against HTTPS-enabled phishing attacks, but also detects HTTP phishing attacks with port 443 enabled.
Keywords: Web sites; computer crime; learning (artificial intelligence); public key cryptography; HTTPS-enabled phishing attack; Web site phishing detection; machine-learning approach; public key certificate; server-based antiphishing mechanism; site blacklisting; Browsers; Electronic mail; Feature extraction; Public key; Servers; Uniform resource locators; certificates; machine learning; security (ID#: 15-6294)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120795&isnumber=7120794
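As a rough illustration of what certificate-derived features might look like, here is a sketch using the Python cryptography package; the specific features are illustrative guesses, not the authors' published feature set.

    from cryptography import x509
    from cryptography.x509.oid import NameOID

    def cert_features(pem_bytes):
        """Extract a few illustrative classifier features from a PEM certificate."""
        cert = x509.load_pem_x509_certificate(pem_bytes)
        def cn(name):
            attrs = name.get_attributes_for_oid(NameOID.COMMON_NAME)
            return attrs[0].value if attrs else ""
        return {
            "validity_days": (cert.not_valid_after - cert.not_valid_before).days,
            "self_signed": cert.issuer == cert.subject,
            "issuer_cn_len": len(cn(cert.issuer)),
            "subject_cn_len": len(cn(cert.subject)),
            "sig_hash": cert.signature_hash_algorithm.name,
        }

    # Feature dicts like these would be vectorized (e.g., with scikit-learn's
    # DictVectorizer) and fed to any off-the-shelf classifier.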
de los Santos, S.; Guzman, A.; Alonso, C.; Gomez Rodriguez, F., "Chasing Shuabang In Apps Stores," Electronic Crime Research (eCrime), 2015 APWG Symposium on, pp. 1, 9, 26-29 May 2015. doi: 10.1109/ECRIME.2015.7120796
Abstract: There are well-known attack techniques that threaten current app stores. However, the complexity of these environments and their high rate of variability have prevented any effective analysis aimed at mitigating the effects of these threats. In this paper, the analysis performed on one of these techniques, Shuabang, is introduced. The completion of this analysis has been supported by a new tool that facilitates the correlation of large amounts of information from different app stores.
Keywords: mobile computing; security of data; Shuabang; application stores; attack techniques; information correlation; threat analysis; threat mitigation; Correlation; Databases; Google; Mobile communication; Performance evaluation; Servers; Smart phones (ID#: 15-6295)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120796&isnumber=7120794
Spring, J.; Kern, S.; Summers, A., "Global Adversarial Capability Modeling," Electronic Crime Research (eCrime), 2015 APWG Symposium on, pp. 1, 21, 26-29 May 2015. doi: 10.1109/ECRIME.2015.7120797
Abstract: Intro: Computer network defense has models for attacks and incidents composed of multiple attacks after the fact. However, we lack an evidence-based model of the likelihood and intensity of attacks and incidents. Purpose: We propose a model of global capability advancement, the adversarial capability chain (ACC), to fit this need. The model enables cyber risk analysis to better understand the costs for an adversary to attack a system, which directly influences the cost to defend it. Method: The model is based on four historical studies of adversarial capabilities: capability to exploit Windows XP, to exploit the Android API, to exploit Apache, and to administer compromised industrial control systems. Result: We propose the ACC with five phases: Discovery, Validation, Escalation, Democratization, and Ubiquity. We use the four case studies as examples of how the ACC can be applied and used to predict attack likelihood and intensity.
Keywords: Android (operating system); application program interfaces; computer network security; risk analysis; ACC; Android API; Apache; Windows XP; adversarial capability chain; attack likelihood prediction; compromised industrial control systems; computer network defense; cyber risk analysis; evidence-based model; global adversarial capability modeling; Analytical models; Androids; Biological system modeling; Computational modeling; Humanoid robots; Integrated circuit modeling; Software systems; CND; computer network defense; cybersecurity; incident response; intelligence; intrusion detection; modeling; security (ID#: 15-6296)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120797&isnumber=7120794
Johnson, R.; Kiourtis, N.; Stavrou, A.; Sritapan, V., "Analysis Of Content Copyright Infringement In Mobile Application Markets," Electronic Crime Research (eCrime), 2015 APWG Symposium on, pp. 1, 10, 26-29 May 2015. doi: 10.1109/ECRIME.2015.7120798
Abstract: As mobile devices gain larger displays and become more reliable at delivering paid entertainment and video content, we also see a rise in mobile applications that attempt to profit by streaming pirated content to unsuspecting end-users. These applications are both paid and free; in the case of free applications, the source of funding appears to be advertisements that are displayed while the content is streamed to the device. In this paper, we assess the extent of content copyright infringement for mobile markets that span multiple platforms (iOS, Android, and Windows Mobile) and cover both official and unofficial mobile markets located across the world. Using a set of search keywords that point to titles of paid streaming content, we discovered 8,592 Android, 5,550 iOS, and 3,910 Windows mobile applications that matched our search criteria. Out of those applications, hundreds had links to either locally or remotely stored pirated content and were not developed, endorsed, or, in many cases, known to the owners of the copyrighted content. We also revealed the network locations of 856,717 Uniform Resource Locators (URLs) pointing to back-end servers and cyber-lockers used to communicate the pirated content to the mobile application.
Keywords: copyright; mobile computing; Android; URL; Uniform Resource Locators; Windows mobile applications; back-end servers; content copyright infringement; cyber-lockers; iOS; mobile application markets; mobile devices; network locations; paid entertainment; paid streaming content; pirated content streaming; search criteria; search keywords; unofficial mobile markets; video content; Androids; Humanoid robots; Java; Mobile communication; Mobile handsets; Servers; Writing (ID#: 15-6297)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120798&isnumber=7120794
Warner, G.; Rajani, D.; Nagy, M., "Spammer Success Through Customization and Randomization of URLs," Electronic Crime Research (eCrime), 2015 APWG Symposium on, pp. 1, 6, 26-29 May 2015. doi: 10.1109/ECRIME.2015.7120799
Abstract: Spam researchers and security personnel require a method for determining whether the URLs embedded in email messages are safe or potentially hostile. Prior research has been focused on spam collections that are quite insignificant compared to real-world spam volumes. In this paper, researchers evaluate 464 million URLs representing nearly 1 million unique domains observed in email messages in a six day period from November 2014. Four methods of customization and randomization of URLs believed to be used by spammers to attempt to increase deliverability of their URLs are explored: domain diversity, hostname wild-carding, path uniqueness, and attribute uniqueness. Implications of the findings suggest improvements for “URL blacklist” methods, methods of sampling to decrease the number of URLs that must be reviewed for safety, as well as presenting some challenges to the ICANN, Registrar, and Email Safety communities.
Keywords: computer crime; unsolicited e-mail; Email Safety communities; ICANN communities; Registrar communities; URL blacklist methods; URL customization; URL deliverability; URL randomization; attribute uniqueness; domain diversity; email messages; hostname; malicious email; path uniqueness; real-world spam volumes; sampling methods; spam collections; spammer; wild-carding; Personnel; Pharmaceuticals; Safety; Security; Uniform resource locators; Unsolicited electronic mail; URL evaluation; domain registration; malicious email; spam (ID#: 15-6298)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120799&isnumber=7120794
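The four tactics can be approximated with simple set arithmetic over parsed URLs. A sketch, assuming a plain list of URLs and crudely approximating the registered domain by the last two host labels:

    from urllib.parse import urlsplit

    def url_metrics(urls):
        parts = [urlsplit(u) for u in urls]
        hosts = [p.hostname or "" for p in parts]
        domains = {".".join(h.split(".")[-2:]) for h in hosts}  # crude eTLD+1
        return {
            "domain_diversity":     len(domains) / len(urls),
            "hostname_wildcarding": len(set(hosts)) / len(urls),
            "path_uniqueness":      len({p.path for p in parts}) / len(urls),
            "attribute_uniqueness": len({p.query for p in parts}) / len(urls),
        }

    print(url_metrics([
        "http://a1.example.com/x9k?u=1",
        "http://b2.example.com/p3q?u=2",
        "http://shop.example.org/x9k?u=3",
    ]))

Ratios near 1.0 on the last three metrics are the per-recipient customization signals the paper associates with evasive spam campaigns.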
Garg, V.; Camp, L.J., "Spare The Rod, Spoil The Network Security? Economic Analysis Of Sanctions Online," Electronic Crime Research (eCrime), 2015 APWG Symposium on, pp. 1, 10, 26-29 May 2015. doi: 10.1109/ECRIME.2015.7120800
Abstract: When and how should we encourage network providers to mitigate the harm of security and privacy risks? Poorly designed interventions that do not align with economic incentives can lead stakeholders to be less, rather than more, careful. We apply an economic framework that compares two fundamental regulatory approaches: risk based (ex ante) and harm based (ex post). We posit that for well known security risks, such as botnets, ex ante sanctions are economically efficient. Systematic best practices, e.g. patching, can reduce the risk of becoming a bot and thus can be implemented ex ante. Conversely, risks that are contextual, poorly understood, and new, and whose distribution of harm is difficult to estimate, should incur ex post sanctions, e.g. information disclosure. Privacy preferences and potential harm vary widely across domains; thus, post-hoc consideration of harm is more appropriate for privacy risks. We examine two current policy and enforcement efforts, i.e. Do Not Track and botnet takedowns, under the ex ante vs. ex post framework. We argue that these efforts may worsen security and privacy outcomes, as they distort market forces, reduce competition, or create artificial monopolies. Finally, we address the overlap between security and privacy risks.
Keywords: computer network security; data privacy; invasive software; risk management; Do Not Track approach; botnet takedowns; botnets; economic incentives; ex-ante sanction approach; ex-post sanction approach; fundamental regulatory approaches; harm based approach; information disclosure; network security; online sanction economic analysis; patching method; privacy risks; risk reduction; risk-based approach; security risks; Biological system modeling; Companies; Economics; Google; Government; Privacy; Security (ID#: 15-6299)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120800&isnumber=7120794
Moore, T.; Clayton, R., "Which Malware Lures Work Best? Measurements From A Large Instant Messaging Worm," Electronic Crime Research (eCrime), 2015 APWG Symposium on, pp. 1, 10, 26-29 May 2015. doi: 10.1109/ECRIME.2015.7120801
Abstract: Users are inveigled into visiting a malicious website in a phishing or malware-distribution scam through the use of a `lure' - a superficially valid reason for their interest. We examine real world data from some `worms' that spread over the social graph of Instant Messenger users. We find that over 14 million distinct users clicked on these lures over a two year period from Spring 2010. Furthermore, we present evidence that 95% of users who clicked on the lures became infected with malware. In one four week period spanning May-June 2010, near the worm's peak, we estimate that at least 1.67 million users were infected. We measure the extent to which small variations in lure URLs and the short pieces of text that accompany these URLs affect the likelihood of users clicking on the malicious URL. We show that hostnames containing recognizable brand names were more effective than the terse random strings employed by URL shortening systems; and that brief Portuguese phrases were more effective in luring in Brazilians than more generic `language independent' text.
Keywords: Web sites; computer crime; electronic messaging; invasive software; natural language processing; text analysis; Portuguese phrases; Spring 2010; URL shortening systems; brand names; generic language independent text; instant messaging worm; lure URL; malicious URL; malicious Website; malware-distribution scam; phishing; social graph; terse random strings; Facebook; Grippers; IP networks; Malware; Monitoring; Servers; Uniform resource locators (ID#: 15-6300)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120801&isnumber=7120794
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Conferences: Information Hiding and Multimedia Security, 2015 Portland, Oregon |
The 3rd ACM Workshop on Information Hiding and Multimedia Security (IH & MMSec) was held June 17-19, 2015 in Portland, Oregon. The workshop focused on information hiding topics such as watermarking, steganography, steganalysis, anonymity, privacy, hard-to-intercept communications, and covert/subliminal channels, and on a variety of multimedia security topics including multimedia identification, biometrics, video surveillance, multimedia forensics, and computer and network security. The papers presented are cited here. The conference web site is available at: http://www.ihmmsec.org/
Sebastian Matthias Burg, Dustin Peterson, Oliver Bringmann. “End-to-Display Encryption: A Pixel-Domain Encryption with Security Benefit.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 123-128. Doi: 10.1145/2756601.2756613
Abstract: Providing secure access to confidential information is extremely difficult, particularly when endpoints and users are the weak points. With the increasing number of corporate espionage cases and data leaks, a usable approach enhancing the security of data on endpoints is needed. In this paper we present our implementation for providing a new level of security for confidential documents that are viewed on a display. We call this End-to-Display Encryption (E2DE). E2DE encrypts images in the pixel-domain before transmitting them to the user. These images can then be displayed by arbitrary image viewers and are sent to the display. On the way to the display, the data stream is analyzed and the encrypted pixels are decrypted depending on a private key stored on a chip card inserted in the receiver, creating a viewable representation of the confidential data on the display, without decrypting the information on the computer itself. We implemented a prototype on a Digilent Atlys FPGA Board supporting resolutions up to Full HD.
Keywords: encryption, multimedia, physical security, security (ID#: 15-6381)
URL: http://doi.acm.org/10.1145/2756601.2756613
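The core pixel-domain operation reduces to XORing the raw frame buffer with a keystream, which a key-holding display-side device can invert pixel for pixel. A toy software model in Python using AES-CTR (a sketch of the idea, not the authors' FPGA implementation):

    import os
    import numpy as np
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def ctr_xor(pixels: np.ndarray, key: bytes, nonce: bytes) -> np.ndarray:
        """XOR the pixel buffer with an AES-CTR keystream (encrypt = decrypt)."""
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        out = enc.update(pixels.tobytes()) + enc.finalize()
        return np.frombuffer(out, dtype=pixels.dtype).reshape(pixels.shape)

    key, nonce = os.urandom(16), os.urandom(16)
    img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in frame
    ct = ctr_xor(img, key, nonce)                 # encrypt in the pixel domain
    assert np.array_equal(ctr_xor(ct, key, nonce), img)  # CTR is its own inverse

Because the ciphertext is itself a valid pixel buffer, any viewer can pass it along unmodified; only the device between computer and display needs the key.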
Adi Hajj-Ahmad, Séverine Baudry, Bertrand Chupeau, Gwenaël Doërr. “Flicker Forensics for Pirate Device Identification.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 75-84. Doi: 10.1145/2756601.2756612
Abstract: Cryptography-based content protection is an efficient means to protect multimedia content during transport. Nevertheless, content is eventually decrypted at rendering time, leaving it vulnerable to piracy e.g. using a camcorder to record movies displayed on an LCD screen. This type of piracy naturally imprints a visible flicker signal in the pirate video due to the interplay between the rendering and acquisition devices. The parameters of such flicker are inherently tied to the characteristics of the pirate devices such as the back-light of the LCD screen and the read-out time of the camcorder. In this article, we introduce a forensic methodology to estimate such parameters by analyzing the flicker signal present in pirate recordings. Experimental results clearly showcase that the accuracy of these estimation techniques offers an efficient means to tell which devices have been used for piracy, thanks to the variety of factory settings used by consumer electronics manufacturers.
Keywords: LCD screen, back-light, camcorder, flicker, passive forensics, piracy, read-out time, rolling shutter (ID#: 15-6382)
URL: http://doi.acm.org/10.1145/2756601.2756612
Tomáš Denemark, Jessica Fridrich. “Improving Steganographic Security by Synchronizing the Selection Channel.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 5-14. Doi: 10.1145/2756601.2756620
Abstract: This paper describes a general method for increasing the security of additive steganographic schemes for digital images represented in the spatial domain. Additive embedding schemes first assign costs to individual pixels and then embed the desired payload by minimizing the sum of costs of all changed pixels. The proposed framework can be applied to any such scheme -- it starts with the cost assignment and forms a non-additive distortion function that forces adjacent embedding changes to synchronize. Since the distortion function is purposely designed as a sum of locally supported potentials, one can use the Gibbs construction to realize the embedding in practice. The beneficial impact of synchronizing the embedding changes is linked to the fact that modern steganalysis detectors use higher-order statistics of noise residuals obtained by filters with sign-changing kernels and to the fundamental difficulty of accurately estimating the selection channel of a non-additive embedding scheme implemented with several Gibbs sweeps. Both decrease the accuracy of detectors built using rich media models, including their selection-channel-aware versions.
Keywords: Gibbs construction, non-additive distortion, security, selection channel, steganography, synchronization (ID#: 15-6383)
URL: http://doi.acm.org/10.1145/2756601.2756620
Christian Arndt, Stefan Kiltz, Jana Dittmann, Robert Fischer. “ForeMan, a Versatile and Extensible Database System for Digitized Forensics Based on Benchmarking Properties.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 91-96. Doi: 10.1145/2756601.2756615
Abstract: To benefit from new opportunities offered by the digitalization of forensic disciplines, the challenges, especially w.r.t. comprehensibility and searchability, have to be met. Important tools in this forensic process are databases containing digitized representations of physical crime scene traces. We present ForeMan, an extensible database system for digitized forensics handling separate databases and enabling intra and inter trace type searches. It now contains 762 fiber data sets and 27 fingerprint data sets (anonymized time series). Requirements of the digitized forensic process model are mapped to design aspects and conceptually modeled around benchmarking properties. A fiber categorization scheme is used to structure fiber data according to forensic use case identification. Our research extends the benchmarking properties by fiber fold shape derived from the application field of fibers (part of micro traces) and sequence number derived from the application field of time series analysis for fingerprint aging research. We identify matching data subsets from both digitized trace types and introduce the terms entity-centered and spatial-centered information. We show how combining two types of digitized crime scene traces (fiber and fingerprint data) can give new insights for research and casework and discuss requirements for other trace types such as firearm and toolmarks.
Keywords: benchmarking properties, digitized crime scene forensics, forensic trace database (ID#: 15-6384)
URL: http://doi.acm.org/10.1145/2756601.2756615
Vahid Sedighi, Jessica Fridrich. “Effect of Imprecise Knowledge of the Selection Channel on Steganalysis.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 33-42. Doi: 10.1145/2756601.2756621
Abstract: It has recently been shown that steganalysis of content-adaptive steganography can be improved when the Warden incorporates in her detector the knowledge of the selection channel -- the probabilities with which the individual cover elements were modified during embedding. Such attacks implicitly assume that the Warden knows at least approximately the payload size. In this paper, we study the loss of detection accuracy when the Warden uses a selection channel that was imprecisely determined either due to lack of information or the stego changes themselves. The loss is investigated for two types of qualitatively different detectors -- binary classifiers equipped with selection-channel-aware rich models and optimal detectors derived using the theory of hypothesis testing from a cover model. Two different embedding paradigms are addressed -- steganography based on minimizing distortion and embedding that minimizes the detectability of an optimal detector within a chosen cover model. Remarkably, the experimental and theoretical evidence are qualitatively in agreement across different embedding methods, and both point out that inaccuracies in the selection channel do not have a strong effect on steganalysis detection errors. It pays off to use imprecise selection channel rather than none. Our findings validate the use of selection-channel-aware detectors in practice.
Keywords: adaptive, selection channel, steganalysis, steganography (ID#: 15-6385)
URL: http://doi.acm.org/10.1145/2756601.2756621
Jong-Uk Hou, Do-Gon Kim, Sunghee Choi, Heung-Kyu Lee. “3D Print-Scan Resilient Watermarking Using a Histogram-Based Circular Shift Coding Structure.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 115-121. Doi: 10.1145/2756601.2756607
Abstract: 3D printing content is a new form of content being distributed in digital as well as analog domains. Therefore, its security is the biggest technical challenge of the content distribution service. In this paper, we analyze the 3D print-scan process, and we organize possible distortions according to the processes with respect to 3D mesh watermarking. Based on the analysis, we propose a circular shift coding structure for the 3D model. When the rotating disks of the coding structure are aligned in parallel to the layers of the 3D printing, the structure preserves a statistical feature of each disk from the layer dividing process. Based on the circular shift coding structure, we achieve a 3D print-scan resilient watermarking scheme. In experimental tests, the proposed scheme is robust against such signal processing and cropping attacks. Furthermore, the embedded information is not lost after the 3D print-scan process.
Keywords: 3D mesh model, 3D printer, digital watermarking, robust watermarking, stair-stepping effect (ID#: 15-6386)
URL: http://doi.acm.org/10.1145/2756601.2756607
Brent C. Carrara, Carlisle Adams. “On Characterizing and Measuring Out-of-Band Covert Channels.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 43-54. Doi: 10.1145/2756601.2756604
Abstract: A methodology for characterizing and measuring out-of-band covert channels (OOB-CCs) is proposed and used to evaluate covert-acoustic channels (i.e., covert channels established using speakers and microphones). OOB-CCs are low-probability of detection/low-probability of interception channels established using commodity devices that are not traditionally used for communication (e.g., speaker and microphone, display and FM radio, etc.). To date, OOB-CCs have been declared "covert" if the signals used to establish these channels could not be perceived by a human adversary. This work examines OOB-CCs from the perspective of a passive adversary and argues that a different methodology is required in order to effectively assess OOB-CCs. Traditional communication systems are measured by their capacity and bit error rate; while important parameters, they do not capture the key measures of OOB-CCs: namely, the probability of an adversary detecting the channel and the amount of data that two covertly communicating parties can exchange without being detected. As a result, the adoption of the measure steganographic capacity is proposed and used to measure the amount of data (in bits) that can be transferred through an OOB-CC before a passive adversary's probability of detecting the channel reaches a given threshold. The theoretical steganographic capacity for discrete memoryless channels as well as additive white Gaussian noise channels is calculated in this paper and a case study is performed to measure the steganographic capacity of OOB covert-acoustic channels, when a passive adversary uses an energy detector to detect the covert communication. The case study reveals the conditions under which the covertly communicating parties can achieve perfect steganography (i.e., conditions under which data can be communicated without risk of detection).
Keywords: covert channels, covert-acoustic channels, information hiding, malware communication, out-of-band covert channels, steganographic capacity (ID#: 15-6387)
URL: http://doi.acm.org/10.1145/2756601.2756604
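For the energy-detector adversary, the detection probability has a closed form under Gaussian noise with known variance, which is the kind of quantity a steganographic-capacity calculation rests on. A small SciPy sketch with illustrative numbers:

    from scipy.stats import chi2, ncx2

    # Energy detector over n noise-normalized samples: under H0 the statistic
    # is chi-square(n); with a covert signal of total energy E (in units of
    # the noise variance) it is noncentral chi-square(n, E).
    n, E = 1024, 50.0
    p_fa = 0.01
    threshold = chi2.isf(p_fa, n)        # fix the adversary's false-alarm rate
    p_detect = ncx2.sf(threshold, n, E)  # resulting detection probability
    print(f"P_FA={p_fa}, P_D={p_detect:.3f}")

The covert pair keeps E small enough that the detection probability stays below a chosen bound; the number of bits conveyable within that energy budget bounds the steganographic capacity in the paper's sense.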
Xiaofeng Song, Fenlin Liu, Chunfang Yang, Xiangyang Luo, Yi Zhang. “Steganalysis of Adaptive JPEG Steganography Using 2D Gabor Filters.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 15-23. Doi: 10.1145/2756601.2756608
Abstract: Adaptive JPEG steganographic schemes find it difficult to preserve image texture features in all scales and orientations when the embedding changes are constrained to complicated texture regions, so a steganalysis feature extraction method based on two-dimensional (2D) Gabor filters is proposed. The 2D Gabor filters have certain optimal joint localization properties in the spatial domain and in the spatial frequency domain. They can describe image texture features at different scales and orientations, so the changes in image statistical characteristics caused by steganographic embedding can be captured more effectively. In the proposed feature extraction method, the decompressed JPEG image is first filtered by 2D Gabor filters with different scales and orientations. Then, histogram features are extracted from all the filtered images. Lastly, an ensemble classifier is used to assemble the proposed steganalysis feature as well as the final steganalyzer. The experimental results show that the proposed steganalysis feature achieves competitive performance compared with other steganalysis features when used to detect adaptive JPEG steganography such as UED, J-UNIWARD and SI-UNIWARD.
Keywords: algorithms, design, security (ID#: 15-6388)
URL: http://doi.acm.org/10.1145/2756601.2756608
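A compact NumPy/SciPy sketch of such a filter bank and its histogram features; the kernel parameters and bin ranges are illustrative defaults, not the paper's tuned values.

    import numpy as np
    from scipy.signal import convolve2d

    def gabor_kernel(theta, sigma=1.0, lam=4.0, gamma=0.5, size=7):
        """Real part of a 2D Gabor filter at orientation theta."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
               * np.cos(2 * np.pi * xr / lam)

    def gabor_features(img, n_orient=8, bins=16):
        """Histograms of Gabor filter residuals over several orientations."""
        feats = []
        for k in range(n_orient):
            resp = convolve2d(img, gabor_kernel(np.pi * k / n_orient), mode="valid")
            hist, _ = np.histogram(resp, bins=bins, range=(-4, 4), density=True)
            feats.append(hist)
        return np.concatenate(feats)

    img = np.random.rand(128, 128)  # stand-in for a decompressed JPEG luminance plane
    print(gabor_features(img).shape)

Feature vectors built this way for cover and stego images would then be fed to the ensemble classifier mentioned in the abstract.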
Yao Shen, Liusheng Huang, Fei Wang, Xiaorong Lu, Wei Yang, Lu Li. “LiHB: Lost in HTTP Behaviors - A Behavior-Based Covert Channel in HTTP.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 55-64. Doi: 10.1145/2756601.2756605
Abstract: Application-layer covert channels have been extensively studied in recent years. Information hiding in ubiquitous application packets can significantly improve the capacity of covert channels. However, undetectability remains a knotty problem, because existing covert channels are all frustrated by proper detection schemes. In this paper, we propose LiHB, a behavior-based covert channel in HTTP. When a client is browsing a website and downloading webpage objects, the distribution of HTTP requests over the opened ports is naturally flexible. Based on the combinatorial nature of distributing N HTTP requests over M HTTP flows, the LiHB channel exploits this fluctuation to encode covert messages with high stealthiness. Moreover, LiHB achieves a considerable and controllable capacity by setting the number of webpage objects and HTTP flows. Compared with existing techniques, LiHB is the first covert channel built on the unsuspicious behavior of browsers, the most important application-layer software. Because most HTTP proxies use NAPT techniques, LiHB can also operate well even when a proxy is in place, which poses a serious threat to individual privacy. Experimental results show that the LiHB covert channel achieves good capacity, reliability and high undetectability.
Keywords: application layer, browser, combinatorics, covert channels, http behaviors, proxy (ID#: 15-6389)
URL: http://doi.acm.org/10.1145/2756601.2756605
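The combinatorial core is ranking/unranking weak compositions: N requests over M flows give C(N+M-1, M-1) distinguishable arrangements, i.e., about log2 of that many bits per page load. A minimal sketch of the unranking step (the concrete mapping LiHB uses is not published here):

    from math import comb

    def index_to_composition(idx, n, m):
        """Map a message index to request counts (c1..cm) with sum n."""
        comp = []
        for parts_left in range(m, 1, -1):
            k = 0
            while comb(n - k + parts_left - 2, parts_left - 2) <= idx:
                idx -= comb(n - k + parts_left - 2, parts_left - 2)
                k += 1
            comp.append(k)
            n -= k
        comp.append(n)
        return comp

    N, M = 20, 5                     # 20 webpage objects over 5 HTTP flows
    total = comb(N + M - 1, M - 1)   # 10626 distinct arrangements
    print("capacity ~", total.bit_length() - 1, "bits per page load")
    print(index_to_composition(4242, N, M))  # request counts for each flow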
Yun Cao, Hong Zhang, Xianfeng Zhao, Haibo Yu. “Video Steganography Based on Optimized Motion Estimation Perturbation.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 25-31. Doi: 10.1145/2756601.2756609
Abstract: In this paper, a novel motion vector-based video steganographic scheme is proposed, which is capable of withstanding the current best statistical detection method. With this scheme, secret message bits are embedded into motion vector (MV) values by slightly perturbing their motion estimation (ME) processes. In general, two measures are taken for steganographic security (statistical undetectability) enhancement. First, the ME perturbations are optimized ensuring the modified MVs are still local optimal, which essentially makes targeted detectors ineffective. Secondly, to minimize the overall embedding impact under a given relative payload, a double-layered coding structure is used to control the ME perturbations. Experimental results demonstrate that the proposed scheme achieves a much higher level of security compared with other existing MV-based approaches. Meanwhile, the reconstructed visual quality and the coding efficiency are slightly affected as well.
Keywords: H.264/AVC, information hiding, motion estimation, steganography, video (ID#: 15-6390)
URL: http://doi.acm.org/10.1145/2756601.2756609
Charles V. Wright, Wu-chi Feng, Feng Liu. “Thumbnail-Preserving Encryption for JPEG.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 141-146. Doi: 10.1145/2756601.2756618
Abstract: With more and more data being stored in the cloud, securing multimedia data is becoming increasingly important. Use of existing encryption methods with cloud services is possible, but makes many web-based applications difficult or impossible to use. In this paper, we propose a new image encryption scheme specially designed to protect JPEG images in cloud photo storage services. Our technique allows efficient reconstruction of an accurate low-resolution thumbnail from the ciphertext image, but aims to prevent the extraction of any more detailed information. This will allow efficient storage and retrieval of image data in the cloud but protect its contents from outside hackers or snooping cloud administrators. Experiments of the proposed approach using an online selfie database show that it can achieve a good balance of privacy, utility, image quality, and file size.
Keywords: image security, multimedia encryption, privacy (ID#: 15-6391)
URL: http://doi.acm.org/10.1145/2756601.2756618
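The idea can be modeled in a few lines: permute pixels only within each thumbnail block under a keyed PRNG, so block averages (the thumbnail) survive while within-block detail is scrambled. A NumPy sketch of that simplified spatial-domain model (the paper's actual scheme operates on JPEG data):

    import numpy as np

    def tpe_shuffle(img, key, b=8, inverse=False):
        out = img.copy()
        h, w, c = img.shape
        for by in range(0, h, b):
            for bx in range(0, w, b):
                rng = np.random.default_rng([key, by, bx])  # per-block keyed PRNG
                flat = out[by:by + b, bx:bx + b].copy().reshape(-1, c)
                perm = rng.permutation(len(flat))
                if inverse:
                    undone = np.empty_like(flat)
                    undone[perm] = flat           # invert the permutation
                    flat = undone
                else:
                    flat = flat[perm]
                out[by:by + b, bx:bx + b] = flat.reshape(b, b, c)
        return out

    img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    ct = tpe_shuffle(img, key=1234)
    thumb = lambda a: a.reshape(8, 8, 8, 8, 3).mean(axis=(1, 3))
    assert np.allclose(thumb(img), thumb(ct))     # thumbnail is preserved
    assert np.array_equal(tpe_shuffle(ct, key=1234, inverse=True), img)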
Eun-Kyung Ryu, Dae-Soo Kim, Kee-Young Yoo. “On Elliptic Curve Based Untraceable RFID Authentication Protocols.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 147-153. Doi: 10.1145/2756601.2756610
Abstract: An untraceable RFID authentication scheme allows a legitimate reader to authenticate a tag, and at the same time it assures the privacy of the tag against unauthorized tracing. In this paper, we revisit three elliptic-curve based untraceable RFID authentication protocols recently published and show that they are not secure against active attacks and do not support untraceability for tags. We also provide a new construction to solve such problems using the elliptic-curve based Schnorr signature technique. Our construction satisfies all requirements for RFID security and privacy including replay protection, impersonation resistance, untraceability, and forward privacy. It requires only two point scalar multiplications and two hash operations with two message exchanges. Compared to previous works, our construction has better security and efficiency.
Keywords: ECC, RFID, authentication, privacy, untraceability (ID#: 15-6392)
URL: http://doi.acm.org/10.1145/2756601.2756610
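For readers unfamiliar with Schnorr-style authentication, the commit-challenge-response flow it builds on looks as follows; this sketch uses a toy multiplicative group with deliberately insecure parameters, not the paper's elliptic-curve construction.

    import secrets

    p, q, g = 23, 11, 2                 # toy Schnorr group: q | p-1, g has order q

    x = secrets.randbelow(q - 1) + 1    # tag's long-term secret key
    X = pow(g, x, p)                    # tag's public key, known to the reader

    r = secrets.randbelow(q - 1) + 1
    R = pow(g, r, p)                    # tag -> reader: commitment
    c = secrets.randbelow(q)            # reader -> tag: random challenge
    s = (r + c * x) % q                 # tag -> reader: response

    assert pow(g, s, p) == (R * pow(X, c, p)) % p   # reader verifies

In the elliptic-curve setting the two modular exponentiations on the tag side become the two point scalar multiplications the abstract counts.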
Lakshmanan Nataraj, S. Karthikeyan, B.S. Manjunath. “SATTVA: SpArsiTy inspired classificaTion of malware VAriants.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 135-140. Doi: 10.1145/2756601.2756616
Abstract: There is an alarming increase in the amount of malware that is generated today. However, several studies have shown that most of these new malware are just variants of existing ones. Fast detection of these variants plays an effective role in thwarting new attacks. In this paper, we propose a novel approach to detect malware variants using a sparse representation framework. Exploiting the fact that most malware variants have small differences in their structure, we model a new/unknown malware sample as a sparse linear combination of other malware in the training set. The class with the least residual error is assigned to the unknown malware. Experiments on two standard malware datasets, Malheur dataset and Malimg dataset, show that our method outperforms current state of the art approaches and achieves a classification accuracy of 98.55% and 92.83% respectively. Further, by using a confidence measure to reject outliers, we obtain 100% accuracy on both datasets, at the expense of throwing away a small percentage of outliers. Finally, we evaluate our technique on two large scale malware datasets: Offensive Computing dataset (2,124 classes, 42,480 malware) and Anubis dataset (209 classes, 36,784 samples). On both datasets our method obtained an average classification accuracy of 77%, thus making it applicable to real world malware classification.
Keywords: sparsity based classification, compressed sensing, malware variant classification, random projections (ID#: 15-6393)
URL: http://doi.acm.org/10.1145/2756601.2756616
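The classification rule described here is the standard sparse-representation classifier (SRC): code the unknown sample over all training samples, then pick the class whose atoms leave the smallest residual. A scikit-learn sketch on synthetic stand-in data:

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    d, per_class, n_classes = 64, 20, 3
    centers = rng.normal(size=(n_classes, d))
    D = np.hstack([centers[c][:, None] + 0.1 * rng.normal(size=(d, per_class))
                   for c in range(n_classes)])      # columns = training samples
    labels = np.repeat(np.arange(n_classes), per_class)
    y = centers[1] + 0.1 * rng.normal(size=d)       # unknown sample (class 1)

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False)
    omp.fit(D, y)                                   # y ~= D @ a with sparse a
    a = omp.coef_

    residuals = [np.linalg.norm(y - D[:, labels == c] @ a[labels == c])
                 for c in range(n_classes)]
    print("predicted class:", int(np.argmin(residuals)))

The outlier-rejection step in the paper corresponds to thresholding a confidence measure derived from how concentrated the coefficients a are on a single class.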
Ji Won Yoon, Hyoungshick Kim, Hyun-Ju Jo, Hyelim Lee, Kwangsu Lee. “Visual Honey Encryption: Application to Steganography.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 65-74. Doi: 10.1145/2756601.2756606
Abstract: Honey encryption (HE) is a new technique to overcome the weakness of conventional password-based encryption (PBE). However, conventional honey encryption still has the limitation that it works only for binary bit streams or integer sequences because it uses a fixed distribution-transforming encoder (DTE). In this paper, we propose a variant of honey encryption called visual honey encryption which employs an adaptive DTE in a Bayesian framework so that the proposed approach can be applied to more complex domains including images and videos. We applied this method to create a new steganography scheme which significantly improves the security level of traditional steganography.
Keywords: honey encryption, multimedia, steganography (ID#: 15-6394)
URL: http://doi.acm.org/10.1145/2756601.2756606
William F. Bond, Ahmed Awad E.A. “Touch-based Static Authentication Using a Virtual Grid.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 129-134. Doi: 10.1145/2756601.2756602
Abstract: Keystroke dynamics is a subfield of computer security in which the cadence of the typist's keystrokes are used to determine authenticity. The static variety of keystroke dynamics uses typing patterns observed during the typing of a password or passphrase. This paper presents a technique for static authentication on mobile tablet devices using neural networks for analysis of keystroke metrics. Metrics used in the analysis of typing are monographs, digraphs, and trigraphs. Monographs as we define them consist of the time between the press and release of a single key, coupled with the discretized x-y location of the keystroke on the tablet. A digraph is the duration between the presses of two consecutively pressed keys, and a trigraph is the duration between the press of a key and the press of a key two keys later. Our technique combines the analysis of monographs, digraphs, and trigraphs to produce a confidence measure. Our best equal error rate for distinguishing users from impostors is 9.3% for text typing, and 9.0% for a custom experiment setup that is discussed in detail in the paper.
Keywords: Bayesian fusion, back-propagation neural networks, digraphs, discretization, keystroke dynamics, mobile authentication, monographs, receiver operating characteristic curve, static authentication, trigraphs (ID#: 15-6395)
URL: http://doi.acm.org/10.1145/2756601.2756602
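The three timing features are simple to compute from raw events. A sketch, assuming each event is (key, press_ms, release_ms, grid_x, grid_y); the event format and grid discretization here are assumptions, not the authors' data layout.

    def keystroke_features(events):
        """Monographs, digraphs, and trigraphs as defined in the abstract."""
        mono = [(k, rel - prs, (x, y)) for k, prs, rel, x, y in events]
        di = [(events[i][0] + events[i + 1][0],
               events[i + 1][1] - events[i][1])     # press-to-press latency
              for i in range(len(events) - 1)]
        tri = [(events[i][0] + events[i + 2][0],
                events[i + 2][1] - events[i][1])    # press of key i to key i+2
               for i in range(len(events) - 2)]
        return mono, di, tri

    events = [("p", 0, 95, 3, 7), ("a", 180, 260, 1, 7), ("s", 330, 410, 2, 7)]
    print(keystroke_features(events))

Per-sample features like these are what the paper's neural networks consume before fusing the three analyses into one confidence measure.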
David Aucsmith. “Implications of Cyber Warfare.” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 1-1. Doi: 10.1145/2756601.2756622
Abstract: Freedom of operation in cyberspace has become an object of contestation between nation states. Cyber warfare is emerging as a realistic threat. This talk will explore the implications of the development of cyberspace as a domain of warfare and how military theory developed for the other domains of war may be applicable to cyberspace. Far from being a completely different domain, the talk will demonstrate that cyberspace is simply an obvious evolution in conflict theory.
Keywords: conflict theory, cyber warfare, military theory (ID#: 15-6396)
URL: http://doi.acm.org/10.1145/2756601.2756622
Richard Chow. “IoT Privacy: Can We Regain Control?” IH&MMSec '15 Proceedings of the 3rd ACM Workshop on Information Hiding and Multimedia Security, June 2015, Pages 3-3. Doi: 10.1145/2756601.2756623
Abstract: Privacy is part of the Internet of Things (IoT) discussion because of the increased potential for sensitive data collection. In the vision for IoT, sensors penetrate ubiquitously into our physical lives and are funneled into big data systems for analysis. IoT data allows new benefits to end users - but also allows new inferences that erode privacy. The usual privacy mechanisms employed by users no longer work in the context of IoT. Users can no longer turn off a service (e.g., GPS), nor can they even turn off a device and expect to be safe from tracking. IoT means the monitoring and data collection is continuing even in the physical world. On a computer, we have at least a semblance of control and can in principle determine what applications are running and what data they are collecting. For example, on a traditional computer, we do have malware defenses - even if imperfect. Such defenses are strikingly absent for IoT, and it is unclear how traditional defenses can be applied to IoT. The issue of control is the main privacy problem in the context of IoT. Users generally don't know about all the sensors in the environment (with the potential exception of sensors in the user's own home). Present-day examples are WiFi MAC trackers and Google Glass, of course, but systems in the future will become even less discernible. In one sense, this is a security problem - detecting malicious devices or "environmental malware." But it is also a privacy problem - many sensor devices in fact want to be transparent to users (for instance, by adopting a traditional notice-and-consent model), but are blocked by the lack of a natural communication channel to the user. Even assuming communication mechanisms, we have complex usability problems. For instance, we need to understand what sensors a person might be worried about and in what contexts. Audio capture at home is different from audio capture in a lecture hall. What processing is done on the sensor data may also be important. A camera capturing video for purposes of gesture recognition may be less worrisome than for purposes of facial recognition (and, of course, the user needs assurance on the proclaimed processing). Finally, given the large number of "things", the problem of notice fatigue must be dealt with, or notifications will become no more useful than browser security warnings. In this talk, we discuss all these problems in detail, together with potential solutions.
Keywords: (not provided) (ID#: 15-6397)
URL: http://doi.acm.org/10.1145/2756601.2756623
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Conferences: Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP) Singapore |
The IEEE Tenth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP) was held April 7-9, 2015 in Singapore. While the research presented covers many aspects of cyber-physical systems, much of it has specific implications for the hard problems in the Science of Security, particularly resiliency. These works are cited here. Citations were recovered on July 2, 2015.
Nigussie, E.; Teng Xu; Potkonjak, M., "Securing Wireless Body Sensor Networks Using Bijective Function-Based Hardware Primitive," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, vol., no., pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106907
Abstract: We present a novel lightweight hardware security primitive for wireless body sensor networks (WBSNs). Security of WBSNs is crucial and the security solution must be lightweight due to resource constraints in the body sensor nodes. The presented security primitive is based on a digital implementation of a bidirectional bijective function. The one-to-one input-output mapping of the function is realized using a network of lookup tables (LUTs). The bidirectionality of the function enables implementation of security protocols with lower overheads. The configuration of the interstage interconnection between the LUTs serves as the shared secret key. Authentication, encryption/decryption and message integrity protocols are formulated using the proposed security primitive. The NIST randomness benchmark suite is applied to this security primitive and it passes all the tests. It also achieves higher throughput and requires less area than AES-CCM.
Keywords: body sensor networks; cryptographic protocols; table lookup; telecommunication security; wireless sensor networks; LUT; WBSN security; bidirectional bijective function; bijective function; body senor nodes; digital implementation; encryption-decryption; hardware primitive; lightweight hardware security primitive; lookup tables; message integrity protocols; one-to-one input-output mapping; resource constraints; securing wireless body sensor networks; security protocols; Authentication; Encryption; Protocols; Radiation detectors; Receivers; Table lookup (ID#: 15-6325)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106907&isnumber=7106892
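A toy software model of such a primitive — stages of bijective 4-bit LUTs joined by a key-defined interconnection, invertible because every stage is — with illustrative sizes far smaller than any real instantiation:

    import random

    NIBBLES, STAGES = 4, 3  # 16-bit block, 3 stages (toy sizes)

    def keygen(seed):
        rng = random.Random(seed)
        sboxes = [[rng.sample(range(16), 16) for _ in range(NIBBLES)]
                  for _ in range(STAGES)]                 # bijective 4-bit LUTs
        wires = [rng.sample(range(NIBBLES), NIBBLES)      # secret interconnection
                 for _ in range(STAGES)]
        return sboxes, wires

    def apply_fn(block, key, inverse=False):
        sboxes, wires = key
        nib = [(block >> (4 * i)) & 0xF for i in range(NIBBLES)]
        for s in (range(STAGES - 1, -1, -1) if inverse else range(STAGES)):
            if inverse:
                nib = [nib[wires[s][i]] for i in range(NIBBLES)]          # un-route
                nib = [sboxes[s][i].index(v) for i, v in enumerate(nib)]  # inverse LUT
            else:
                nib = [sboxes[s][i][v] for i, v in enumerate(nib)]        # LUT lookup
                routed = [0] * NIBBLES
                for i, v in enumerate(nib):
                    routed[wires[s][i]] = v                               # key routing
                nib = routed
        return sum(v << (4 * i) for i, v in enumerate(nib))

    key = keygen(seed=2015)
    ct = apply_fn(0xBEEF, key)
    assert apply_fn(ct, key, inverse=True) == 0xBEEF      # bidirectional

The bidirectionality demonstrated by the final assertion is what lets one configured structure serve both encryption and decryption with low overhead.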
Hoang Giang Do; Wee Keong Ng, "Privacy-Preserving Approach for Sharing And Processing Intrusion Alert Data," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, vol., no., pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106911
Abstract: Amplified and disrupting cyber-attacks might lead to severe security incidents with drastic consequences such as large property damage, sensitive information breach, or even disruption of the national economy. While traditional intrusion detection and prevention systems might successfully detect low or moderate levels of attack, cooperation among different organizations is necessary to defend against multi-stage and large-scale cyber-attacks. Correlating intrusion alerts from a shared database of multiple sources provides security analysts with succinct and high-level patterns of cyber-attacks - a powerful tool to combat sophisticated attacks. However, sharing intrusion alert data raises a significant privacy concern among data holders, since publishing this information means a risk of exposing other sensitive information such as intranet topology, network services, and the security infrastructure. This paper discusses possible cryptographic approaches to tackle this issue. Organizations can encrypt their intrusion alert data to protect data confidentiality and outsource them to a shared server to reduce the cost of storage and maintenance, while, at the same time, benefiting from a larger source of information for the alert correlation process. Two privacy preserving alert correlation techniques are proposed under the semi-honest model. These methods are based on attribute similarity and prerequisite/consequence conditions of cyber-attacks.
Keywords: cryptography; data privacy; intranets; cryptographic approach; cyber-attacks; intranet topology; intrusion alert data processing; intrusion alert data sharing; large-scale cyber-attacks; network services; privacy-preserving approach; security infrastructure; Encryption; Sensors (ID#: 15-6326)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106911&isnumber=7106892
Silva, R.; Sa Silva, J.; Boavida, F., "A Symbiotic Resources Sharing IoT Platform in the Smart Cities Context," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, vol., no., pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106922
Abstract: Large urban areas are nowadays covered by millions of wireless devices, including not only cellular equipment carried by their inhabitants, but also several ubiquitous and pervasive platforms used to monitor and/or actuate on a variety of phenomena in the city area. Whereas the former are increasingly powerful devices equipped with advanced processors, large memory capacity, high bandwidth, and several wireless interfaces, the latter are typically resource constrained systems. Despite their differences, both kinds of systems share the same ecosystem, and therefore, it is possible to build symbiotic relationships between them. Our research aims at creating a resource-sharing platform to support such relationships, on the premise that resource unconstrained devices can assist constrained ones, while the latter can extend the features of the former. Resource sharing between heterogeneous networks in an urban area poses several challenges, not only from a technical point of view, but also from a social perspective. In this paper we present our symbiotic resource-sharing proposal while discussing its impact on networks and citizens.
Keywords: Internet of Things; mobile computing; resource allocation; smart cities; heterogeneous networks; mobile devices; pervasive platform; resource constrained systems; resource unconstrained devices; smart cities context; social perspective; symbiotic relationships; symbiotic resources sharing IoT platform; ubiquitous platform; wireless devices; Cities and towns; Mobile communication; Mobile handsets; Security; Symbiosis; Wireless communication; Wireless sensor networks (ID#: 15-6327)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106922&isnumber=7106892
Alohali, B.A.; Vassilakis, V.G., "Secure and Energy-Efficient Multicast Routing in Smart Grids," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, vol., no., pp. 1-6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106929
Abstract: A smart grid is a power system that uses information and communication technology to operate, monitor, and control data flows between the power generating source and the end user. It aims at high efficiency, reliability, and sustainability of the electricity supply process, which is provided by the utility centre and distributed from generation stations to clients. To this end, energy-efficient multicast communication is an important requirement for serving a group of residents in a neighbourhood. However, multicast routing introduces new challenges for the secure operation of the smart grid and for user privacy. In this paper, after analysing the security threats for multicast-enabled smart grids, we propose a novel multicast routing protocol that is both sufficiently secure and energy efficient. We also evaluate the energy efficiency of the proposed protocol by means of computer simulations.
Keywords: data flow computing; data privacy; multicast protocols; power system analysis computing; power system reliability; routing protocols; smart power grids; telecommunication security; data flow control; data flow monitoring; data flow operation; electricity supply high efficiency; electricity supply reliability; electricity supply sustainability; end user; energy-efficient multicast communication; energy-efficient multicast routing; generation stations; information-and-communication technology; multicast-enabled smart grids; power generating source; power system; secure multicast routing protocol; security threats; user privacy; utility centre; Authentication; Protocols; Public key; Routing; Smart meters; Multicast; Secure Routing; Smart Grid (ID#: 15-6328)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106929&isnumber=7106892
Saleh, M.; El-Meniawy, N.; Sourour, E., "Routing-Guided Authentication in Wireless Sensor Networks," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, vol., no., pp. 1-6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106939
Abstract: Entity authentication is a crucial security objective since it enables network nodes to verify the identity of each other. Wireless Sensor Networks (WSNs) are composed of a large number of possibly mobile nodes, which are limited in computational, storage and energy resources. These characteristics pose a challenge to entity authentication protocols and security in general. We propose an authentication protocol whose execution is integrated within routing. This is in contrast to currently proposed protocols, in which a node tries to authenticate itself to other nodes without an explicit tie to the underlying routing protocol. In our protocol, nodes discover shared keys, authenticate themselves to each other and build routing paths all in a synergistic way.
Keywords: cryptographic protocols; mobile radio; routing protocols; wireless sensor networks; WSN routing protocol; entity authentication protocol; wireless sensor network mobile node; Ad hoc networks; Cryptography; Media Access Protocol; Mobile computing; Wireless sensor networks (ID#: 15-6329)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106939&isnumber=7106892
Bose, T.; Bandyopadhyay, S.; Ukil, A.; Bhattacharyya, A.; Pal, A., "Why Not Keep Your Personal Data Secure Yet Private in IoT?: Our Lightweight Approach," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, vol., no., pp. 1-6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106942
Abstract: IoT (Internet of Things) systems are resource-constrained and primarily depend on sensors for contextual, physiological and behavioral information. Sensitive nature of sensor data incurs high probability of privacy breaching risk due to intended or malicious disclosure. Uncertainty about privacy cost while sharing sensitive sensor data through Internet would mostly result in overprovisioning of security mechanisms and it is detrimental for IoT scalability. In this paper, we propose a novel method of optimizing the need for IoT security enablement, which is based on the estimated privacy risk of shareable sensor data. Particularly, our scheme serves two objectives, viz. privacy risk assessment and optimizing the secure transmission based on that assessment. The challenges are, firstly, to determine the degree of privacy, and evaluate a privacy score from the fine-grained sensor data and, secondly, to preserve the privacy content through secure transfer of the data, adapted based on the measured privacy score. We further meet this objective by introducing and adapting a lightweight scheme for secure channel establishment between the sensing device and the data collection unit/ backend application embedded within CoAP (Constrained Application Protocol), a candidate IoT application protocol and using UDP as a transport. We consider smart energy management, a killer IoT application, as the use-case where smart energy meter data contains private information about the residents. Our results with real household smart meter data demonstrate the efficacy of our scheme.
Keywords: Internet; Internet of Things; data privacy; energy management systems; risk management; security of data; transport protocols; CoAP; Internet; Internet of Things systems; UDP; behavioral information; constrained application protocol; contextual information; data collection unit; fine-grained sensor data; IoT scalability; IoT security enablement; malicious disclosure; personal data privacy; personal data security; physiological information; privacy breaching risk; privacy risk assessment; resource-constrained IoT systems; shareable sensor data; smart energy management; Encryption; IP networks; Optimization; Physiology; Privacy; Sensitivity; CoAP; IoT; Lightweight; Privacy; Security; Smart meter (ID#: 15-6330)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106942&isnumber=7106892
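The paper's two-step idea (score the privacy risk of the sensor data, then adapt transmission security to that score) can be sketched as follows in Python. The variance-based score, squashing constant, and thresholds are illustrative assumptions; the authors' actual scoring and CoAP-embedded channel establishment are more involved.
```python
import math
from statistics import pstdev

def privacy_score(readings):
    """Toy privacy score: higher variability in consumption reveals more
    about household activity, so it maps to a higher score in (0, 1)."""
    spread = pstdev(readings)
    return 1 - math.exp(-spread / 100.0)   # hypothetical squashing constant

def transmission_policy(score, low=0.3, high=0.7):
    # Adapt the security mechanism to the estimated risk, as the paper
    # proposes (thresholds here are illustrative, not from the paper).
    if score < low:
        return "plain CoAP over UDP"
    if score < high:
        return "CoAP + lightweight session key (e.g., AES-128)"
    return "CoAP + full secure channel establishment"

meter = [230, 480, 120, 900, 310]          # watts, sampled smart meter readings
s = privacy_score(meter)
print(f"score={s:.2f} -> {transmission_policy(s)}")
```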
Unger, S.; Timmermann, D., "DPWSec: Devices Profile for Web Services Security," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, vol., no., pp. 1-6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106961
Abstract: As cyber-physical systems (CPS) build a foundation for visions such as the Internet of Things (IoT) or Ambient Assisted Living (AAL), their communication security is crucial so they cannot be abused for invading our privacy and endangering our safety. In the past years many communication technologies have been introduced for critically resource-constrained devices such as simple sensors and actuators as found in CPS. However, many do not consider security at all or in a way that is not suitable for CPS. Also, the proposed solutions are not interoperable although this is considered a key factor for market acceptance. Instead of proposing yet another security scheme, we looked for an existing, time-proven solution that is widely accepted in a closely related domain as an interoperable security framework for resource-constrained devices. The candidate of our choice is the Web Services Security specification suite. We analysed its core concepts and isolated the parts suitable and necessary for embedded systems. In this paper we describe the methodology we developed and applied to derive the Devices Profile for Web Services Security (DPWSec). We discuss our findings by presenting the resulting architecture for message level security, authentication and authorization and the profile we developed as a subset of the original specifications. We demonstrate the feasibility of our results by discussing the proof-of-concept implementation of the developed profile and the security architecture.
Keywords: Internet; Internet of Things; Web services; ambient intelligence; assisted living; security of data; AAL; CPS; DPWSec; IoT; ambient assisted living; communication security; cyber-physical system; devices profile for Web services security; interoperable security framework; message level security; resource-constrained devices; Authentication; Authorization; Cryptography; Interoperability; Applied Cryptography; Cyber-Physical Systems (CPS); DPWS; Intelligent Environments; Internet of Things (IoT); Usability (ID#: 15-6331)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106961&isnumber=7106892
Van den Abeele, F.; Vandewinckele, T.; Hoebeke, J.; Moerman, I.; Demeester, P., "Secure Communication in IP-Based Wireless Sensor Networks via a Trusted Gateway," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, vol., no., pp. 1-6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106963
Abstract: As the IP-integration of wireless sensor networks enables end-to-end interactions, solutions to appropriately secure these interactions with hosts on the Internet are necessary. At the same time, burdening wireless sensors with heavy security protocols should be avoided. While Datagram TLS (DTLS) strikes a good balance between these requirements, it entails a high cost for setting up communication sessions. Furthermore, not all types of communication have the same security requirements: e.g. some interactions might only require authorization and do not need confidentiality. In this paper we propose and evaluate an approach that relies on a trusted gateway to mitigate the high cost of the DTLS handshake in the WSN and to provide the flexibility necessary to support a variety of security requirements. The evaluation shows that our approach leads to considerable energy savings and latency reduction when compared to a standard DTLS use case, while requiring no changes to the end hosts themselves.
Keywords: IP networks; Internet; authorisation; computer network security; energy conservation; internetworking; protocols; telecommunication power management; trusted computing; wireless sensor networks; DTLS handshake; Internet; WSN authorization; communication security; datagram TLS; end-to-end interactions; energy savings; heavy security protocol; latency reduction; trusted gateway; wireless sensor network IP integration; Bismuth; Cryptography; Logic gates; Random access memory; Read only memory; Servers; Wireless sensor networks; 6LoWPAN; CoAP; DTLS; Gateway; IP; IoT; Wireless sensor networks (ID#: 15-6332)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106963&isnumber=7106892
Kurniawan, A.; Kyas, M., "A Trust Model-Based Bayesian Decision Theory in Large Scale Internet of Things," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, vol., no., pp. 1-5, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106964
Abstract: In addressing the growing problem of security in the Internet of Things, we present, from a statistical decision point of view, a novel approach for trust-based access control using Bayesian decision theory. We build a trust model, TrustBayes, which represents a trust level for identity management in IoT. The TrustBayes model is applied to address access control in uncertain environments where identities are not known in advance. The model consists of EX (Experience), KN (Knowledge) and RC (Recommendation) values, which are obtained by measurement when an IoT device requests access to a resource. A decision is taken based on the model parameters and computed using Bayesian decision rules. To evaluate our trust model, we perform a statistical analysis and simulate it using OMNeT++ to investigate battery usage. The simulation results show that the Bayesian decision theory approach for trust-based access control guarantees scalability and is energy efficient as the number of devices increases, without affecting functioning or performance.
Keywords: Bayes methods; Internet of Things; authorisation; decision theory; statistical analysis; Bayesian decision rules; EX value; KN value; OMNeT++; RC value; TrustBayes model; battery usage; experience value; identity management; knowledge value; large scale Internet-of-things; recommendation value; statistical analysis; statistical decision point; trust model-based Bayesian decision theory; trust-based access control; uncertainty environment; Batteries; Communication system security; Scalability; Wireless communication; Wireless sensor networks; Access Control; Decision making; Decision theory; Trust Management (ID#: 15-6333)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106964&isnumber=7106892
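The abstract's Bayesian decision rule can be rendered as a short Python sketch, assuming a naive-Bayes-style fusion of the EX, KN and RC values and illustrative loss values; this is not the paper's exact model, only a minimal statement of the decision-theoretic idea.
```python
def posterior_trust(prior, ex, kn, rc):
    """Naive-Bayes style fusion of Experience, Knowledge and Recommendation
    scores (each in (0,1), read as P(evidence | device is trustworthy))."""
    num = prior * ex * kn * rc
    den = num + (1 - prior) * (1 - ex) * (1 - kn) * (1 - rc)
    return num / den

def decide(p_trust, loss_grant_bad=10.0, loss_deny_good=1.0):
    """Bayes decision rule: grant access iff the expected loss of granting
    is lower than the expected loss of denying."""
    exp_loss_grant = (1 - p_trust) * loss_grant_bad
    exp_loss_deny = p_trust * loss_deny_good
    return "grant" if exp_loss_grant < exp_loss_deny else "deny"

p = posterior_trust(prior=0.5, ex=0.8, kn=0.7, rc=0.6)
print(f"P(trustworthy)={p:.3f} -> {decide(p)}")
```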
Ozvural, G.; Kurt, G.K., "Advanced Approaches for Wireless Sensor Network Applications and Cloud Analytics," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, vol., no., pp. 1-5, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106979
Abstract: Although wireless sensor network applications are still at early stages of development in the industry, it is clear that they will become pervasive and that billions of embedded microcomputers will come online for the purpose of remote sensing, actuation and information sharing. According to estimates, there will be 50 billion connected sensors or things by the year 2020. As we develop first-to-market wireless sensor-actuator network devices, we have the chance to identify design parameters, define technical infrastructure and make an effort to meet scalable system requirements. To this end, the required research and development activities must span several research directions, such as massive scaling, creating information and big data, robustness, security, privacy and human-in-the-loop. In this study, wireless sensor networks and Internet of Things concepts are not only investigated theoretically, but the proposed system is also designed and implemented end-to-end. Low-rate wireless personal area network sensor nodes with random network coding capability are used for remote sensing and actuation. A low-throughput embedded IP gateway node is developed utilizing both random network coding on the low-rate wireless personal area network side and the low-overhead WebSocket protocol on the cloud communications side. A service-oriented design pattern is proposed for wireless sensor network cloud data analytics.
Keywords: IP networks; Internet of Things; cloud computing; data analysis; microcomputers; network coding; personal area networks; protocols; random codes; remote sensing; service-oriented architecture; wireless sensor networks; Internet of things concept; actuation; cloud communications side; cloud data analytics; design parameter identification; embedded microcomputer; information sharing; low throughput embedded IP gateway; overhead websocket protocol; random network coding capability; service-oriented design pattern; wireless personal area network sensor node; wireless sensor-actuator network device; IP networks; Logic gates; Network coding; Protocols; Relays; Wireless sensor networks; Zigbee (ID#: 15-6334)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106979&isnumber=7106892
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Conferences: SIGMETRICS ’15 Portland, Oregon |
The ACM International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS) is the flagship conference of the ACM special interest group for the computer systems performance evaluation community. The 2015 conference was held from June 16-18, 2015 in Portland, Oregon. “Spy vs. Spy: Rumor Source Obfuscation,” by Giulia Fanti (UC Berkeley); Peter Kairouz (University of Illinois at Urbana-Champaign); Sewoong Oh (University of Illinois at Urbana-Champaign); Pramod Viswanath (University of Illinois at Urbana-Champaign) was named Best Paper.
The works cited here specifically relate to the Science of Security and were among 63 papers presented. They were recovered on July 8, 2015. The conference web page is available at http://www.sigmetrics.org/sigmetrics2015/.
Giulia Fanti, Peter Kairouz, Sewoong Oh, Pramod Viswanath. “Spy vs. Spy: Rumor Source Obfuscation.” SIGMETRICS '15 Proceedings of the 2015 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, Pages 271-284, June 2015. doi: 10.1145/2745844.2745866
Abstract: Anonymous messaging platforms, such as Secret, Yik Yak and Whisper, have emerged as important social media for sharing one's thoughts without the fear of being judged by friends, family, or the public. Further, such anonymous platforms are crucial in nations with authoritarian governments; the right to free expression and sometimes the personal safety of the author of the message depend on anonymity. Whether for fear of judgment or personal endangerment, it is crucial to keep the identity of the user who initially posted a sensitive message anonymous. In this paper, we consider an adversary who observes a snapshot of the spread of a message at a certain time. Recent advances in rumor source detection show that existing messaging protocols are vulnerable against such an adversary. We introduce a novel messaging protocol, which we call adaptive diffusion, and show that it spreads messages fast and achieves perfect obfuscation of the source when the underlying contact network is an infinite regular tree: all users with the message are nearly equally likely to have been its origin. Experiments on a sampled Facebook network show that it effectively hides the location of the source even when the graph is finite, irregular and has cycles.
Keywords: anonymous social media, privacy, rumor spreading (ID#: 15-6398)
URL: http://doi.acm.org/10.1145/2745844.2745866
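A highly simplified toy of the adaptive-diffusion idea, in Python: a "virtual source" random-walks away from the true source on a d-regular tree while the infection grows as a ball around it, so the true source ends up near the boundary rather than at the center that rumor-centrality estimators look for. The timing of the virtual-source moves and the subtree spreading probabilities of the actual protocol are deliberately omitted here.
```python
import random

def adaptive_diffusion_toy(degree=3, steps=8):
    """Toy sketch of adaptive diffusion on an infinite d-regular tree.
    Nodes are tuples encoding the path from the true source (); the
    virtual source hops one step further from the true source on even
    timesteps, and the infection ball around it grows on odd ones."""
    true_source = ()
    virtual = true_source
    h = 0
    for t in range(steps):
        if t % 2 == 0:
            # pass the virtual source one hop further from the true source
            virtual = virtual + (random.randrange(degree - 1),)
        else:
            h += 1  # let the infection ball grow symmetrically around it
    return virtual, h

v, h = adaptive_diffusion_toy()
print(f"virtual source at depth {len(v)}, infection radius {h}; "
      f"the true source sits near the ball's boundary")
```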
Saleh Soltan, Mihalis Yannakakis, Gil Zussman. “Joint Cyber and Physical Attacks on Power Grids: Graph Theoretical Approaches for Information Recovery.” ACM SIGMETRICS Performance Evaluation Review, Volume 43, Issue 1, Pages 361-374, June 2015. doi: 10.1145/2745844.2745846
Abstract: Recent events demonstrated the vulnerability of power grids to cyber attacks and to physical attacks. Therefore, we focus on joint cyber and physical attacks and develop methods to retrieve the grid state information following such an attack. We consider a model in which an adversary attacks a zone by physically disconnecting some of its power lines and blocking the information flow from the zone to the grid's control center. We use tools from linear algebra and graph theory and leverage the properties of the power flow DC approximation to develop methods for information recovery. Using information observed outside the attacked zone, these methods recover information about the disconnected lines and the phase angles at the buses. We identify sufficient conditions on the zone structure and constraints on the attack characteristics such that these methods can recover the information. We also show that it is NP-hard to find an approximate solution to the problem of partitioning the power grid into the minimum number of attack-resilient zones. However, since power grids can often be represented by planar graphs, we develop a constant approximation partitioning algorithm for these graphs. Finally, we numerically study the relationships between the grid's resilience and its structural properties, and demonstrate the partitioning algorithm on real power grids. The results can provide insights into the design of a secure control network for the smart grid.
Keywords: algorithms, cyber attacks, graph theory, information recovery, physical attacks, power grids (ID#: 15-6399)
URL: http://doi.acm.org/10.1145/2745844.2745846
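The recovery step rests on the DC power-flow equations B·θ = p, where B is the grid's susceptance Laplacian. Under the simplifying assumption that injections at the attacked buses are known, the unreported phase angles inside the zone can be recovered by a linear least-squares solve, as in this numpy sketch on a toy 4-bus cycle (the paper treats the general, partly observed case and disconnected lines).
```python
import numpy as np

# Toy 4-bus ring: B is the DC power-flow Laplacian built from per-unit line
# susceptances, theta the bus phase angles, p the injections, B @ theta = p.
lines = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 1.0}
n = 4
B = np.zeros((n, n))
for (i, j), b in lines.items():
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

theta_true = np.array([0.0, 0.1, 0.25, 0.12])   # ground truth (bus 0 = slack)
p = B @ theta_true

# Attacked zone = buses {2, 3}: their angles are unreported. Recover them
# from the DC equations at the attacked buses, using the observed angles
# outside the zone; lstsq covers the possibly under-determined general case.
zone, outside = [2, 3], [0, 1]
rhs = p[zone] - B[np.ix_(zone, outside)] @ theta_true[outside]
theta_zone, *_ = np.linalg.lstsq(B[np.ix_(zone, zone)], rhs, rcond=None)
print("recovered:", theta_zone, "true:", theta_true[zone])
```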
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Publications of Interest |
The Publications of Interest section contains bibliographical citations, abstracts if available, and links on specific topics and research problems of interest to the Science of Security community.
How recent are these publications?
These bibliographies include recent scholarly research on topics which have been presented or published within the past year. Some represent updates from work presented in previous years; others are new topics.
How are topics selected?
The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness to current researchers.
How can I submit or suggest a publication?
Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.
Submissions and suggestions may be sent to: news@scienceofsecurity.net
(ID#:15-6152)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Attack Graphs and Privacy 2014 |
Security analysts use attack graphs for detection, defense, and forensics. An attack graph is defined as a representation of all paths through a system that end in a state where an intruder has successfully breached the system. Privacy needs add a complicating element to the trace. The research cited here looks at various aspects of attack graphs as they relate to privacy. All were presented in 2014.
Kiremire, A.R.; Brust, M.R.; Phoha, V.V., "Topology-Dependent Performance of Attack Graph Reconstruction in PPM-Based IP Traceback," Consumer Communications and Networking Conference (CCNC), 2014 IEEE 11th, vol., no., pp. 363-370, 10-13 Jan. 2014. doi:10.1109/CCNC.2014.6866596
Abstract: A variety of schemes based on the technique of Probabilistic Packet Marking (PPM) have been proposed to identify Distributed Denial of Service (DDoS) attack traffic sources by IP traceback. These PPM-based schemes provide a way to reconstruct the attack graph - the network path taken by the attack traffic - hence identifying its sources. Despite the large amount of research in this area, the influence of the underlying topology on the performance of PPM-based schemes remains an open issue. In this paper, we identify three network-dependent factors that affect different PPM-based schemes uniquely giving rise to a variation in and discrepancy between scheme performance from one network to another. Using simulation, we also show the collective effect of these factors on the performance of selected schemes in an extensive set of 60 Internet-like networks. We find that scheme performance is dependent on the network on which it is implemented. We show how each of these factors contributes to a discrepancy in scheme performance in large scale networks. This discrepancy is exhibited independent of similarities or differences in the underlying models of the networks.
Keywords: computer network security; graph theory; telecommunication network routing; DDoS attack traffic sources; Internet-like networks; PPM-based IP traceback; PPM-based schemes; attack graph reconstruction; distributed denial of service attack traffic sources; large scale networks; probabilistic packet marking; topology-dependent performance; Computer crime; Convergence; IP networks; Network topology; Privacy; Topology (ID#: 15-5986)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866596&isnumber=6866537
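For readers unfamiliar with PPM, a minimal node-sampling variant in Python shows why reconstruction works: each router on the attack path overwrites the packet's mark field with probability p, so mark frequency observed at the victim decays geometrically with distance and orders the routers on the path. This is the textbook variant, not a specific scheme evaluated in the paper.
```python
import random
from collections import Counter

def ppm_node_sampling(path, p=0.2, n_packets=20000):
    """Node-sampling PPM: each router on the attack path overwrites the
    packet's mark field with probability p. The victim ranks routers by
    mark frequency, which is p*(1-p)**(d-1) at distance d from the victim."""
    marks = Counter()
    for _ in range(n_packets):
        mark = None
        for router in path:            # traversed in attacker -> victim order
            if random.random() < p:
                mark = router
        if mark is not None:
            marks[mark] += 1
    # Reconstruct the path: routers nearest the victim mark most often.
    return [r for r, _ in marks.most_common()]

attack_path = ["R5", "R4", "R3", "R2", "R1"]   # R1 is adjacent to the victim
print(ppm_node_sampling(attack_path))           # expect ['R1', 'R2', ...]
```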
Datta, E.; Goyal, N., "Security Attack Mitigation Framework for the Cloud," Reliability and Maintainability Symposium (RAMS), 2014 Annual, vol., no., pp. 1-6, 27-30 Jan. 2014. doi:10.1109/RAMS.2014.6798457
Abstract: Cloud computing brings many advantages to enterprise IT infrastructure; virtualization technology, the backbone of the cloud, provides easy consolidation of resources and reduction of cost, space, and management effort. However, the security of critical and private data is a major concern that still keeps many customers from switching over from their traditional in-house IT infrastructure to a cloud service. The existence of techniques to physically locate a virtual machine in the cloud, the proliferation of software vulnerability exploits, and cross-channel attacks between virtual machines together increase the risk of business data leaks and privacy losses. This work proposes a framework to mitigate such risks and engineer customer trust towards enterprise cloud computing. Every day, new vulnerabilities are discovered even in well-engineered software products, and hacking techniques grow more sophisticated over time. In this scenario, an absolute guarantee of security in an enterprise-wide information processing system seems a remote possibility; software systems in the cloud are vulnerable to security attacks. A practical solution to the security problem lies in a well-engineered attack mitigation plan. On the positive side, cloud computing has a collective infrastructure that can be effectively used to mitigate attacks if an appropriate defense framework is in place. We propose such an attack mitigation framework for the cloud. Software vulnerabilities in the cloud have different severities and different impacts on the security parameters (confidentiality, integrity, and availability). Using a Markov model, we continuously monitor and quantify the risk of compromise in different security parameters (e.g., a change in the potential to compromise data confidentiality). Whenever there is a significant change in risk, our framework facilitates the tenants in calculating the Mean Time to Security Failure (MTTSF) of the cloud and allows them to adopt a dynamic mitigation plan. This framework is an add-on security layer in the cloud resource manager and could improve customer trust in enterprise cloud solutions.
Keywords: Markov processes; cloud computing; security of data; virtualisation; MTTSF cloud; Markov model; attack mitigation plan; availability parameter; business data leaks; cloud resource manager; cloud service; confidentiality parameter; cross-channel attacks; customer trust; enterprise IT infrastructure; enterprise cloud computing; enterprise cloud solutions; enterprise wide information processing system; hacking techniques; information technology; integrity parameter; mean time to security failure; privacy losses; private data security; resource consolidation; security attack mitigation framework; security guarantee; software products; software vulnerabilities; software vulnerability exploits; virtual machine; virtualization technology; Cloud computing; Companies; Security; Silicon; Virtual machining; Attack Graphs; Cloud computing; Markov Chain; Security; Security Administration (ID#: 15-5987)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798457&isnumber=6798433
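The MTTSF computation that such a framework relies on can be illustrated with an absorbing Markov chain: with Q the transition matrix among transient (not-yet-failed) states, the fundamental matrix N = (I - Q)^(-1) gives expected visit counts, and its row sums give the mean time to absorption. The two-state model and probabilities below are hypothetical, not from the paper.
```python
import numpy as np

# Hypothetical 3-state attack model: 0 = healthy, 1 = partially compromised,
# 2 = security failure (absorbing). Q holds per-monitoring-interval
# transition probabilities among the two transient states; the remaining
# probability mass in each row flows to the failure state.
Q = np.array([[0.90, 0.08],    # healthy  -> {healthy, partial}
              [0.05, 0.80]])   # partial  -> {healthy, partial}

# Fundamental matrix N = (I - Q)^(-1); row sums give the expected number of
# intervals spent in transient states before absorption, i.e. the MTTSF.
N = np.linalg.inv(np.eye(2) - Q)
mttsf = N.sum(axis=1)
print(f"MTTSF starting from the healthy state: {mttsf[0]:.1f} intervals")
```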
Sarkar, A.; Kohler, S.; Riddle, S.; Ludaescher, B.; Bishop, M., "Insider Attack Identification and Prevention Using a Declarative Approach," Security and Privacy Workshops (SPW), 2014 IEEE, vol., no., pp. 265-276, 17-18 May 2014. doi:10.1109/SPW.2014.41
Abstract: A process is a collection of steps, carried out using data, by either human or automated agents, to achieve a specific goal. The agents in our process are insiders; they have access to different data and annotations on data moving between the process steps. At various points in a process, they can carry out attacks on the privacy and security of the process through their interactions with different data and annotations, via the steps they control. These attacks are sometimes difficult to identify, as the rogue steps are hidden among the majority of usual, non-malicious steps of the process. We define process models and attack models as data-flow-based directed graphs. An attack A is successful on a process P if there is a mapping relation from A to P that satisfies a number of conditions. These conditions encode the idea that an attack model needs a corresponding similarity match in the process model to be successful. We propose a declarative approach to vulnerability analysis. We encode the match conditions using a set of logic rules that define what a valid attack is. We then implement an approach to generate all possible ways in which agents can carry out a valid attack A on a process P, thus informing the process modeler of vulnerabilities in P. The agents, in addition to acting by themselves, can also collude to carry out an attack. Once A is found to be successful against P, we automatically identify improvement opportunities in P and exploit them, eliminating ways in which A can be carried out against it. The identification uses information about which steps in P are most heavily attacked, and tries to find improvement opportunities in them first, before moving on to the less attacked ones. We then evaluate the improved P to check if our improvement is successful. This cycle of process improvement and evaluation iterates until A is completely thwarted in all possible ways.
Keywords: computer crime; cryptography; data flow graphs; data privacy; directed graphs; logic programming; attack model; data flow based directed graphs; declarative approach; improvement opportunities; insider attack identification; insider attack prevention; logic rules; mapping relation; nonmalicious steps; privacy; process models; security; similarity match; vulnerability analysis; Data models; Diamonds; Impedance matching; Nominations and elections; Process control; Robustness; Security; Declarative Programming; Process Modeling; Vulnerability Analysis (ID#: 15-5988)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957311&isnumber=6957265
Peipei Yi; Zhe Fan; Shuxiang Yin, "Privacy-Preserving Reachability Query Services for Sparse Graphs," Data Engineering Workshops (ICDEW), 2014 IEEE 30th International Conference on, vol., no., pp. 32-35, March 31-April 4, 2014. doi:10.1109/ICDEW.2014.6818298
Abstract: This paper studies privacy-preserving query services for reachability queries under the paradigm of data outsourcing. Specifically, graph data have been outsourced to a third-party service provider (SP), query clients submit their queries to the SP, and the SP returns the query answers. However, the SP may not always be trustworthy. Therefore, this paper considers protecting the structural information of the graph data and the query answers from the SP. This paper proposes a simple yet optimized privacy-preserving 2-hop labeling. In particular, the scheme ensures that the encrypted intermediate results of encrypted query evaluation are indistinguishable. The proposed technique is secure under chosen plaintext attack. We perform an experimental study of the effectiveness of the proposed techniques on both real-world and synthetic datasets.
Keywords: cryptography; data privacy; graph theory; query processing; data outsourcing; optimized privacy-preserving 2-hop labeling; plaintext attack; privacy-preserving reachability query services; query clients; sparse graphs; structural information; third-party service provider; Bipartite graph; Communication networks; Cryptography; Educational institutions; Labeling; Privacy; Query processing (ID#: 15-5989)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6818298&isnumber=6816871
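A 2-hop labeling answers reachability by label intersection: u reaches v iff Lout(u) ∩ Lin(v) is non-empty. The paper's contribution is performing this test over encrypted labels; the plaintext mechanics look like the following Python sketch, with toy labels for a small graph whose paths run through hop node 'h'.
```python
# Plaintext 2-hop labeling (the paper encrypts these labels before
# outsourcing): u reaches v iff Lout(u) and Lin(v) share a hop node.
# Toy labels for the graph a -> h -> c, b -> h -> c.
Lout = {"a": {"a", "h"}, "b": {"b", "h"}, "h": {"h"}, "c": {"c"}}
Lin  = {"a": {"a"}, "b": {"b"}, "h": {"h"}, "c": {"c", "h"}}

def reaches(u, v):
    """Reachability test by intersecting the out-label of u with the in-label of v."""
    return bool(Lout[u] & Lin[v])

print(reaches("a", "c"))   # True: both labels contain hop 'h'
print(reaches("c", "a"))   # False: no shared hop node
```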
Young, A.L.; Yung, M., "The Drunk Motorcyclist Protocol for Anonymous Communication," Communications and Network Security (CNS), 2014 IEEE Conference on, vol., no., pp. 157-165, 29-31 Oct. 2014. doi:10.1109/CNS.2014.6997482
Abstract: The Buses protocol is designed to provide provably anonymous communication on a connected graph. Figuratively speaking, a bus is a single unit of transport containing multiple seats. Each seat carries a ciphertext from a sender to a receiver. The Buses approach aims to conceal traffic patterns by having buses constantly travel along fixed routes, and is a step forward in concealing traffic compared to other anonymous communication protocols. Therefore, in a time when Internet privacy is crucial, it deserves further investigation. Here, we cryptanalyze the reduced-seat Buses protocol, and we also present distinguishing attacks against the related Taxis protocol as well as P5. These attacks highlight the need to employ cryptosystems with key-privacy in such protocols. We then show that anonymity is not formally proven in the Buses protocols. These findings motivate the need for a new provably secure connectionless anonymous messaging protocol. We present what we call the drunk motorcyclist (DM) protocol for anonymous messaging, which overcomes these issues. We define the DM protocol, show a construction for it, and then prove that anonymity and confidentiality hold under Decision Diffie-Hellman (DDH) against global active adversaries. Our protocol demonstrates the new principle of flooding a complete graph or an expander graph with randomly walking ciphertexts that travel until their time-to-live values expire. This principle also exhibits fault-tolerance properties.
Keywords: Internet; computer network security; cryptographic protocols; electronic messaging; motorcycles; telecommunication traffic; DDH; DM protocol; Decision Diffie-Hellman; Internet privacy; Taxis protocol; anonymous communication protocol; ciphertext; complete graph; cryptosystem; drunk motorcyclist protocol; expander graph; fault tolerance properties; key privacy; provably secure connectionless anonymous messaging protocol; reduced-seat bus protocol cryptanalyzation; time-to-live values; traffic concealment pattern; Encryption; Generators; Protocols; Public key; Receivers (ID#:15-5990)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6997482&isnumber=6997445
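The core mechanism, ciphertexts performing random walks until their time-to-live expires, is easy to simulate; the real protocol adds key-private encryption so that only the intended receiver can recognize its messages. A toy Python simulation on a complete graph (all names and parameters below are illustrative):
```python
import random

def drunk_walk(graph, src, dst, ttl=64):
    """Toy version of the DM idea: a ciphertext random-walks the graph
    until its TTL expires; here we merely record whether the walk ever
    visits the receiver, standing in for a successful decryption."""
    node, delivered = src, False
    for _ in range(ttl):
        node = random.choice(graph[node])
        delivered |= (node == dst)     # receiver tries to decrypt each visit
    return delivered

# Small complete graph on 6 nodes as a stand-in for an expander.
K6 = {i: [j for j in range(6) if j != i] for i in range(6)}
hits = sum(drunk_walk(K6, 0, 3) for _ in range(1000))
print(f"delivery rate over 1000 walks: {hits/1000:.2%}")
```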
Xin Hu; Ting Wang; Stoecklin, M.P.; Schales, D.L.; Jiyong Jang; Sailer, R., "Asset Risk Scoring in Enterprise Network with Mutually Reinforced Reputation Propagation," Security and Privacy Workshops (SPW), 2014 IEEE, vol., no., pp. 61-64, 17-18 May 2014. doi:10.1109/SPW.2014.18
Abstract: Cyber security attacks are becoming ever more frequent and sophisticated. Enterprises often deploy several security protection mechanisms, such as anti-virus software, intrusion detection prevention systems, and firewalls, to protect their critical assets against emerging threats. Unfortunately, these protection systems are typically "noisy", e.g., regularly generating thousands of alerts every day. Plagued by false positives and irrelevant events, it is often neither practical nor cost-effective to analyze and respond to every single alert. The main challenge faced by enterprises is to extract important information from the plethora of alerts and to infer potential risks to their critical assets. A better understanding of risks will facilitate effective resource allocation and prioritization of further investigation. In this paper, we present MUSE, a system that analyzes a large number of alerts and derives risk scores by correlating diverse entities in an enterprise network. Instead of considering a risk as an isolated and static property, MUSE models the dynamics of a risk based on the mutual reinforcement principle. We evaluate MUSE with real-world network traces and alerts from a large enterprise network, and demonstrate its efficacy in risk assessment and flexibility in incorporating a wide variety of data sets.
Keywords: business data processing; firewalls; invasive software; risk analysis; MUSE; antivirus software; asset risk scoring; cyber security attacks; enterprise network; firewalls; intrusion detection-prevention systems; mutually reinforced reputation propagation; risk assessment; security protection mechanisms; Belief propagation; Bipartite graph; Data mining; Intrusion detection; Malware; Servers; Risk Scoring; mutually reinforced principles; reputation propagation (ID#: 15-5991)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957286&isnumber=6957265
Shan Chang; Hongzi Zhu; Mianxiong Dong; Ota, K.; Xiaoqiang Liu; Guangtao Xue; Xuemin Shen, "BusCast: Flexible and Privacy Preserving Message Delivery Using Urban Buses," Parallel and Distributed Systems (ICPADS), 2014 20th IEEE International Conference on, vol., no., pp. 502-509, 16-19 Dec. 2014. doi:10.1109/PADSW.2014.7097847
Abstract: With the popularity of intelligent mobile devices, enormous amounts of urban information have been generated and are required by the public. In response, ShanghaiGrid (SG) aims to provide abundant information services to the public. With fixed schedules and urban-wide coverage, an appealing service in SG is free message delivery using buses, which allows mobile device users to send messages to locations of interest via buses. The main challenge in realizing this service is to provide an efficient routing scheme with privacy preservation under highly dynamic urban traffic conditions. In this paper, we present an innovative scheme, BusCast, to tackle this problem. In BusCast, buses can pick up personal messages and forward them to their destination locations in a store-carry-forward fashion. For each message, BusCast conservatively associates a routing graph, rather than a fixed routing path, with the message in order to adapt to the dynamics of urban traffic. Meanwhile, the private information about the user and the message destination is concealed from both intermediate relay buses and outside adversaries. Both rigorous privacy analysis and extensive trace-driven simulations demonstrate the efficacy of the BusCast scheme.
Keywords: data privacy; traffic engineering computing; transportation; BusCast scheme; ShanghaiGrid; information service; intelligent mobile device; message destination; privacy analysis; privacy information; privacy preserving message delivery; routing graph; routing scheme; trace-driven simulation; urban bus; urban traffic condition; Bismuth; Delays; Relays; Routing; anonymous communication; backward unlinkability; message delivery; traffic analysis attacks; vehicular networks (ID#: 15-5992)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7097847&isnumber=7097773
Maag, M.L.; Denoyer, L.; Gallinari, P., "Graph Anonymization Using Machine Learning," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, vol., no., pp. 1111-1118, 13-16 May 2014. doi:10.1109/AINA.2014.20
Abstract: Data privacy is a major problem that has to be considered before releasing datasets to the public or even to a partner company that would compute statistics or perform a deep analysis of these data. This is ensured by performing data anonymization as required by legislation. In this context, many different anonymization techniques have been proposed in the literature. These methods are usually specific to a particular de-anonymization procedure, or attack, that one wants to avoid, and to a particular known set of characteristics that have to be preserved after the anonymization. They are difficult to use in a general context where attacks can be of different types and where the measures to preserve are not known to the anonymizer. The paper proposes a novel approach for automatically finding an anonymization procedure given a set of possible attacks and a set of measures to preserve. The approach is generic and based on machine learning techniques. It allows us to learn an anonymization function directly from a set of training data, so as to optimize a trade-off between privacy risk and utility loss. The algorithm thus allows one to obtain a good anonymization procedure for any kind of attack and any characteristic in a given set. Experiments on two datasets show the effectiveness and genericity of the approach.
Keywords: data privacy; graph theory; learning (artificial intelligence); risk management; data anonymization; de-anonymization procedure; graph anonymization; machine learning; privacy risk; training data; utility loss; Context; Data privacy; Loss measurement; Machine learning algorithms; Noise; Privacy; Social network services; Graph Anonymization; Machine Learning; Privacy (ID#:15-5993)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838788&isnumber=6838626
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Big Data Security Metrics 2014 |
Measurement is a hard problem in the Science of Security. When applied to Big Data, the problems of measurement in security systems are compounded. The works cited here address these problems and were presented in 2014.
Kotenko, I.; Novikova, E., "Visualization of Security Metrics for Cyber Situation Awareness," Availability, Reliability and Security (ARES), 2014 Ninth International Conference on, vol., no., pp. 506-513, 8-12 Sept. 2014. doi:10.1109/ARES.2014.75
Abstract: One of the important directions of research in situational awareness is the implementation of visual analytics techniques, which can be efficiently applied when working with big security data in critical operational domains. The paper considers a visual analytics technique for displaying a set of security metrics used to assess overall network security status and to evaluate the efficiency of protection mechanisms. The technique can assist in solving security tasks that are important for security information and event management (SIEM) systems. The suggested approach is suitable for displaying security metrics of large networks and supports historical analysis of the data. To demonstrate and evaluate the usefulness of the proposed technique, we implemented a use case corresponding to the Olympic Games scenario.
Keywords: Big Data; computer network security; data analysis; data visualisation; Olympic Games scenario; SIEM systems; big data security; cyber situation awareness; network security status; security information and event management systems; security metric visualization; visual analytics technique; Abstracts; Availability; Layout; Measurement; Security; Visualization; cyber situation awareness; high level metrics visualization; network security level assessment; security information visualization (ID#: 15-5776)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980325&isnumber=6980232
Vaarandi, R.; Pihelgas, M., "Using Security Logs for Collecting and Reporting Technical Security Metrics," Military Communications Conference (MILCOM), 2014 IEEE, vol., no., pp. 294-299, 6-8 Oct. 2014. doi:10.1109/MILCOM.2014.53
Abstract: During recent years, establishing proper metrics for measuring system security has received increasing attention. Security logs contain vast amounts of information which are essential for creating many security metrics. Unfortunately, security logs are known to be very large, making their analysis a difficult task. Furthermore, recent security metrics research has focused on generic concepts, and the issue of collecting security metrics with log analysis methods has not been well studied. In this paper, we will first focus on using log analysis techniques for collecting technical security metrics from security logs of common types (e.g., Network IDS alarm logs, workstation logs, and Net flow data sets). We will also describe a production framework for collecting and reporting technical security metrics which is based on novel open-source technologies for big data.
Keywords: Big Data; computer network security; big data; log analysis methods; log analysis techniques; open source technology; security logs; technical security metric collection; technical security metric reporting; Correlation; Internet; Measurement; Monitoring; Peer-to-peer computing; Security; Workstations; security log analysis; security metrics (ID#: 15-5777)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956774&isnumber=6956719
Jiang, F.; Luo, D., "A New Coupled Metric Learning for Real-time Anomalies Detection with High-Frequency Field Programmable Gate Arrays," Data Mining Workshop (ICDMW), 2014 IEEE International Conference on, vol., no., pp. 1254-1261, 14 Dec. 2014. doi:10.1109/ICDMW.2014.203
Abstract: Billions of internet end-users and device-to-device connections have contributed to significant data growth in recent years; large-scale, unstructured, heterogeneous data and the corresponding complexity present challenges to the security of conventional real-time online fraud detection systems. With the advent of the big data era, data analytic techniques are expected to be much faster and more efficient than ever before. Moreover, one of the challenges with many modern algorithms is that they run too slowly in software to have any practical value. This paper proposes a Field Programmable Gate Array (FPGA)-based intrusion detection system (IDS), driven by a new coupled metric learning that discovers the inter- and intra-coupling relationships in growing data volumes and item relationships, providing a new approach to efficient anomaly detection. This work is evaluated on our previously published NetFlow-based IDS dataset, which is further processed into categorical data for the coupled metric learning. The overall performance of the new hardware system is compared against conventional Bayesian and Support Vector Machine classifiers. The experimental results show very promising performance for the coupled metric learning scheme in the FPGA implementation. The false alarm rate is reduced to 5% while a high detection rate (99.9%) is maintained.
Keywords: Internet; data analysis; field programmable gate arrays; security of data; support vector machines; Bayesian classifier; FPGA-based intrusion detection system; Internet end-users; NetFlow-based IDS dataset; data analytic techniques; device to device connections; false alarm rate; high-frequency field programmable gate arrays; metric learning; real-time anomalies detection; real-time online fraud detection system security; support vector machines classifier; Field programmable gate arrays; Intrusion detection; Measurement; Neural networks; Real-time systems; Software; Vectors; Metric Learning; Field Programmable Gate Arrays; Netflow; Intrusion Detection Systems (ID#: 15-5778)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7022747&isnumber=7022545
Okuno, S.; Asai, H.; Yamana, H., "A Challenge of Authorship Identification for Ten-Thousand-Scale Microblog Users," Big Data (Big Data), 2014 IEEE International Conference on, vol., no., pp. 52-54, 27-30 Oct. 2014. doi:10.1109/BigData.2014.7004491
Abstract: Internet security issues require authorship identification for all kinds of internet content; however, authorship identification for microblog users is much harder than for other documents because microblog texts are too short. Moreover, when the number of candidates becomes large, i.e., big data, identification takes a long time. Our proposed method solves these problems. The experimental results show that our method successfully identifies authorship with 53.2% precision out of 10,000 microblog users, in almost half the execution time of the previous method.
Keywords: Big Data; security of data; social networking (online); Internet security issues; authorship identification; big data; microblog texts; ten-thousand-scale microblog users; Big data; Blogs; Computers; Distance measurement; Internet; Security; Training; Twitter; authorship attribution; authorship detection; authorship identification; microblog (ID#: 15-5779)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004491&isnumber=7004197
Yu Liu; Jianwei Niu; Lianjun Yang; Lei Shu, "eBPlatform: An IoT-based System for NCD Patients Homecare in China," Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 2448-2453, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7037175
Abstract: The number of non-communicable disease (NCD) patients in China is growing rapidly, far beyond the capacity of the national health and social security system. Community health stations do not have enough doctors to take care of their patients in traditional ways. In order to establish a bridge between doctors and patients, we propose eBPlatform, an information system based on Internet of Things (IoT) technology for homecare of NCD patients. The eBox is a sensor device that can be deployed in the patient's home for blood pressure measurement, blood sugar measurement, and ECG signal collection. Services running on a remote server receive the samples and filter and analyze the ECG signals. The uploaded data are pushed to a web portal, through which doctors provide treatment online. The system requirements and the design and implementation of hardware and software are discussed. Finally, we investigate a case study with 50 NCD patients over half a year in Beijing. The results show that eBPlatform can increase the efficiency of doctors and make significant progress toward eliminating the numerical imbalance between community medical practitioners and NCD patients.
Keywords: Internet of Things; blood pressure measurement; diseases; electrocardiography; filtering theory; health care; medical information systems; medical signal processing; portals; signal sampling; Beijing; China; ECG signal analysis; ECG signal collection; ECG signal filtering; IoT-based system; NCD patient homecare; Web portal; blood pressure measurement; blood sugar measurement; community health stations; community medical practitioners; data upload; eBPlatform; eBox; hardware design; hardware implementation; information system; national health; noncommunicable disease patients; numerical imbalance elimination; online treatment; patient care; patient home; remote server; social security system; software design; software implementation; system requirements; Biomedical monitoring; Biosensors; Blood pressure; Electrocardiography; Medical services; Pressure measurement; Servers; IoT application; eHealth; patients homecare; sensor network (ID#: 15-5780)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7037175&isnumber=7036769
Gao Hui; Niu Haibo; Luo Wei, "Internet Information Source Discovery Based on Multi-Seeds Cocitation," Security, Pattern Analysis, and Cybernetics (SPAC), 2014 International Conference on, vol., no., pp. 368-371, 18-19 Oct. 2014. doi:10.1109/SPAC.2014.6982717
Abstract: The technology of Internet information source discovery on a specific topic is the groundwork of information acquisition in the current big data era. This paper presents a multi-seeds cocitation algorithm to find new Internet information sources. The proposed algorithm is based on cocitation but, unlike traditional algorithms, uses multiple websites on a specific topic as input seeds. We then introduce the Combined Cocitation Degree (CCD) to measure the relevancy of newly found websites: websites with a higher combined cocitation degree are more topic-related. Finally, the collection of websites with the highest CCD is taken as the set of new Internet information sources on the specific topic. The experiments show that the proposed method outperforms traditional algorithms in the scenarios we tested.
Keywords: Big Data; Internet; Web sites; citation analysis; data mining; CCD; Internet information source discovery; Web sites; combined cocitation degree; information acquisition; multiseeds cocitation; relevancy measurement; Algorithm design and analysis; Big data; Charge coupled devices; Google; Internet; Noise; Web pages; big data; cocitation; information source; related website (ID#: 15-5781)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6982717&isnumber=6982642
Si-Yuan Jing; Jin Yang; Kun She, "A Parallel Method for Rough Entropy Computation Using MapReduce," Computational Intelligence and Security (CIS), 2014 Tenth International Conference on, vol., no., pp. 707-710, 15-16 Nov. 2014. doi:10.1109/CIS.2014.41
Abstract: Rough set theory has been proven to be a successful computational intelligence tool. Rough entropy is a basic concept in rough set theory, usually used to measure the roughness of an information set. Existing algorithms can only deal with small data sets. Therefore, this paper proposes a method for parallel computation of entropy using MapReduce, which is widely used in big data mining. Moreover, a corresponding algorithm is put forward to handle big data sets. Experimental results show that the proposed parallel method is effective.
Keywords: Big Data; data mining; entropy; mathematics computing; parallel programming; rough set theory; MapReduce; big data mining; big data set handling; computational intelligence tool; information set roughness measurement; parallel computation method; rough entropy computation; rough set theory; Big data; Clustering algorithms; Computers; Data mining; Entropy; Information entropy; Set theory; Data Mining; Entropy; Hadoop; MapReduce; Rough set theory (ID#: 15-5782)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7016989&isnumber=7016831
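MapReduce aside, computing the entropy of a partition into equivalence classes reduces to a word-count-style job: map each record to its equivalence-class key, reduce by summing counts per key, then fold the counts into the entropy sum. A single-machine Python emulation follows; note the paper's exact rough-entropy definition may differ from the Shannon-style partition measure used here.
```python
import math
from functools import reduce

# Records described by condition attributes; equivalence classes are the
# groups of identical attribute tuples (the indiscernibility relation).
U = [("hot", "high"), ("hot", "high"), ("mild", "low"),
     ("cool", "low"), ("cool", "low"), ("mild", "low")]

# "Map" phase: emit (equivalence-class key, 1) for every record.
mapped = [(rec, 1) for rec in U]

# "Reduce" phase: sum the counts per key, as a MapReduce reducer would.
def reducer(acc, kv):
    key, count = kv
    acc[key] = acc.get(key, 0) + count
    return acc

counts = reduce(reducer, mapped, {})

# Shannon-style entropy of the partition, one common roughness measure.
n = len(U)
H = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(f"classes={counts}  entropy={H:.3f} bits")
```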
Agrawal, R.; Imran, A.; Seay, C.; Walker, J., "A Layer Based Architecture for Provenance in Big Data," Big Data (Big Data), 2014 IEEE International Conference on, vol., no., pp. 1-7, 27-30 Oct. 2014. doi:10.1109/BigData.2014.7004468
Abstract: Big data is a new technology wave that makes the world awash in data. Various organizations accumulate data that are difficult to exploit. Government databases, social media, and healthcare databases are examples of big data. Big data covers absorbing and analyzing huge amounts of data that may have originated or been processed outside of the organization. Data provenance can be defined as the origin and processing history of data. It carries significant information about a system and can be useful for debugging, auditing, and measuring performance and trust in data. Data provenance in big data is a relatively unexplored topic. It is necessary to appropriately track the creation and collection process of data to provide context and reproducibility. In this paper, we propose an intuitive layer-based architecture for data provenance and visualization. In addition, we show a complete workflow for tracking the provenance information of big data.
Keywords: Big Data; data visualisation; software architecture; auditing; data analysis; data origin; data processing; data provenance; data trust; data visualization; debugging; government databases; healthcare databases; layer based architecture; performance measurement; social media; system information; Big data; Computer architecture; Data models; Data visualization; Databases; Educational institutions; Security; Big data; Provenance; Query; Visualization (ID#: 15-5783)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004468&isnumber=7004197
Kiss, I.; Genge, B.; Haller, P.; Sebestyen, G., "Data Clustering-Based Anomaly Detection in Industrial Control Systems," Intelligent Computer Communication and Processing (ICCP), 2014 IEEE International Conference on, vol., no., pp. 275-281, 4-6 Sept. 2014. doi:10.1109/ICCP.2014.6937009
Abstract: Modern Networked Critical Infrastructures (NCI), involving cyber and physical systems, are exposed to intelligent cyber attacks targeting the stable operation of these systems. In order to ensure anomaly awareness, the observed data can be used in accordance with data mining techniques to develop Intrusion Detection Systems (IDS) or Anomaly Detection Systems (ADS). There is an increase in the volume of sensor data generated by both cyber and physical sensors, so there is a need to apply Big Data technologies for real-time analysis of large data sets. In this paper, we propose a clustering based approach for detecting cyber attacks that cause anomalies in NCI. Various clustering techniques are explored to choose the most suitable for clustering the time-series data features, thus classifying the states and potential cyber attacks to the physical system. The Hadoop implementation of MapReduce paradigm is used to provide a suitable processing environment for large datasets. A case study on a NCI consisting of multiple gas compressor stations is presented.
Keywords: Big Data; control engineering computing; critical infrastructures; data mining; industrial control; pattern clustering; real-time systems; security of data; ADS; Big Data technology; Hadoop implementation; IDS; MapReduce paradigm; NCI; anomaly awareness; anomaly detection systems; clustering techniques; cyber and physical systems; cyber attack detection; cyber sensor; data clustering-based anomaly detection; data mining techniques; industrial control systems; intelligent cyber attacks; intrusion detection systems; large data sets; modern networked critical infrastructures; multiple gas compressor stations; physical sensor; real-time analysis; sensor data; time-series data feature; Big data; Clustering algorithms; Data mining; Density measurement; Security; Temperature measurement; Vectors; anomaly detection; big data; clustering; cyber-physical security; intrusion detection (ID#: 15-5784)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6937009&isnumber=6936959
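The clustering-based detection idea above lends itself to a compact illustration. Below is a minimal Python sketch (not the authors' Hadoop/MapReduce pipeline): feature vectors from normal operation are clustered with k-means, and test windows far from every learned centroid are flagged. The feature matrix, cluster count, and 99th-percentile threshold are all illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical time-series feature matrix: one row per observation window,
# columns are sensor-derived features (e.g., pressure, flow, temperature stats).
rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))      # training data (normal operation)
test = np.vstack([rng.normal(0, 1, (95, 4)),
                  rng.normal(6, 1, (5, 4))])                 # test data with 5 injected anomalies

# Cluster the normal operating states; each centroid approximates one process state.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal)

# Score = distance to the nearest learned cluster centroid.
train_scores = km.transform(normal).min(axis=1)
threshold = np.percentile(train_scores, 99)                  # tolerate ~1% false alarms

test_scores = km.transform(test).min(axis=1)
anomalies = np.where(test_scores > threshold)[0]
print(f"flagged {len(anomalies)} windows as anomalous: {anomalies}")

In a production setting the scoring step would be distributed over the cluster (as the paper does with MapReduce), but the per-window logic is the same.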
Singhal, Rekha; Nambiar, Manoj; Sukhwani, Harish; Trivedi, Kishor, "Performability Comparison of Lustre and HDFS for MR Applications," Software Reliability Engineering Workshops (ISSREW), 2014 IEEE International Symposium on, vol., no., pp. 51, 51, 3-6 Nov. 2014. doi:10.1109/ISSREW.2014.115
Abstract: With its simple principles to achieve parallelism and fault tolerance, the Map-reduce framework has captured wide attention, from traditional high performance computing to marketing organizations. The most popular open source implementation of this framework is Hadoop. Today, the Hadoop stack comprises various software components, including the Hadoop Distributed File System (HDFS), a distributed storage layer, amongst others such as GPFS and WASB. Traditional high performance computing has always been at the forefront of developing and deploying cutting-edge technology and solutions, such as Lustre, a parallel I/O file system, to meet its ever-growing needs. To support new and upcoming use cases, there is a focus on tighter integration of Hadoop with existing HPC stacks. In this paper, we share our work on one such integration by analyzing an FSI workload built using the map-reduce framework and evaluating the performance and reliability of the application on an integrated stack with Hadoop and Lustre, through Hadoop extensions such as the Hadoop Adapter for Lustre (HAL) and the HPC Adapter for MapReduce (HAM) developed by Intel, while comparing the performance against the Hadoop Distributed File System (HDFS). We also carried out a performability analysis of both systems, where HDFS ensures reliability using a replication factor, while Lustre does not replicate any data but ensures reliability by having multiple OSSs connecting to multiple OSTs. The environment used for this evaluation is a 16-node HDDP cluster hosted in the Intel Big Data Lab in Swindon (UK). The cluster was divided into two clusters: one 8-node cluster was set up with CDH 5.0.2 and HDFS, and another 8-node cluster was set up with CDH 5.0.2 connected to Lustre through Intel HAL. We used Intel Enterprise Edition for Lustre 2.0 for the experiment, based on Lustre 2.5. The Lustre setup includes 1 Meta Data Server (MDS) with 1 Meta Data Target (MDT) and 1 Management Target (MGT), and 4 Object Storage Servers (OSSs) with 6 Object Storage Targets (OSTs). Both systems were evaluated on the performance metric 'average query response time' for the FSI workload. The data is generated based on the FSI application schema, while MR jobs are written for a few functionalities/queries of the FSI application, which are used for the evaluation exercise. Apart from single query execution, both systems were evaluated for concurrent workloads as well. Tests were run for application data volumes varying from 100 GB to 7 TB. From our experiments, with appropriate tuning of the Lustre file system, we observe that MR applications on the Lustre platform perform at least twice as well as on HDFS. We conducted a performability analysis of both systems using a Markov Reward Model. We propose linear extrapolation for estimating the average query execution time for states in which some nodes have failed, and calculated the performability with the reward for working states set to the average query execution time. We assume that the times to failure, failure detection, and repair of both compute nodes and data nodes are exponentially distributed, and took reasonable parameter values for the same. From our analysis, the expected query execution time for MR applications on the Lustre platform is at least half that of the applications on the HDFS platform.
Keywords: Artificial neural networks; Disk drives; File systems; Measurement; Random access memory; Security; Switches; HDFS; LUSTRE; MR applications; Performability; Performance; Query Execution Time (ID#: 15-5785)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6983800&isnumber=6983760
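To make the performability calculation concrete, here is a small Python sketch of a Markov Reward Model in the spirit described above: a birth-death chain over the number of failed data nodes, exponential failure/repair rates, and a per-state reward equal to the average query execution time. All numeric parameters (rates, rewards, truncation level) are illustrative assumptions, not values from the paper.

import numpy as np

# Hypothetical parameters (the paper's actual rates are not given here):
N   = 8        # data nodes in the cluster
lam = 1e-4     # per-node failure rate (1/hour)
mu  = 0.5      # repair rate (1/hour)
max_failed = 2 # truncate the chain: >2 simultaneous failures treated as negligible

# Reward per state = average query execution time (seconds); the paper proposes
# linear extrapolation for degraded states, mimicked here with a constant slope.
t0, slope = 120.0, 30.0
reward = np.array([t0 + slope * i for i in range(max_failed + 1)])

# Build the CTMC generator Q for a birth-death chain on the number of failed nodes.
n = max_failed + 1
Q = np.zeros((n, n))
for i in range(n):
    if i < max_failed:
        Q[i, i + 1] = (N - i) * lam   # another node fails
    if i > 0:
        Q[i, i - 1] = mu              # one node is repaired
    Q[i, i] = -Q[i].sum()

# Steady-state distribution: solve pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

performability = float(pi @ reward)   # expected query execution time
print(f"state probabilities: {pi.round(6)}")
print(f"expected query time: {performability:.2f} s")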
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Coding Theory and Security, 2014 Part 1 |
Coding theory is one of the essential pieces of information theory. More importantly, coding theory is a core element in cryptography. The research work cited here looks at signal processing, crowdsourcing, matroid theory, WOM codes, and NP-hard problems. These works were presented or published in 2014.
Matsumoto, R., "Coding Theoretic Study of Secure Network Coding and Quantum Secret Sharing," Information Theory and its Applications (ISITA), 2014 International Symposium on, vol., no., pp. 335, 337, 26-29 Oct. 2014. doi: (not provided)
Abstract: The common goal of (classical) secure network coding and quantum secret sharing is to encode secret so that an adversary has as little information of the secret as possible. Both can be described by a nested pair of classical linear codes, while the strategies available to the adversary are different. The security properties of both schemes are closely related to combinatorial properties of the underlying linear codes. We survey connections among them.
Keywords: linear codes; network coding; quantum cryptography; telecommunication security; coding theoretic study; combinatorial properties; linear codes; quantum secret sharing; secure network coding; security properties; Australia; Cryptography; Hamming weight; Linear codes; Network coding; Quantum mechanics (ID#: 15-4842)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979860&isnumber=6979787
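The nested-code construction mentioned in this survey can be illustrated with the smallest possible coset-coding example: the secret bit selects a coset of the even-weight code C2 inside C1 = F_2^n, and the transmitted word is a uniformly random member of that coset. The Python sketch below is a toy instance of this idea, not the paper's construction.

import secrets

def coset_encode(m: int, n: int) -> list:
    """Encode one secret bit m as a random word of F_2^n whose parity equals m.
    The secret selects a coset of the even-weight code C2 inside C1 = F_2^n."""
    r = [secrets.randbits(1) for _ in range(n - 1)]          # uniform randomness
    last = m ^ (sum(r) % 2)                                  # force overall parity = m
    return r + [last]

def coset_decode(x: list) -> int:
    """The legitimate receiver, seeing all n symbols, recovers m as the parity."""
    return sum(x) % 2

x = coset_encode(m=1, n=4)
assert coset_decode(x) == 1
# An eavesdropper observing any n-1 of the n symbols sees a uniform
# distribution regardless of m, so the leaked information is zero.
print(x)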
Hibshi, H.; Breaux, T.; Riaz, M.; Williams, L., "Towards a Framework to Measure Security Expertise in Requirements Analysis," Evolving Security and Privacy Requirements Engineering (ESPRE), 2014 IEEE 1st Workshop on, vol., no., pp. 13, 18, 25-25 Aug. 2014. doi:10.1109/ESPRE.2014.6890522
Abstract: Research shows that commonly accepted security requirements are not generally applied in practice. Instead of relying on requirements checklists, security experts rely on their expertise and background knowledge to identify security vulnerabilities. To understand the gap between available checklists and practice, we conducted a series of interviews to encode the decision-making process of security experts and novices during security requirements analysis. Participants were asked to analyze two types of artifacts, source code and network diagrams, for vulnerabilities, and to apply a requirements checklist to mitigate some of those vulnerabilities. We framed our study using Situation Awareness, a cognitive theory from psychology, to elicit responses that we later analyzed using coding theory and grounded analysis. We report our preliminary results from analyzing two interviews, which reveal possible decision-making patterns that could characterize how analysts perceive, comprehend, and project future threats, leading them to decide upon requirements and their specifications, in addition to how experts use assumptions to overcome ambiguity in specifications. Our goal is to build a model that researchers can use to evaluate their security requirements methods against how experts transition through different situation awareness levels in their decision-making process.
Keywords: decision making; formal specification; security of data; source code (software); coding theory; cognitive theory; decision-making patterns; decision-making process; grounded analysis; network diagrams; requirements checklist; security expertise; security experts; security requirements analysis; security vulnerabilities; situation awareness; source code; specifications ambiguity; Decision making; Encoding; Firewalls (computing); Interviews; Software; Uncertainty; Security; decision-making; patterns; requirements analysis; situation awareness (ID#: 15-4843)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890522&isnumber=6890516
Shuiyin Liu; Yi Hong; Viterbo, E., "On Measures of Information Theoretic Security," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 309, 310, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970843
Abstract: While information-theoretic security is stronger than computational security, it has long been considered impractical. In this work, we provide new insights into the design of practical information-theoretic cryptosystems. Firstly, from a theoretical point of view, we give a brief introduction to the existing information-theoretic security criteria, such as the notions of Shannon's perfect/ideal secrecy in cryptography, and the concept of strong secrecy in coding theory. Secondly, from a practical point of view, we propose the concept of ideal secrecy outage and define an outage probability. Finally, we show how such probability can be made arbitrarily small in a practical cryptosystem.
Keywords: cryptography; information theory; Shannon perfect secrecy; computational security; ideal secrecy; information theoretic cryptosystem; information theoretic security; Australia; Cryptography; Entropy; Information theory; Probability; Vectors (ID#: 15-4844)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970843&isnumber=6970773
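For reference, the two classical secrecy notions the abstract surveys can be stated in standard textbook form (these are the well-known definitions, not the paper's new outage criterion):

% Shannon's perfect secrecy: the ciphertext C reveals nothing about the message M.
I(M; C) = 0 \quad\Longleftrightarrow\quad H(M \mid C) = H(M)

% Strong secrecy in coding theory: the total (not per-symbol) information leaked
% to the eavesdropper's observation Z^n vanishes as the block length n grows.
\lim_{n \to \infty} I(M; Z^n) = 0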
Tao Fang; Min Li, "Controlled Quantum Secure Direct Communication Protocol Based on Extended Three-Particle GHZ State Decoy," Network-Based Information Systems (NBiS), 2014 17th International Conference on, vol., no., pp. 450, 454, 10-12 Sept. 2014. doi:10.1109/NBiS.2014.44
Abstract: An extended three-particle GHZ state decoy is introduced into controlled quantum secure direct communication to improve the eavesdropping detection probability and prevent the correlation-elicitation (CE) attack. Each particle of the extended three-particle GHZ state decoy is inserted into the sending particles to detect eavesdroppers, which achieves a 63% eavesdropping detection probability. The decoy particles also prevent the receiver from obtaining the correct correlation between particle 1 and particle 2 before the sender codes on them, so that the receiver cannot get any secret information without the controller's permission. In the security analysis, the maximum amount of information that a qubit contains is obtained by introducing the entropy theory method, and two decoy strategies are compared quantitatively. If eavesdroppers intend to eavesdrop on the secret information, the per-qubit detection rate when using only two particles of the extended three-particle GHZ state as decoy is 58%, while the presented protocol, using three particles of the extended three-particle GHZ state as decoy, reaches 63% per qubit.
Keywords: entropy; probability; protocols; quantum communication; telecommunication security; controlled quantum secure direct communication protocol; correlation-elicitation attack; eavesdropping detection probability; entropy theory method; extended three-particle state decoy; per qubit detection rate; security analysis; Barium; Cryptography; Encoding; Protocols; Quantum mechanics; Receivers; CQSDC; decoy; eavesdropping detection; extend three-particle GHZ state; security (ID#: 15-4845)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023992&isnumber=7023898
Jiantao Zhou; Xianming Liu; Au, O.C.; Yuan Yan Tang, "Designing an Efficient Image Encryption-Then-Compression System via Prediction Error Clustering and Random Permutation," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 1, pp. 39, 50, Jan. 2014. doi:10.1109/TIFS.2013.2291625
Abstract: In many practical scenarios, image encryption has to be conducted prior to image compression. This has led to the problem of how to design a pair of image encryption and compression algorithms such that compressing the encrypted images can still be efficiently performed. In this paper, we design a highly efficient image encryption-then-compression (ETC) system, where both lossless and lossy compression are considered. The proposed image encryption scheme operated in the prediction error domain is shown to be able to provide a reasonably high level of security. We also demonstrate that an arithmetic coding-based approach can be exploited to efficiently compress the encrypted images. More notably, the proposed compression approach applied to encrypted images is only slightly worse, in terms of compression efficiency, than the state-of-the-art lossless/lossy image coders, which take original, unencrypted images as inputs. In contrast, most of the existing ETC solutions induce significant penalty on the compression efficiency.
Keywords: arithmetic codes; data compression; image coding; pattern clustering; prediction theory; random codes; ETC; arithmetic coding-based approach; image encryption-then-compression system design; lossless compression; lossless image coder; lossy compression; lossy image coder; prediction error clustering; random permutation; security; Bit rate; Decoding; Encryption; Image coding; Image reconstruction; Compression of encrypted image; encrypted domain signal processing (ID#: 15-4846)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6670767&isnumber=6684617
Jianghua Zhong; Dongdai Lin, "Stability of Nonlinear Feedback Shift Registers," Information and Automation (ICIA), 2014 IEEE International Conference on, vol., no., pp. 671, 676, 28-30 July 2014. doi:10.1109/ICInfA.2014.6932738
Abstract: Convolutional codes are widely used in many applications such as digital video, radio, and mobile communication. Nonlinear feedback shift registers (NFSRs) are the main building blocks in many convolutional decoders. A decoding error may result in a succession of further decoding errors; however, a stable NFSR can limit such error propagation. This paper studies the stability of NFSRs using a Boolean network approach. A Boolean network is an autonomous system that evolves as an automaton through Boolean functions. An NFSR can be viewed as a Boolean network. Based on its Boolean network representation, some sufficient and necessary conditions are provided for globally (locally) stable NFSRs. To determine the global stability of an NFSR, the Boolean network approach requires lower time complexity than exhaustive search and Lyapunov's direct method.
Keywords: Boolean functions; automata theory; computational complexity; shift registers; Boolean functions; Boolean network representation; Lyapunov direct method; NFSR; automaton; convolutional code; convolutional decoders; decoding error; digital video; error-propagation; exhaustive search; global stability; mobile communication; nonlinear feedback shift register stability; radio; time complexity; Boolean functions; Linear systems; Shift registers; Stability criteria; Time complexity; Transient analysis; Boolean function; Boolean network; Nonlinear feedback shift register; stability (ID#: 15-4847)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6932738&isnumber=6932615
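A small Python sketch makes the Boolean-network view concrete: enumerate all 2^n states of a toy Fibonacci NFSR, build the state transition map, and compute its attractors; a globally stable NFSR has a single fixed-point attractor, so decoding errors eventually die out. The 3-stage register and feedback function below are illustrative assumptions.

from itertools import product

def nfsr_step(state, feedback):
    """One clock of a Fibonacci NFSR: shift left, append the feedback bit."""
    return state[1:] + (feedback(state),)

# Hypothetical 3-stage NFSR with nonlinear feedback f(s0,s1,s2) = s0 XOR (s1 AND s2).
f = lambda s: s[0] ^ (s[1] & s[2])
n = 3

# View the NFSR as a Boolean network: one transition per state of {0,1}^n.
transition = {s: nfsr_step(s, f) for s in product((0, 1), repeat=n)}

def attractor(s):
    """Iterate until a cycle is reached; return the cycle as a frozenset of states."""
    seen = []
    while s not in seen:
        seen.append(s)
        s = transition[s]
    return frozenset(seen[seen.index(s):])

attractors = {attractor(s) for s in transition}
# A globally stable NFSR has a single attractor that is a fixed point.
print("attractors:", [sorted(a) for a in attractors])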
Alodeh, M.; Chatzinotas, S.; Ottersten, B., "A Multicast Approach for Constructive Interference Precoding in MISO Downlink Channel," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 2534, 2538, June 29, 2014-July 4, 2014. doi:10.1109/ISIT.2014.6875291
Abstract: This paper studies the concept of jointly utilizing the data information (DI) and channel state information (CSI) in order to design symbol-level precoders for a multiple input and single output (MISO) downlink channel. In this direction, the interference among the simultaneous data streams is transformed into a useful signal that can improve the signal to interference noise ratio (SINR) of the downlink transmissions. We propose a maximum ratio transmissions (MRT) based algorithm that jointly exploits DI and CSI to gain the benefits from these useful signals. In this context, a novel framework to minimize the power consumption is proposed by formalizing the duality between the constructive interference downlink channel and the multicast channels. The numerical results have shown that the proposed schemes outperform other state-of-the-art techniques.
Keywords: channel coding; cochannel interference; multicast communication; precoding; telecommunication channels; MISO downlink channel; SINR; channel state information; constructive interference downlink channel; constructive interference precoding; data information; downlink transmissions; maximum ratio transmissions; multicast approach; multicast channels; multiple input and single output downlink channel; power consumption; signal to interference noise ratio; simultaneous data streams; symbol-level precoders; Correlation; Downlink; Information theory; Interference; Minimization; Signal to noise ratio; Vectors (ID#: 15-4848)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875291&isnumber=6874773
Aydin, A.; Alkhalaf, M.; Bultan, T., "Automated Test Generation from Vulnerability Signatures," Software Testing, Verification and Validation (ICST), 2014 IEEE Seventh International Conference on, vol., no., pp. 193, 202, March 31 2014-April 4 2014. doi:10.1109/ICST.2014.32
Abstract: Web applications need to validate and sanitize user inputs in order to avoid attacks such as Cross Site Scripting (XSS) and SQL Injection. Writing string manipulation code for input validation and sanitization is an error-prone process leading to many vulnerabilities in real-world web applications. Automata-based static string analysis techniques can be used to automatically compute vulnerability signatures (represented as automata) that characterize all the inputs that can exploit a vulnerability. However, several factors limit the applicability of static string analysis techniques in general: 1) the undecidability of static string analysis requires the use of approximations, leading to false positives; 2) static string analysis tools do not handle all string operations; 3) the dynamic nature of scripting languages makes static analysis difficult. In this paper, we show that vulnerability signatures computed for deliberately insecure web applications (developed for demonstrating different types of vulnerabilities) can be used to generate test cases for other applications. Given a vulnerability signature represented as an automaton, we present algorithms for test case generation based on state, transition, and path coverage. These automatically generated test cases can be used to test applications that are not statically analyzable, and to discover attack strings that demonstrate how the vulnerabilities can be exploited.
Keywords: Web services; authoring languages; automata theory; digital signatures; program diagnostics; program testing; attack string discovery; automata-based static string analysis techniques; automated test case generation; automatic vulnerability signature computation; insecure Web applications; path coverage; scripting languages; state; static string analysis undecidability; transition; Algorithm design and analysis; Approximation methods; Automata; Databases; HTML; Security; Testing; automata-based test generation; string analysis; validation and sanitization; vulnerability signatures (ID#: 15-4849)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823881&isnumber=6823846
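The test-generation idea can be sketched in a few lines of Python: given a vulnerability signature as a DFA, compute a shortest prefix to each state and a suffix into an accepting state, then emit one accepted string per transition (transition coverage). The toy signature automaton below is an assumption for illustration, not one of the paper's computed signatures.

from collections import deque

# Hypothetical vulnerability-signature automaton: accepts any string containing
# the fragment "<s" (a toy stand-in for a real XSS signature DFA).
dfa = {
    ("q0", "<"): "q1", ("q0", "a"): "q0",
    ("q1", "s"): "q2", ("q1", "a"): "q0", ("q1", "<"): "q1",
    ("q2", "a"): "q2", ("q2", "<"): "q2", ("q2", "s"): "q2",
}
start, accepting = "q0", {"q2"}

def tests_for_transition_coverage(dfa, start, accepting):
    """BFS from the start state; emit one accepted string exercising each transition."""
    # Shortest prefix reaching each state.
    prefix, queue = {start: ""}, deque([start])
    while queue:
        q = queue.popleft()
        for (state, sym), nxt in dfa.items():
            if state == q and nxt not in prefix:
                prefix[nxt] = prefix[q] + sym
                queue.append(nxt)
    # A suffix from each state into an accepting state (backward propagation).
    suffix = {q: "" for q in accepting}
    changed = True
    while changed:
        changed = False
        for (state, sym), nxt in dfa.items():
            if nxt in suffix and state not in suffix:
                suffix[state] = sym + suffix[nxt]
                changed = True
    # One test case per transition: prefix + symbol + suffix.
    return {(s, a): prefix[s] + a + suffix[t]
            for (s, a), t in dfa.items() if s in prefix and t in suffix}

for trans, case in tests_for_transition_coverage(dfa, start, accepting).items():
    print(trans, "->", repr(case))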
Koyluoglu, O.O.; Rawat, A.S.; Vishwanath, S., "Secure Cooperative Regenerating Codes for Distributed Storage Systems," Information Theory, IEEE Transactions on, vol. 60, no. 9, pp. 5228, 5244, Sept. 2014. doi:10.1109/TIT.2014.2319271
Abstract: Regenerating codes enable trading off repair bandwidth for storage in distributed storage systems (DSS). Due to their distributed nature, these systems are intrinsically susceptible to attacks, and they may also be subject to multiple simultaneous node failures. Cooperative regenerating codes allow bandwidth efficient repair of multiple simultaneous node failures. This paper analyzes storage systems that employ cooperative regenerating codes that are robust to (passive) eavesdroppers. The analysis is divided into two parts, studying both minimum bandwidth and minimum storage cooperative regenerating scenarios. First, the secrecy capacity for minimum bandwidth cooperative regenerating codes is characterized. Second, for minimum storage cooperative regenerating codes, a secure file size upper bound and achievability results are provided. These results establish the secrecy capacity for the minimum storage scenario for certain special cases. In all scenarios, the achievability results correspond to exact repair, and secure file size upper bounds are obtained using min-cut analyses over a suitable secrecy graph representation of DSS. The main achievability argument is based on an appropriate precoding of the data to eliminate the information leakage to the eavesdropper.
Keywords: precoding; security of data; storage management; DSS; DSS secrecy graph representation; data precoding; distributed storage system; eavesdropper; min-cut analysis; minimum bandwidth cooperative regenerating code; minimum storage cooperative regenerating code; Bandwidth; Decision support systems; Encoding; Maintenance engineering; Resilience; Security; Upper bound; Coding for distributed storage systems; cooperative repair; minimum bandwidth cooperative regenerating (MBCR) codes; minimum storage cooperative regenerating (MSCR) codes; security (ID#: 15-4850)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6807720&isnumber=6878505
Geil, O.; Martin, S.; Matsumoto, R.; Ruano, D.; Yuan Luo, "Relative Generalized Hamming Weights of One-Point Algebraic Geometric Codes," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 137, 141, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970808
Abstract: Security of linear ramp secret sharing schemes can be characterized by the relative generalized Hamming weights of the involved codes [23], [22]. In this paper we elaborate on the implications of these parameters and we devise a method to estimate their values for general one-point algebraic geometric codes. As is demonstrated, for Hermitian codes our bound is often tight. Furthermore, for these codes the relative generalized Hamming weights are often much larger than the corresponding generalized Hamming weights.
Keywords: Hamming codes; algebraic geometric codes; security of data; Hermitian codes; general one-point algebraic geometric codes; linear ramp secret sharing schemes security; relative generalized Hamming weights; Cryptography; Galois fields; Geometry; Hamming weight; Linear codes; Vectors (ID#: 15-4851)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970808&isnumber=6970773
Poonia, A.S.; Singh, S., "Malware Detection by Token Counting," Contemporary Computing and Informatics (IC3I), 2014 International Conference on, vol., no., pp. 1285, 1288, 27-29 Nov. 2014. doi:10.1109/IC3I.2014.7019691
Abstract: Malicious software (or malware) is software that fulfills the harmful intent of an attacker, and it is one of the most pressing security threats facing the Internet today. Antivirus companies typically have to deal with thousands of new malware samples every day. If antivirus software has a large database, there is a greater chance of false positives and false negatives, and storing such a huge database of virus definitions is a very complex task. The new concept in this research paper is that, instead of storing complete virus signatures, we can store the various tokens of a program and their frequencies. This process uses only the tokens of executable statements, so the presence of dead code in the malware poses no problem. The tokens fall into two classes: operators and operands. We can thus form a new type of malware signature that takes less space in the database and also yields fewer false negatives and false positives. The benefits of using the token concept include: less database storage memory is required; the size of the malicious software can be estimated; the complexity of the malicious program is easy to estimate; and even if the malicious program contains dead code or repeated statements, an accurate signature can still be found by using executable statements only. By this process we can therefore detect malicious code easily, with less database storage memory and in a more precise way.
Keywords: Internet; database management systems; invasive software; Internet; antivirus software; database storage memory; dead code; executable statements; malicious program; malicious software; malware detection; malware signature; security threats; token concept; token counting; virus definition; Complexity theory; Computers; Databases; Estimation; Malware; Software; Operand; Operator; Tokens; frequency; malicious code complexity (ID#: 15-4852)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019691&isnumber=7019573
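A minimal Python sketch of the token-counting signature follows: tokenize code into operators and operands, store the token-frequency multisets as the signature, and compare samples by multiset overlap. The token classes, regular expression, and similarity score are illustrative choices, not the paper's exact definitions.

import re
from collections import Counter

TOKEN_RE = re.compile(r"==|[+\-*/=<>(){};]|[A-Za-z_]\w*|\d+")
OPERATORS = {"+", "-", "*", "/", "=", "==", "<", ">", "(", ")", "{", "}", ";"}

def token_signature(code: str):
    """Frequencies of operator and operand tokens in (executable) code -- the
    compact signature stored in place of a full byte-level virus signature."""
    tokens = TOKEN_RE.findall(code)
    ops = Counter(t for t in tokens if t in OPERATORS)        # operators
    opnds = Counter(t for t in tokens if t not in OPERATORS)  # operands
    return ops, opnds

def similarity(sig_a, sig_b):
    """Simple overlap score between two (operator, operand) signatures."""
    score = 0.0
    for a, b in zip(sig_a, sig_b):
        common = sum((a & b).values())                        # shared token counts
        total = max(sum(a.values()), sum(b.values()), 1)
        score += common / total
    return score / 2

known = token_signature("x = x + 1; send(x); x = x + 1; send(x);")
sample = token_signature("y = y + 1; send(y);")
print(f"match score: {similarity(known, sample):.2f}")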
Zonouz, S.; Rrushi, J.; McLaughlin, S., "Detecting Industrial Control Malware Using Automated PLC Code Analytics," Security & Privacy, IEEE, vol. 12, no. 6, pp. 40, 47, Nov.-Dec. 2014. doi:10.1109/MSP.2014.113
Abstract: The authors discuss their research on programmable logic controller (PLC) code analytics, which leverages safety engineering to detect and characterize PLC infections that target physical destruction of power plants. Their approach also draws on control theory, namely the field of engineering and mathematics that deals with the behavior of dynamical systems, to reverse-engineer safety-critical code to identify complex and highly dynamic safety properties for use in the hybrid code analytics approach.
Keywords: control engineering computing; industrial control; invasive software; production engineering computing; program diagnostics; programmable controllers; safety-critical software; automated PLC code analytics; control theory; hybrid code analytics approach; industrial control malware detection; programmable logic controllers; reverse-engineer safety-critical code; safety engineering; Computer security; Control systems; Energy management; Industrial control; Malware; Model checking; Process control; Reverse engineering; Safety; Safety devices; PLC code analytics; formal models; industrial control malware; model checking; process control systems; reverse engineering; safety-critical code; security (ID#: 15-4853)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7006408&isnumber=7006395
Koga, H.; Honjo, S., "A Secret Sharing Scheme Based on a Systematic Reed-Solomon Code and Analysis of Its Security for a General Class of Sources," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 1351, 1355, June 29 2014 - July 4 2014. doi:10.1109/ISIT.2014.6875053
Abstract: In this paper we investigate a secret sharing scheme based on a shortened systematic Reed-Solomon code. In the scheme L secrets S1, S2, ..., SL and n shares X1, X2, ..., Xn satisfy certain n - k + L linear equations. Security of such a ramp secret sharing scheme is analyzed in detail. We prove that this scheme realizes a (k, n)-threshold scheme for the case of L = 1 and a ramp (k, L, n)-threshold scheme for the case of 2 ≤ L ≤ k - 1 under a certain assumption on S1, S2, ..., SL.
Keywords: Reed-Solomon codes; telecommunication security; linear equations; ramp secret sharing scheme; shortened systematic Reed-Solomon code; Cryptography; Equations; Probability distribution; Random variables; Reed-Solomon codes (ID#: 15-4854)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875053&isnumber=6874773
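The flavor of such a scheme is easy to demonstrate for L = 1, where it reduces to a classic (k, n)-threshold construction: shares are evaluations of a random degree-(k-1) polynomial over a prime field with the secret as the constant term. The paper itself works with a shortened systematic Reed-Solomon code; the Python sketch below is the simplest related instance, not the paper's exact scheme.

import secrets

P = 2**127 - 1   # a prime large enough for the toy secret

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares, any k of which reconstruct it (the L = 1 case).
    Shares are evaluations of a random degree-(k-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(secret=123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice
assert reconstruct(shares[1:4]) == 123456789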
Mokhtar, M.A.; Gobran, S.N.; El-Badawy, E.-S.A.-M., "Colored Image Encryption Algorithm Using DNA Code and Chaos Theory," Computer and Communication Engineering (ICCCE), 2014 International Conference on, vol., no., pp. 12, 15, 23-25 Sept. 2014. doi:10.1109/ICCCE.2014.17
Abstract: DNA computing and chaos theory introduce promising research areas in the field of cryptography. In this paper, a stream cipher algorithm for image encryption is introduced. The chaotic logistic map is used for confusing and diffusing the image pixels, and then a DNA sequence is used as a one-time pad (OTP) to change pixel values. The introduced algorithm also shows perfect security, as a result of using an OTP, and a good ability to resist statistical and differential attacks.
Keywords: biocomputing; cryptography; image colour analysis; DNA code; DNA computing; DNA sequence; OTP; chaos theory; chaotic logistic map; colored image encryption algorithm; cryptography; differential attacks; image pixels; one-time-pad; stream cipher algorithm; Abstracts; Ciphers; Computers; DNA; Encryption; Logistics; PSNR; Chaos theory; DNA cryptography; Image Encryption; Logistic map; one time pad OTP; stream Cipher; symmetrical encryption (ID#: 15-4855)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7031588&isnumber=7031550
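A stripped-down Python sketch of the chaotic-stream idea follows. Note the simplification: the cited scheme uses the logistic map for confusion/diffusion and a DNA sequence as the one-time pad, whereas this toy derives the pad itself from the logistic map to stay self-contained; the key values are illustrative.

import numpy as np

def logistic_keystream(x0: float, r: float, n: int) -> np.ndarray:
    """Generate n keystream bytes from the chaotic logistic map x -> r*x*(1-x)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256   # quantize the chaotic orbit to a byte
    return out

def encrypt(pixels: np.ndarray, key=(0.3141592, 3.9999)) -> np.ndarray:
    """One-time-pad-style XOR of image pixels with the chaotic keystream."""
    ks = logistic_keystream(*key, pixels.size).reshape(pixels.shape)
    return pixels ^ ks

img = np.random.default_rng(1).integers(0, 256, (4, 4), dtype=np.uint8)
cipher = encrypt(img)
assert np.array_equal(encrypt(cipher), img)   # XOR with the same pad decrypts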
Liuyihan Song; Lei Xie; Huifang Chen; Kuang Wang, "A Feedback-Based Secrecy Coding Scheme Using Polar Code over Wiretap Channels," Wireless Communications and Signal Processing (WCSP), 2014 Sixth International Conference on, vol., no., pp. 1, 6, 23-25 Oct. 2014. doi:10.1109/WCSP.2014.6992177
Abstract: Polar codes can be used to achieve the secrecy capacity of degraded wiretap channels. In this paper, we propose a feedback-based secrecy coding scheme using polar codes over non-degraded wiretap channels. With the feedback architecture, the proposed secrecy coding scheme can achieve a significantly positive secrecy rate. Moreover, polar codes have low encoding and decoding complexity, which makes them attractive for implementation. Simulation results show that the proposed feedback-based secrecy coding scheme using polar codes can transmit confidential messages reliably and securely. Moreover, the impact of the conditions of the forward and feedback channels on the performance of the proposed secrecy coding scheme is analyzed.
Keywords: channel capacity; channel coding; decoding; feedback; telecommunication network reliability; telecommunication security; decoding; degraded wiretap channel; encoding; feedback architecture; feedback channel; feedback-based secrecy coding scheme; forward channel; nondegraded wiretap channel; polar code; reliability; secure communication; Channel coding; Decoding; Member and Geographic Activities Board committees; Reliability theory; Security; Polar code; feedback; non-degraded wiretap channels; secrecy code (ID#: 15-4856)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6992177&isnumber=6992003
Bin Dai; Zheng Ma, "Feedback Enhances the Security of Degraded Broadcast Channels with Confidential Messages and Causal Channel State Information," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 411, 415, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970864
Abstract: In this paper, we investigate the degraded broadcast channels with confidential messages (DBC-CM) and causal channel state information (CSI), with or without noiseless feedback. The inner and outer bounds on the capacity-equivocation region are given for the non-feedback model, and the capacity-equivocation region is determined for the feedback model. We find that by using this noiseless feedback, the achievable rate-equivocation region (inner bound on the capacity-equivocation region) of the DBC-CM with causal CSI is enhanced.
Keywords: broadcast channels; channel capacity; channel coding; feedback; telecommunication security; DBC-CM; capacity-equivocation region; channel state information; confidential messages; degraded broadcast channels; noiseless feedback; rate-equivocation region; Decoding; Joints; Random variables; Receivers; Silicon; Transmitters; Zinc; Broadcast channel; channel state information; confidential message; feedback (ID#: 15-4857)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970864&isnumber=6970773
Abuzainab, N.; Ephremides, A., "Secure Distributed Information Exchange," Information Theory, IEEE Transactions on, vol. 60, no. 2, pp. 1126, 1135, Feb. 2014. doi:10.1109/TIT.2013.2290992
Abstract: We consider the problem of streaming a file by exchanging information over wireless channels in the presence of an eavesdropper. We utilize private and public channels and wish to minimize the use of the (more expensive) private channel subject to a required level of security. We consider both single and multiple users and compare simple ARQ and deterministic network coding as methods of transmission.
Keywords: automatic repeat request; network coding; wireless channels; deterministic network coding; exchanging information; private channels; public channels; secure distributed information exchange; simple ARQ; wireless channels; Automatic repeat request; Delays; Equations; Fading; Network coding; Security; Vectors; Privacy; QoS; energy efficiency; network coding (ID#: 15-4858)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6665039&isnumber=6714461
Porzio, A., "Quantum Cryptography: Approaching Communication Security from a Quantum Perspective," Photonics Technologies, 2014 Fotonica AEIT Italian Conference on, vol., no., pp. 1, 4, 12-14 May 2014. doi:10.1109/Fotonica.2014.6843831
Abstract: Quantum cryptography aims at solving the everlasting problem of unconditional security in private communication. Every time we send personal information over a telecom channel, a sophisticated algorithm protects our privacy, making our data unintelligible to unauthorized receivers. These protocols resulted from the long history of cryptography. The security of modern cryptographic systems is guaranteed by complexity: the computational power that would be needed for gaining information on the code key largely exceeds what is available. The security of actual crypto systems is thus not "by principle" but "practical". On the contrary, quantum technology promises to make it possible to realize provably secure protocols. Quantum cryptology exploits paradigmatic aspects of quantum mechanics, such as the superposition principle and uncertainty relations. In this contribution, after a brief historical introduction, we aim at giving a survey of the physical principles underlying the quantum approach to cryptography. Then, we analyze a possible continuous variable protocol.
Keywords: cryptographic protocols; data privacy; quantum cryptography; quantum theory; telecommunication security; code key; computational power; continuous variable protocol; privacy protection; quantum cryptography; quantum cryptology; quantum mechanics; quantum technology; superposition principle; uncertainty relations; unconditional private communication security; Cryptography; History; Switches; TV; Continuous Variable; Quantum cryptography (ID#: 15-4859)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843831&isnumber=6843815
Liang Chen, "Secure Network Coding for Wireless Routing," Communications (ICC), 2014 IEEE International Conference on, vol., no., pp. 1941, 1946, 10-14 June 2014. doi:10.1109/ICC.2014.6883607
Abstract: Networking today is considered secure because we encrypt confidential messages under the assumption that adversaries in the network are computationally bounded. For traditional routing or network coding, routers know the contents of the packets they receive, so networking is no longer secure if there are eavesdroppers with unbounded computational power at routers. Our concern is whether we can achieve stronger security at routers. This paper proposes secure network coding for wireless routing. Combining channel coding and network coding, this scheme can not only provide physical layer security at wireless routers but also forward data error-free at a high rate. In the paper we prove this scheme can be applied to general networks for secure wireless routing.
Keywords: channel coding; telecommunication network routing; channel coding; forward data error-free; physical layer security; secure network coding; secure wireless routing; Communication system security; Network coding; Protocols; Relays; Routing; Security; Throughput; information-theoretic secrecy; network coding; network information theory; physical-layer security; wireless routing (ID#: 15-4860)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883607&isnumber=6883277
Thangaraj, A., "Coding for Wiretap Channels: Channel Resolvability and Semantic Security," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 232, 236, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970827
Abstract: Wiretap channels form the most basic building block of physical-layer and information-theoretic security. Considerable research work has gone into the information-theoretic, cryptographic and coding aspects of wiretap channels in the last few years. The main goal of this tutorial article is to provide a self-contained presentation of two recent results - one is a new and simplified proof for secrecy capacity using channel resolvability, and the other is the connection between semantic security and information-theoretic strong secrecy.
Keywords: channel coding; cryptography; information theory; telecommunication security; channel resolvability; coding aspects; cryptography; information-theoretic security; physical-layer; secrecy capacity; semantic security; wiretap channels coding; Cryptography; Encoding; Semantics; Standards; Zinc (ID#: 15-4861)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970827&isnumber=6970773
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Coding Theory and Security, 2014 Part 2 |
Coding theory is one of the essential pieces of information theory. More importantly, coding theory is a core element in cryptography. The research work cited here looks at signal processing, crowdsourcing, matroid theory, WOM codes, and NP-hard problems. These works were presented or published in 2014.
Okamoto, K.; Homma, N.; Aoki, T.; Morioka, S., "A Hierarchical Formal Approach to Verifying Side-Channel Resistant Cryptographic Processors," Hardware-Oriented Security and Trust (HOST), 2014 IEEE International Symposium on, vol., no., pp. 76, 79, 6-7 May 2014. doi:10.1109/HST.2014.6855572
Abstract: This paper presents a hierarchical formal verification method for cryptographic processors based on a combination of a word-level computer algebra procedure and a bit-level decision procedure using PPRM (Positive Polarity Reed-Muller) expansion. In the proposed method, the entire datapath structure of a cryptographic processor is described in the form of a hierarchical graph. The correctness of the entire circuit function is verified on this graph representation by the algebraic method, and the function of each component is verified by the PPRM method, respectively. We have applied the proposed verification method to a complicated AES (Advanced Encryption Standard) circuit with a masking countermeasure against side-channel attack. The results show that the proposed method can verify such a practical circuit automatically within 4 minutes, while the conventional methods fail.
Keywords: Reed-Muller codes; cryptography; digital arithmetic; formal verification; graph theory; process algebra; AES circuit; PPRM; advanced encryption standard circuit; algebraic method; bit-level decision procedure; circuit function; datapath structure; graph representation; hierarchical formal approach; hierarchical formal verification method; hierarchical graph; positive polarity Reed-Muller expansion; side-channel attack; side-channel resistant cryptographic processors; word-level computer algebra procedure; Algebra; Computers; Cryptography; Polynomials; Program processors; Resistance; Galois fields; arithmetic circuits; cryptographic processors; design methodology for secure hardware; formal design (ID#: 15-4862)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855572&isnumber=6855557
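The PPRM expansion used in the bit-level procedure is the positive-polarity Reed-Muller (algebraic normal form) representation of a Boolean function, computable from its truth table by the binary Moebius transform. A short Python sketch follows (an illustration of the expansion itself, not the authors' verification tool):

def pprm_coefficients(truth_table):
    """Positive Polarity Reed-Muller (ANF) coefficients of a Boolean function,
    computed with the binary Moebius (butterfly) transform over GF(2).
    truth_table[i] is f at input i; result[c] is the coefficient of the
    monomial whose variable set is the bit pattern of c."""
    a = list(truth_table)
    n = len(a).bit_length() - 1
    for k in range(n):
        step = 1 << k
        for i in range(len(a)):
            if i & step:
                a[i] ^= a[i ^ step]   # XOR-accumulate along dimension k
    return a

# Example: f(x2, x1, x0) = x0 XOR (x1 AND x2), truth table indexed by (x2 x1 x0).
tt = [(i & 1) ^ (((i >> 1) & 1) & ((i >> 2) & 1)) for i in range(8)]
coeffs = pprm_coefficients(tt)
# Nonzero coefficients name the PPRM monomials: here x0 ("001") and x1*x2 ("110").
print([format(c, "03b") for c, v in enumerate(coeffs) if v])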
Zamani, S.; Javanmard, M.; Jafarzadeh, N.; Zamani, M., "A Novel Image Encryption Scheme Based on Hyper Chaotic Systems and Fuzzy Cellular Automata," Electrical Engineering (ICEE), 2014 22nd Iranian Conference on, vol., no., pp. 1136, 1141, 20-22 May 2014. doi:10.1109/IranianCEE.2014.6999706
Abstract: A new image encryption scheme based on a hyper chaotic system and fuzzy cellular automata is proposed in this paper. Hyper chaotic systems have more complex dynamical characteristics than chaotic systems, which makes them a better choice for secure image encryption schemes. Four hyper chaotic systems are used to improve the security and speed of the algorithm in this approach. First, the image is divided into four sub-images, each with its own hyper chaotic system. In the shuffling phase, pixels in two adjacent sub-images are selected and their positions exchanged based upon the numbers generated by their hyper chaotic systems. Five 1D non-uniform fuzzy cellular automata are used in the encryption phase. The rule used to encrypt a cell is selected based upon the cell's right neighbor. By using two different encryption methods for odd and even cells, the problem of being limited to recursive rules in the rule-selection process of these FCAs is solved. The results of implementing this scheme on images from the USC-SIPI database show that the method has high security, offers advantages such as confusion and diffusion, and is sensitive to small changes in the key.
Keywords: cellular automata; cryptography; fuzzy set theory; image coding; 1D nonuniform fuzzy cellular automata; FCA; dynamical characteristic; hyperchaotic system; image encryption; rule selecting process; shuffling phase; Automata; Chaos; Correlation; Encryption; Entropy; FCA; Hyper Chaotic System; Image encryption; Lorenz System; Non-uniform Cellular Automata (ID#: 15-4863)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6999706&isnumber=6999486
Baheti, A.; Singh, L.; Khan, A.U., "Proposed Method for Multimedia Data Security Using Cyclic Elliptic Curve, Chaotic System, and Authentication Using Neural Network," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, vol., no., pp. 664, 668, 7-9 April 2014. doi:10.1109/CSNT.2014.139
Abstract: As multimedia applications are used increasingly, the security of images becomes an important issue. The combination of chaotic theory and cryptography forms an important field of information security. In the past decade, chaos-based image encryption has been given much attention in information security research, and many image encryption algorithms based on chaotic maps have been proposed. However, most of them degrade system performance and security, and suffer from a small key space. This paper introduces an efficient symmetric encryption scheme based on a cyclic elliptic curve and a chaotic system that can overcome these disadvantages. The cipher encrypts 256 bits of plain image to 256 bits of cipher image within eight 32-bit registers. The scheme generates pseudorandom bit sequences for round keys based on a piecewise nonlinear chaotic map. Then, the generated sequences are mixed with the key sequences derived from the cyclic elliptic curve points. The proposed algorithm has a good encryption effect, a large key space, and high sensitivity to small changes in the secret keys, and it is fast compared to other competitive algorithms.
Keywords: image coding; multimedia computing; neural nets; public key cryptography; authentication; chaos based image encryption; chaotic maps; chaotic system; chaotic theory; competitive algorithms; cryptography; cyclic elliptic curve points; encryption effect; image encryption algorithms; information security; multimedia applications; multimedia data security; neural network; piecewise nonlinear chaotic map; pseudorandom bit sequences; small key space problem; system performance; Authentication; Chaotic communication; Elliptic curves; Encryption; Media; Multimedia communication; authentication; chaos; decryption; encryption; neural network (ID#: 15-4864)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821481&isnumber=6821334
Lashgari, S.; Avestimehr, A.S., "Blind Wiretap Channel with Delayed CSIT," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 36, 40, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874790
Abstract: We consider the Gaussian wiretap channel with a transmitter, a legitimate receiver, and k eavesdroppers (k ∈ ℕ), where the secure communication is aided via a jammer. We focus on the setting where the transmitter and the jammer are blind with respect to the state of channels to eavesdroppers, and only have access to delayed channel state information (CSI) of the legitimate receiver, which is referred to as "blind cooperative wiretap channel with delayed CSIT." We show that a strictly positive secure Degrees of Freedom (DoF) of 1/3 is achievable irrespective of the number of eavesdroppers (k) in the network, and further, 1/3 is optimal assuming linear coding strategies at the transmitters. The converse proof is based on two key lemmas. The first lemma, named Rank Ratio Inequality, shows that if two distributed transmitters employ linear strategies, the ratio of the dimensions of received linear sub-spaces at the two receivers cannot exceed 3/2, due to delayed CSI. The second lemma implies that once the transmitters in a network have no CSI with respect to a receiver, the least amount of alignment will occur at that receiver, meaning that transmit signals will occupy the maximal signal dimensions at that receiver. Finally, we show that once the transmitter and the jammer form a single transmitter with two antennas, which we refer to as the MISO wiretap channel, 1/2 is the optimal secure DoF when using linear schemes.
Keywords: Gaussian channels; jamming; linear codes; radio transceivers; telecommunication security; transmitting antennas; CSI; DoF; Gaussian wiretap channel; MISO wiretap channel; antennas; blind cooperative wiretap channel; communication security; degrees of freedom; delayed CSIT; delayed channel state information; distributed transmitter; eavesdroppers; jammer; key lemmas; legitimate receiver; linear coding strategy; linear subspaces; rank ratio inequality; signal dimensions; transmit signals; Encoding; Jamming; Noise; Receivers; Transmitters; Vectors (ID#: 15-4865)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874790&isnumber=6874773
Kochman, Y.; Ligong Wang; Wornell, G.W., "Toward Photon-Efficient Key Distribution over Optical Channels," Information Theory, IEEE Transactions on, vol. 60, no. 8, pp. 4958, 4972, Aug. 2014. doi:10.1109/TIT.2014.2331060
Abstract: This paper considers the distribution of a secret key over an optical (bosonic) channel in the regime of high photon efficiency, i.e., when the number of secret key bits generated per detected photon is high. While, in principle, the photon efficiency is unbounded, there is an inherent tradeoff between this efficiency and the key generation rate (with respect to the channel bandwidth). We derive asymptotic expressions for the optimal generation rates in the photon-efficient limit, and propose schemes that approach these limits up to certain approximations. The schemes are practical, in the sense that they use coherent or temporally entangled optical states and direct photodetection, all of which are reasonably easy to realize in practice, in conjunction with off-the-shelf classical codes.
Keywords: approximation theory; private key cryptography; quantum cryptography; quantum entanglement; approximations; asymptotic expressions; bosonic channel; channel bandwidth; coherent entangled optical states; direct photodetection; key generation rate; off-the-shelf classical codes; optical channels; optimal generation rates; photon-efficient key distribution; secret key distribution; temporally entangled optical states; Hilbert space; Optical receivers; Optical sensors; Photonics; Protocols; Quantum entanglement; Information-theoretic security; key distribution; optical communication; wiretap channel (ID#: 15-4866)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6835214&isnumber=6851961
Fuchun Lin; Cong Ling; Belfiore, J.-C., "Secrecy Gain, Flatness Factor, and Secrecy-Goodness of Even Unimodular Lattices," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 971, 975, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874977
Abstract: Nested lattices Λe ⊂ Λb have previously been studied for coding in the Gaussian wiretap channel and two design criteria, namely, the secrecy gain and flatness factor, have been proposed to study how the coarse lattice Λe should be chosen so as to maximally conceal the message against the eavesdropper. In this paper, we study the connection between these two criteria and show the secrecy-goodness of even unimodular lattices, which means exponentially vanishing flatness factor as the dimension grows.
Keywords: Gaussian channels; encoding; Gaussian wiretap channel; coding; flatness factor; secrecy gain; secrecy-goodness; unimodular lattices; Educational institutions; Encoding; Lattices; Security; Vectors; Zinc (ID#: 15-4867)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874977&isnumber=6874773
Renna, F.; Laurenti, N.; Tomasin, S., "Achievable Secrecy Rates over MIMOME Gaussian Channels with GMM Signals in Low-Noise Regime," Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE), 2014 4th International Conference on, vol., no., pp. 1, 5, 11-14 May 2014. doi:10.1109/VITAE.2014.6934464
Abstract: We consider a wiretap multiple-input multiple-output multiple-eavesdropper (MIMOME) channel, where agent Alice aims at transmitting a secret message to agent Bob, while leaking no information on it to an eavesdropper agent Eve. We assume that Alice has more antennas than both Bob and Eve, and that she has only statistical knowledge of the channel towards Eve. We focus on the low-noise regime, and assess the secrecy rates that are achievable when the secret message determines the distribution of a multivariate Gaussian mixture model (GMM) from which a realization is generated and transmitted over the channel. In particular, we show that if Eve has fewer antennas than Bob, secret transmission is always possible at low noise. Moreover, we show that in the low-noise limit the secrecy capacity of our scheme coincides with its unconstrained capacity, by providing a class of covariance matrices that allow this limit to be attained without the need for wiretap coding.
Keywords: Gaussian channels; Gaussian processes; MIMO communication; covariance matrices; GMM signals; MIMOME Gaussian channels; achievable secrecy rates; covariance matrices; low-noise regime; multivariate Gaussian mixture model; secrecy capacity; secret transmission; statistical knowledge; wiretap multiple-input multiple-output multiple-eavesdropper channel; Antennas; Covariance matrices; Encoding; Entropy; Gaussian distribution; Signal to noise ratio; Vectors; Physical Layer Security; Secrecy Capacity; multiple-input multiple-output multiple-eavesdropper (MIMOME) Channels (ID#: 15-4868)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934464&isnumber=6934393
James, S.P.; George, S.N.; Deepthi, P.P., "An Audio Encryption Technique Based on LFSR Based Alternating Step Generator," Electronics, Computing and Communication Technologies (IEEE CONECCT), 2014 IEEE International Conference on, vol., no., pp. 1, 6, 6-7 Jan. 2014. doi:10.1109/CONECCT.2014.6740185
Abstract: In this paper, a novel method of encrypting encoded audio data based on LFSR-based keystream generators is presented. The alternating step generator (ASG) is selected as the keystream generator for this application. Since the ASG is vulnerable to the improved linear consistency attack, it is proposed to incorporate some nonlinearity into the stop/go LFSRs of the ASG so that the modified ASG can withstand it. In the proposed approach, selected bits of each frame of the encoded audio data are encrypted with the keystream generated by the modified ASG. In order to overcome a known-plaintext attack, it is proposed to use different keystreams for different frames of the audio data: the long keystream generated by the modified ASG is divided into smaller keystreams, one per frame. The proposed encryption approach can be applied to any audio coding system while maintaining standard compatibility. The number of encrypted bits controls the degree of degradation of the audio quality. The performance of the proposed encryption method is verified with MP3-coded audio data, and it is shown to provide better security than existing methods with much lower system complexity.
Keywords: audio coding; cryptography; ASG; LFSR-based alternating step generator; LFSR-based key stream generators; MP3 coded audio data; alternating step generator; audio coding system; audio data frames; audio quality; encoded audio data encryption technique; improved linear consistency attack; known plaintext attack; modified ASG; standard compatibility; stop-go LFSR; system complexity; Clocks; Complexity theory; Cryptography; Filtering; Linearity; Zinc (ID#: 15-4869)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6740185&isnumber=6740167
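A toy Python sketch of the plain alternating step generator follows, to fix the mechanism: a control LFSR decides which of two stop/go LFSRs is clocked, and the keystream bit is the XOR of their output bits. The register sizes, taps, and initial states are illustrative, and the paper's nonlinearity modification to the stop/go LFSRs is not included.

class LFSR:
    """Fibonacci LFSR over GF(2); taps are 0-based positions XORed for feedback."""
    def __init__(self, state, taps):
        self.state, self.taps = list(state), taps
    def clock(self):
        fb = 0
        for t in self.taps:
            fb ^= self.state[t]
        out = self.state.pop()        # output bit leaves at the end
        self.state.insert(0, fb)      # feedback bit enters at the front
        return out
    @property
    def output(self):
        return self.state[-1]

def asg_keystream(ctrl, a, b, n):
    """Alternating step generator: the control LFSR decides which stop/go LFSR
    is clocked; the keystream bit is the XOR of the two output bits."""
    ks = []
    for _ in range(n):
        if ctrl.clock():
            a.clock()
        else:
            b.clock()
        ks.append(a.output ^ b.output)
    return ks

# Toy parameters (real designs use long primitive LFSRs with coprime periods).
ctrl = LFSR([1, 0, 1, 1, 0], taps=[0, 2])
a    = LFSR([1, 1, 0, 1],    taps=[0, 3])
b    = LFSR([0, 1, 1],       taps=[0, 2])
keystream = asg_keystream(ctrl, a, b, 16)
# Encrypt selected bits of an encoded audio frame by XOR with the keystream.
frame = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
cipher = [f ^ k for f, k in zip(frame, keystream)]
print(cipher)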
Pujari, V.G.; Khot, S.R.; Mane, K.T., "Enhanced Visual Cryptography Scheme for Secret Image Retrieval Using Average Filter," Wireless Computing and Networking (GCWCN), 2014 IEEE Global Conference on, vol., no., pp. 88, 91, 22-24 Dec. 2014. doi:10.1109/GCWCN.2014.7030854
Abstract: Visual cryptography is an emerging technology that has been used for sending secret images in a highly secure manner without performing complex operations during encoding. This technology can be used in many fields, such as transferring military data, scanned financial documents, sensitive image data, and so on. In the literature, different methods are used for black-and-white images and produce good results, but for color images the quality of the decoded secret image is poor. This paper proposes a system that increases the quality of the decoded color image. In this system, the sender takes one secret image, which is encoded into n share images using Jarvis halftoning and an encoding table. For decoding, the share images are used with a decoding table to recover the original secret image. An average filter is applied to decrease the noise introduced during the encoding operation, so that the quality of the decoded secret image is increased. The results are analyzed using various image quality parameters such as MSE, PSNR, SC, and NAE, and are better than those of previous systems in the literature.
Keywords: cryptography; filtering theory; image coding; image colour analysis; image denoising; image retrieval; Jarvis halftoning; average filter; black image; color decoded image quality; decoded secret image quality; encoding table; enhanced visual cryptography scheme; image quality analysis parameters; secret image retrieval; white image; Cryptography; Decoding; Image coding; Image color analysis; Image quality; Noise; Visualization; Average filter; Color halftoning; Decoding; Encoding; Jarvis error diffusion; Security; Visual cryptography (ID#: 15-4870)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7030854&isnumber=7030833
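The share-generation step can be illustrated with the classic (2,2) visual cryptography construction in Python. This is a toy for binary images; the paper's scheme uses Jarvis halftoning, encoding/decoding tables for color images, and a post-decoding average filter, none of which are reproduced here.

import secrets

# (2,2) visual cryptography for a binary image: each secret pixel expands to a
# 2-subpixel pair in each share; identical pairs reconstruct white, complementary
# pairs reconstruct black when the shares are stacked (OR-ed).
PATTERNS = [(0, 1), (1, 0)]   # 1 = printed (dark) subpixel

def make_shares(secret_row):
    s1, s2 = [], []
    for px in secret_row:                       # px: 0 = white, 1 = black
        p = PATTERNS[secrets.randbelow(2)]      # random pattern hides the pixel
        q = p if px == 0 else tuple(1 - b for b in p)
        s1.extend(p)
        s2.extend(q)
    return s1, s2

def stack(s1, s2):
    """Stacking transparencies = pixelwise OR; black pixels become fully dark."""
    return [a | b for a, b in zip(s1, s2)]

row = [0, 1, 1, 0]
s1, s2 = make_shares(row)
# Each share alone is a uniformly random pattern, so it leaks nothing;
# stacked, white pixels show one dark subpixel and black pixels show two.
print(stack(s1, s2))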
Wentao Huang; Ho, T.; Langberg, M.; Kliewer, J., "Reverse Edge Cut-Set Bounds for Secure Network Coding," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 106, 110, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874804
Abstract: We consider the problem of secure communication over a network in the presence of wiretappers. We give a new cut-set bound on secrecy capacity which takes into account the contribution of both forward and backward edges crossing the cut, and the connectivity between their endpoints in the rest of the network. We show the bound is tight on a class of networks, which demonstrates that it is not possible to find a tighter bound by considering only cut-set edges and their connectivity.
Keywords: network coding; telecommunication security; cut-set edges; reverse edge cut-set bounds; secrecy capacity; secure communication; secure network coding; wiretappers; Delays; Entropy; Mutual information; Network coding; Unicast; Upper bound (ID#: 15-4871)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874804&isnumber=6874773
Kosut, O.; Lang Tong; Tse, D.N.C., "Polytope Codes Against Adversaries in Networks," Information Theory, IEEE Transactions on, vol. 60, no. 6, pp. 3308, 3344, June 2014. doi:10.1109/TIT.2014.2314642
Abstract: This paper investigates a network coding problem wherein an adversary controls a subset of nodes in the network of limited quantity but unknown location. This problem is shown to be more difficult than that of an adversary controlling a given number of edges in the network, in that linear codes are insufficient. To solve the node problem, the class of polytope codes is introduced. Polytope codes are constant composition codes operating over bounded polytopes in integer vector fields. The polytope structure creates additional complexity, but it induces properties on marginal distributions of code vectors so that validities of codewords can be checked by internal nodes of the network. It is shown that polytope codes achieve a cut-set bound for a class of planar networks. It is also shown that this cut-set bound is not always tight, and a tighter bound is given for an example network.
Keywords: linear codes; network coding; adversary controlling; adversary controls; code vectors; codewords; constant composition codes; integer vector fields; internal nodes; linear codes; network adversaries; network coding problem; polytope codes; Educational institutions; Linear codes; Network coding; Upper bound; Vectors; Xenon; Active adversaries; Byzantine attack; network coding; network error correction; nonlinear codes; polytope codes; security (ID#: 15-4872)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781646&isnumber=6816018
Xiang He; Yener, A., "Providing Secrecy with Structured Codes: Two-User Gaussian Channels," Information Theory, IEEE Transactions on, vol. 60, no. 4, pp. 2121, 2138, April 2014. doi:10.1109/TIT.2014.2298132
Abstract: Recent results have shown that structured codes can be used to construct good channel codes, source codes, and physical layer network codes for Gaussian channels. For Gaussian channels with secrecy constraints, however, efforts to date rely on Gaussian random codes. In this paper, we advocate that structure in random code generation is useful for providing secrecy as well. In particular, a Gaussian wiretap channel in the presence of a cooperative jammer is studied. Previously, the achievable secrecy rate for this channel was derived using Gaussian signaling, which saturated at high signal-to-noise ratio (SNR), owing to the fact that the cooperative jammer simultaneously helps by interfering with the eavesdropper, and hurts by interfering with the intended receiver. In this paper, a new achievable rate is derived through imposing a lattice structure on the signals transmitted by both the source and the cooperative jammer, which are aligned at the eavesdropper but remain separable at the intended receiver. We prove that the achieved secrecy rate does not saturate at high SNR for all values of channel gains except when the channel is degraded.
Keywords: Gaussian channels; cooperative communication; jamming; random codes; telecommunication security; Gaussian channels; Gaussian random codes; Gaussian signaling; Gaussian wiretap channel; channel codes; cooperative jammer; eavesdropper; lattice structure; physical layer network codes; random code generation; secrecy constraints; source codes; structured codes; Channel models; Encoding; Jamming; Lattices; Receivers; Transmitters; Vectors; Gaussian wiretap channels; Information theoretic secrecy; cooperative jamming; lattice codes (ID#: 15-4873)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6702446&isnumber=6766686
Boche, H.; Schaefer, R.F.; Poor, H.V., "On Arbitrarily Varying Wiretap Channels for Different Classes of Secrecy Measures," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 2376, 2380, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6875259
Abstract: The wiretap channel models secure communication in the presence of an eavesdropper who must be kept ignorant of transmitted messages. In this paper, the arbitrarily varying wiretap channel (AVWC), in which the channel may vary in an unknown and arbitrary manner from channel use to channel use, is considered. For arbitrarily varying channels (AVCs) the capacity might differ depending on whether deterministic or common randomness (CR) assisted codes are used. The AVWC has been studied for both coding strategies and the relation between the corresponding secrecy capacities has been established. However, a characterization of the CR-assisted secrecy capacity itself or even a general CR-assisted achievable secrecy rate remain open in general for weak and strong secrecy. Here, the secrecy measure of high decoding error at the eavesdropper is considered, where the eavesdropper is further assumed to know channel states and to adapt its decoding strategy accordingly. For this secrecy measure a general CR-assisted achievable secrecy rate is established. The relation between secrecy capacities for different secrecy measures is discussed: The weak and strong secrecy capacities are smaller than or equal to the one for high decoding error. It is conjectured that this relation can be strict for certain channels.
Keywords: channel coding; decoding; telecommunication security; AVWC; CR-assisted achievable secrecy rate; CR-assisted secrecy capacity; arbitrarily varying wiretap channels; common randomness assisted codes; decoding error; secrecy measures; secure communication; Compounds; Decoding; Measurement uncertainty; Robustness; Security; Tin (ID#: 15-4874)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875259&isnumber=6874773
Fan Cheng; Yeung, R.W., "Performance Bounds on a Wiretap Network with Arbitrary Wiretap Sets," Information Theory, IEEE Transactions on, vol. 60, no. 6, pp. 3345, 3358, June 2014. doi:10.1109/TIT.2014.2315821
Abstract: Consider a communication network represented by a directed graph G = (V, ε), where V is the set of nodes and ε is the set of point-to-point channels in the network. On the network, a secure message M is transmitted, and there may exist wiretappers who want to obtain information about the message. In secure network coding, we aim to find a network code, which can protect the message against the wiretapper whose power is constrained. Cai and Yeung studied the model in which the wiretapper can access any one but not more than one set of channels, called a wiretap set, out of a collection A of all possible wiretap sets. In order to protect the message, the message needs to be mixed with a random key K. They proved tight fundamental performance bounds when A consists of all subsets of ε of a fixed size r. However, beyond this special case, obtaining such bounds is much more difficult. In this paper, we investigate the problem when A consists of arbitrary subsets of ε and obtain the following results: 1) an upper bound on H(M) and 2) a lower bound on H(K) in terms of H(M). The upper bound on H(M) is explicit, while the lower bound on H(K) can be computed in polynomial time when |A| is fixed. The tightness of the lower bound for the point-to-point communication system is also proved.
Keywords: network coding; polynomials; radio networks; telecommunication security; Cai; Yeung; arbitrary wiretap sets; communication network; network code; performance bounds; point-to-point channels; polynomial time; random key; secure message; secure network coding; wiretap network; wiretapper; Cryptography; Encoding; Entropy; Network coding; Receivers; Upper bound; Information inequality; perfect secrecy; performance bounds; secure network coding (ID#: 15-4875)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6783737&isnumber=6816018
Bin Duo; Peng Wang; Yonghui Li; Vucetic, B., "Secure Transmission for Relay-Eavesdropper Channels Using Polar Coding," Communications (ICC), 2014 IEEE International Conference on, vol., no., pp. 2197, 2202, 10-14 June 2014. doi:10.1109/ICC.2014.6883649
Abstract: In this paper, we propose a practical transmission scheme using polar coding for the half-duplex degraded relay-eavesdropper channel. We prove that the proposed scheme can achieve the maximum perfect secrecy rate under the decode-and-forward (DF) strategy. Our proposed scheme provides an approach for ensuring both reliable and secure transmission over the relay-eavesdropper channel while enjoying practically feasible encoding/decoding complexity.
Keywords: channel coding; decode and forward communication; decoding; reliability; telecommunication security; wireless channels; DF strategy; decode and forward strategy; decoding complexity; half-duplex degraded relay eavesdropper channel; polar coding; reliable transmission; secure transmission; Complexity theory; Decoding; Encoding; Relays ;Reliability; Variable speed drives; Vectors (ID#: 15-4876)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883649&isnumber=6883277
Loyka, S.; Charalambous, C.D., "Rank-Deficient Solutions for Optimal Signaling over Secure MIMO Channels," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 201, 205, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874823
Abstract: Capacity-achieving signaling strategies for the Gaussian wiretap MIMO channel are investigated without the degradedness assumption. In addition to known solutions, a number of new rank-deficient solutions for the optimal transmit covariance matrix are obtained. The case of weak eavesdropper is considered in details and the optimal covariance is established in an explicit, closed-form with no extra assumptions. The conditions for optimality of zero-forcing signaling are established, and the standard water-filling is shown to be optimal under those conditions. No wiretap codes are needed in this case. The case of identical right singular vectors for the required and eavesdropper channels is studied and the optimal covariance is established in an explicit closed form. As a by-product of this analysis, we establish a generalization of celebrated Hadamard determinantal inequality using information-theoretic tools.
Keywords: Gaussian channels; MIMO communication; covariance matrices; telecommunication security; telecommunication signalling; Gaussian wiretap MIMO channel; capacity-achieving signaling strategies; celebrated Hadamard determinantal inequality; eavesdropper channels; identical right singular vectors; information-theoretic tools; optimal covariance; optimal signaling; optimal transmit covariance matrix; rank-deficient solutions; secure MIMO channels; standard water-filling; weak eavesdropper; wiretap codes; zero-forcing signaling; Approximation methods; Covariance matrices; Information theory; MIMO; Signal to noise ratio; Standards; Vectors (ID#: 15-4877)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874823&isnumber=6874773
Cong Ling; Luzzi, L.; Belfiore, J.-C.; Stehle, D., "Semantically Secure Lattice Codes for the Gaussian Wiretap Channel," Information Theory, IEEE Transactions on, vol. 60, no. 10, pp. 6399, 6416, Oct. 2014. doi:10.1109/TIT.2014.2343226
Abstract: We propose a new scheme of wiretap lattice coding that achieves semantic security and strong secrecy over the Gaussian wiretap channel. The key tool in our security proof is the flatness factor, which characterizes the convergence of the conditional output distributions corresponding to different messages and leads to an upper bound on the information leakage. We not only introduce the notion of secrecy-good lattices, but also propose the flatness factor as a design criterion of such lattices. Both the modulo-lattice Gaussian channel and genuine Gaussian channel are considered. In the latter case, we propose a novel secrecy coding scheme based on the discrete Gaussian distribution over a lattice, which achieves the secrecy capacity to within a half nat under mild conditions. No a priori distribution of the message is assumed, and no dither is used in our proposed schemes.
Keywords: Gaussian channels; codes; telecommunication security; Gaussian wiretap channel; conditional output distribution; discrete Gaussian distribution; flatness factor; genuine Gaussian channel; information leakage; modulo lattice Gaussian channel; secrecy coding; secrecy good lattice; semantically secure lattice codes; wiretap lattice coding; Encoding; Gaussian distribution; Lattices; Mutual information; Security; Semantics; Zinc; Lattice coding; information theoretic security; semantic security; strong secrecy; wiretap channel (ID#: 15-4878)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866169&isnumber=6895347
Mirghasemi, H.; Belfiore, J.-C., "The Semantic Secrecy Rate of the Lattice Gaussian Coding for the Gaussian Wiretap Channel," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 112, 116, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970803
Abstract: In this paper, we investigate the achievable semantic secrecy rate of existing lattice coding schemes, proposed in [6], for both the mod-Λ Gaussian wiretap and the Gaussian wiretap channels. For both channels, we propose new upper bounds on the amount of leaked information which provide milder sufficient conditions to achieve semantic secrecy. These upper bounds show that the lattice coding schemes in [6] can achieve the secrecy capacity to within ½ ln(e/2) nat for the mod-Λ Gaussian and to within ½(1 − ln(1 + SNRe/(SNRe + 1))) nat for the Gaussian wiretap channels, where SNRe is the signal-to-noise ratio of Eve.
Keywords: Gaussian channels; channel capacity; data privacy; wireless channels; Gaussian wiretap channels; SNRe; lattice coding schemes; mod-Λ Gaussian wiretap; secrecy capacity; semantic secrecy rate; signal-to-noise ratio of Eve; Encoding; Gaussian distribution; Lattices; Security; Semantics; Upper bound; Zinc (ID#: 15-4879)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970803&isnumber=6970773
Mirghasemi, H.; Belfiore, J.-C., "The Un-Polarized Bit-Channels in the Wiretap Polar Coding Scheme," Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE), 2014 4th International Conference on, vol., no., pp. 1, 5, 11-14 May 2014. doi:10.1109/VITAE.2014.6934465
Abstract: Polar coding theorems state that as the number of channel uses, n, tends to infinity, the fraction of un-polarized bit-channels (the bit-channels whose Z parameters are in the interval (δ(n), 1 − δ(n))) tends to zero. Consider two BEC channels W(z1) and W(z2). Motivated by the polar coding scheme proposed for the wiretap channel, we investigate the number of bit-channels which are simultaneously un-polarized for both W(z1) and W(z2). We show that for finite values of n, there is a considerable regime of (z1, z2) where the set of (jointly) un-polarized bit-channels is empty. We also show that for γ ≤ 1/2 and δ(n) = 2^(−n^γ), the number of un-polarized bit-channels is lower bounded by 2^(γ log(n)).
Keywords: encoding; security of data; Z-parameter; channel use number; unpolarized bit channel; wiretap channel; wiretap polar coding scheme; Decoding; Encoding; Mutual information; Noise measurement; Reliability; Security; Vectors (ID#: 15-4880)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934465&isnumber=6934393
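For the binary erasure channel the polarization recursion is exact, which makes the abstract's quantities easy to compute. The following sketch counts the bit-channels that remain un-polarized for two example BECs W(z1) and W(z2) simultaneously; the values z1, z2, γ, and the block length are hypothetical.

def bec_bit_channels(z, levels):
    # For a BEC, polarization is exact: erasure rate z splits into
    # 2z - z^2 (worse channel) and z^2 (better channel) at each level.
    zs = [z]
    for _ in range(levels):
        zs = [f(x) for x in zs for f in (lambda t: 2 * t - t * t,
                                         lambda t: t * t)]
    return zs

levels = 10
n = 2 ** levels                      # block length
gamma = 0.4                          # gamma <= 1/2, as in the abstract
delta = 2 ** (-(n ** gamma))         # delta(n) = 2^(-n^gamma)

z1, z2 = 0.3, 0.6                    # two example BECs W(z1), W(z2)
c1 = bec_bit_channels(z1, levels)
c2 = bec_bit_channels(z2, levels)

def unpolarized(z):
    return delta < z < 1 - delta

both = sum(1 for a, b in zip(c1, c2) if unpolarized(a) and unpolarized(b))
print(f"{both} of {n} bit-channels are un-polarized for both channels")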
Tomamichel, M.; Martinez-Mateo, J.; Pacher, C.; Elkouss, D., "Fundamental Finite Key Limits for Information Reconciliation in Quantum Key Distribution," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 1469, 1473, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6875077
Abstract: The security of quantum key distribution protocols is guaranteed by the laws of quantum mechanics. However, a precise analysis of the security properties requires tools from both classical cryptography and information theory. Here, we employ recent results in non-asymptotic classical information theory to show that information reconciliation imposes fundamental limitations on the amount of secret key that can be extracted in the finite key regime. In particular, we find that an often used approximation for the information leakage during one-way information reconciliation is flawed and we propose an improved estimate.
Keywords: cryptographic protocols; information theory; private key cryptography; quantum cryptography; quantum theory; QKD protocols; classical cryptography; fundamental finite key limits; information reconciliation; nonasymptotic classical information theory; quantum key distribution protocols; quantum mechanics security; secret key; Approximation methods; Error analysis; Parity check codes; Protocols; Quantum mechanics; Security (ID#: 15-4881)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875077&isnumber=6874773
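The approximation the authors criticize is the widely used first-order leakage estimate f · n · h(Q). The sketch below contrasts it with a second-order (normal-approximation) estimate of the general shape such finite-key corrections take; the paper's exact bound differs, and the values n, Q, f, and ε here are hypothetical.

import math
from statistics import NormalDist

def h2(q):
    # binary entropy in bits
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def v(q):
    # dispersion of a BSC(q), in bits^2
    return q * (1 - q) * math.log2((1 - q) / q) ** 2

n, Q, f, eps = 10_000, 0.05, 1.1, 1e-10
first_order = f * n * h2(Q)
second_order = n * h2(Q) + math.sqrt(n * v(Q)) * NormalDist().inv_cdf(1 - eps)
print(f"f*n*h(Q)            = {first_order:.0f} bits leaked")
print(f"with 2nd-order term = {second_order:.0f} bits leaked")

For these hypothetical numbers the second-order estimate exceeds f · n · h(Q), illustrating the abstract's point that the constant-efficiency approximation can understate the leakage in the finite-key regime.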
Merhav, N., "Exact Correct-Decoding Exponent of the Wiretap Channel Decoder," Information Theory, IEEE Transactions on, vol. 60, no. 12, pp. 7606, 7615, Dec. 2014. doi:10.1109/TIT.2014.2361765
Abstract: The performance of the achievability scheme for Wyner's wiretap channel model is examined from the perspective of the probability of correct decoding, Pc, at the wiretap channel decoder. In particular, for finite-alphabet memoryless channels, the exact random coding exponent of Pc is derived as a function of the total coding rate R1 and the rate of each subcode R2. Two different representations are given for this function and its basic properties are provided. We also characterize the region of pairs of rates (R1, R2) of full security in the sense of the random coding exponent of Pc, in other words, the region where the exponent of this achievability scheme is the same as that of blind guessing at the eavesdropper side. Finally, an analogous derivation of the correct-decoding exponent is outlined for the case of the Gaussian channel.
Keywords: Gaussian channels; channel coding; decoding; probability; random codes; Gaussian channel; Wyner wiretap channel model; blind guessing; correct decoding probability; finite-alphabet memoryless channels; random coding exponent; Decoding; Encoding; Random variables; Receivers; Reliability; Security; Vectors; Wiretap channel; information-theoretic security; random coding exponent; secrecy (ID#: 15-4882)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6918525&isnumber=6960944
Yingbin Liang; Lifeng Lai; Poor, H.V.; Shamai, S., "A Broadcast Approach for Fading Wiretap Channels," Information Theory, IEEE Transactions on, vol. 60, no. 2, pp. 842, 858, Feb. 2014. doi:10.1109/TIT.2013.2293756
Abstract: A (layered) broadcast approach is studied for the fading wiretap channel without the channel state information (CSI) at the transmitter. Two broadcast schemes, based on superposition coding and embedded coding, respectively, are developed to encode information into a number of layers and use stochastic encoding to keep the corresponding information secret from an eavesdropper. The layers that can be successfully and securely transmitted are determined by the channel states to the legitimate receiver and the eavesdropper. The advantage of these broadcast approaches is that the transmitter does not need to know the CSI to the legitimate receiver and the eavesdropper, but the scheme still adapts to the channel states of the legitimate receiver and the eavesdropper. Three scenarios of block fading wiretap channels with stringent delay constraints are studied, in which either the legitimate receiver's channel, the eavesdropper's channel, or both channels are fading. For each scenario, the secrecy rate that can be achieved via the broadcast approach developed in this paper is derived, and the optimal power allocation over the layers (or the conditions on the optimal power allocation) is also characterized. A notion of probabilistic secrecy, which characterizes the probability that a certain secrecy rate of decoded messages is achieved during one block, is also introduced and studied for scenarios when the eavesdropper's channel is fading. Numerical examples are provided to demonstrate the impact of the CSI at the transmitter and the channel fluctuations of the eavesdropper on the average secrecy rate. These examples also demonstrate the advantage of the proposed broadcast approach over the compound channel approach.
Keywords: broadcast channels; decoding; embedded systems; encoding; fading channels; radio receivers; radio transmitters; resource allocation; channel fluctuations; channel states; decoded messages; eavesdropper channel; embedded coding; fading wiretap channels; layered broadcast approach; legitimate receiver; optimal power allocation; receiver channel; stochastic encoding; superposition coding; transmitter; Encoding; Fading; Indexes; Receivers; Resource management; Security; Transmitters; Channel state information; fading channel; layered broadcast approach; secrecy rate; wiretap channel (ID#: 15-4883)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6687232&isnumber=6714461
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Coding Theory and Security, 2014 Part 3 |
Coding theory is one of the essential pieces of information theory. More importantly, coding theory is a core element in cryptography. The research work cited here looks at signal processing, crowdsourcing, matroid theory, WOM codes, and NP-hard problems. These works were presented or published in 2014.
Xuan Guang; Jiyong Lu; Fang-Wei Fu, "Locality-Preserving Secure Network Coding," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 396, 400, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970861
Abstract: In the paradigm of network coding, when wiretapping attacks occur, secure network coding is introduced to prevent information from leaking to adversaries. In practical network communications, the source often multicasts messages at several different rates within a session. How to deal with information transmission and information security simultaneously under variable rates and a fixed security level is introduced in this paper as the variable-rate, fixed-security-level secure network coding problem. In order to solve this problem effectively, we propose the concept of locality-preserving secure linear network codes of different rates and fixed security level, which have the same local encoding kernel at each internal node. We further present an approach to construct such a family of secure linear network codes and give an algorithm for efficient implementation. This approach saves storage space at both the source node and the internal nodes, as well as network resources and time. Finally, the performance of the proposed algorithm is analyzed, including the field size and the computational and storage complexities.
Keywords: linear codes; network coding; telecommunication security; variable rate codes; fixed-security-level secure network coding problem; information security; information transmission; internal nodes; local encoding kernel; locality-preserving secure linear network codes; source node; variable-rate secure network coding problem; wiretapping attacks; Complexity theory; Decoding; Encoding; Information rates; Kernel; Network coding; Vectors (ID#: 15-4884)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970861&isnumber=6970773
Watanabe, S.; Oohama, Y., "Cognitive Interference Channels with Confidential Messages Under Randomness Constraint," Information Theory, IEEE Transactions on, vol. 60, no. 12, pp. 7698, 7707, Dec. 2014. doi:10.1109/TIT.2014.2360683
Abstract: The cognitive interference channel with confidential messages (CICC) proposed by Liang et al. is investigated. When security is considered in coding systems, it is well known that the sender needs to use stochastic encoding to prevent information about the transmitted confidential message from being leaked to an eavesdropper. For the CICC, the tradeoff between the rate of the random number used to realize the stochastic encoding and the communication rates is investigated, and the optimal tradeoff is completely characterized.
Keywords: Decoding; Encoding; Interference channels; Random variables; Receivers; Security; Tin; cognitive interference channel; confidential messages; randomness constraint; stochastic encoder; superposition coding (ID#: 15-4885)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6928480&isnumber=6960944
Muramatsu, J., "General Formula for Secrecy Capacity of Wiretap Channel with Noncausal State," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 21, 25, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874787
Abstract: The coding problem for a wiretap channel with a noncausal state is investigated, where the problem includes the coding problem for a channel with a noncausal state, which is known as the Gel'fand-Pinsker problem, and the coding problem for a wiretap channel introduced by Wyner. The secrecy capacity for this channel is derived, where an optimal code is constructed based on the hash property and a constrained-random-number generator. Since an ensemble of sparse matrices has a hash property, the rate of the proposed code using a sparse matrix can achieve the secrecy capacity.
Keywords: encoding; random number generation; security of data; Gel'fand-Pinsker problem; coding problem; constrained random number generator; hash property; noncausal state; optimal code; secrecy capacity; wiretap channel; Decoding; Encoding; Manganese; Random variables; Tin; Zinc (ID#: 15-4886)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874787&isnumber=6874773
Yardi, A.D.; Kumar, A.; Vijayakumaran, S., "Channel-Code Detection by a Third-Party Receiver via the Likelihood Ratio Test," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 1051, 1055, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874993
Abstract: Channel codebook detection is of interest in cognitive paradigms and security applications. A binary hypothesis testing problem is considered, where a receiver has to detect the channel-code from two possible choices upon observing noise-affected codewords through a communication channel. For analytical tractability, it is assumed that the two channel-codes are linear block codes with identical block-length. This work provides the first study of the likelihood ratio test for minimizing the error probability in this detection problem. In an asymptotic setting, where a large number of noise-affected codewords are available for detection, the Chernoff information characterizes the error probability. A lower bound on the Chernoff information, based on the parameters of the two hypotheses, is established. Further, it is shown that if likelihood-based efficient (generalized distributive law or BCJR) bit-decoding algorithms are available for the two codes, then the likelihood ratio test for the code-detection problem can be performed in a computationally feasible manner.
Keywords: block codes; channel coding; cognitive radio; error statistics; linear codes; statistical analysis; telecommunication security; BCJR; Chernoff information; analytical tractability; binary hypothesis testing problem; bit-decoding algorithms; channel codebook detection; cognitive paradigm; communication channel; error probability minimization; generalized distributive law; identical block-length; likelihood ratio test; linear block codes; noise-affected codewords; security application; third-party receiver; Block codes; Error probability; Noise; Receivers; Testing; Vectors (ID#: 15-4887)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874993&isnumber=6874773
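A brute-force version of this test is straightforward for toy codes. The sketch below draws noisy codewords from one of two hypothetical [4, 2] binary codes through a BSC(p) and applies the likelihood ratio test by averaging the channel likelihood over each (equiprobable) codebook; the efficient BCJR-based variant from the paper is not attempted.

import itertools, math, random

def codebook(G):
    # All codewords of the binary linear code with generator matrix G.
    k = len(G)
    words = set()
    for msg in itertools.product((0, 1), repeat=k):
        words.add(tuple(sum(m * g for m, g in zip(msg, col)) % 2
                        for col in zip(*G)))
    return sorted(words)

def avg_loglik(y, code, p):
    # log of the BSC(p) likelihood averaged over equiprobable codewords
    tot = 0.0
    for c in code:
        d = sum(a != b for a, b in zip(y, c))
        tot += p ** d * (1 - p) ** (len(y) - d)
    return math.log(tot / len(code))

G1 = [[1, 0, 1, 1], [0, 1, 0, 1]]
G2 = [[1, 0, 1, 0], [0, 1, 1, 1]]
C1, C2 = codebook(G1), codebook(G2)

random.seed(1)
p = 0.05
obs = []
for _ in range(50):                        # noisy codewords, truly from C1
    c = random.choice(C1)
    obs.append(tuple(b ^ (random.random() < p) for b in c))

llr = sum(avg_loglik(y, C1, p) - avg_loglik(y, C2, p) for y in obs)
print("decide C1" if llr > 0 else "decide C2", f"(LLR = {llr:.2f})")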
Villard, J.; Piantanida, P.; Shamai, S., "Secure Transmission of Sources over Noisy Channels with Side Information at the Receivers," Information Theory, IEEE Transactions on, vol. 60, no. 1, pp. 713, 739, Jan. 2014. doi:10.1109/TIT.2013.2288256
Abstract: This paper investigates the problem of source-channel coding for secure transmission with arbitrarily correlated side informations at both receivers. This scenario consists of an encoder (referred to as Alice) that wishes to compress a source and send it through a noisy channel to a legitimate receiver (referred to as Bob). In this context, Alice must simultaneously satisfy the desired requirements on the distortion level at Bob and the equivocation rate at the eavesdropper (referred to as Eve). This setting can be seen as a generalization of the problems of secure source coding with (uncoded) side information at the decoders and the wiretap channel. A general outer bound on the rate-distortion-equivocation region, as well as an inner bound based on a pure digital scheme, is derived for arbitrary channels and side informations. In some special cases of interest, it is proved that this digital scheme is optimal and that separation holds. However, it is also shown through a simple counterexample with a binary source that a pure analog scheme can outperform the digital one while being optimal. According to these observations and assuming matched bandwidth, a novel hybrid digital/analog scheme that aims to gather the advantages of both digital and analog ones is then presented. In the quadratic Gaussian setup when side information is only present at the eavesdropper, this strategy is proved to be optimal. Furthermore, it outperforms both digital and analog schemes and cannot be achieved via time-sharing. Through an appropriate coding, the presence of any statistical difference among the side informations, the channel noises, and the distortion at Bob can be fully exploited in terms of secrecy.
Keywords: Gaussian channels; combined source-channel coding; receivers; telecommunication security; binary source; channel noise; eavesdropper; encoder; hybrid digital-analog scheme; quadratic Gaussian setup; rate-distortion-equivocation region; receiver; source coding security; source-channel coding; statistical difference; time-sharing; transmission security; wiretap channel; Channel coding; Decoding; Noise measurement; Radio frequency; Random variables; Source coding; Combined source-channel coding; Gaussian channels; information security; rate-distortion; side information (ID#: 15-4888)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6651774&isnumber=6690264
Muxi Yan; Sprintson, A.; Zelenko, I., "Weakly Secure Data Exchange with Generalized Reed Solomon Codes," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 1366, 1370, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6875056
Abstract: We focus on secure data exchange among a group of wireless clients. The clients exchange data by broadcasting linear combinations of packets over a lossless channel. The data exchange is performed in the presence of an eavesdropper who has access to the channel and can obtain all transmitted data. Our goal is to develop a weakly secure coding scheme that prevents the eavesdropper from being able to decode any of the original packets held by the clients. We present a randomized algorithm based on Generalized Reed-Solomon (GRS) codes. The algorithm has two key advantages over the previous solutions: it operates over a small (polynomial-size) finite field and provides a way to verify that constructed code is feasible. In contrast, the previous approaches require exponential field size and do not provide an efficient (polynomial-time) algorithm to verify the secrecy properties of the constructed code. We formulate an algebraic-geometric conjecture that implies the correctness of our algorithm and prove its validity for special cases. Our simulation results indicate that the algorithm is efficient in practical settings.
Keywords: Reed-Solomon codes; algebra; broadcast channels; electronic data interchange; geometry; security of data; telecommunication security; wireless channels; GRS codes; algebraic-geometric conjecture; eavesdropper prevention; exponential field size; finite field; generalized Reed Solomon codes; lossless broadcast channel; weakly secure coding scheme; weakly secure data exchange problem; wireless clients; Encoding; Network coding; Polynomials; Reed-Solomon codes; Silicon; Vectors (ID#: 15-4889)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875056&isnumber=6874773
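The GRS building block is simple to write down. The following sketch encodes a message as scaled polynomial evaluations over a small prime field; the field size, evaluation points, and column multipliers are hypothetical, whereas the paper's algorithm additionally selects these parameters (and verifies feasibility) so that the weak-secrecy condition holds.

P = 13                                     # a small prime field GF(13)

def grs_encode(msg, alphas, vs):
    # Codeword symbols are v_i * f(alpha_i) for the message polynomial f.
    def f(x):
        acc = 0
        for c in reversed(msg):            # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(v * f(a)) % P for a, v in zip(alphas, vs)]

alphas = [1, 2, 3, 4, 5, 6]                # distinct evaluation points
vs     = [1, 1, 2, 3, 1, 5]                # nonzero column multipliers
print(grs_encode([7, 0, 4], alphas, vs))   # k = 3 symbols -> n = 6 symbols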
Geil, O.; Martin, S.; Matsumoto, R.; Ruano, D.; Yuan Luo, "Relative Generalized Hamming Weights of One-Point Algebraic Geometric Codes," Information Theory, IEEE Transactions on, vol. 60, no. 10, pp. 5938, 5949, Oct. 2014. doi:10.1109/TIT.2014.2345375
Abstract: Security of linear ramp secret sharing schemes can be characterized by the relative generalized Hamming weights of the involved codes. In this paper, we elaborate on the implications of these parameters and devise a method to estimate their value for general one-point algebraic geometric codes. As is demonstrated, for Hermitian codes our bound is often tight. Furthermore, for these codes, the relative generalized Hamming weights are often much larger than the corresponding generalized Hamming weights.
Keywords: Hamming codes; algebraic geometric codes; cryptography; Hermitian codes; cryptographic method; general one-point algebraic geometric codes; linear ramp secret sharing schemes; relative generalized Hamming weights; Cryptography; Electronic mail; Hamming weight; Linear codes; Materials; Random variables; Vectors; Feng-Rao bound; Hermitian code; Linear code; one-point algebraic geometric code; relative dimension/length profile; relative generalized Hamming weight; secret sharing; wiretap channel of type II (ID#: 15-4890)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6871379&isnumber=6895347
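For very small codes the relative generalized Hamming weights can be computed by exhaustive search, which makes the definition concrete. The sketch below does this over GF(2) for a hypothetical pair C2 < C1 of binary codes (the paper works with q-ary one-point algebraic geometric codes; binary generator matrices are used here only to keep the search tiny).

import itertools

def span(vectors):
    # All GF(2) linear combinations of the given vectors.
    vecs = {tuple(0 for _ in vectors[0])}
    for v in vectors:
        vecs |= {tuple(a ^ b for a, b in zip(v, w)) for w in vecs}
    return vecs

def codewords(G):
    return sorted(span([tuple(row) for row in G]))

def rghw(G1, G2, r):
    # M_r(C1, C2): min support size of an r-dim subcode D of C1
    # intersecting C2 trivially.
    C1, C2 = codewords(G1), set(codewords(G2))
    nonzero = [c for c in C1 if any(c)]
    best = None
    for D in itertools.combinations(nonzero, r):
        S = span(list(D))
        if len(S) != 2 ** r:                   # chosen words are dependent
            continue
        if any(s in C2 for s in S if any(s)):  # D meets C2 nontrivially
            continue
        support = sum(1 for col in zip(*S) if any(col))
        best = support if best is None else min(best, support)
    return best

G1 = [[1, 0, 0, 1, 1, 0],
      [0, 1, 0, 1, 0, 1],
      [0, 0, 1, 0, 1, 1]]
G2 = [[1, 1, 1, 0, 0, 0]]    # sum of the rows of G1, so C2 is a subcode of C1
for r in (1, 2):
    print(f"M_{r}(C1, C2) =", rghw(G1, G2, r))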
Tyagi, H.; Vardy, A., "Explicit Capacity-Achieving Coding Scheme for the Gaussian Wiretap Channel," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 956, 960, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874974
Abstract: We extend the Bellare-Tessaro coding scheme for a discrete, degraded, symmetric wiretap channel to a Gaussian wiretap channel. Denoting by SNR the signal-to-noise ratio of the eavesdropper's channel, the proposed scheme converts a transmission code of rate R for the channel of the legitimate receiver into a code of rate R − 0.5·log(1 + SNR) for the Gaussian wiretap channel. The conversion has polynomial complexity in the codeword length, and the proposed scheme achieves strong security. In particular, when the underlying transmission code is capacity achieving, this scheme achieves the secrecy capacity of the Gaussian wiretap channel.
Keywords: Gaussian channels; channel capacity; channel coding; telecommunication security; Bellare-Tessaro coding; Gaussian wiretap channel; degraded wiretap channel; discrete wiretap channel; explicit capacity-achieving coding; secrecy capacity; symmetric wiretap channel; transmission code; Cryptography; Encoding; Receivers; Reliability; Zinc (ID#: 15-4891)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874974&isnumber=6874773
Matsumoto, R., "New Asymptotic Metrics for Relative Generalized Hamming Weight," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 3142, 3144, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6875413
Abstract: It was recently shown that RGHW (relative generalized Hamming weight) exactly expresses the security of linear ramp secret sharing scheme. In this paper we determine the true value of the asymptotic metric for RGHW previously proposed by Zhuang et al. in 2013. Then we propose new asymptotic metrics useful for investigating the optimal performance of linear ramp secret sharing scheme constructed from a pair of linear codes. We also determine the true values of the proposed metrics in many cases.
Keywords: Hamming codes; cryptography; linear codes; RGHW; asymptotic metrics; linear codes; linear ramp secret sharing scheme; relative generalized Hamming weight; Cryptography; Equations; Hamming weight; Information rates; Linear codes; Measurement (ID#: 15-4892)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875413&isnumber=6874773
Khisti, A.; Tie Liu, "Private Broadcasting over Independent Parallel Channels," Information Theory, IEEE Transactions on, vol. 60, no. 9, pp. 5173, 5187, Sept. 2014. doi:10.1109/TIT.2014.2332336
Abstract: We study broadcasting of two confidential messages to two groups of receivers over independent parallel subchannels. One group consists of an arbitrary number of receivers, interested in a common message, whereas the other group has only one receiver. Each message must be confidential from the receiver(s) in the other group. Each of the subchannels is assumed to be degraded in a certain fashion. While corner points of the capacity region of this setup were characterized in earlier works, we establish the complete capacity region, and show the optimality of a superposition coding technique. For Gaussian channels, we establish the optimality of a Gaussian input distribution by applying an extremal information inequality. By extending our coding scheme to block-fading channels, we demonstrate significant performance gains over a baseline time-sharing scheme.
Keywords: Gaussian channels; block codes; data privacy; fading channels; radio receivers; telecommunication security; wireless channels; Gaussian channels; Gaussian input distribution; baseline time sharing scheme; block fading channels; coding scheme; extremal information inequality; independent parallel channels; independent parallel subchannels; private broadcasting; receivers; superposition coding technique; Broadcasting; Channel models; Coherence; Encoding; Fading; Indexes; Receivers; Wiretap channel; parallel channels; private broadcasting; secrecy capacity; superposition coding (ID#: 15-4893)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841612&isnumber=6878505
Mirzaee, M.; Akhlaghi, S., "Maximizing the Minimum Achievable Secrecy Rate in a Two-User Gaussian Interference Channel," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp. 1, 5, 7 – 8 May 2014. doi:10.1109/IWCIT.2014.6842501
Abstract: This paper studies a two-user Gaussian interference channel in which two single-antenna sources aim at sending their confidential messages to the legitimate destinations such that each message should be kept confidential from the non-intended receiver. It is assumed that the direct channel gains are stronger than the interference channel gains and that the noise variances at the two destinations are equal. In this regard, under a Gaussian codebook assumption, the problem of secrecy rate balancing, which aims at finding the optimal power allocation policy at the sources so as to maximize the minimum achievable secrecy rate, is investigated, assuming each source is subject to a transmit power constraint. To this end, it is shown that at the optimal point the two secrecy rates are equal; hence, the problem reduces to maximizing the secrecy rate associated with one of the destinations while the other destination is restricted to have the same secrecy rate. Accordingly, the optimum secrecy rate associated with the investigated max-min problem is analytically derived, leading to the solution of the secrecy rate balancing problem.
Keywords: Gaussian channels; antennas; interference (signal); telecommunication security; Gaussian code book assumption; achievable secrecy rate; direct channel gains; interference channel gains; max-min problem; noise variances; nonintended receiver; optimal power allocation policy; secrecy rate balancing; single-antenna sources; transmit power constraint; two-user Gaussian interference channel; Array signal processing; Gain; Interference channels; Linear programming; Noise; Optimization; Transmitters; Achievable secrecy rate; Gaussian interference channel; Max-Min problem (ID#: 15-4894)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842501&isnumber=6842477
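The max-min structure is easy to probe numerically. The grid search below assumes textbook Gaussian-codebook secrecy-rate expressions with interference treated as noise at every terminal; those expressions, the channel gains, and the power budget are illustrative assumptions, whereas the paper derives the optimum allocation analytically.

import math

def secrecy_rate(Pi, Pj, hd, hc, gd, gc):
    # rate to the intended receiver minus rate leaked to the other receiver
    rm = math.log2(1 + hd * Pi / (1 + hc * Pj))   # legitimate link
    re = math.log2(1 + gd * Pi / (1 + gc * Pj))   # non-intended receiver
    return max(rm - re, 0.0)

Pmax, steps = 10.0, 200
best = (0.0, None)
for i in range(steps + 1):
    for j in range(steps + 1):
        P1, P2 = Pmax * i / steps, Pmax * j / steps
        # direct gains (hd) exceed cross gains (gd), per the abstract
        r1 = secrecy_rate(P1, P2, hd=1.0, hc=0.3, gd=0.2, gc=0.5)
        r2 = secrecy_rate(P2, P1, hd=0.9, hc=0.2, gd=0.25, gc=0.4)
        if min(r1, r2) > best[0]:
            best = (min(r1, r2), (P1, P2))
print(f"max-min secrecy rate ~ {best[0]:.3f} b/s/Hz at P = {best[1]}")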
Pengwei Wang; Safavi-Naini, R., "An Efficient Code for Adversarial Wiretap Channel," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 40, 44, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970788
Abstract: In the (ρr, ρw)-adversarial wiretap (AWTP) channel model of [13], a codeword sent over the communication channel is corrupted by an adversary who observes a fraction ρr of the codeword and adds noise to a fraction ρw of the codeword. The adversary is adaptive and chooses the subsets of observed and corrupted components arbitrarily. In this paper we give the first efficient construction of a code family that provides perfect secrecy in this model and achieves the secrecy capacity.
Keywords: channel coding; telecommunication security; wireless channels; AWTP channel model; adversarial wiretap channel model; code family; codeword; communication channel; secrecy capacity; Computational modeling; Decoding; Encoding; Reed-Solomon codes; Reliability; Security; Vectors (ID#: 15-4895)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970788&isnumber=6970773
Son Hoang Dau; Wentu Song; Chau Yuen, "On Block Security of Regenerating Codes at the MBR Point for Distributed Storage Systems," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 1967, 1971, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6875177
Abstract: A passive adversary can eavesdrop on the stored or downloaded content of some storage nodes in order to learn illegally about the file stored across a distributed storage system (DSS). Previous work in the literature focuses on code constructions that trade storage capacity for perfect security. In other words, by decreasing the amount of original data that it can store, the system can guarantee that the adversary, who eavesdrops up to a certain number of storage nodes, obtains no information (in Shannon's sense) about the original data. In this work we introduce the concept of block security for DSS and investigate minimum bandwidth regenerating (MBR) codes that are block secure against adversaries of varied eavesdropping strengths. Such MBR codes guarantee that no information about any group of original data units up to a certain size is revealed, without sacrificing the storage capacity of the system. The size of such secure groups varies according to the number of nodes that the adversary can eavesdrop. We show that code constructions based on Cauchy matrices provide block security. The opposite conclusion is drawn for codes based on Vandermonde matrices.
Keywords: codes; distributed processing; matrix algebra; security of data; storage management; Cauchy matrices; DSS; MBR codes; MBR point; Vandermonde matrices; block security; code constructions; distributed storage systems; minimum bandwidth regenerating codes; passive adversary; storage capacity; Decision support systems; Degradation; Encoding; Maintenance engineering; Network coding; Security (ID#: 15-4896)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875177&isnumber=6874773
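A key property of Cauchy matrices, which the block-security construction leans on, is that every square submatrix of a Cauchy matrix is invertible; a few lines of code can check this exhaustively for a small instance. The field size and point sets below are hypothetical (Vandermonde matrices lack this property in general, matching the abstract's opposite conclusion for them).

import itertools

P = 11                                     # small prime field GF(11)

def inv(a):
    return pow(a, P - 2, P)                # Fermat inverse, a != 0 mod P

def cauchy(xs, ys):
    # C[i][j] = 1 / (x_i - y_j) over GF(P); xs and ys must be disjoint.
    return [[inv((x - y) % P) for y in ys] for x in xs]

def det_gf(M):
    # Determinant over GF(P) by Gaussian elimination.
    M = [row[:] for row in M]
    n, d = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d % P
        d = d * M[c][c] % P
        ic = inv(M[c][c])
        for r in range(c + 1, n):
            f = M[r][c] * ic % P
            M[r] = [(a - f * b) % P for a, b in zip(M[r], M[c])]
    return d

C = cauchy([0, 1, 2], [3, 4, 5, 6])
ok = all(det_gf([[C[r][c] for c in cols] for r in rows])
         for k in (1, 2, 3)
         for rows in itertools.combinations(range(3), k)
         for cols in itertools.combinations(range(4), k))
print("every square submatrix invertible:", ok)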
Jinlong Lu; Harshan, J.; Oggier, F., "A USRP Implementation of Wiretap Lattice Codes," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 316, 320, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970845
Abstract: A wiretap channel models a communication channel between a legitimate sender Alice and a legitimate receiver Bob in the presence of an eavesdropper Eve. Confidentiality between Alice and Bob is obtained using wiretap codes, which exploit the difference between the channels to Bob and to Eve. This paper discusses a first implementation of wiretap lattice codes using USRP (Universal Software Radio Peripheral), which focuses on the channel between Alice and Eve. Benefits of coset encoding for Eve's confusion are observed, using different lattice codes in small dimensions, and varying the position of the eavesdropper.
Keywords: channel coding; software radio; telecommunication security; USRP implementation; communication channel; coset encoding; eavesdropper; universal software radio peripheral; wiretap channel models; wiretap lattice codes; Baseband; Decoding; Encoding; Lattices; Receivers; Security; Signal to noise ratio (ID#: 15-4897)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970845&isnumber=6970773
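Coset encoding itself can be shown in one dimension: secret symbols select a coset of a coarse lattice inside a fine one, and random bits choose the transmitted point within the coset, so repeated transmissions of the same secret look different to Eve. The lattices and parameters below are toy stand-ins for the small-dimensional lattice codes used on the USRP testbed.

import random

def coset_encode(secret_symbol, spread=4, span=4):
    # secret in {0,..,spread-1} selects a coset of spread*Z inside Z;
    # a random multiple of 'spread' picks the point within the coset.
    return secret_symbol + spread * random.randrange(-span, span + 1)

def coset_decode(point, spread=4):
    # Bob reduces modulo the coarse lattice to strip the randomness.
    return point % spread

random.seed(0)
for s in (0, 1, 2, 3):
    x = coset_encode(s)
    assert coset_decode(x) == s
    print(f"secret {s} -> transmitted lattice point {x:+d}")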
Jinjing Jiang; Marukala, N.; Tie Liu, "Symmetrical Multilevel Diversity Coding and Subset Entropy Inequalities," Information Theory, IEEE Transactions on, vol. 60, no. 1, pp. 84,103, Jan. 2014. doi:10.1109/TIT.2013.2288263
Abstract: Symmetrical multilevel diversity coding (SMDC) is a classical model for coding over distributed storage. In this setting, a simple separate encoding strategy known as superposition coding was shown to be optimal in terms of achieving the minimum sum rate and the entire admissible rate region of the problem. The proofs utilized carefully constructed induction arguments, for which the classical subset entropy inequality played a key role. This paper consists of two parts. In the first part, the existing optimality proofs for classical SMDC are revisited, with a focus on their connections to subset entropy inequalities. Initially, a new sliding-window subset entropy inequality is introduced and then used to establish the optimality of superposition coding for achieving the minimum sum rate under a weaker source-reconstruction requirement. Finally, a subset entropy inequality recently proved by Madiman and Tetali is used to develop a new structural understanding of the work of Yeung and Zhang on the optimality of superposition coding for achieving the entire admissible rate region. Building on the connections between classical SMDC and the subset entropy inequalities developed in the first part, in the second part the optimality of superposition coding is extended to the cases where there is either an additional all-access encoder or an additional secrecy constraint.
Keywords: codecs; encoding; entropy codes; SMDC; admissible rate region; all-access encoder; distributed storage; encoding strategy; secrecy constraint; sliding-window subset entropy inequality; source-reconstruction requirement; subset entropy inequalities sum rate; superposition coding; symmetrical multilevel diversity coding; Clocks; Decoding; Electronic mail; Encoding; Entropy; Indexes; Tin; Distributed storage; information-theoretic security; multilevel diversity coding; subset entropy inequality (ID#: 15-4898)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6651781&isnumber=6690264
Chen, Yanling; Koyluoglu, O.Ozan; Sezgin, Aydin, "On the Achievable Individual-Secrecy Rate Region for Broadcast Channels with Receiver Side Information," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 26, 30, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874788
Abstract: In this paper, we study the problem of secure communication over the broadcast channel with receiver side information, under the lens of individual secrecy constraints (i.e., information leakage from each message to an eavesdropper is made vanishing). Several coding schemes are proposed by extending known results in broadcast channels to this secrecy setting. In particular, individual secrecy provided via a one-time pad signal is utilized in the coding schemes. As a result, we obtain an achievable rate region together with a characterization of the capacity region for the special cases of either a weak or strong eavesdropper (compared to both legitimate receivers). Interestingly, the capacity region for the former corresponds to a line and the latter corresponds to a square with missing corners, a phenomenon occurring due to the coupling between users' rates. At the expense of a weaker notion of security, positive secure transmission rates are always guaranteed, unlike the case of the joint secrecy constraint.
Keywords: (not provided) (ID#: 15-4899)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874788&isnumber=6874773
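The one-time-pad ingredient is easy to demonstrate: since each receiver already holds the other message as side information, broadcasting the XOR of the two messages serves both receivers at once while revealing nothing about either message individually to an eavesdropper who holds neither. The byte strings below are hypothetical.

m1 = bytes([0x13, 0x37, 0xC0])        # message wanted by receiver 1
m2 = bytes([0xDE, 0xAD, 0xBE])        # message wanted by receiver 2

broadcast = bytes(a ^ b for a, b in zip(m1, m2))

# receiver 1 knows m2 as side information; receiver 2 knows m1
assert bytes(x ^ k for x, k in zip(broadcast, m2)) == m1
assert bytes(x ^ k for x, k in zip(broadcast, m1)) == m2
print("broadcast:", broadcast.hex())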
Tao Ye; Veitch, D.; Johnson, S., "RA-Inspired Codes for Efficient Information Theoretic Multi-Path Network Security," Information Theory and its Applications (ISITA), 2014 International Symposium on, vol., no., pp. 408, 412, 26-29 Oct. 2014. doi: (not provided)
Abstract: Mobile devices have multiple network interfaces, some of which have security weaknesses, yet are used for sensitive data despite the risk of eavesdropping. We describe a data-splitting approach which, by design, maps exactly to a wiretap channel, thereby offering information theoretic security. Being based on the deletion channel, it perfectly hides block boundaries from the eavesdropper, which enhances security further. We provide an efficient repeat-accumulate-inspired code design, which satisfies the security criterion, and explore its security rate as a function of block size and other parameters.
Keywords: codes; information theory; security of data; telecommunication security; RA-inspired codes; data-splitting approach; deletion channel; eavesdropper; eavesdropping; function block size; information theoretic multipath network security; mobile devices; multiple network interfaces; repeat accumulate inspired code design; security criterion; security rate; security weaknesses; sensitive data; wiretap channel; Australia; Decoding; Encoding; Generators; Parity check codes; Security; Vectors (ID#: 15-4900)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979875&isnumber=6979787
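A plain repeat-accumulate encoder, the starting point of such a design, fits in a few lines: repeat each information bit, interleave with a fixed shared permutation, then accumulate with a running XOR. The repetition factor and permutation seed below are hypothetical, and the security-driven structure of the paper's code is not reproduced.

import random

def ra_encode(bits, q=3, seed=7):
    repeated = [b for b in bits for _ in range(q)]   # repeat q times
    perm = list(range(len(repeated)))
    random.Random(seed).shuffle(perm)                # fixed, shared interleaver
    interleaved = [repeated[i] for i in perm]
    out, acc = [], 0
    for b in interleaved:                            # 1/(1+D) accumulator
        acc ^= b
        out.append(acc)
    return out

print(ra_encode([1, 0, 1, 1, 0]))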
Li-Chia Choo; Cong Ling, "Superposition Lattice Coding for Gaussian Broadcast Channel with Confidential Message," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 311, 315, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970844
Abstract: In this paper, we propose superposition coding based on the lattice Gaussian distribution to achieve strong secrecy over the Gaussian broadcast channel with one confidential message, with a constant gap to the secrecy capacity (only for the confidential message). The proposed superposition lattice code consists of a lattice Gaussian code for the Gaussian noise and a wiretap lattice code with strong secrecy. The flatness factor is used to analyze the error probability, information leakage and achievable rates. By removing the secrecy coding, we can modify our scheme to achieve the capacity of the Gaussian broadcast channel with one common and one private message without the secrecy constraint.
Keywords: Gaussian channels; broadcast channels; channel coding; error statistics; lattice theory; telecommunication security; Gaussian broadcast channel; Gaussian noise; achievable rates; confidential message; constant gap; error probability analysis; flatness factor; information leakage; lattice Gaussian code; lattice Gaussian distribution; secrecy capacity; superposition lattice coding; wiretap lattice code; Decoding; Encoding; Error probability; Gaussian distribution; Lattices; Noise; Vectors (ID#: 15-4901)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970844&isnumber=6970773
Fan Cheng, "Optimality of Routing on the Wiretap Network with Simple Network Topology," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 786, 790, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874940
Abstract: In this paper, we study the performance of routing in Level-I/II (n1, n2) wiretap networks, consisting of a source node, a destination node, and an intermediate node. The intermediate node connects the source and destination nodes via sets of noiseless parallel channels, with sizes n1 and n2, respectively. The information in the network may be eavesdropped by a wiretapper, who can access at most one set of channels, called a wiretap set. All the possible wiretap sets which may be accessed by the wiretapper form a wiretap pattern. A random key K is used to protect the message M. We define two decoding levels: in Level-I, only M is decoded, and in Level-II, both M and K are decoded. The objective is to minimize H(K)/H(M) under the perfect secrecy constraint. Our concern is whether routing is optimal in this simple network. By harnessing the power of Shannon-type inequalities, we enumerate all the wiretap patterns in the Level-I/II (3, 3) networks, and find that gaps exist between the bounds by routing and the bounds by Shannon-type inequalities for a small fraction of all the wiretap patterns. Furthermore, we show that for some wiretap patterns, the Shannon bounds can be achieved by a linear code; i.e., routing is not optimal even in the (3, 3) case. Some subtle issues on the network models are discussed and interesting open problems are introduced.
Keywords: linear codes; network coding; telecommunication network routing; telecommunication network topology; telecommunication security; Shannon-type inequalities; destination node; eavesdropped; intermediate node; linear code; network topology; noiseless parallel channels; source node; wiretap network; wiretap pattern; wiretap set; wiretapper; Channel coding; Decoding; Network coding; Random variables; Routing (ID#: 15-4902)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874940&isnumber=6874773
Carlet, C.; Freibert, F.; Guilley, S.; Kiermaier, M.; Jon-Lark Kim; Solé, P., "Higher-Order CIS Codes," Information Theory, IEEE Transactions on, vol. 60, no. 9, pp. 5283, 5295, Sept. 2014. doi:10.1109/TIT.2014.2332468
Abstract: We introduce complementary information set codes of higher order. A binary linear code of length tk and dimension k is called a complementary information set code of order t (t-CIS code for short) if it has t pairwise disjoint information sets. The duals of such codes permit reducing the cost of masking cryptographic algorithms against side-channel attacks. As in the case of codes for error correction, given the length and the dimension of a t-CIS code, we look for the highest possible minimum distance. In this paper, this new class of codes is investigated. The existence of good long CIS codes of order 3 is derived by a counting argument. General constructions based on cyclic and quasi-cyclic codes and on the building-up construction are given. A formula similar to a mass formula is given. A classification of 3-CIS codes of length ≤ 12 is given. Nonlinear codes better than linear codes are derived by taking binary images of Z4-codes. A general algorithm based on Edmonds' basis packing algorithm from matroid theory is developed with the following property: given a binary linear code of rate 1/t, it either provides t disjoint information sets or proves that the code is not t-CIS. Using this algorithm, all optimal or best known [tk, k] codes, where t = 3, 4, . . . , 256 and 1 ≤ k ≤ ⌊256/t⌋, are shown to be t-CIS for all such k and t, except for t = 3 with k = 44 and t = 4 with k = 37.
Keywords: binary codes; cryptography; cyclic codes; error correction codes; higher order statistics; linear codes; matrix algebra; set theory; 3-CIS code classification; Edmonds basis packing algorithm; Z4-linear code; binary linear code; complementary information set; cost reduction; cryptographic algorithm; error correction codes; higher order CIS codes; masking scheme; matroid theory; pairwise disjoint information sets; quasi-cyclic codes; side channel attacks; Boolean functions; Educational institutions; Linear codes; Partitioning algorithms; Registers; Security; Silicon; Z4-linear codes; Boolean functions; Dual distance; quasi-cyclic codes (ID#: 15-4903)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842653&isnumber=6878505
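For tiny codes the t-CIS property can be checked by brute force rather than with the Edmonds-based algorithm described in the abstract. The sketch below searches for t pairwise disjoint information sets of a hypothetical [6, 2] binary code.

import itertools

def rank_gf2(rows):
    # Rank of a binary matrix by Gaussian elimination over GF(2).
    rows = [list(r) for r in rows]
    rank, n = 0, len(rows[0])
    for col in range(n):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def is_info_set(G, cols):
    sub = [[row[c] for c in cols] for row in G]
    return rank_gf2(sub) == len(G)

def find_cis_partition(G, t):
    # Backtracking search for t disjoint size-k information sets.
    k, n = len(G), len(G[0])
    assert n == t * k
    def search(remaining, chosen):
        if not remaining:
            return chosen
        first = min(remaining)
        for rest in itertools.combinations(remaining - {first}, k - 1):
            s = (first,) + rest
            if is_info_set(G, s):
                hit = search(remaining - set(s), chosen + [s])
                if hit:
                    return hit
        return None
    return search(set(range(n)), [])

G = [[1, 0, 1, 1, 0, 1],
     [0, 1, 1, 0, 1, 1]]          # hypothetical [6, 2] candidate 3-CIS code
print(find_cis_partition(G, 3))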
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Computing Theory and Composability 2014 |
The sole research article that combined computing theory with composability was presented in April 2014 at the Symposium on Agent Directed Simulation.
Mingxin Zhang, Alexander Verbraeck. “A Composable PRS-Based Agent Meta-Model for Multi-Agent Simulation Using the DEVS Framework.” ADS '14 Proceedings of the 2014 Symposium on Agent Directed Simulation, April 2014, Article No. 1, 8 pages. doi: (not provided)
Abstract: This paper presents a composable cognitive agent meta-model for multi-agent simulation based on the DEVS (Discrete Event System Specification) framework. We describe an attempt to compose a PRS-based cognitive agent by merely combining "plug and play" DEVS components, show how this DEVS-based cognitive agent meta-model is extensible to serve as a higher-level component for M&S of multi-agent systems, and how the agent meta-model components are reusable to ease cognitive agent modelling development. In addition to an overview of our agent meta-model, we also describe the components of the model specification and services in detail. To test the feasibility of our design, we constructed a simulation based on a Rock-Paper-Scissors game scenario. We also provide comparisons between this agent meta-model and other cognitive agent models. Our agent meta-model is novel in terms of both agent and agent components, as these are all abstracted using the DEVS formalism. As different implementations of agent model components are based on the same meta-model components, all the developed agent model components can be reused in the development of other agent models, which increases the composability of the agent model; furthermore, the whole cognitive agent model can be considered a coupled model in the DEVS model hierarchy, which supports multi-hierarchy modelling.
Keywords: DEVS, PRS, agent model, cognitive architecture, composability (ID#: 15-5831)
URL: http://dl.acm.org/citation.cfm?id=2665049.2665050
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Computing Theory and Security Metrics, 2014 |
The works cited here combine research into computing theory with research into security metrics. All were presented in 2014.
George Cybenko, Jeff Hughes. “No Free Lunch in Cyber Security.” MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, vol., no., pp. 1, 12. doi:10.1145/2663474.2663475
Abstract: Confidentiality, integrity and availability (CIA) are traditionally considered to be the three core goals of cyber security. By developing probabilistic models of these security goals we show that:
• the CIA goals are actually specific operating points in a continuum of possible mission security requirements;
• component diversity, including certain types of Moving Target Defenses, versus component hardening as security strategies can be quantitatively evaluated;
• approaches for diversity can be formalized into a rigorous taxonomy.
Such considerations are particularly relevant for so-called Moving Target Defense (MTD) approaches, which seek to adapt or randomize computer resources in a way that delays or defeats attackers. In particular, we explore tradeoffs between confidentiality and availability in such systems that suggest improvements.
Keywords: availability; confidentiality; diversity; formal models; integrity; moving targets; security metrics (ID#: 15-5796)
URL: http://doi.acm.org/10.1145/2663474.2663475
Benjamin D. Rodes, John C. Knight, Kimberly S. Wasson. “A Security Metric Based on Security Arguments.” WETSoM 2014 Proceedings of the 5th International Workshop on Emerging Trends in Software Metrics, June 2014, vol., no., pp. 66, 72. doi:10.1145/2593868.2593880
Abstract: Software security metrics that facilitate decision making at the enterprise design and operations levels are a topic of active research and debate. These metrics are desirable to support deployment decisions, upgrade decisions, and so on; however, no single metric or set of metrics is known to provide universally effective and appropriate measurements. Instead, engineers must choose, for each software system, what to measure, how and how much to measure, and must be able to justify the rationale for how these measurements are mapped to stakeholder security goals. An assurance argument for security (i.e., a security argument) provides comprehensive documentation of all evidence and rationales for justifying belief in a security claim about a software system. In this work, we motivate the need for security arguments to facilitate meaningful and comprehensive security metrics, and present a novel framework for assessing security arguments to generate and interpret security metrics.
Keywords: assurance case; confidence; security metrics (ID#: 15-5797)
URL: http://doi.acm.org/10.1145/2593868.2593880
Gaofeng Da, Maochao Xu, Shouhuai Xu. “A New Approach to Modeling and Analyzing Security of Networked Systems.” HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 6. doi:10.1145/2600176.2600184
Abstract: Modeling and analyzing security of networked systems is an important problem in the emerging Science of Security and has been under active investigation. In this paper, we propose a new approach towards tackling the problem. Our approach is inspired by the shock model and random environment techniques in the Theory of Reliability, while accommodating security ingredients. To the best of our knowledge, our model is the first that can accommodate a certain degree of adaptiveness of attacks, which substantially weakens the often-made independence and exponential attack inter-arrival time assumptions. The approach leads to a stochastic process model with two security metrics, and we attain some analytic results in terms of the security metrics.
Keywords: security analysis; security metrics; security modeling (ID#: 15-5798)
URL: http://doi.acm.org/10.1145/2600176.2600184
Steven Noel, Sushil Jajodia. “Metrics Suite for Network Attack Graph Analytics.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, vol., no., pp. 5, 8. doi:10.1145/2602087.2602117
Abstract: We describe a suite of metrics for measuring network-wide cyber security risk based on a model of multi-step attack vulnerability (attack graphs). Our metrics are grouped into families, with family-level metrics combined into an overall metric for network vulnerability risk. The Victimization family measures risk in terms of key attributes of risk across all known network vulnerabilities. The Size family is an indication of the relative size of the attack graph. The Containment family measures risk in terms of minimizing vulnerability exposure across protection boundaries. The Topology family measures risk through graph theoretic properties (connectivity, cycles, and depth) of the attack graph. We display these metrics (at the individual, family, and overall levels) in interactive visualizations, showing multiple metrics trends over time.
Keywords: attack graphs; security metrics; topological vulnerability analysis (ID#: 15-5799)
URL: http://doi.acm.org/10.1145/2602087.2602117
Shittu, R.; Healing, A.; Ghanea-Hercock, R.; Bloomfield, R.; Muttukrishnan, R., “OutMet: A New Metric for Prioritising Intrusion Alerts Using Correlation and Outlier Analysis,” Local Computer Networks (LCN), 2014 IEEE 39th Conference on, vol., no., pp. 322, 330, 8-11 Sept. 2014. doi:10.1109/LCN.2014.6925787
Abstract: In a medium sized network, an Intrusion Detection System (IDS) could produce thousands of alerts a day, many of which may be false positives. Among the vast number of triggered intrusion alerts, identifying which to prioritise is highly challenging. Alert correlation and prioritisation are both viable analytical methods which are commonly used to understand and prioritise alerts. However, to the authors' knowledge, very few dynamic prioritisation metrics exist. In this paper, a new prioritisation metric, OutMet, based on measuring the degree to which an alert belongs to anomalous behaviour, is proposed. OutMet combines alert correlation and prioritisation analysis. We illustrate the effectiveness of OutMet by testing its ability to prioritise alerts generated from a 2012 red-team cyber-range experiment that was carried out as part of the BT Saturn programme. In one of the scenarios, OutMet significantly reduced the false positives by 99.3%.
Keywords: computer network security; correlation methods; graph theory; BT Saturn programme; IDS; OutMet; alert correlation and prioritisation analysis; correlation analysis; dynamic prioritisation metrics; intrusion alerts; intrusion detection system; medium sized network; outlier analysis; red-team cyber-range experiment; Cities and towns; Complexity theory; Context; Correlation; Educational institutions; IP networks; Measurement; Alert Correlation; Attack Scenario; Graph Mining; IDS Logs; Intrusion Alert Analysis; Intrusion Detection; Pattern Detection (ID#: 15-5800)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6925787&isnumber=6925725
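The abstract does not reproduce the OutMet formula, so as a rough stand-in for outlier-based prioritisation, the following sketch (our simplification, not the paper's metric) scores each correlated alert group by its deviation from typical behaviour:

```python
import numpy as np

def outlier_scores(features):
    """Z-score style degree-of-anomaly for each alert group's feature
    vector; higher scores are prioritised for analyst review."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0) + 1e-9  # avoid division by zero
    return np.abs((features - mu) / sigma).mean(axis=1)

# Rows are correlated alert groups, columns are illustrative features
# (e.g., alert count, distinct targets); the last row stands out.
scores = outlier_scores(np.array([[2.0, 1.0], [2.1, 0.9], [9.5, 6.0]]))
print(scores.argsort()[::-1])  # group indices in priority order
```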
Desouky, A.F.; Beard, M.D.; Etzkorn, L.H., “A Qualitative Analysis of Code Clones and Object Oriented Runtime Complexity Based on Method Access Points,” Convergence of Technology (I2CT), 2014 International Conference for, vol., no., pp. 1, 5, 6-8 April 2014. doi:10.1109/I2CT.2014.7092292
Abstract: In this paper, we present a new object oriented complexity metric based on runtime method access points. Software engineering metrics have traditionally indicated the level of quality present in a software system. However, quality has long been analysed and measured at compile time, which yields useful though potentially incomplete results, since all source code is considered in metric computation rather than only the subset of code that actually executes. In this study, we examine the runtime behavior of our proposed metric on an open source software package, Rhino 1.7R4. We compute and validate our metric by correlating it with code clones and bug data. Code clones are considered to make software more complex and harder to maintain. When cloned, a code fragment with an error quickly transforms into two (or more) errors, both of which can affect the software system in unique ways. Thus, a larger number of code clones is generally considered to indicate poorer software quality. For this reason, we consider that clones function as an external quality factor, in addition to bugs, for metric validation.
Keywords: object-oriented programming; program verification; public domain software; security of data; software metrics; software quality; source code (software); Rhino 1.7R4; bug data; code clones; metric computation; metric validation; object oriented runtime complexity; open source software package; qualitative analysis; runtime method access points; software engineering metrics; software quality; source code; Cloning; Complexity theory; Computer bugs; Correlation; Measurement; Runtime; Software; Code Clones; Complexity; Object Behavior; Object Oriented Runtime Metrics; Software Engineering (ID#: 15-5801)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092292&isnumber=7092013
Bhuyan, M.H.; Bhattacharyya, D.K.; Kalita, J.K., “Information Metrics for Low-Rate DDoS Attack Detection: A Comparative Evaluation,” Contemporary Computing (IC3), 2014 Seventh International Conference on, vol., no., pp. 80, 84, 7-9 Aug. 2014. doi:10.1109/IC3.2014.6897151
Abstract: Distributed Denial of Service (DDoS) attacks are a serious threat to services offered on the Internet. A low-rate DDoS attack allows legitimate network traffic to pass and consumes little bandwidth, so detecting this type of attack is very difficult in high speed networks. Information theory is popular because it allows quantification of the difference between malicious traffic and legitimate traffic based on probability distributions. In this paper, we empirically evaluate several information metrics, namely Hartley entropy, Shannon entropy, Renyi's entropy, and generalized entropy, in their ability to detect low-rate DDoS attacks. These metrics can be used to describe characteristics of network traffic, and an appropriate metric facilitates building an effective model to detect low-rate DDoS attacks. We use the MIT Lincoln Laboratory and CAIDA DDoS datasets to illustrate the efficiency and effectiveness of each metric for detecting mainly low-rate DDoS attacks.
Keywords: Internet; computer network security; entropy; statistical distributions; CAIDA DDoS dataset; Hartley entropy; Internet; MIT Lincoln Laboratory dataset; Renyi entropy; Shannon entropy; distributed denial-of-service; generalized entropy; information metrics; information theory; low-rate DDoS attack detection; network traffic; probability distributions; Computer crime; Entropy; Floods; Information entropy; Measurement; Probability distribution; Telecommunication traffic; DDoS attack; entropy; information metric; low-rate; network traffic (ID#: 15-5802)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6897151&isnumber=6897132
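The entropy families the paper compares are standard and easy to compute over a traffic window; a minimal sketch (ours, with illustrative packet counts) over a per-source-IP distribution:

```python
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def renyi_entropy(p, alpha):
    """Renyi's generalized entropy of order alpha (alpha != 1);
    alpha -> 1 recovers Shannon entropy, and alpha = 0 gives the
    Hartley entropy log2(N)."""
    p = p[p > 0]
    return np.log2(np.sum(p ** alpha)) / (1 - alpha)

# Illustrative packet counts per source IP in one observation window.
counts = np.array([120, 115, 130, 90, 2000])
p = counts / counts.sum()
print(shannon_entropy(p), renyi_entropy(p, alpha=2.0))
# Detection compares such entropy values between a legitimate baseline
# window and the current window; a significant divergence flags a
# possible low-rate attack, and higher-order (generalized) entropies
# can amplify small differences.
```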
Bidi Ying; Makrakis, D., “Protecting Location Privacy with Clustering Anonymization in Vehicular Networks,” Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, vol., no., pp. 305, 310, April 27 2014 - May 2 2014. doi:10.1109/INFCOMW.2014.6849249
Abstract: Location privacy is an important issue in location-based services. A large number of location cloaking algorithms have been proposed for protecting the location privacy of users. However, these algorithms cannot be used in vehicular networks due to constrained vehicular mobility. In this paper, we propose a new method named Protecting Location Privacy with Clustering Anonymization (PLPCA) for location-based services in vehicular networks. The PLPCA algorithm starts by transforming a road network into an edge-cluster graph in order to conceal road information and traffic information, and then provides a cloaking algorithm based on k-anonymity and l-diversity as privacy metrics to further conceal a target vehicle's location. Simulation analysis shows that PLPCA performs well, e.g., in the strength of its hiding of road and traffic information.
Keywords: data privacy; graph theory; mobility management (mobile radio); pattern clustering; telecommunication security; vehicular ad hoc networks; PLPCA algorithm; edge-cluster graph; k-anonymity; l-diversity; location based service; location cloaking algorithm; protecting location privacy with clustering anonymization; road information hiding; road network transforming; traffic information hiding; vehicular ad hoc network; vehicular mobility; Clustering algorithms; Conferences; Privacy; Roads; Social network services; Vehicle dynamics; Vehicles; cluster; location privacy; location-based services; vehicular networks (ID#: 15-5803)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849249&isnumber=6849127
Ateser, M.; Tanriover, O., “Investigation of the COBIT Framework's Input-Output Relationships by Using Graph Metrics,” Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, vol., no., pp. 1269, 1275, 7-10 Sept. 2014. doi:10.15439/2014F178
Abstract: Information technology (IT) governance initiatives are complex, time consuming and resource intensive. COBIT (Control Objectives for Information and Related Technology) provides an IT governance framework and supporting toolset to help an organization ensure alignment between its use of information technology and its business goals. This paper presents an investigation of the relationships between COBIT processes and their inputs/outputs using graph analysis. Examining these relationships provides a deep understanding of the COBIT structure and may guide IT governance implementation, audit plans, and initiatives. Graph metrics are used to identify the most influential/sensitive processes and their relative importance for a given context. Hence, the analysis presented provides guidance to decision makers while developing improvement programs, audits, and possibly maturity assessments based on the COBIT framework.
Keywords: DP management; business data processing; graph theory; COBIT framework inputs-outputs relationships; Control Objectives for Information Related Technology; IT governance framework; business goals; graph analysis; graph metrics; Guidelines; Information technology; Measurement; Monitoring; Organizations; Portfolios (ID#: 15-5804)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933164&isnumber=6932982
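As a sketch of the kind of analysis described (the process names and edges below are illustrative, not the paper's data), one can load the input/output relationships into a directed graph and read influence and sensitivity off standard centrality metrics:

```python
import networkx as nx

# An edge P -> Q means an output of process P is an input of process Q.
G = nx.DiGraph([("PO1", "AI1"), ("PO1", "DS1"), ("AI1", "DS1"),
                ("DS1", "ME1"), ("ME1", "PO1")])

influence = nx.out_degree_centrality(G)    # processes feeding many others
sensitivity = nx.in_degree_centrality(G)   # processes with many inputs
brokers = nx.betweenness_centrality(G)     # processes on many dependency paths
print(max(influence, key=influence.get), max(brokers, key=brokers.get))
```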
Bou-Harb, E.; Debbabi, M.; Assi, C., “Behavioral Analytics for Inferring Large-Scale Orchestrated Probing Events,” Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, vol., no., pp. 506, 511, April 27 2014 – May 2 2014. doi:10.1109/INFCOMW.2014.6849283
Abstract: The significant dependence on cyberspace has indeed brought new risks that often compromise, exploit and damage invaluable data and systems. Thus, the capability to proactively infer malicious activities is of paramount importance. In this context, inferring probing events, which are commonly the first stage of any cyber attack, is a promising tactic for achieving that task. We have been receiving for the past three years 12 GB of daily malicious real darknet data (i.e., Internet traffic destined to half a million routable yet unallocated IP addresses) from more than 12 countries. This paper exploits such data to propose a novel approach that aims at capturing the behavior of the probing sources in an attempt to infer their orchestration (i.e., coordination) pattern. The latter defines a recently discovered characteristic of a new phenomenon of probing events that could be ominously leveraged to cause drastic Internet-wide and enterprise impacts as precursors of various cyber attacks. To accomplish its goals, the proposed approach leverages various signal and statistical techniques, information theoretical metrics, fuzzy approaches with real malware traffic, and data mining methods. The approach is validated through one use case that arguably proves that a previously analyzed orchestrated probing event from last year is indeed still active, yet operating in a stealthy, very low rate mode. We envision that the proposed approach, which is tailored towards darknet data that is frequently, abundantly and effectively used to generate cyber threat intelligence, could be used by network security analysts, emergency response teams and/or observers of cyber events to infer large-scale orchestrated probing events for early cyber attack warning and notification.
Keywords: IP networks; Internet; computer network security; data mining; fuzzy set theory; information theory; invasive software; statistical analysis; telecommunication traffic; Internet traffic; coordination pattern; cyber attack; cyber threat intelligence; cyberspace; data mining methods; early cyber attack notification; early cyber attack warning; emergency response teams; fuzzy approaches; information theoretical metrics; large-scale orchestrated probing events; malicious activities; malicious real darknet data; malware traffic; network security analysts; orchestration pattern; routable unallocated IP addresses; signal techniques; statistical techniques; Conferences; IP networks; Internet; Malware; Probes (ID#: 15-5805)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849283&isnumber=6849127
Keramati, M.; Keramati, M., “Novel Security Metrics for Ranking Vulnerabilities in Computer Networks,” Telecommunications (IST), 2014 7th International Symposium on, vol., no., pp. 883, 888, 9-11 Sept. 2014. doi:10.1109/ISTEL.2014.7000828
Abstract: With the daily appearance of new vulnerabilities and of new ways of intruding into networks, network hardening has become one of the most important fields in network security, and it can be accomplished by patching vulnerabilities. But patching all vulnerabilities may impose a high cost on the network, so we should try to eliminate only the most perilous vulnerabilities. CVSS can score vulnerabilities based on the amount of damage they incur in the network, but the main problem with CVSS is that it can only score individual vulnerabilities, without considering their relationships with the other vulnerabilities of the network. To help fill this gap, in this paper we define attack-graph- and CVSS-based security metrics that can help us prioritize vulnerabilities in the network by measuring both the probability of exploiting them and the amount of damage they will impose on the network. The proposed security metrics are defined by considering the interaction between all vulnerabilities of the network, so our method can rank vulnerabilities based on the network they exist in. Results of applying these security metrics to one well-known example network are also shown, demonstrating the effectiveness of our approach.
Keywords: computer network security; matrix algebra; probability; CVSS-based security metrics; common vulnerability scoring system; computer network; intruding network security; probability; ranking vulnerability; Availability; Communication networks; Complexity theory; Computer networks; Educational institutions; Measurement; Security; Attack Graph; CVSS; Exploit; Network hardening; Security Metric; Vulnerability (ID#: 15-5806)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7000828&isnumber=7000650
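A minimal sketch of the general idea (ours; it assumes independence between parent exploit events, and unlike the paper's metrics it ignores damage): propagate CVSS-style exploit probabilities through the attack graph and rank each vulnerability by the probability that an attacker ever reaches and exploits it.

```python
# Illustrative attack graph: p_exploit holds CVSS-derived exploit
# probabilities; an edge u -> v means exploiting u lets the attacker
# attempt v. Entry points have no predecessors.
p_exploit = {"v1": 0.9, "v2": 0.5, "v3": 0.7, "v4": 0.4}
preds = {"v1": [], "v2": ["v1"], "v3": ["v1"], "v4": ["v2", "v3"]}
memo = {}

def reach_prob(v):
    """P(attacker exploits v): the chance that at least one predecessor
    was exploited, times v's own exploit probability."""
    if v not in memo:
        if not preds[v]:
            memo[v] = p_exploit[v]
        else:
            p_no_parent = 1.0
            for u in preds[v]:
                p_no_parent *= 1.0 - reach_prob(u)
            memo[v] = p_exploit[v] * (1.0 - p_no_parent)
    return memo[v]

ranking = sorted(p_exploit, key=reach_prob, reverse=True)
print([(v, round(reach_prob(v), 3)) for v in ranking])
```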
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Computing Theory and Security Resilience 2014 |
The works cited here combine research into computing theory with research into security resilience. All were presented in 2014.
Praks, P.; Kopustinskas, V., “Monte-Carlo Based Reliability Modelling of a Gas Network Using Graph Theory Approach,” Availability, Reliability and Security (ARES), 2014 Ninth International Conference on, vol., no., pp. 380, 386, 8-12 Sept. 2014. doi:10.1109/ARES.2014.57
Abstract: The aim of the study is to develop a European gas transmission system probabilistic model to analyse, in a single computer model, the reliability and capacity constraints of a gas transmission network. We describe our approach to modelling the reliability and capacity constraints of network elements, for example gas storages and compressor stations, by a multi-state system. The paper presents our experience with the computer implementation of a gas transmission network probabilistic prototype model based on a generalization of the maximum flow problem for a stochastic-flow network in which elements can randomly fail with known failure probabilities. The paper includes a test-case benchmark study, which is based on a real gas transmission network. Monte-Carlo simulations are used for estimating the probability that less than the demanded volume of the commodity (for example, gas) is available in the selected network nodes. Simulated results are presented and analysed in depth by statistical methods.
Keywords: Monte Carlo methods; compressors; gas industry; graph theory; probability; reliability; stochastic processes; European gas transmission system probabilistic model ;Monte-Carlo based reliability modelling; Monte-Carlo simulations; capacity constraints; compressor stations; computer model; gas network; gas storages; gas transmission network probabilistic prototype model; graph theory approach; known failure probabilities; maximum flow problem; multistate system; network elements; network nodes; probability estimation; reliability constraints; statistical methods; stochastic-flow network; test-case benchmark study; Computational modeling; Computer network reliability; Liquefied natural gas; Monte Carlo methods; Pipelines; Probabilistic logic; Reliability; Monte-Carlo methods; gas transmission network modelling; network reliability; network resilience (ID#: 15-5807)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980306&isnumber=6980232
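A minimal Monte-Carlo sketch of the same recipe (our two-state simplification: each pipeline fails independently with a fixed probability, whereas the paper models multi-state elements on a real European network):

```python
import random
import networkx as nx

def shortfall_probability(G, source, sink, demand, p_fail, trials=5000):
    """Estimate P(max deliverable flow < demand) under independent
    random element failures, via repeated max-flow computations."""
    short = 0
    for _ in range(trials):
        H = G.copy()
        for u, v in list(G.edges):
            if random.random() < p_fail:
                H.remove_edge(u, v)
        if nx.maximum_flow_value(H, source, sink) < demand:
            short += 1
    return short / trials

G = nx.DiGraph()                        # illustrative capacities
G.add_edge("src", "hub", capacity=10)
G.add_edge("src", "alt", capacity=4)
G.add_edge("hub", "city", capacity=8)
G.add_edge("alt", "city", capacity=4)
print(shortfall_probability(G, "src", "city", demand=8, p_fail=0.05))
```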
T. Stepanova, D. Zegzhda. “Applying Large-scale Adaptive Graphs to Modeling Internet of Things Security.” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 479. doi:10.1145/2659651.2659696
Abstract: Many upcoming IT trends are based on the concept of heterogeneous networks; the Internet of Things is amongst them. Modern heterogeneous networks are characterized by hardly predictable behavior, hundreds of parameters of network nodes and connections, and the lack of a single basis for the development of control methods and algorithms. To overcome the listed problems, one needs to implement topological modeling of dynamically changing structures. In this paper, the authors propose a basic theoretical framework that allows estimation of the controllability, resiliency, scalability and other determinant parameters of complex heterogeneous networks.
Keywords: internet of things, large-scale adaptive graph, security modeling, sustainability (ID#: 15-5808)
URL: http://doi.acm.org/10.1145/2659651.2659696
Xing Chen, Wei Yu, David Griffith, Nada Golmie, Guobin Xu. “On Cascading Failures and Countermeasures Based on Energy Storage in the Smart Grid.” RACS '14 Proceedings of the 2014 Conference on Research in Adaptive and Convergent Systems, October 2014, Pages 291-296. doi:10.1145/2663761.2663770
Abstract: Recently, there have been growing concerns about electric power grid security and resilience. The performance of the power grid may suffer from component failures or targeted attacks. A sophisticated adversary may target critical components in the grid, leading to cascading failures and large blackouts. To this end, this paper begins by identifying the most critical components that lead to cascading failures in the grid and then presents a defensive mechanism using energy storage to defend against cascading failures. Based on optimal power flow control on the standard IEEE power system test cases, we systematically assess component significance, simulate attacks against power grid components, and evaluate the consequences. We also conduct extensive simulations to investigate the effectiveness of deploying Energy Storage Systems (ESSs), in terms of storage capacity and deployment locations, to mitigate cascading failures. Our data shows that integrating energy storage systems into the smart grid can efficiently mitigate cascading failures.
Keywords: cascading failure, cascading mitigation, energy storage, smart grid (ID#: 15-5809)
URL: http://doi.acm.org/10.1145/2663761.2663770
Gokce Gorbil, Omer H. Abdelrahman, Erol Gelenbe. “Storms in Mobile Networks.” Q2SWinet '14 Proceedings of the 10th ACM Symposium on QoS and Security for Wireless and Mobile Networks, September 2014, Pages 119-126. doi:10.1145/2642687.2642688
Abstract: Mobile networks are vulnerable to signalling attacks and storms caused by traffic that overloads the control plane through excessive signalling, which can be introduced via malware and mobile botnets. With the advent of machine-to-machine (M2M) communications over mobile networks, the potential for signalling storms increases due to the normally periodic nature of M2M traffic and the sheer number of communicating nodes. Several mobile network operators have also experienced signalling storms due to poorly designed applications that result in service outage. The radio resource control (RRC) protocol is particularly susceptible to such attacks, motivating this work within the EU FP7 NEMESYS project which presents simulations that clarify the temporal dynamics of user behavior and signalling, allowing us to suggest how such attacks can be detected and mitigated.
Keywords: 3G to 5G, malware, network attacks, network simulation, performance analysis, signalling storms, umts networks (ID#: 15-5810)
URL: http://doi.acm.org/10.1145/2642687.2642688
Lina Perelman, Saurabh Amin. “A Network Interdiction Model for Analyzing the Vulnerability of Water Distribution Systems.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 135-144. doi:10.1145/2566468.2566480
Abstract: This article presents a network interdiction model to assess the vulnerabilities of a class of physical flow networks. A flow network is modeled by a potential function defined over the nodes and a flow function defined over arcs (links). In particular, the difference in potential function between two nodes is characterized by a nonlinear flux function of the flow on link between the two nodes. To assess the vulnerability of the network to adversarial attack, the problem is formulated as an attacker-defender network interdiction model. The attacker's objective is to interdict the most valuable links of the network given his resource constraints. The defender's objective is to minimize power loss and the unmet demand in the network. A bi-level approach is explored to identify most critical links for network interdiction. The applicability of the proposed approach is demonstrated on a reference water distribution network, and its utility toward developing mitigation plans is discussed.
Keywords: cyber-physical systems, network flow analysis, network interdiction, vulnerability assessment, water distribution systems (ID#: 15-5811)
URL: http://doi.acm.org/10.1145/2566468.2566480
Radoslav Ivanov, Miroslav Pajic, Insup Lee. “Resilient Multidimensional Sensor Fusion Using Measurement History.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 1-10. doi:10.1145/2566468.2566475
Abstract: This work considers the problem of performing resilient sensor fusion using past sensor measurements. In particular, we consider a system with n sensors measuring the same physical variable where some sensors might be attacked or faulty. We consider a setup in which each sensor provides the controller with a set of possible values for the true value. Here, more precise sensors provide smaller sets. Since many modern sensors provide multidimensional measurements (e.g., position in three dimensions), the sets considered in this work are multidimensional polyhedra. Given the assumption that some sensors can be attacked or faulty, the paper provides a sensor fusion algorithm that obtains a fusion polyhedron which is guaranteed to contain the true value and is minimal in size. A bound on the volume of the fusion polyhedron is also proved based on the number of faulty or attacked sensors. In addition, we incorporate system dynamics in order to utilize past measurements and further reduce the size of the fusion polyhedron. We describe several ways of mapping previous measurements to current time and compare them, under different assumptions, using the volume of the fusion polyhedron. Finally, we illustrate the implementation of the best of these methods and show its effectiveness using a case study with sensor values from a real robot.
Keywords: cps security, fault-tolerance, fault-tolerant algorithms, sensor fusion (ID#: 15-5812)
URL: http://doi.acm.org/10.1145/2566468.2566475
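The one-dimensional core of this idea (our simplification; the paper works with multidimensional polyhedra and additionally exploits measurement history) is Marzullo-style interval fusion: with at most f of n sensors faulty, every point covered by at least n - f intervals is a candidate for the true value.

```python
def fuse_intervals(intervals, f):
    """Return the tightest interval containing all points covered by at
    least n - f of the n sensor intervals; it is guaranteed to contain
    the true value when at most f sensors are faulty or attacked."""
    n = len(intervals)
    endpoints = sorted({x for iv in intervals for x in iv})
    covered = [x for x in endpoints
               if sum(lo <= x <= hi for lo, hi in intervals) >= n - f]
    return (min(covered), max(covered)) if covered else None

# Three sensors, at most one faulty: the outlier [9.0, 10.0] is ignored.
print(fuse_intervals([(4.8, 5.3), (4.9, 5.4), (9.0, 10.0)], f=1))
```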
Marina Krotofil, Alvaro A. Cárdenas, Bradley Manning, Jason Larsen. “CPS: Driving Cyber-Physical Systems to Unsafe Operating Conditions by Timing DoS Attacks on Sensor Signals.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 146-155. doi:10.1145/2664243.2664290
Abstract: DoS attacks on sensor measurements used for industrial control can cause the controller of the process to use stale data. If the DoS attack is not timed properly, the use of stale data by the controller will have limited impact on the process; however, if the attacker is able to launch the DoS attack at the correct time, the use of stale data can cause the controller to drive the system to an unsafe state. Understanding the timing parameters of the physical processes does not only allow an attacker to construct a successful attack but also to maximize its impact (damage to the system). In this paper we use Tennessee Eastman challenge process to study an attacker that has to identify (in realtime) the optimal timing to launch a DoS attack. The choice of time to begin an attack is forward-looking, requiring the attacker to consider each opportunity against the possibility of a better opportunity in the future, and this lends itself to the theory of optimal stopping problems. In particular we study the applicability of the Best Choice Problem (also known as the Secretary Problem), quickest change detection, and statistical process outliers. Our analysis can be used to identify specific sensor measurements that need to be protected, and the time that security or safety teams required to respond to attacks, before they cause major damage.
Keywords: CUSUM, DoS attacks, Tennessee eastman process, cyber-physical systems, optimal stopping problems (ID#: 15-5813)
URL: http://doi.acm.org/10.1145/2664243.2664290
Ran Gelles, Amit Sahai, Akshay Wadia. “Private Interactive Communication Across an Adversarial Channel.” ITCS '14 Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, January 2014, Pages 135-144. doi:10.1145/2554797.2554812
Abstract: Consider two parties Alice and Bob, who hold private inputs x and y, and wish to compute a function f(x, y) privately in the information theoretic sense; that is, each party should learn nothing beyond f(x, y). However, the communication channel available to them is noisy. This means that the channel can introduce errors in the transmission between the two parties. Moreover, the channel is adversarial in the sense that it knows the protocol that Alice and Bob are running, and maliciously introduces errors to disrupt the communication, subject to some bound on the total number of errors. A fundamental question in this setting is to design a protocol that remains private in the presence of a large number of errors. If Alice and Bob are only interested in computing f(x, y) correctly, and not privately, then quite robust protocols are known that can tolerate a constant fraction of errors. However, none of these solutions is applicable in the setting of privacy, as they inherently leak information about the parties' inputs. This leads to the question of whether we can simultaneously achieve privacy and error-resilience against a constant fraction of errors. We show that privacy and error-resilience are contradictory goals. In particular, we show that for every constant c > 0, there exists a function f which is privately computable in the error-less setting, but for which no private and correct protocol is resilient against a c-fraction of errors. The same impossibility holds also for sub-constant noise rate, e.g., when c is exponentially small (as a function of the input size).
Keywords: adversarial noise, coding, information-theoretic security, interactive communication, private function evaluation (ID#: 15-5814)
URL: http://doi.acm.org/10.1145/2554797.2554812
Saleh Soltan, Dorian Mazauric, Gil Zussman. “Cascading Failures in Power Grids: Analysis and Algorithms.” e-Energy '14 Proceedings of the 5th International Conference on Future Energy Systems, June 2014, Pages 195-206. doi:10.1145/2602044.2602066
Abstract: This paper focuses on cascading line failures in the transmission system of the power grid. Recent large-scale power outages demonstrated the limitations of percolation- and epidemic-based tools in modeling cascades. Hence, we study cascades by using computational tools and a linearized power flow model. We first obtain results regarding the Moore-Penrose pseudo-inverse of the power grid admittance matrix. Based on these results, we study the impact of a single line failure on the flows on other lines. We also illustrate via simulation the impact of the distance and resistance distance on the flow increase following a failure, and discuss the difference from the epidemic models. We use the pseudo-inverse of admittance matrix to develop an efficient algorithm to identify the cascading failure evolution, which can be a building block for cascade mitigation. Finally, we show that finding the set of lines whose removal results in the minimum yield (the fraction of demand satisfied after the cascade) is NP-Hard and introduce a simple heuristic for finding such a set. Overall, the results demonstrate that using the resistance distance and the pseudo-inverse of admittance matrix provides important insights and can support the development of efficient algorithms.
Keywords: algorithms, cascading failures, power grid, pseudo-inverse (ID#: 15-5815)
URL: http://doi.acm.org/10.1145/2602044.2602066
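The flow-redistribution step the analysis builds on can be sketched with the linearized (DC) power flow model: bus phase angles come from the Moore-Penrose pseudo-inverse of the admittance (weighted Laplacian) matrix, and recomputing flows after removing a line shows which other lines overload next (a minimal sketch of one cascade step, not the paper's algorithms):

```python
import numpy as np

def dc_flows(lines, susceptances, injections, n):
    """Linearized power flow: assemble the weighted-Laplacian admittance
    matrix B, solve theta = pinv(B) @ p, and return per-line flows."""
    B = np.zeros((n, n))
    for (i, j), b in zip(lines, susceptances):
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    theta = np.linalg.pinv(B) @ injections
    return {(i, j): b * (theta[i] - theta[j])
            for (i, j), b in zip(lines, susceptances)}

lines = [(0, 1), (1, 2), (0, 2)]          # illustrative 3-bus triangle
susceptances = [1.0, 1.0, 1.0]
injections = np.array([1.0, 0.0, -1.0])   # generator at bus 0, load at bus 2
print(dc_flows(lines, susceptances, injections, n=3))
# One cascade step: remove the failed line (0, 2) and recompute; any
# line whose new flow exceeds its capacity fails next.
print(dc_flows(lines[:2], susceptances[:2], injections, n=3))
```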
Mahdi Zamani, Mahnush Movahedi. “Secure Location Sharing.” FOMC '14, Proceedings of the 10th ACM International Workshop on Foundations of Mobile Computing, August 2014, Pages 1-10. doi:10.1145/2634274.2634281
Abstract: In the last decade, the number of location-aware mobile devices has mushroomed. As location-based services grow fast, they raise many questions and challenges when it comes to privacy. For example, who owns the location data, and for what purpose is the data used? To answer these questions, we need new tools for location privacy. In this paper, we focus on the problem of secure location sharing, where a group of n clients want to collaborate with each other to anonymously share their location data with a location database server and execute queries based on them. To be more realistic, we assume up to a certain fraction of the clients are controlled arbitrarily by an active and computationally unbounded adversary. A relaxed version of this problem has already been studied in the literature assuming either a trusted third party or a weaker adversarial model. We alternatively propose a scalable fully-decentralized protocol for secure location sharing that tolerates up to n/6 statically-chosen malicious clients and does not require any trusted third party. We show that, unlike most other location-based services, our protocol is secure against traffic-analysis attacks. We also show that our protocol requires each client to send a polylogarithmic number of bits and compute a polylogarithmic number of operations (with respect to n) to query a point of interest based on its location.
Keywords: distributed algorithms, fault-tolerance, location-based services (ID#: 15-5816)
URL: http://doi.acm.org/10.1145/2634274.2634281
Zain Shamsi, Ankur Nandwani, Derek Leonard, Dmitri Loguinov. “Hershel: Single-Packet OS Fingerprinting.” ACM SIGMETRICS Performance Evaluation Review, Volume 42, Issue 1, June 2014, Pages 195-206. doi:10.1145/2637364.2591972
Abstract: Traditional TCP/IP fingerprinting tools (e.g., nmap) are poorly suited for Internet-wide use due to the large amount of traffic and intrusive nature of the probes. This can be overcome by approaches that rely on a single SYN packet to elicit a vector of features from the remote server; however, these methods face difficult classification problems due to the high volatility of the features and severely limited amounts of information contained therein. Since these techniques have not been studied before, we first pioneer stochastic theory of single-packet OS fingerprinting, build a database of 116 OSes, design a classifier based on our models, evaluate its accuracy in simulations, and then perform OS classification of 37.8M hosts from an Internet-wide scan.
Keywords: internet measurement, os classification, os fingerprinting (ID#: 15-5817)
URL: http://doi.acm.org/10.1145/2637364.2591972
Heath J. LeBlanc, Firas Hassan. “Resilient Distributed Parameter Estimation in Heterogeneous Time-Varying Networks.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 19-28. doi:10.1145/2566468.2566476
Abstract: In this paper, we study a lightweight algorithm for distributed parameter estimation in a heterogeneous network in the presence of adversary nodes. All nodes interact under a local broadcast model of communication in a time-varying network comprised of many inexpensive normal nodes, along with several more expensive, reliable nodes. Either the normal or reliable nodes may be tampered with and overtaken by an adversary, thus becoming an adversary node. The reliable nodes have an accurate estimate of their true parameters, whereas the inexpensive normal nodes communicate and take difference measurements with neighbors in the network in order to better estimate their parameters. The normal nodes are unsure, a priori, about which of their neighbors are normal, reliable, or adversary nodes. However, by sharing information on their local estimates with neighbors, we prove that the resilient iterative distributed estimation (RIDE) algorithm, which utilizes redundancy by removing extreme information, is able to drive the local estimates to their true parameters as long as each normal node is able to interact with a sufficient number of reliable nodes often enough and is not directly influenced by too many adversary nodes.
Keywords: adversary, clock synchronization, distributed algorithm, distributed parameter estimation, localization, resilient systems (ID#: 15-5818)
URL: http://doi.acm.org/10.1145/2566468.2566476
Benoît Libert, Marc Joye, Moti Yung. “Born and Raised Distributively: Fully Distributed Non-Interactive Adaptively-Secure Threshold Signatures with Short Shares.” PODC '14 Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing, July 2014, Pages 303-312. doi:10.1145/2611462.2611498
Abstract: Threshold cryptography is a fundamental distributed computational paradigm for enhancing the availability and the security of cryptographic public-key schemes. It does so by dividing private keys into n shares handed out to distinct servers. In threshold signature schemes, a set of at least t+1 ≤ n servers is needed to produce a valid digital signature. Availability is assured by the fact that any subset of t+1 servers can produce a signature when authorized. At the same time, the scheme should remain robust (in the fault tolerance sense) and unforgeable (cryptographically) against up to t corrupted servers; i.e., it adds quorum control to traditional cryptographic services and introduces redundancy. Originally, most practical threshold signatures have a number of demerits: They have been analyzed in a static corruption model (where the set of corrupted servers is fixed at the very beginning of the attack), they require interaction, they assume a trusted dealer in the key generation phase (so that the system is not fully distributed), or they suffer from certain overheads in terms of storage (large share sizes). In this paper, we construct practical fully distributed (the private key is born distributed), non-interactive schemes (where the servers can compute their partial signatures without communication with other servers) with adaptive security (i.e., the adversary corrupts servers dynamically based on its full view of the history of the system). Our schemes are very efficient in terms of computation, communication, and scalable storage (with private key shares of size O(1), where certain solutions incur O(n) storage costs at each server). Unlike other adaptively secure schemes, our schemes are erasure-free (reliable erasure is a hard-to-assure and hard-to-administer property in actual systems). To the best of our knowledge, such a fully distributed highly constrained scheme has been an open problem in the area. In particular, and of special interest, is the fact that Pedersen's traditional distributed key generation (DKG) protocol can be safely employed in the initial key generation phase when the system is born, although it is well-known not to ensure uniformly distributed public keys. An advantage of this is that this protocol only takes one round optimistically (in the absence of a faulty player).
Keywords: adaptive security, availability, distributed key generation, efficiency, erasure-free schemes, fault tolerance, fully distributed systems, non-interactivity, threshold signature schemes (ID#: 15-5819)
URL: http://doi.acm.org/10.1145/2611462.2611498
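The share-based availability and robustness idea underlying all threshold schemes can be illustrated with a toy Shamir secret sharing sketch (ours; the paper's contribution is a full non-interactive, adaptively secure threshold signature, which is not shown here). Any t+1 shares reconstruct the secret; t or fewer reveal nothing about it.

```python
import random

P = 2**127 - 1  # a prime modulus for the toy field (illustrative choice)

def share(secret, t, n):
    """Split secret into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, poly(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from t+1 shares."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=424242, t=2, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 424242
```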
Nathaniel Husted, Steven Myers. “Emergent Properties & Security: The Complexity of Security as a Science.” NSPW '14 Proceedings of the 2014 New Security Paradigms Workshop, September 2014, Pages 1-14. doi:10.1145/2683467.2683468
Abstract: The notion of emergent properties is becoming common place in the physical and social sciences, with applications in physics, chemistry, biology, medicine, economics, and sociology. Unfortunately, little attention has been given to the discussion of emergence in the realm of computer security, from either the attack or defense perspectives, despite there being examples of such attacks and defenses. We review the concept of emergence, discuss it in the context of computer security, argue that understanding such concepts is essential for securing our current and future systems, give examples of current attacks and defenses that make use of such concepts, and discuss the tools currently available to understand this field. We conclude by arguing that more focus needs to be given to the emergent perspective in information security, especially as we move forward to the Internet of Things and a world full of cyber-physical systems, as we believe many future attacks will make use of such ideas and defenses will require such insights.
Keywords: complex systems, information security, ubiquitous computing (ID#: 15-5820)
URL: http://doi.acm.org/10.1145/2683467.2683468
Minzhe Guo, Prabir Bhattacharya. “Diverse Virtual Replicas for Improving Intrusion Tolerance in Cloud.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 41-44. doi:10.1145/2602087.2602116
Abstract: Intrusion tolerance is important for services in the cloud to continue functioning while under attack. Byzantine fault-tolerant replication is considered a fundamental component of intrusion tolerant systems. However, a monoculture of replicas can render the theoretical properties of a Byzantine fault-tolerant system ineffective, even when proactive recovery techniques are employed. This paper exploits the design diversity available from off-the-shelf operating system products and studies how to diversify the configurations of virtual replicas for improving the resilience of the service in the presence of attacks. A game-theoretic model is proposed for studying the optimal diversification strategy for the system defender, and an efficient algorithm is designed to approximate the optimal defense strategies in large games.
Keywords: diversity, intrusion tolerance, virtual replica (ID#: 15-5821)
URL: http://doi.acm.org/10.1145/2602087.2602116
Stjepan Picek, Bariş Ege, Lejla Batina, Domagoj Jakobovic, Łukasz Chmielewski, Marin Golub. “On Using Genetic Algorithms for Intrinsic Side-Channel Resistance: The Case of AES S-Box.” CS2 '14 Proceedings of the First Workshop on Cryptography and Security in Computing Systems, January 2014, Pages 13-18. doi:10.1145/2556315.2556319
Abstract: Finding balanced S-boxes with high nonlinearity and low transparency order is a difficult problem. The property of transparency order is important since it specifies the resilience of an S-box against differential power analysis. Better values for transparency order and hence improved side-channel security often imply less in terms of nonlinearity. Therefore, it is impossible to find an S-box with all optimal values. Currently, there are no algebraic procedures that can give the preferred and complete set of properties for an S-box. In this paper, we employ evolutionary algorithms to find S-boxes with desired cryptographic properties. Specifically, we conduct experiments for the 8×8 S-box case as used in the AES standard. The results of our experiments proved the feasibility of finding S-boxes with the desired properties in the case of AES. In addition, we show preliminary results of side-channel experiments on different versions of "improved" S-boxes.
Keywords: S-box, block ciphers, genetic algorithms, side-channel analysis, transparency order (ID#: 15-5822)
URL: http://doi.acm.org/10.1145/2556315.2556319
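To make the search concrete, here is a minimal (1+1)-style evolutionary sketch for the 8x8 case (our simplification; the paper uses full genetic algorithms and also optimises transparency order, which is omitted here). Fitness is the S-box nonlinearity, computed with a fast Walsh-Hadamard transform, and mutation swaps two entries so the S-box stays bijective:

```python
import random

N = 8
SIZE = 1 << N  # 256 entries for an 8x8 S-box

def nonlinearity(sbox):
    """Minimum nonlinearity over all 255 nonzero component functions
    b.S(x), each evaluated with a fast Walsh-Hadamard transform."""
    best = SIZE
    for b in range(1, SIZE):
        w = [(-1) ** bin(b & sbox[x]).count("1") for x in range(SIZE)]
        h = 1
        while h < SIZE:  # in-place butterfly WHT
            for i in range(0, SIZE, 2 * h):
                for j in range(i, i + h):
                    u, v = w[j], w[j + h]
                    w[j], w[j + h] = u + v, u - v
            h *= 2
        best = min(best, SIZE // 2 - max(abs(x) for x in w) // 2)
    return best

def mutate(sbox):
    """Swap two entries so the S-box remains a permutation (balanced)."""
    s = sbox[:]
    i, j = random.sample(range(SIZE), 2)
    s[i], s[j] = s[j], s[i]
    return s

current = list(range(SIZE))
random.shuffle(current)
best_nl = nonlinearity(current)
for _ in range(50):  # tiny budget; real searches run far longer
    child = mutate(current)
    nl = nonlinearity(child)
    if nl >= best_nl:
        current, best_nl = child, nl
print(best_nl)
```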
Tua A. Tamba, M. D. Lemmon. “Forecasting the Resilience of Networked Dynamical Systems under Environmental Perturbation.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 61-62. doi:10.1145/2566468.2576848
Abstract: (not provided)
Keywords: distance-to-bifurcation, resilience, sum-of-square relaxation (ID#: 15-5823)
URL: http://doi.acm.org/10.1145/2566468.2576848
Marica Amadeo, Claudia Campolo, Antonella Molinaro. “Multi-source Data Retrieval in IoT via Named Data Networking.” ICN '14 Proceedings of the 1st International Conference on Information-Centric Networking, September 2014, Pages 67-76. doi:10.1145/2660129.2660148
Abstract: The new era of the Internet of Things (IoT) is driving a revolution in computing and communication technologies spanning every aspect of our lives. Thanks to its innovative concepts, such as named content, name-based routing and in-network caching, Named Data Networking (NDN) appears as a key enabling paradigm for IoT. Despite its potential, the support of IoT applications often requires some modifications in the NDN engine for a more efficient and effective exchange of packets. In this paper, we propose a baseline NDN framework for the support of reliable retrieval of data from different wireless producers that can answer the same Interest packet (e.g., a monitoring application collecting environmental data from sensors in a target area). The solution is evaluated through simulations in ndnSIM, and the achieved results show that, by leveraging the concept of the exclude field and ad hoc defined schemes for Data suppression and collision avoidance, it leads to improved performance in terms of data collection time and network overhead.
Keywords: data retrieval, internet of things, named data networking, naming, transport (ID#: 15-5824)
URL: http://doi.acm.org/10.1145/2660129.2660148
Yibo Zhu, Xia Zhou, Zengbin Zhang, Lin Zhou, Amin Vahdat, Ben Y. Zhao, Haitao Zheng. “Cutting the Cord: A Robust Wireless Facilities Network for Data Centers.” MobiCom '14 Proceedings of the 20th Annual International Conference on Mobile Computing and Networking, September 2014, Pages 581-592. doi:10.1145/2639108.2639140
Abstract: Today's network control and management traffic are limited by their reliance on existing data networks. Fate sharing in this context is highly undesirable, since control traffic has very different availability and traffic delivery requirements. In this paper, we explore the feasibility of building a dedicated wireless facilities network for data centers. We propose Angora, a low-latency facilities network using low-cost, 60GHz beamforming radios that provides robust paths decoupled from the wired network, and flexibility to adapt to workloads and network dynamics. We describe our solutions to address challenges in link coordination, link interference and network failures. Our testbed measurements and simulation results show that Angora enables large number of low-latency control paths to run concurrently, while providing low latency end-to-end message delivery with high tolerance for radio and rack failures.
Keywords: 60ghz wireless, data centers, wireless beamforming (ID#: 15-5825)
URL: http://doi.acm.org/10.1145/2639108.2639140
Anupam Das, Nikita Borisov, Prateek Mittal, Matthew Caesar. “Re3: Relay Reliability Reputation for Anonymity Systems.” ASIA CCS '14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 63-74. doi:10.1145/2590296.2590338
Abstract: To conceal user identities, Tor, a popular anonymity system, forwards traffic through multiple relays. These relays, however, are often unreliable, leading to a degraded user experience. Worse yet, malicious relays may strategically introduce deliberate failures to increase their chance of compromising anonymity. In this paper we propose a reputation system that profiles the reliability of relays in an anonymity system based on users' past experience. A particular challenge is that an observed failure in an anonymous communication cannot be uniquely attributed to a single relay. This enables an attack where malicious relays can target a set of honest relays in order to drive down their reputation. Our system defends against this attack in two ways. Firstly, we use an adaptive exponentially-weighted moving average (EWMA) that ensures malicious relays adopting time-varying strategic behavior obtain low reputation scores over time. Secondly, we propose a filtering scheme based on the evaluated reputation score that can effectively discard relays involved in such attacks. We use probabilistic analysis, simulations, and real-world experiments to validate our reputation system. We show that the dominant strategy for an attacker is to not perform deliberate failures, but rather maintain a high quality of service. Our reputation system also significantly improves the reliability of path construction even in the absence of attacks. Finally, we show that the benefits of our reputation system can be realized with a moderate number of observations, making it feasible for individual clients to perform their own profiling, rather than relying on an external entity.
Keywords: DOS attack, anonymity, reputation systems, tor network (ID#: 15-5826)
URL: http://doi.acm.org/10.1145/2590296.2590338
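The adaptive-EWMA idea can be sketched as follows (a simplification with assumed smoothing constants; the paper's exact adaptation rule and its filtering scheme differ): failures pull a relay's score down faster than successes pull it up, so a relay alternating good and bad behaviour cannot hold a high score.

```python
class RelayReputation:
    """Asymmetric EWMA over observed circuit outcomes for one relay."""

    def __init__(self, a_success=0.95, a_failure=0.7):  # assumed constants
        self.score = 0.5  # neutral prior
        self.a_success, self.a_failure = a_success, a_failure

    def observe(self, success):
        # A smaller smoothing factor on failures means failures are
        # weighted more heavily, so on/off (time-varying) strategies
        # converge to a low reputation over time.
        a = self.a_success if success else self.a_failure
        self.score = a * self.score + (1 - a) * (1.0 if success else 0.0)

r = RelayReputation()
for outcome in [True, True, False, True, False, True]:
    r.observe(outcome)
print(round(r.score, 3))  # clients prefer relays with higher scores
```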
Paulo Casanova, David Garlan, Bradley Schmerl, Rui Abreu. “Diagnosing Unobserved Components in Self-Adaptive Systems.” SEAMS 2014 Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, June 2014, Pages 75-84. doi:10.1145/2593929.2593946
Abstract: Availability is an increasingly important quality for today's software-based systems and it has been successfully addressed by the use of closed-loop control systems in self-adaptive systems. Probes are inserted into a running system to obtain information and the information is fed to a controller that, through provided interfaces, acts on the system to alter its behavior. When a failure is detected, pinpointing the source of the failure is a critical step for a repair action. However, information obtained from a running system is commonly incomplete due to probing costs or unavailability of probes. In this paper we address the problem of fault localization in the presence of incomplete system monitoring. We may not be able to directly observe a component but we may be able to infer its health state. We provide formal criteria to determine when health states of unobservable components can be inferred and establish formal theoretical bounds for accuracy when using any spectrum-based fault localization algorithm.
Keywords: Diagnostics, Monitoring, Self-adaptive systems (ID#: 15-5827)
URL: http://doi.acm.org/10.1145/2593929.2593946
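The spectrum-based scoring that such bounds apply to can be sketched with the standard Ochiai coefficient (one common fault localization scorer; the paper's actual contribution, inferring the health of unobserved components, is not reproduced here):

```python
import math

def ochiai_scores(spectra, errors):
    """Rank components by similarity between their activity pattern and
    the failure pattern: spectra[t][c] = 1 if component c was exercised
    in run t, and errors[t] = 1 if run t failed."""
    total_failures = sum(errors)
    scores = []
    for c in range(len(spectra[0])):
        fail_active = sum(e and s[c] for s, e in zip(spectra, errors))
        active = sum(s[c] for s in spectra)
        denom = math.sqrt(total_failures * active)
        scores.append(fail_active / denom if denom else 0.0)
    return scores

spectra = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 0, 0]]  # illustrative runs
errors = [1, 1, 0, 1]
print(ochiai_scores(spectra, errors))  # component 0 looks most suspicious
```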
Michael Backes, Fabian Bendun, Ashish Choudhury, Aniket Kate. “Asynchronous MPC with a Strict Honest Majority Using Non-Equivocation.” PODC '14 Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing, July 2014, Pages 10-19. doi:10.1145/2611462.2611490
Abstract: Multiparty computation (MPC) among n parties can tolerate up to t < n/2 active corruptions in a synchronous communication setting; however, in an asynchronous communication setting, the resiliency bound decreases to only t < n/3 active corruptions. We improve the resiliency bound for asynchronous MPC (AMPC) to match synchronous MPC using non-equivocation. Non-equivocation is a message authentication mechanism to restrict a corrupted sender from making conflicting statements to different (honest) parties. It can be implemented using an increment-only counter and a digital signature oracle, realizable with trusted hardware modules readily available in commodity computers and smartphone devices. A non-equivocation mechanism can also be transferable and allow a receiver to verifiably transfer the authenticated statement to other parties. In this work, using transferable non-equivocation, we present an AMPC protocol tolerating t < n/2 faults. From a practical point of view, our AMPC protocol requires fewer setup assumptions than the previous AMPC protocol with t < n/2 by Beerliová-Trubíniová, Hirt and Nielsen [PODC 2010]: unlike their AMPC protocol, it does not require any synchronous broadcast round at the beginning of the protocol and avoids the threshold homomorphic encryption setup assumption. Moreover, our AMPC protocol is also efficient and provides a gain of Θ(n) in the communication complexity per multiplication gate, over the AMPC protocol of Beerliová-Trubíniová et al. In the process, using non-equivocation, we also define the first asynchronous verifiable secret sharing (AVSS) scheme with t < n/2, which is of independent interest to threshold cryptography.
Keywords: asynchrony, multiparty computation (MPC), non-equivocation, reduced assumptions, resiliency, verifiable secret sharing (VSS) (ID#: 15-5828)
URL: http://doi.acm.org/10.1145/2611462.2611490
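The increment-only counter plus signature oracle that realizes non-equivocation can be sketched in a few lines. In the toy version below an HMAC stands in for the trusted-hardware signature oracle; a real deployment would use digital signatures so that attestations are transferable (verifiable by third parties), which the paper's AMPC protocol relies on.

    import hmac, hashlib

    class NonEquivocationCounter:
        """Toy non-equivocation oracle: binds each outgoing message to a
        strictly increasing counter value and authenticates the pair."""
        def __init__(self, key):
            self._key = key
            self._counter = 0

        def attest(self, message):
            self._counter += 1  # increment-only: a value is never reused,
                                # so two conflicting statements cannot both
                                # carry the same counter value
            tag = hmac.new(self._key,
                           self._counter.to_bytes(8, 'big') + message,
                           hashlib.sha256).digest()
            return self._counter, message, tag

    def verify(key, counter, message, tag):
        expected = hmac.new(key, counter.to_bytes(8, 'big') + message,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)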
Lannan Luo, Jiang Ming, Dinghao Wu, Peng Liu, Sencun Zhu. “Semantics-Based Obfuscation-Resilient Binary Code Similarity Comparison with Applications to Software Plagiarism Detection.” FSE 2014 Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, November 2014, Pages 389-400. doi:10.1145/2635868.2635900
Abstract: Existing code similarity comparison methods, whether source or binary code based, are mostly not resilient to obfuscations. In the case of software plagiarism, emerging obfuscation techniques have made automated detection increasingly difficult. In this paper, we propose a binary-oriented, obfuscation-resilient method based on a new concept, longest common subsequence of semantically equivalent basic blocks, which combines rigorous program semantics with longest common subsequence based fuzzy matching. We model the semantics of a basic block by a set of symbolic formulas representing the input-output relations of the block. This way, the semantics equivalence (and similarity) of two blocks can be checked by a theorem prover. We then model the semantics similarity of two paths using the longest common subsequence with basic blocks as elements. This novel combination has resulted in strong resiliency to code obfuscation. We have developed a prototype and our experimental results show that our method is effective and practical when applied to real-world software.
Keywords: Software plagiarism detection, binary code similarity comparison, obfuscation, symbolic execution, theorem proving (ID#: 15-5829)
URL: http://doi.acm.org/10.1145/2635868.2635900
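The paper's central object, a longest common subsequence over semantically equivalent basic blocks, is standard dynamic programming with the equality test swapped out. In the sketch below, blocks_equivalent is a placeholder for the theorem-prover check of symbolic input-output formulas described in the abstract; any symmetric predicate can be plugged in.

    def lcs_similarity(path_a, path_b, blocks_equivalent):
        """Normalized length of the longest common subsequence of two
        basic-block paths, where 'matching' means semantic equivalence
        rather than byte equality."""
        m, n = len(path_a), len(path_b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if blocks_equivalent(path_a[i - 1], path_b[j - 1]):
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        longest = max(m, n)
        return dp[m][n] / longest if longest else 0.0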
Divesh Aggarwal, Yevgeniy Dodis, Shachar Lovett. “Non-malleable Codes from Additive Combinatorics.” STOC '14, Proceedings of the 46th Annual ACM Symposium on Theory of Computing, May 2014, Pages 774-783. doi:10.1145/2591796.2591804
Abstract: Non-malleable codes provide a useful and meaningful security guarantee in situations where traditional error correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value. Although such codes do not exist if the family of "tampering functions" F is completely unrestricted, they are known to exist for many broad tampering families F. One such natural family is the family of tampering functions in the so called split-state model. Here the message m is encoded into two shares L and R, and the attacker is allowed to arbitrarily tamper with L and R individually. The split-state tampering arises in many realistic applications, such as the design of non-malleable secret sharing schemes, motivating the question of designing efficient non-malleable codes in this model. Prior to this work, non-malleable codes in the split-state model received considerable attention in the literature, but were constructed either (1) in the random oracle model [16], or (2) relied on advanced cryptographic assumptions (such as non-interactive zero-knowledge proofs and leakage-resilient encryption) [26], or (3) could only encode 1-bit messages [14]. As our main result, we build the first efficient, multi-bit, information-theoretically-secure non-malleable code in the split-state model. The heart of our construction uses the following new property of the inner-product function ⟨L,R⟩ over the vector space F_p^n (for a prime p and large enough dimension n): if L and R are uniformly random over F_p^n, and f,g : F_p^n → F_p^n are two arbitrary functions on L and R, then the joint distribution (⟨L,R⟩, ⟨f(L),g(R)⟩) is "close" to the convex combination of "affine distributions" {(U, aU + b) | a,b ∈ F_p}, where U is uniformly random in F_p. In turn, the proof of this surprising property of the inner product function critically relies on some results from additive combinatorics, including the so called Quasi-polynomial Freiman-Ruzsa Theorem which was recently established by Sanders [29] as a step towards resolving the Polynomial Freiman-Ruzsa conjecture.
Keywords: (not provided) (ID#: 15-5830)
URL: http://doi.acm.org/10.1145/2591796.2591804
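A toy version of the inner-product core is easy to write down, with the caveat that it illustrates only the encoding invariant ⟨L,R⟩ = m (mod p); the actual non-malleable code wraps this primitive in additional machinery, and the prime and dimension below are arbitrary demonstration choices.

    import secrets

    P = 2**31 - 1  # a prime; the paper works over F_p^n for large enough n
    N = 16

    def encode(m):
        """Toy split-state encoding: sample shares L, R with <L,R> = m mod p."""
        assert 0 <= m < P
        while True:
            L = [secrets.randbelow(P) for _ in range(N)]
            if L[-1] != 0:  # need the last coordinate invertible mod P
                break
        R = [secrets.randbelow(P) for _ in range(N - 1)]
        partial = sum(l * r for l, r in zip(L, R)) % P
        # Solve for the last coordinate of R so the inner product hits m.
        last = ((m - partial) * pow(L[-1], P - 2, P)) % P
        return L, R + [last]

    def decode(L, R):
        return sum(l * r for l, r in zip(L, R)) % P

An attacker who tampers with L and R separately (never seeing both) can, per the paper's main lemma, shift the decoded value only in an essentially affine and message-independent way.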
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Control Theory and Privacy, 2014 Part 1 |
In the Science of Security, control theory offers methods and approaches to potentially solve hard problems. The research work presented here specifically addresses issues in privacy. The work was presented in 2014.
Cox, A.; Roy, S.; Warnick, S., “A Science of System Security,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 487, 492, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039428
Abstract: As the internet becomes the information-technology backbone for more and more operations, including critical infrastructures such as water and power systems, the security problems introduced by linking such operations to the internet become more of a concern. Various communities have considered these problems and approached solutions from a variety of perspectives. In this paper, we consider the contributions we believe control theory can make towards developing tools for analyzing whole system security, that is, security of a system that may include its physical and human elements as well as its cyber components. In particular, we contrast notions of security focused on protecting information, and thus concerned primarily with delivering the right information to the right people (and no one else), with a different perspective on system security focused on protecting system functionality, which is concerned primarily with system robustness to particular attacks (and may not be concerned with privacy of communications).
Keywords: security of data; Internet; control theory; information protection; information technology backbone; security notion; system functionality protection; system security; Communities; Computational modeling; Computer security; Computers; Robustness; US Government (ID#: 15-5739)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039428&isnumber=7039338
Srivastava, M., “In Sensors We Trust — A Realistic Possibility?” Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, vol., no., pp. 1, 1, 26-28 May 2014. doi:10.1109/DCOSS.2014.65
Abstract: Sensors of diverse capabilities and modalities, carried by us or deeply embedded in the physical world, have invaded our personal, social, work, and urban spaces. Our relationship with these sensors is a complicated one. On the one hand, these sensors collect rich data that are shared and disseminated, often initiated by us, with a broad array of service providers, interest groups, friends, and family. Embedded in this data is information that can be used to algorithmically construct a virtual biography of our activities, revealing intimate behaviors and lifestyle patterns. On the other hand, we and the services we use increasingly depend, directly and indirectly, on information originating from these sensors for making a variety of decisions, both routine and critical, in our lives. The quality of these decisions and our confidence in them depend directly on the quality of the sensory information and our trust in the sources. Sophisticated adversaries, benefiting from the same technology advances as the sensing systems, can manipulate sensory sources and analyze data in subtle ways to extract sensitive knowledge, cause erroneous inferences, and subvert decisions. The consequences of these compromises will only amplify as our society increasingly adopts complex human-cyber-physical systems with increased reliance on sensory information and real-time decision cycles. Drawing upon examples of this two-faceted relationship with sensors in applications such as mobile health and sustainable buildings, this talk will discuss the challenges inherent in designing a sensor information flow and processing architecture that is sensitive to the concerns of both producers and consumers. For the pervasive sensing infrastructure to be trusted by both, it must be robust to active adversaries who deceptively extract private information, manipulate beliefs, and subvert decisions. While completely solving these challenges would require a new science of resilient, secure, and trustworthy networked sensing and decision systems, combining the hitherto separate disciplines of distributed embedded systems, network science, control theory, security, behavioral science, and game theory, this talk will provide some initial ideas. These include an approach to enabling privacy-utility trade-offs that balance the tension between the risk of information sharing to the producer and the value of information sharing to the consumer, and a method to secure systems against physical manipulation of sensed information.
Keywords: information dissemination; sensors; information sharing; processing architecture; secure systems; sensing infrastructure; sensor information flow; Architecture; Buildings; Computer architecture; Data mining; Information management; Security; Sensors (ID#: 15-5740)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846138&isnumber=6846129
Nai-Wei Lo; Yohan, A., “Danger Theory-Based Privacy Protection Model for Social Networks,” Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, vol., no., pp. 1397, 1406, 7-10 Sept. 2014. doi:10.15439/2014F129
Abstract: Privacy protection issues in Social Networking Sites (SNS) usually arise from insufficient user privacy control mechanisms offered by service providers, unauthorized usage of users' data by SNS, and a lack of appropriate privacy protection schemes for users' data at the SNS servers. In this paper, we propose a privacy protection model based on the danger theory concept to provide automatic detection and blocking of sensitive user information revealed in social communications. By utilizing the dynamic adaptability feature of danger theory, we show how a privacy protection model for SNS users can be built with system effectiveness and reasonable computing cost. A prototype based on the proposed model is constructed and evaluated. Our experiment results show that the proposed model achieves an 88.9% detection and blocking rate on average for user-sensitive data revealed by the services of SNS.
Keywords: data privacy; social networking (online); SNS; danger theory; dynamic adaptability feature; privacy protection; social communication; social networking sites; user privacy control mechanism; Adaptation models; Cryptography; Data privacy; Databases; Immune system; Privacy; Social network services (ID#: 15-5741)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933181&isnumber=6932982
Ward, J.R.; Younis, M., “Examining the Effect of Wireless Sensor Network Synchronization on Base Station Anonymity,” Military Communications Conference (MILCOM), 2014 IEEE, vol., no., pp. 204, 209, 6-8 Oct. 2014. doi:10.1109/MILCOM.2014.39
Abstract: In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities with applications ranging from industrial control on a factory floor to reconnaissance of a hostile border. A typical WSN topology that applies to most applications allows sensors to act as data sources that forward their measurements to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary that desires to achieve the most impactful attack possible against a WSN. An adversary may employ traffic analysis techniques such as evidence theory to identify the BS based on network traffic flow even when the WSN implements conventional security mechanisms. This motivates a need for WSN operators to achieve improved BS anonymity to protect the identity, role, and location of the BS. Many traffic analysis countermeasures have been proposed in literature, but are typically evaluated based on data traffic only, without considering the effects of network synchronization on anonymity performance. In this paper we use evidence theory analysis to examine the effects of WSN synchronization on BS anonymity by studying two commonly used protocols, Reference Broadcast Synchronization (RBS) and Timing-synch Protocol for Sensor Networks (TPSN).
Keywords: protocols; synchronisation; telecommunication network topology; telecommunication security; telecommunication traffic; wireless sensor networks; BS anonymity improvement; RBS; TPSN; WSN topology; base station anonymity; data sources; evidence theory analysis; network traffic flow; reference broadcast synchronization; security mechanisms; timing-synch protocol for sensor networks; traffic analysis techniques; wireless sensor network synchronization; Protocols; Receivers; Sensors; Synchronization; Wireless communication; Wireless sensor networks; RBS; TPSN; anonymity; location privacy; synchronization; wireless sensor network (ID#: 15-5742)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956760&isnumber=6956719
Tsegaye, T.; Flowerday, S., “Controls for Protecting Critical Information Infrastructure from Cyberattacks,” Internet Security (WorldCIS), 2014 World Congress on, vol., no., pp. 24, 29, 8-10 Dec. 2014. doi:10.1109/WorldCIS.2014.7028160
Abstract: Critical information infrastructure has enabled organisations to store large amounts of information on their systems and deliver it via networks such as the internet. Users who are connected to the internet are able to access various internet services provided by critical information infrastructure. However, some organisations have not effectively secured their critical information infrastructure, and hackers, disgruntled employees and other entities have taken advantage of this by launching cyberattacks on their critical information infrastructure. They do this by using cyberthreats to exploit vulnerabilities in critical information infrastructure which organisations fail to secure. As a result, cyberthreats are able to steal or damage confidential information stored on systems or take down websites, preventing access to information. Despite this, risk strategies can be used to implement a number of security controls: preventive, detective and corrective controls, which together form a system of controls. This will ensure that the confidentiality, integrity and availability of information are preserved, thus reducing risks to information. This system of controls is based on General Systems Theory, which states that the elements of a system are interdependent and contribute to the operation of the whole system. Finally, a model is proposed to address insecure critical information infrastructure.
Keywords: Internet; business data processing; computer crime; data integrity; data privacy; risk management; Internet service access; confidential information stealing; corrective control; critical information infrastructure protection; cyberattacks; cyberthreats; detective control; disgruntled employees; general systems theory; hackers; information access; information availability; information confidentiality; information integrity; organisational information; preventive control; risk reduction; security controls; vulnerability exploitation; Availability; Computer crime; Malware; Personnel; Planning; Critical Information Infrastructure; Cyberattacks; Cyberthreats; Security Controls; Vulnerabilities (ID#: 15-5743)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028160&isnumber=7027983
Hsu, J.; Gaboardi, M.; Haeberlen, A.; Khanna, S.; Narayan, A.; Pierce, B.C.; Roth, A., “Differential Privacy: An Economic Method for Choosing Epsilon,” Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, vol., no., pp. 398, 410, 19-22 July 2014. doi:10.1109/CSF.2014.35
Abstract: Differential privacy is becoming a gold standard notion of privacy; it offers a guaranteed bound on loss of privacy due to release of query results, even under worst-case assumptions. The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. However, the question of when differential privacy works in practice has received relatively little attention. In particular, there is still no rigorous method for choosing the key parameter ε, which controls the crucial tradeoff between the strength of the privacy guarantee and the accuracy of the published results. In this paper, we examine the role of these parameters in concrete applications, identifying the key considerations that must be addressed when choosing specific values. This choice requires balancing the interests of two parties with conflicting objectives: the data analyst, who wishes to learn something about the data, and the prospective participant, who must decide whether to allow their data to be included in the analysis. We propose a simple model that expresses this balance as formulas over a handful of parameters, and we use our model to choose ε for a series of simple statistical studies. We also explore a surprising insight: in some circumstances, a differentially private study can be more accurate than a non-private study for the same cost, under our model. Finally, we discuss the simplifying assumptions in our model and outline a research agenda for possible refinements.
Keywords: data analysis; data privacy; Epsilon; data analyst; differential privacy; differentially private algorithms; economic method; privacy guarantee; Accuracy; Analytical models; Cost function; Data models; Data privacy; Databases; Privacy; Differential Privacy (ID#: 15-5744)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957125&isnumber=6957090
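The tradeoff the paper prices can be made concrete with the standard Laplace mechanism: a query of sensitivity Δ released with parameter ε carries expected absolute error on the order of Δ/ε, so a smaller ε buys stronger privacy at the cost of accuracy. A minimal sketch, with the clipping bounds as assumed inputs:

    import random
    import statistics

    def laplace_noise(scale):
        # The difference of two iid exponentials is Laplace-distributed.
        lam = 1.0 / scale
        return random.expovariate(lam) - random.expovariate(lam)

    def private_mean(data, lo, hi, epsilon):
        """epsilon-DP release of the mean of values clipped to [lo, hi];
        the clipped mean has sensitivity (hi - lo) / len(data)."""
        clipped = [min(max(x, lo), hi) for x in data]
        sensitivity = (hi - lo) / len(data)
        return statistics.fmean(clipped) + laplace_noise(sensitivity / epsilon)

    # e.g. private_mean([23, 31, 47, 52], lo=0, hi=100, epsilon=0.5)

The paper's contribution is the economic model that weighs this accuracy loss against each participant's expected harm to pick ε; the mechanism itself is the standard ingredient shown here.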
Tan, A.Z.Y.; Wen Yong Chua; Chang, K.T.T., “Location Based Services and Information Privacy Concerns among Literate and Semi-literate Users,” System Sciences (HICSS), 2014 47th Hawaii International Conference on, vol., no., pp. 3198, 3206, 6-9 Jan. 2014. doi:10.1109/HICSS.2014.394
Abstract: Location-based services mobile applications are becoming increasingly prevalent among the large population of semi-literate users living in emerging economies, due to their low cost and ubiquity. However, usage of location-based services is still threatened by information privacy concerns. Studies have typically only addressed how to mitigate information privacy concerns for literate users, not semi-literate users. To fill that gap and better understand information privacy concerns among different communities, this study draws upon theories of perceptual control and familiarity to identify the antecedents of information privacy concerns related to location-based services and user literacy. The proposed research model is empirically tested in a laboratory experiment. The findings show that the two location-based service channels (push and pull) affect the degree of information privacy concerns differently for literate and semi-literate users. Implications for enhancing usage intentions and mitigating information privacy concerns for different types of mobile applications are discussed.
Keywords: data privacy; mobile computing; social aspects of automation; emerging economies; information privacy concerns; laboratory experiment; location-based service channels; mobile applications; pull channel; push channel; semiliterate users; usage intentions; user literacy; Analysis of variance; Educational institutions; Mobile communication; Mobile handsets; Privacy; Standards (ID#: 15-5745)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758998&isnumber=6758592
Zheng Yan; Xueyun Li; Kantola, R., “Personal Data Access Based on Trust Assessment in Mobile Social Networking,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 989, 994, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.131
Abstract: Trustworthy personal data access control at a semi-trusted or distrusted Cloud Service Provider (CSP) remains a practical issue even though cloud computing is now widely deployed. Many existing solutions suffer from high computation and communication costs, and are impractical to deploy in reality due to usability issues. With the rapid growth and popularity of mobile social networking, trust relationships in different contexts can be assessed based on mobile social networking activities, behaviors, and experiences. Obviously, such trust cues extracted from social networking are helpful in automatically managing personal data access at the cloud with sound usability. In this paper, we propose a scheme to secure personal data access at the CSP according to trust assessed in mobile social networking. Security and performance evaluations show the efficiency and effectiveness of our scheme for practical adoption.
Keywords: authorisation; cloud computing; mobile computing; social networking (online); trusted computing; CSP; cloud computing; cloud service provider; mobile social networking; trust assessment; trustworthy personal data access control; Access control; Complexity theory; Context; Cryptography; Mobile communication; Mobile computing; Social network services; Trust; access control; cloud computing; reputation; social networking; trust assessment (ID#: 15-5746)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011357&isnumber=7011202
Ta-Chih Yang; Ming-Huang Guo, “An A-RBAC Mechanism for a Multi-Tenancy Cloud Environment,” Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE), 2014 4th International Conference on, vol., no., pp. 1, 5, 11-14 May 2014. doi:10.1109/VITAE.2014.6934436
Abstract: With the evolution of software technology, companies require more high-performance hardware to enhance their competitiveness. Cloud computing, the result of distributed computing and grid computing processes, is gradually being seen as a solution for these companies. Cloud computing can virtualize existing software and hardware to reduce costs. Thus, companies only require high Internet bandwidth and devices to access cloud services on the Internet. This decreases many overhead costs and the number of IT staff required. When many companies rent a cloud service simultaneously, this is called a multi-tenancy cloud service. However, safe access to resources is important when adopting multi-tenancy cloud computing technology, and the cloud computing environment is vulnerable to network-related attacks. This research improves the role-based access control authorization mechanism and combines it with an attribute check mechanism to determine which tenant a user can access. The enhanced authorization can improve the safety of cloud computing services and protect data privacy.
Keywords: authorisation; cloud computing; data privacy; grid computing; A-RBAC mechanism; IT staff; attribute check mechanism; cloud computing; cloud service; data privacy; distributed computing; grid computing processes; high Internet bandwidth; high-performance hardware; multitenancy cloud computing technology; multitenancy cloud environment; network-related attacks; role-based access control authorization mechanism; software technology; Authentication; Authorization; Cloud computing; Companies; Cryptography; Servers; Attribute; Authorization; Multi-tenancy; Role-based access control (ID#: 15-5747)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934436&isnumber=6934393
Boyang Zhou; Wen Gao; Shanshan Zhao; Xinjia Lu; Zhong Du; Chunming Wu; Qiang Yang, “Virtual Network Mapping for Multi-Domain Data Plane in Software-Defined Networks,” Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE), 2014 4th International Conference on, vol., no., pp. 1, 5, 11-14 May 2014. doi:10.1109/VITAE.2014.6934439
Abstract: Software-Defined Networking (SDN) separates the control plane from the data plane to improve control flexibility, supporting multiple services with isolated physical resources. In SDN, virtual network (VN) mapping is required by network services for allocating these resources in multi-domain SDN. This mapping problem is challenged by its NP-completeness and by the business need to keep domain topologies private. We propose a novel multi-domain mapping algorithm for SDN using a distributed architecture to achieve better efficiency and flexibility than the traditional PolyViNE approach while protecting privacy. In simulations on a large synthesized topology with 10 to 40 domains, our approach is 25% and 15% faster than PolyViNE in time, and 30% better at balancing load across multiple controllers.
Keywords: computational complexity; computer network security; data protection; resource allocation; telecommunication network topology; virtual private networks; NP-complete; PolyViNE approach; SDN; VN mapping; business privacy; control plane; data plane; distributed architecture; domain topology protection; load balancing; multidomain data plane; multidomain mapping algorithm; resource allocation; software-defined network; virtual network mapping; Bandwidth; Computer architecture; Control systems; Heuristic algorithms; Network topology; Partitioning algorithms; Topology; Network Management; Software-Defined Networking; Virtual Network Mapping (ID#: 15-5748)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934439&isnumber=6934393
Kia, S.S.; Cortes, J.; Martinez, S., “Periodic and Event-Triggered Communication for Distributed Continuous-Time Convex Optimization,” American Control Conference (ACC), 2014, vol., no., pp. 5010, 5015, 4-6 June 2014. doi:10.1109/ACC.2014.6859122
Abstract: We propose a distributed continuous-time algorithm to solve a network optimization problem where the global cost function is a strictly convex function composed of the sum of the local cost functions of the agents. We establish that our algorithm, when implemented over strongly connected and weight-balanced directed graph topologies, converges exponentially fast when the local cost functions are strongly convex and their gradients are globally Lipschitz. We also characterize the privacy preservation properties of our algorithm and extend the convergence guarantees to the case of time-varying, strongly connected, weight-balanced digraphs. When the network topology is a connected undirected graph, we show that exponential convergence is still preserved if the gradients of the strongly convex local cost functions are locally Lipschitz, while it is asymptotic if the local cost functions are convex. We also study discrete-time communication implementations. Specifically, we provide an upper bound on the stepsize of a synchronous periodic communication scheme that guarantees convergence over connected undirected graph topologies and, building on this result, design a centralized event-triggered implementation that is free of Zeno behavior. Simulations illustrate our results.
Keywords: convex programming; directed graphs; network theory (graphs); Zeno behavior; connected undirected graph; convex function; cost functions; distributed continuous-time algorithm; distributed continuous-time convex optimization; event-triggered communication; global cost function; network optimization problem; periodic communication; privacy preservation properties; strongly connected weight-balanced directed graph; synchronous periodic communication scheme; Algorithm design and analysis; Convergence; Convex functions; Cost function; Privacy; Topology; Control of networks; Optimization algorithms (ID#: 15-5749)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859122&isnumber=6858556
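Although the paper works in continuous time, its discrete-time communication results correspond to the familiar synchronous consensus-plus-gradient iteration. The sketch below is that baseline; the doubly stochastic weight matrix, step-size schedule, and quadratic example are illustrative assumptions.

    def distributed_gradient_step(x, grads, weights, step):
        """One synchronous round: every agent averages neighbors' states
        using doubly stochastic weights, then descends its local gradient."""
        mixed = [sum(w * xj for w, xj in zip(row, x)) for row in weights]
        return [xi - step * g(xi) for xi, g in zip(mixed, grads)]

    # Example: three agents minimize sum_i (x - a_i)^2; the optimum is mean(a).
    a = [1.0, 4.0, 7.0]
    grads = [lambda x, ai=ai: 2.0 * (x - ai) for ai in a]
    W = [[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]]
    x = [0.0, 0.0, 0.0]
    for k in range(400):
        x = distributed_gradient_step(x, grads, W, step=0.5 / (k + 2))
    # With the diminishing step size, every x_i converges toward 4.0.

Each agent shares only its current state, never its local cost function, which is the privacy-preservation property the abstract refers to.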
Tams, B.; Rathgeb, C., “Towards Efficient Privacy-Preserving Two-Stage Identification for Fingerprint-Based Biometric Cryptosystems,” Biometrics (IJCB), 2014 IEEE International Joint Conference on, vol., no., pp. 1, 8, Sept. 29 2014 - Oct. 2 2014. doi:10.1109/BTAS.2014.6996241
Abstract: Biometric template protection schemes, in particular biometric cryptosystems, bind secret keys to biometric data; i.e., complex key retrieval processes are performed at each authentication attempt. Focusing on biometric identification, exhaustive 1:N comparisons are required for identifying a biometric probe. As a consequence, comparison time frequently dominates the overall computational workload, preventing biometric cryptosystems from being operated in identification mode. In this paper we propose a computationally efficient two-stage identification system for fingerprint-biometric cryptosystems. Employing the concept of adaptive Bloom filter-based cancelable biometrics, pseudonymous binary prescreeners are extracted, based on which top candidates are returned from a database. Thereby the number of required key-retrieval processes is reduced to a fraction of the total. Experimental evaluations confirm that, by employing the proposed technique, biometric cryptosystems, e.g. the fuzzy vault scheme, can be enhanced to enable real-time privacy-preserving identification, while at the same time biometric performance is maintained.
Keywords: biometrics (access control); data privacy; data structures; fingerprint identification; fuzzy set theory; image retrieval; private key cryptography; adaptive Bloom filter-based cancelable biometrics; biometric performance analysis; biometric probe identification; biometric template protection schemes; comparison time; complex key retrieval processes; computational efficient two-stage identification system; computational workload; data authentication; fingerprint-based biometric cryptosystems; fuzzy vault scheme; privacy-preserving two-stage identification; pseudonymous binary prescreener extraction; real-time privacy preserving identification; secret keys; Authentication; Cryptography; Databases; Fingerprint recognition; Measurement; Privacy (ID#: 15-5750)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6996241&isnumber=6996217
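The two-stage structure, a cheap Bloom-filter prescreen followed by the expensive key retrieval on a short candidate list, can be sketched generically. The filter parameters and overlap score below are illustrative; the paper's cancelable-biometrics construction derives the filters from fingerprint features in a specific, privacy-preserving way.

    import hashlib

    class BloomFilter:
        def __init__(self, size=256, hashes=3):
            self.size, self.hashes, self.bits = size, hashes, 0

        def _positions(self, item):
            for i in range(self.hashes):
                h = hashlib.sha256(bytes([i]) + item).digest()
                yield int.from_bytes(h[:4], 'big') % self.size

        def add(self, item):
            for p in self._positions(item):
                self.bits |= 1 << p

    def overlap(a, b):
        """Cheap prescreening score: number of shared set bits."""
        return bin(a.bits & b.bits).count('1')

    def two_stage_identify(probe_bf, gallery, expensive_match, k=5):
        """Stage 1: rank enrolled entries by Bloom-filter overlap.
        Stage 2: run the costly key-retrieval match only on the top k.
        gallery: list of (identity, bloom_filter, protected_template)."""
        ranked = sorted(gallery, key=lambda entry: -overlap(probe_bf, entry[1]))
        for identity, bf, template in ranked[:k]:
            if expensive_match(template):
                return identity
        return None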
Krombi, W.; Erradi, M.; Khoumsi, A., “Automata-Based Approach to Design and Analyze Security Policies,” Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, vol., no., pp. 306, 313, 23-24 July 2014. doi:10.1109/PST.2014.6890953
Abstract: Information systems must be controlled by security policies to protect them from undue accesses. Security policies are often designed by rules expressed using informal text, which implies ambiguities and inconsistencies in security rules. Our objective in this paper is to develop a formal approach to design and analyze security policies. We propose a procedure that synthesizes an automaton which implements a given security policy. Our automata-based approach can be a common basis to analyze several aspects of security policies. We use our automata-based approach to develop three analysis procedures to: verify completeness of a security policy, detect anomalies in a security policy, and detect functional discrepancies between several implementations of a security policy. We illustrate our approach using examples of security policies for a firewall.
Keywords: automata theory; data protection; firewalls; information systems; anomaly detection; automata synthesis; automata-based approach; firewall security policies; formal approach; functional discrepancy detection; information system protection; security policy analysis; security policy completeness verification; security policy design; Automata; Boolean functions; Data structures; Educational institutions; Firewalls (computing); Protocols (ID#: 15-5751)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890953&isnumber=6890911
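The completeness check, one of the three analyses named above, reduces to asking whether every possible packet matches some rule. The paper answers this symbolically via a synthesized automaton; the sketch below gets the same answer by brute-force enumeration over a deliberately tiny header space (protocol and port only), which is only feasible because the space is small.

    from itertools import product

    # A rule is (predicate over (proto, port), action); first match wins.
    policy = [
        (lambda proto, port: proto == 'tcp' and port == 22, 'accept'),
        (lambda proto, port: proto == 'tcp', 'deny'),
        (lambda proto, port: proto == 'udp' and port < 1024, 'deny'),
    ]

    def first_match(proto, port):
        for pred, action in policy:
            if pred(proto, port):
                return action
        return None  # uncovered packet

    def completeness_holes(protos=('tcp', 'udp'), ports=range(65536)):
        """A policy is complete iff every packet matches some rule."""
        return [(pr, po) for pr, po in product(protos, ports)
                if first_match(pr, po) is None]

    holes = completeness_holes()
    # Every ('udp', port >= 1024) packet is uncovered: the policy is incomplete.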
Anggorojati, B.; Prasad, N.R.; Prasad, R., “Secure Capability-Based Access Control in the M2M Local Cloud Platform,” Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE), 2014 4th International Conference on, vol., no., pp. 1, 5, 11-14 May 2014. doi:10.1109/VITAE.2014.6934469
Abstract: Protection of and access control to resources play a critical role in distributed computing systems such as Machine-to-Machine (M2M) and cloud platforms. The M2M local cloud platform considered in this paper consists of multiple distributed M2M gateways that form a local cloud, presenting a unique challenge to existing access control systems. The most prominent access control systems, such as ACL and RBAC, lack the scalability and flexibility to manage access from users or entities that belong to different authorization domains, and are thus unsuitable for the presented platform. The access control approach based on API keys and OAuth used by the existing M2M cloud platform fails to provide fine-grained and flexible access right delegation even when both methods are used together. The proposed approach is built upon capability-based access control, which has been specifically designed to provide flexible, yet restricted, access rights delegation. A number of use cases are provided to show the usage of capability creation, delegation, and access provision, particularly in the way applications access services provided by the platform.
Keywords: application program interfaces; authorisation; cloud computing; computer network security; internetworking; network servers; private key cryptography; API key; M2M local cloud platform; OAuth; application programming interface; authorization domain; distributed computing system; machine-to-machine computing system; multiple distributed M2M gateway; secure capability based access control system; Access control; Buildings; Context; Permission; Privacy; Public key; M2M; access control; capability; cloud; delegation; security (ID#: 15-5752)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934469&isnumber=6934393
Lugini, L.; Marasco, E.; Cukic, B.; Dawson, J., “Removing Gender Signature from Fingerprints,” Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on, vol., no., pp. 1283, 1287, 26-30 May 2014. doi:10.1109/MIPRO.2014.6859765
Abstract: The need to share fingerprint image data in many emerging applications raises concerns about the protection of privacy. It has become possible to use automated algorithms to infer soft biometrics from fingerprint images. Even if we cannot uniquely match a person to an existing fingerprint, revealing their age or gender may lead to undesirable consequences. Our research is focused on de-identifying fingerprint images in order to obfuscate soft biometrics. In this paper, we first discuss a general framework for soft-biometrics fingerprint de-identification. We implemented the framework to reduce the risk of successful estimation of gender from fingerprint images using ad-hoc image filtering. We evaluate the proposed approach through experiments using a data set of rolled fingerprints collected at West Virginia University. Results show the proposed method is effective in preventing gender estimation from fingerprint images.
Keywords: data privacy; filtering theory; fingerprint identification; ad-hoc image filtering; gender estimation prevention; gender signature removal; privacy protection; rolled fingerprints; soft biometrics fingerprint deidentification; Biometrics (access control); Estimation; Feature extraction; Fingerprint recognition; Frequency-domain analysis; Privacy; Probes; Fingerprint Recognition; Gender Estimation; Image De-Identification (ID#: 15-5753)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859765&isnumber=6859515
Premarathne, U.S.; Khalil, I., “Multiplicative Attributes Graph Approach for Persistent Authentication in Single-Sign-On Mobile Systems,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 221, 228, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.33
Abstract: Single-sign-on (SSO) has been proposed as a more efficient and convenient authentication method. Classic SSO systems re-authenticate a user to different applications based on a fixed set of attributes (e.g., username-password combinations). However, a fixed set of attributes fails to account for mobility and contextual variations of user activities. Thus, in an SSO-based system, robust persistent authentication and secure session termination management are vital for ensuring secure operations. In this paper we propose a novel persistent authentication technique using a multiplicative attribute graph model, drawing on multiple attributes: facial biometrics, location, and activity-specific information. We also propose a novel membership (or group affiliation) based session management technique for user-initiated SSO global logout management. The significance and viability of these methods are demonstrated by security, complexity, and numerical analyses. In conclusion, our model provides meaningful insights and more pragmatic approaches for persistent authentication and session termination management in implementing SSO-based mobile collaborative applications.
Keywords: authorisation; biometrics (access control); graph theory; mobile computing; SSO based mobile collaborative applications; SSO global logout management; activity specific information; contextual variations; facial biometrics; location information; membership based session management technique; mobility variations; multiple attribute based persistent authentication model; multiplicative attribute graph approach; robust persistent authentications; secure session termination management; single-sign-on mobile systems; Authentication; Biological system modeling; Biometrics (access control); Collaboration; Face; Mobile communication; mobile systems; multiplicative attribute graph; persistent authentication; single sign on (ID#: 15-5754)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011254&isnumber=7011202
Jianming Fu; Yan Lin; Xu Zhang; Pengwei Li, “Computation Integrity Measurement Based on Branch Transfer,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 590, 597, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.75
Abstract: Tasks are selectively migrated to the cloud with the widespread adoption of the cloud computing platform, but users cannot know whether their tasks have been tampered with in the cloud, so cloud users urgently need to verify the execution integrity of programs running there. Behavior-based computation integrity measurement has difficulty detecting carefully crafted shellcode. Exploiting a property of shellcode, this paper proposes a computation integrity measurement based on branch transfer, called CIMB, which is a fine-grained instruction-level integrity measurement. In this approach, all user-level branches are recorded, which effectively covers the entire execution control flow of a program, and CIMB can detect control-flow hijacking attacks such as Return-oriented Programming (ROP) and Jump-oriented Programming (JOP) without the support of source code. Meanwhile, using the distance between two instruction addresses and the machine code of instructions masks the measurement inconsistency caused by address space layout randomization of the program and shared libraries. Finally, we have implemented CIMB with the dynamic binary instrumentation tool Pin on 32-bit x86 Ubuntu 12.04. Experimental results show that CIMB is feasible and yields a relatively stable measurement result; the advantages of CIMB and the factors affecting the measurement results are analyzed and discussed.
Keywords: cloud computing; data integrity; trusted computing; CIMB; Pin dynamic binary instrumentation tool; address space layout randomization; branch transfer; cloud computing platform; cloud users; computation integrity measurement; control-flow hijacking attack detection; fine-grained instruction-level integrity measurement; instruction addresses; instruction machine code; measurement inconsistency; program execution control flow; program execution integrity verification; shellcode detection; tampered tasks; ubuntu12.04; user-level; Complexity theory; Current measurement; Fluid flow measurement; Instruments; Libraries; Linux; Software measurement; computation integrity; control flow; dynamic binary instrumentation; integrity measurement; trusted computing (ID#: 15-5755)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011299&isnumber=7011202
Sefer, E.; Kingsford, C., “Diffusion Archaeology for Diffusion Progression History Reconstruction,” Data Mining (ICDM), 2014 IEEE International Conference on, vol., no., pp. 530, 539, 14-17 Dec. 2014. doi:10.1109/ICDM.2014.135
Abstract: Diffusion through graphs can be used to model many real-world processes, such as the spread of diseases, social network memes, computer viruses, or water contaminants. Often, a real-world diffusion cannot be directly observed while it is occurring: perhaps it is not noticed until some time has passed, continuous monitoring is too costly, or privacy concerns limit data access. This leads to the need to reconstruct how the present state of the diffusion came to be from partial diffusion data. Here, we tackle the problem of reconstructing a diffusion history from one or more snapshots of the diffusion state. This ability can be invaluable for learning when certain computer nodes were infected or which people were the initial disease spreaders, in order to control future diffusions. We formulate this problem over discrete-time SEIRS-type diffusion models in terms of maximum likelihood. We design methods based on submodularity and a novel prize-collecting dominating-set vertex cover (PCDSVC) relaxation that can identify likely diffusion steps with some provable performance guarantees. Our methods are the first able to reconstruct complete diffusion histories accurately in real and simulated situations. As a special case, they can also identify the initial spreaders better than existing methods for that problem. Our results for both meme and contaminant diffusion show that the partial diffusion data problem can be overcome with proper modeling and methods, and that hidden temporal characteristics of diffusion can be predicted from limited data.
Keywords: data handling; diffusion; discrete time systems; graph theory; maximum likelihood estimation; PCDSVC relaxation; contaminant diffusion; continuous monitoring; data access; diffusion archaeology; diffusion history reconstruction; diffusion progression history reconstruction; diffusion state; discrete-time SEIRS-type diffusion model; disease spreader; graph; maximum likelihood; partial diffusion data problem; performance guarantee; prize-collecting dominating-set vertex cover relaxation; real-world diffusion; real-world process; temporal characteristics; Approximation methods; Computational modeling; Computers; History; Integrated circuit modeling; Mathematical model; Silicon; diffusion; epidemics; history (ID#: 15-5756)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023370&isnumber=7023305
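For readers unfamiliar with the forward model being inverted, the sketch below simulates discrete-time SEIRS diffusion on a graph; the transition probabilities are illustrative. The paper's problem is the reverse direction: given one or more of the returned snapshots, reconstruct the likely history and seed set by maximum likelihood.

    import random

    def simulate_seirs(adj, seeds, steps, p_expose=0.3, p_infect=0.5,
                       p_recover=0.4, p_suscept=0.1, rng=random.Random(1)):
        """Discrete-time SEIRS diffusion on a graph (adjacency-list dict).
        States: S(usceptible) -> E(xposed) -> I(nfected) -> R(ecovered) -> S.
        Returns the sequence of state snapshots."""
        state = {v: 'S' for v in adj}
        for s in seeds:
            state[s] = 'I'
        history = [dict(state)]
        for _ in range(steps):
            nxt = dict(state)  # all transitions read the old state
            for v in adj:
                if state[v] == 'S' and any(state[u] == 'I' for u in adj[v]):
                    if rng.random() < p_expose:
                        nxt[v] = 'E'
                elif state[v] == 'E' and rng.random() < p_infect:
                    nxt[v] = 'I'
                elif state[v] == 'I' and rng.random() < p_recover:
                    nxt[v] = 'R'
                elif state[v] == 'R' and rng.random() < p_suscept:
                    nxt[v] = 'S'
            state = nxt
            history.append(dict(state))
        return history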
Wen Zeng; Koutny, M.; Van Moorsel, A., “Performance Modelling and Evaluation of Enterprise Information Security Technologies,” Computer and Information Technology (CIT), 2014 IEEE International Conference on, vol., no., pp. 504, 511, 11-13 Sept. 2014. doi:10.1109/CIT.2014.18
Abstract: By providing effective access control mechanisms, enterprise information security technologies have proven successful in protecting the confidentiality of sensitive information in business organizations. However, such security mechanisms typically reduce staff productivity by making employees spend time on non-project-related tasks. Therefore, organizations have to invest a significant amount of capital in information security technologies, and then continue to incur additional costs. In this study, we investigate the performance of administrators in an information help desk, and the non-productive time (NPT) in an organization resulting from the implementation of information security technologies. An approximate analytical solution is discussed first, and the loss of staff productivity is quantified using non-productive time. Stochastic Petri nets are then used to provide simulation results. The presented study can help information security managers make investment decisions and take actions toward reducing the cost of information security technologies, so that a balance is kept between information security expense, resource drain, and the effectiveness of security technologies.
Keywords: Petri nets; authorisation; business data processing; cost reduction; data privacy; decision making; investment; productivity; stochastic processes; NPT; access control mechanisms; business organizations; cost reduction enterprise information security technologies; information help desk; investment decision making; nonproductive time; performance evaluation; performance modelling; sensitive information confidentiality; staff member productivity; stochastic Petri nets; work productivity; Information security; Mathematical model; Organizations; Servers; Stochastic processes; Non-productive Time; Queuing Theory; Security Investment Decision; Stochastic Petri Nets (ID#: 15-5757)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984703&isnumber=6984594
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Control Theory and Privacy, 2014 Part 2 |
In the Science of Security, control theory offers methods and approaches to potentially solve hard problems. The research work presented here specifically addresses issues in privacy. The work was presented in 2014.
Le Ny, J.; Mohammady, M., “Differentially Private MIMO Filtering for Event Streams and Spatio-Temporal Monitoring,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 2148, 2153, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039716
Abstract: Many large-scale systems such as intelligent transportation systems, smart grids or smart buildings collect data about the activities of their users to optimize their operations. In a typical scenario, signals originate from many sensors capturing events involving these users, and several statistics of interest need to be continuously published in real-time. Moreover, in order to encourage user participation, privacy issues need to be taken into consideration. This paper considers the problem of providing differential privacy guarantees for such multi-input multi-output systems operating continuously. We show in particular how to construct various extensions of the zero-forcing equalization mechanism, which we previously proposed for single-input single-output systems. We also describe an application to privately monitoring and forecasting occupancy in a building equipped with a dense network of motion detection sensors, which is useful for example to control its HVAC system.
Keywords: MIMO systems; filtering theory; sensors; HVAC system; differential privacy; differentially private MIMO filtering; event streams; intelligent transportation systems; large-scale systems; motion detection sensors; single-input single-output systems; smart buildings; smart grids; spatio temporal monitoring; zero-forcing equalization mechanism; Buildings; MIMO; Monitoring; Noise; Privacy; Sensitivity; Sensors (ID#: 15-5758)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039716&isnumber=7039338
Distl, B.; Hossmann, T., “Privacy in Opportunistic Network Contact Graphs,” A World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2014 IEEE 15th International Symposium on, vol., no., pp. 1, 3, 19-19 June 2014. doi:10.1109/WoWMoM.2014.6919020
Abstract: Opportunistic networks are formed by people carrying mobile devices with wireless capabilities. When in mutual transmission range, the nodes of such networks use device-to-device communication to automatically exchange data, without requiring fixed infrastructure. To solve challenging opportunistic networking problems like routing, nodes exchange information about whom they have met in the past and form a contact graph, which encodes the social structure of past meetings. This contact graph is then used to assign a utility to each node (e.g., based on their centrality), thereby defining a ranking of the nodes' values for carrying a message. However, while being a useful tool, the contact graph represents a privacy risk to the users, as it allows an attacker to learn about social links. In this paper, we investigate the trade-off of privacy and utility in the contact graph. By transforming the graph through adding and removing edges, we are able to control the amount of link privacy. The evaluation of a greedy approach shows that it maintains the node ranking very well, even if many links are changed.
Keywords: data privacy; graph theory; mobile computing; smart phones; telecommunication network routing; link privacy; node ranking; opportunistic network contact graphs; opportunistic network routing; past meeting recording; privacy risk; social structure recording; Approximation algorithms; Correlation; Greedy algorithms; Measurement; Mobile handsets; Privacy; Routing (ID#: 15-5759)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6919020&isnumber=6918912
Han Vinck, A.J.; Jivanyan, A.; Winzen, J., “Gaussian Fuzzy Commitment,” Information Theory and Its Applications (ISITA), 2014 International Symposium on, vol., no., pp. 571, 574, 26-29 Oct. 2014. doi:(not provided)
Abstract: We discuss the protection of Gaussian biometric templates. We first introduce the Juels-Wattenberg scheme for binary biometrics, where the binary biometrics are the result of hard-quantized Gaussian biometrics. The Juels-Wattenberg scheme adds a random binary code word to the biometric for privacy reasons and to allow errors in the biometric at authentication. We modify the Juels-Wattenberg scheme in such a way that we do not have to quantize the biometrics. We investigate and compare the performance of both approaches.
Keywords: Gaussian processes; authorisation; biometrics (access control); data privacy; fuzzy set theory; Gaussian biometric template protection; Gaussian fuzzy commitment; Juels-Wattenberg scheme; binary biometrics; hard-quantized Gaussian biometrics; random binary code word; Australia; Authentication; Decoding; Error analysis; Error correction codes; Noise; Vectors (ID#: 15-5760)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979908&isnumber=6979787
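The binary Juels-Wattenberg scheme this paper starts from can be sketched with a repetition code standing in for a proper error-correcting code (real systems would use, e.g., BCH codes); the biometric length is assumed to be a multiple of the repetition factor. The paper's Gaussian variant avoids the hard quantization step this sketch requires.

    import secrets, hashlib

    N_REP = 5  # repetition factor; stands in for a real ECC

    def rep_encode(bits):   # k key bits -> k * N_REP codeword bits
        return [b for b in bits for _ in range(N_REP)]

    def rep_decode(bits):   # majority vote per block
        return [int(sum(bits[i:i + N_REP]) > N_REP // 2)
                for i in range(0, len(bits), N_REP)]

    def commit(biometric_bits):
        key = [secrets.randbelow(2)
               for _ in range(len(biometric_bits) // N_REP)]
        codeword = rep_encode(key)
        helper = [c ^ b for c, b in zip(codeword, biometric_bits)]  # public
        tag = hashlib.sha256(bytes(key)).digest()  # lets us verify the key
        return helper, tag

    def open_commitment(helper, tag, probe_bits):
        # XORing a noisy probe back in leaves a noisy codeword to decode.
        key = rep_decode([h ^ b for h, b in zip(helper, probe_bits)])
        return key if hashlib.sha256(bytes(key)).digest() == tag else None

A probe that differs from the enrolled biometric in fewer than three positions per five-bit block still recovers the key; the helper data alone reveals the key only masked by the biometric.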
Prasad, M.; Chou, K.P.; Saxena, A.; Kawrtiya, O.P.; Li, D.L.; Lin, C.T., “Collaborative Fuzzy Rule Learning for Mamdani Type Fuzzy Inference System with Mapping of Cluster Centers,” Computational Intelligence in Control and Automation (CICA), 2014 IEEE Symposium on, vol., no., pp. 1, 6, 9-12 Dec. 2014. doi:10.1109/CICA.2014.7013227
Abstract: This paper demonstrates a novel model for a Mamdani-type fuzzy inference system that combines the knowledge-learning ability of collaborative fuzzy clustering with the rule-learning capability of FCM. The collaboration process finds consistency between different datasets; these datasets may be generated at various places, or at the same place under diverse environments, share a common feature space, and are brought together to find common features within them. For any kind of collaboration or integration of datasets, privacy and security need to be preserved at some level. The collaboration process helps the fuzzy inference system define an accurate number of rules for structure learning and keeps the performance of the system at a satisfactory level while preserving the privacy and security of the given datasets.
Keywords: fuzzy reasoning; fuzzy set theory; learning (artificial intelligence); pattern clustering; Mamdani type fuzzy inference system; cluster centers mapping; collaboration process; collaborative fuzzy clustering; collaborative fuzzy rule learning; knowledge learning ability; Brain modeling; Collaboration; Data models; Fuzzy logic; Knowledge based systems; Mathematical model; Prototypes; collaboration process; collaborative fuzzy clustering (CFC); fuzzy c-means (FCM); fuzzy inference system; privacy and security; structure learning (ID#: 15-5761)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013227&isnumber=7013220
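The FCM building block named above is a short, classic iteration; the 1-D sketch below shows one step of it. The collaborative variant in the paper would add a coupling term pulling each site's centers toward prototypes shared by partner sites, so that only prototypes, never raw data, cross site boundaries; that term is omitted here.

    def fcm_step(points, centers, m=2.0):
        """One fuzzy c-means iteration (1-D data for brevity): update
        memberships from distances, then recompute cluster centers."""
        u = []
        for x in points:
            d = [abs(x - c) or 1e-12 for c in centers]  # avoid divide-by-zero
            u.append([1.0 / sum((d[i] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(len(centers)))
                      for i in range(len(centers))])
        new_centers = [
            sum(u[j][i] ** m * points[j] for j in range(len(points))) /
            sum(u[j][i] ** m for j in range(len(points)))
            for i in range(len(centers))
        ]
        return u, new_centers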
Ignatenko, T.; Willems, F.M.J., “Privacy-Leakage Codes for Biometric Authentication Systems,” Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, vol., no., pp. 1601, 1605, 4-9 May 2014. doi:10.1109/ICASSP.2014.6853868
Abstract: In biometric privacy-preserving authentication systems that are based on key-binding, two terminals observe two correlated biometric sequences. The first terminal selects a secret key, which is independent of the biometric data, binds this secret key to the observed biometric sequence and communicates it to the second terminal by sending a public message. This message should only contain a negligible amount of information about the secret key, but also leak as little as possible about the biometric data. Current approaches to realize such biometric systems use fuzzy commitment with codes that, given a secret-key rate, can only achieve the corresponding privacy-leakage rate equal to one minus this secret-key rate. However, the results in Willems and Ignatenko [2009] indicate that lower privacy leakage can be achieved if vector quantization is used at the encoder. In this paper we study the use of convolutional and turbo codes applied in fuzzy commitment and its modifications that realize this.
Keywords: biometrics (access control); convolutional codes; correlation theory; data privacy; fuzzy set theory; message authentication; sequential codes; turbo codes; vector quantisation; biometric authentication system; biometric privacy preserving authentication system; biometric sequence; convolutional codes; correlated biometric sequences; encoder; fuzzy commitment; privacy leakage codes; privacy leakage rate; public message sending; secret key rate; turbo codes; vector quantization; Authentication; Biometrics (access control); Convolutional codes; Decoding; Privacy; Quantization (signal); Signal to noise ratio; BCH codes; Biometric authentication; convolutional codes; privacy; turbo codes (ID#: 15-5762)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853868&isnumber=6853544
Al-Abdulkarim, L.; Molin, E.; Lukszo, Z.; Fens, T., “Acceptance of ICT-Intensive Socio-Technical Infrastructure Systems: Smart Metering Case in the Netherlands,” Networking, Sensing and Control (ICNSC), 2014 IEEE 11th International Conference on, vol., no., pp. 399, 404, 7-9 April 2014. doi:10.1109/ICNSC.2014.6819659
Abstract: There are several initiatives worldwide to deploy smart meters (SMs). SM systems offer services aimed at achieving many goals beyond metering household electricity consumption. Despite the advantages gained by SMs, there are serious issues that may prevent the system from reaching its goals. One obstacle, which can lead to social rejection of SMs, is perceived security and privacy violations of consumers' information. This poses a significant threat to a successful rollout and operation of the system, as consumers represent a cornerstone in the fulfillment of goals such as energy efficiency and savings through their active interaction with SMs. To investigate consumers' perception of SMs, theories and models from the technology acceptance literature can be used to understand consumers' behaviors and explore possible factors that can have a significant impact on consumers' acceptance and usage of an SM. In this paper, a hybrid and extended model of two well-known technology acceptance theories is presented: the Unified Theory of Acceptance and Usage of Technology (UTAUT) and Innovation Diffusion Theory (IDT). The hybrid model is further extended with acceptance determinants derived from the smart metering case in the Dutch context. The model aims to investigate determinants that can shed light on consumers' perception and acceptance of SMs.
Keywords: consumer behaviour; domestic appliances; electricity supply industry; energy conservation; innovation management; power consumption; power system security; smart meters; Dutch context; ICT-intensive socio-technical infrastructure system; IDT; Netherlands; SM systems; UTAUT; acceptance determinants; consumer acceptance; consumer behaviors; consumer information; consumer perception; consumer usage; electricity consumption metering; energy efficiency; energy savings; households; innovation diffusion theory; privacy violations; security violations; smart metering case; social rejection; technology acceptance literature; technology acceptance theories; unified theory of acceptance and usage of technology; Reliability; System-on-chip; Critical infrastructures; Information security and privacy; Smart metering; Social acceptance; Socio-technical systems (ID#: 15-5763)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6819659&isnumber=6819588
Chi Chen; Chaogang Wang; Tengfei Yang; Dongdai Lin; Song Wang; Jiankun Hu, “Optional Multi-Biometric Cryptosystem Based on Fuzzy Extractor,” Fuzzy Systems and Knowledge Discovery (FSKD), 2014 11th International Conference on, vol., no., pp. 989, 994, 19-21 Aug. 2014. doi:10.1109/FSKD.2014.6980974
Abstract: Following the wide use of smart devices, biometric cryptosystems are used to protect users' private data. However, biometric cryptosystems are rarely used in mobile-cloud scenarios, because biometric sensors differ across devices. In this paper, an optional multi-biometric cryptosystem based on a fuzzy extractor and secret-share technology is proposed. Each enrolled biometric modality generates a feature vector, and the feature vector is put into a fuzzy extractor to obtain a stable codeword, namely a bit-string. All the codewords are used to bind a random key based on a secret-share method, and the key can be used to encrypt users' private data. During the verification phase, a subset of the enrolled biometric modalities is enough to recover the random key. Therefore, the proposed scheme can provide a user with the same biometric key on different devices. In addition, an experiment on a virtual multi-biometric database shows that the novel optional multi-biometric cryptosystem outperforms the corresponding uni-biometric cryptosystem in both matching accuracy and key entropy.
Keywords: biometrics (access control); cloud computing; cryptography; entropy; fuzzy set theory; mobile computing; vectors; bit-string; codewords; feature vector; fuzzy extractor; key entropy; mobile cloud; optional multibiometric cryptosystem; smart devices; users privacy data; Accuracy; Cryptography; Databases; Feature extraction; Fingerprint recognition; Iris recognition; cryptosystem; fuzzy extractor; key generation; mobile cloud; multi-biometric; secret share (ID#: 15-5764)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980974&isnumber=6980796
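A minimal sketch of the "bind a key so that any k of n modalities recover it" idea, combining Shamir secret sharing with codeword-derived masks. The byte-string codewords stand in for fuzzy-extractor outputs (assumed already stabilized against sensor noise), all names are placeholders, and this is an illustration of the general recipe rather than the authors' scheme. Requires Python 3.8+ for the modular inverse via pow.

```python
import hashlib, secrets

P = 2**127 - 1  # a Mersenne prime field for Shamir shares

def shamir_split(secret, n, k):
    """Split secret into n shares, any k of which recover it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)}

def shamir_recover(shares):
    """Lagrange interpolation at 0 over the available shares."""
    secret = 0
    for xi, yi in shares.items():
        num, den = 1, 1
        for xj in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

def mask(codeword):
    """Stand-in for a fuzzy-extractor output: hash a stable codeword
    into a field element used to mask one share."""
    return int.from_bytes(hashlib.sha256(codeword).digest(), 'big') % P

# Enroll: bind a random key to 3 modalities; any 2 suffice to unlock.
key = secrets.randbelow(P)
codewords = [b'fingerprint-cw', b'iris-cw', b'face-cw']  # placeholders
public = {x: (s + mask(codewords[x - 1])) % P
          for x, s in shamir_split(key, 3, 2).items()}

# Verify with only fingerprint and face present:
avail = {1: (public[1] - mask(b'fingerprint-cw')) % P,
         3: (public[3] - mask(b'face-cw')) % P}
print(shamir_recover(avail) == key)  # True
```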
Barber, R.F.; Duchi, J., “Privacy: A Few Definitional Aspects and Consequences for Minimax Mean-Squared Error,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 1365, 1369, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039572
Abstract: We explore several definitions of “privacy” in statistical estimation and data analysis. We present and review definitions that attempt to capture what, intuitively, it should mean to limit disclosures from the output of a statistical estimation task, providing minimax upper and lower bounds on mean squared error for estimation problems under several common (and some new) definitions of privacy.
Keywords: data analysis; data privacy; estimation theory; mean square error methods; minimax techniques; statistical analysis; data analysis; data privacy; minimax mean-squared error; statistical estimation; Computer science; Convergence; Data analysis; Data privacy; Estimation; Privacy; Testing (ID#: 15-5765)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039572&isnumber=7039338
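As a concrete instance of the privacy-constrained estimation problems such bounds govern, the sketch below estimates a population mean under randomized response, a classical local-privacy mechanism. The mechanism and parameters are illustrative choices, not the paper's definitions, but they show the privacy/mean-squared-error tension the bounds quantify.

```python
import math, random

def randomized_response(bit, eps):
    """Report the true bit with prob e^eps/(1+e^eps), else flip it."""
    p = math.exp(eps) / (1 + math.exp(eps))
    return bit if random.random() < p else 1 - bit

def debiased_mean(reports, eps):
    """Unbiased estimate of the true mean from privatized bits."""
    p = math.exp(eps) / (1 + math.exp(eps))
    return (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)

random.seed(0)
truth = [1] * 3000 + [0] * 7000          # true mean is 0.3
for eps in (0.5, 1.0, 4.0):
    est = debiased_mean([randomized_response(b, eps) for b in truth], eps)
    print(f"eps={eps}: estimate={est:.3f}")  # error shrinks as eps grows
```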
Singh, K.; Jian Zhong; Batten, L.; Bertok, P., “A Solution for Privacy-Preserving, Remote Access to Sensitive Data,” Information Theory and its Applications (ISITA), 2014 International Symposium on, vol., no., pp. 309, 313, 26-29 Oct. 2014. doi:(not provided)
Abstract: Sharing data containing sensitive information, such as medical records, always has privacy and security implications. In situations such as health environments, accurate individual data needs to be provided while at the same time, mass data release for medical research may also be required. This paper outlines a solution for maintaining the privacy of data released en masse in a controlled manner as well as for providing secure access to the original data for authorized users. Our solution maintains privacy in a more efficient manner than do previous solutions.
Keywords: data privacy; data sharing; remote access; sensitive data; sensitive information; Computer architecture; Data privacy; Encryption; Privacy; Protocols (ID#: 15-5766)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979854&isnumber=6979787
Pradhan, P.; Venkitasubramaniam, P., “Under the Radar Attacks in Dynamical Systems: Adversarial Privacy Utility Tradeoffs,” Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 242, 246, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970829
Abstract: Cyber-physical systems, which integrate physical system dynamics with digital cyber infrastructure, are envisioned to transform our core infrastructural frameworks, such as the smart electricity grid, transportation networks, and advanced manufacturing. This integration, however, exposes the physical system's functioning to the security vulnerabilities of cyber communication. Both scientific studies and real-world examples have demonstrated the impact of data injection attacks on state estimation mechanisms in the smart electricity grid. In this work, an abstract theoretical framework is proposed to study data injection/modification attacks on Markov-modeled dynamical systems from the perspective of an adversary. Typical studies of data injection focus on one-shot attacks by an adversary and the non-detectability of such attacks under static assumptions. In this work we study dynamic data injection attacks in which the adversary is capable of modifying a temporal sequence of data and the physical controller is equipped with prior statistical knowledge about the data arrival process to detect the presence of an adversary. The goal of the adversary is to modify the arrivals to minimize a utility function of the controller while minimizing the detectability of his presence, as measured by the KL divergence between the prior and posterior distributions of the arriving data. Adversarial policies and tradeoffs between utility and detectability are characterized analytically using linearly solvable control optimization.
Keywords: Markov processes; radar; telecommunication security; Markov modeled dynamical systems; advanced manufacturing; adversarial privacy utility tradeoffs; core infrastructural frameworks; cyber communication; cyber physical systems; data arrival process; data injection attacks; digital cyber infrastructure; dynamic data injection attacks; dynamical systems; physical system dynamics; radar attacks; security vulnerabilities; smart electricity grid; state estimation mechanisms; temporal sequence; transportation networks; Markov processes; Mathematical model; Power system dynamics; Privacy; Process control; Smart grids; State estimation (ID#: 15-5767)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970829&isnumber=6970773
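The detectability measure here is concrete enough to compute directly. The sketch below evaluates the KL divergence (in one common orientation, modified-versus-prior) for two hypothetical modifications of a toy arrival distribution; the distributions are invented for illustration only.

```python
import math

def kl_divergence(p, q):
    """D(p || q) in nats; the paper's detectability measure."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Prior distribution of arrivals per slot (known to the controller)...
prior = [0.5, 0.3, 0.2]
# ...vs. two adversarial modifications of the arrival process.
subtle  = [0.48, 0.32, 0.20]   # small utility damage, hard to detect
blatant = [0.10, 0.20, 0.70]   # large utility damage, easy to detect

print(kl_divergence(subtle, prior))   # ~0.001 nats: nearly invisible
print(kl_divergence(blatant, prior))  # ~0.63 nats: readily detected
```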
Jiyun Yao; Venkitasubramaniam, P., “The Privacy Analysis of Battery Control Mechanisms in Demand Response: Revealing State Approach and Rate Distortion Bounds,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 1377, 1382, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039594
Abstract: Perfect knowledge of a user's power consumption profile by a utility is a violation of privacy and can be detrimental to the successful implementation of demand response systems. It has been shown that an in-home energy storage system, which provides a viable means to achieve the cost savings of instantaneous electricity pricing without inconvenience, can also be used to maintain the privacy of a user's power profile. Optimizing the tradeoff between privacy, as measured by Shannon entropy, and the cost savings that can be provided by a finite-capacity battery with zero tolerance for delay is known to be equivalent to a Partially Observable Markov Decision Process with nonlinear belief-dependent rewards; solutions to such systems suffer from high computational complexity. In this paper, we propose a "revealing state" approach to enable computation of a class of battery control policies that aim to maximize the achievable privacy of in-home demands. In addition, a rate-distortion approach is presented to derive upper bounds on the privacy-cost savings tradeoff of battery control policies. These bounds are derived for a discrete model, where demand and price follow i.i.d. uniform distributions. Numerical results show that the derived bounds are quite close to each other, demonstrating the efficacy of the proposed class of strategies.
Keywords: data privacy; demand side management; energy storage; rate distortion theory; secondary cells; stochastic systems; battery control mechanisms; demand response; in-home demands; in-home energy storage system; privacy analysis; privacy-cost savings tradeoff; rate distortion bounds; rate-distortion approach; revealing state approach; stochastic control; uniform distributions; Batteries; Electricity; Entropy; Optimization; Privacy; Upper bound; Demand Response; Entropy; Privacy; Random Walk; Scheduling; Storage; Utility (ID#: 15-5768)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039594&isnumber=7039338
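The privacy currency in this line of work is information-theoretic: how much the grid-visible load reveals about the in-home demand. The toy policy below flattens the grid draw subject to a finite battery and compares per-slot empirical mutual information with and without the battery. The demand model follows the abstract's i.i.d. uniform assumption; per-slot mutual information understates temporal leakage, so this is an illustration of the measure, not the paper's policy.

```python
import math, random
from collections import Counter

def entropy(seq):
    """Empirical Shannon entropy (bits) of a discrete sequence."""
    n, counts = len(seq), Counter(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def mutual_info(xs, ys):
    """Per-slot empirical mutual information I(X;Y) in bits."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def battery_policy(demand, capacity=3):
    """Greedy masking policy: aim for a constant grid draw of 1 unit,
    letting a finite battery absorb the mismatch when its level allows."""
    level, grid = capacity // 2, []
    for d in demand:
        draw = 1
        level += draw - d
        if level < 0:            # battery empty: draw the shortfall
            draw -= level; level = 0
        elif level > capacity:   # battery full: cut the surplus
            draw -= level - capacity; level = capacity
        grid.append(draw)
    return grid

random.seed(1)
demand = [random.choice([0, 1, 2]) for _ in range(20000)]  # i.i.d. uniform
grid = battery_policy(demand)
print(f"leakage without battery: {mutual_info(demand, demand):.3f} bits/slot")
print(f"leakage with battery:    {mutual_info(demand, grid):.3f} bits/slot")
```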
Pequito, S.; Kar, S.; Sundaram, S.; Aguiar, A.P., “Design of Communication Networks for Distributed Computation with Privacy Guarantees,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 1370, 1376, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039593
Abstract: In this paper we address a communication network design problem for distributed computation with privacy guarantees. More precisely, given a possible communication graph between different agents in a network, the objective is to design a protocol, by proper selection of the weights in the dynamics induced by the communication graph, such that 1) weighted average consensus of the initial states of all the agents will be reached; and 2) there are privacy guarantees, where each agent is not able to retrieve the initial states of non-neighbor agents, with the exception of a small subset of agents (that will be precisely characterized). In this paper, we assume that the network is cooperative, i.e., each agent is passive in the sense that it executes the protocol correctly and does not provide incorrect information to its neighbors, but may try to retrieve the initial states of non-neighbor agents. Furthermore, we assume that each agent knows the communication protocol.
Keywords: cooperative communication; graph theory; multi-agent systems; protocols; communication graph; communication network design; communication protocol; cooperative network; distributed computation; network agent; nonneighbor agent; privacy guarantee; weighted average consensus; Bipartite graph; Computational modeling; Computers; Educational institutions; Privacy; Protocols; Tin (ID#: 15-5769)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039593&isnumber=7039338
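The consensus dynamics underlying the design problem are standard and easy to demonstrate. The weight matrix below is one arbitrary feasible choice for a 4-node path graph (symmetric here, so the consensus value is the plain average), not an output of the paper's design procedure; privacy enters because each agent only ever observes its neighbours' mixed values, never the raw initial states of non-neighbours.

```python
import numpy as np

# Row-stochastic weight matrix W for a 4-agent path graph 1-2-3-4.
# Nonzero entries appear only between graph neighbours; choosing these
# weights is the design variable studied in the paper.
W = np.array([[0.6, 0.4, 0.0, 0.0],
              [0.4, 0.2, 0.4, 0.0],
              [0.0, 0.4, 0.2, 0.4],
              [0.0, 0.0, 0.4, 0.6]])

x = np.array([1.0, 5.0, 3.0, 7.0])  # private initial states
for _ in range(200):
    x = W @ x  # each agent updates using neighbours' values only

print(x)  # all entries converge to the average, 4.0
```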
Papadopoulos, A.; Czap, L.; Fragouli, C., “Secret Message Capacity of a Line Network,” Communication, Control, and Computing (Allerton), 2014 52nd Annual Allerton Conference on, vol., no., pp. 1341, 1348, Sept. 30 2014 - Oct. 3 2014. doi:10.1109/ALLERTON.2014.7028611
Abstract: We investigate the problem of information theoretically secure communication in a line network with erasure channels and state feedback. We consider a spectrum of cases for the private randomness that intermediate nodes can generate, ranging from having intermediate nodes generate unlimited private randomness, to having intermediate nodes generate no private randomness, and all cases in between. We characterize the secret message capacity when either only one of the channels is eavesdropped or all of the channels are eavesdropped, and we develop polynomial time algorithms that achieve these capacities. We also give an outer bound for the case where an arbitrary number of channels is eavesdropped. Our work is the first to characterize the secrecy capacity of a network of arbitrary size, with imperfect channels and feedback.
Keywords: channel capacity; computational complexity; data privacy; network theory (graphs); state feedback; telecommunication security; erasure channels; imperfect channels; information theoretically secure communication problem; intermediate nodes; line network; polynomial time algorithms; private randomness; secret message capacity; state feedback; Automatic repeat request; Random variables; Receivers; Relays; Security; State feedback; Vectors (ID#: 15-5770)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028611&isnumber=7028426
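A useful intuition for such results: over erasure channels with public state feedback, packets that arrive at the legitimate receiver but are erased at the eavesdropper form shared randomness from which a secret key can be distilled. The toy simulation below, with illustrative erasure probabilities, merely counts the rate of such packets on one hop; the paper's constructions handle multi-hop lines, intermediate-node randomness, and the fact that the eavesdropper's erasure pattern is unknown, none of which this sketch models.

```python
import random

random.seed(7)
delta_b, delta_e = 0.2, 0.5   # erasure probs: legitimate link, eavesdropper
n = 100_000

secret = 0
for _ in range(n):
    # Source sends one uniformly random packet per slot.
    rx_ok  = random.random() > delta_b   # receiver feedback (public)
    eve_ok = random.random() > delta_e   # eavesdropper's independent erasure
    if rx_ok and not eve_ok:
        secret += 1  # received by the endpoint, missed by the eavesdropper

print(secret / n)                # empirical secret-packet rate
print((1 - delta_b) * delta_e)   # matches (1 - delta_B) * delta_E
```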
Bounagui, Y.; Hafiddi, H.; Mezrioui, A., “Challenges for IT Based Cloud Computing Governance,” Intelligent Systems: Theories and Applications (SITA-14), 2014 9th International Conference on, vol., no., pp. 1, 8, 7-8 May 2014. doi:10.1109/SITA.2014.6847289
Abstract: For some years now, the concept of Cloud Computing (CC) has been presented as the new revolution in information technology. It offers not only a technical innovation for better IT system flexibility, improved working methods, and cost control, but also a new economic model, built around the concept of IT services that are identifiable, classifiable, and countable for end users, who can benefit by paying for use without having to make huge investments. In this paper, we show that despite these advantages, the implementation of such a concept has an impact on the enterprise stakeholders (IT direction, business direction, suppliers direction, etc.). Many aspects must be managed differently from traditional systems: availability, security, privacy, and compliance are just some of the aspects that must be monitored and managed more effectively. Thus, IT-based CC governance is a necessity for defining good management practices, especially because there is a lack of an adapted governance framework. Current IT governance practices/standards (ITIL, COBIT, ISO2700x, etc.) still have many limitations: they are far from covering end-to-end governance, they are difficult to use and maintain, and they have many overlapping points. It becomes mandatory for companies to address these challenges, control the capabilities offered by CC, develop cloud-oriented policies that reflect their exact needs, and adopt a flexible, coherent, and global IT-based CC governance framework.
Keywords: business data processing; cloud computing; information technology; CC; IT based cloud computing governance; IT direction; IT services; IT system flexibility; adapted governance framework; business direction; cost control; economic model; enterprise stakeholders; information technology; suppliers direction; technical innovation; Automation; Computational modeling; Organizations; Reliability; Software; Standards organizations; Cloud Computing; Framework ; IT Governance; Security; Standards (ID#: 15-5771)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847289&isnumber=6846554
Tiits, M.; Kalvet, T.; Mikko, K.-L., “Social Acceptance of ePassports,” Biometrics Special Interest Group (BIOSIG), 2014 International Conference of the, vol., no., pp. 1, 6, 10-12 Sept. 2014. doi:(not provided)
Abstract: Using a large-scale web survey in six countries, we study the societal readiness for and acceptance of specific technology options for the potential next generation of ePassports. We find that the public has only limited knowledge of the electronic data and functions ePassports include, and often has no clear opinion on various potential uses for ePassports and related personal data. Still, the public expects ePassports to improve protection from document forgery, the accuracy and reliability of the identification of persons, and protection from identity theft. The main risks the public associates with ePassports include the possible use of personal information for purposes other than those initially stated, and covert surveillance. Compared to earlier studies, our research shows that issues of possible privacy invasion and abuse of information are much more strongly perceived by the public. There is a weak correlation between a person's level of knowledge about ePassports and their willingness to accept the use of advanced biometrics, such as fingerprints or eye iris images, in different identity management and identity checking scenarios. Furthermore, the public becomes more undecided about ePassport applications as we move from the basic state of the art towards more advanced biometric technologies in various scenarios. The successful pathway to greater acceptability of the use of advanced biometrics in ePassports should start from the introduction of perceivably high-benefit and low-risk applications. As public awareness is low, citizens' belief in government benevolence, i.e. the belief that the government acts in citizens' best interest, emerges as an important factor in the overall context.
Keywords: biometrics (access control); data privacy; government data processing; social aspects of automation; biometrics; ePassports social acceptance; government benevolence; identity checking scenarios; identity management; information abuse; privacy invasion; Context; Fingerprint recognition; Government; Iris recognition; Logic gates; Security; ePassports; social acceptance; unified theory of acceptance and use of technology (ID#: 15-5773)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7029408&isnumber=7029401
Xi Chen; Luping Zheng; Zengli Liu; Jiashu Zhang, “Privacy-Preserving Biometrics Using Matrix Random Low-Rank Approximation Approach,” Biometrics and Security Technologies (ISBAST), 2014 International Symposium on, vol., no., pp. 6, 12, 26-27 Aug. 2014. doi:10.1109/ISBAST.2014.7013085
Abstract: In this paper, we propose a matrix random low-rank approximation (MRLRA) approach to generate cancelable biometric templates for privacy preservation. MRLRA constructs a random low-rank matrix to approximate the hybridization of a biometric feature and a random matrix. Theoretical analysis shows that the distance between a cancelable low-rank biometric template produced by MRLRA and its original template is very small, so the verification and authentication performance of MRLRA is close to that of the original templates. Cancelable biometric templates generated by MRLRA overcome the weakness of random-projection-based cancelable biometric templates, whose performance deteriorates significantly when the same tokens are used. Experiments have verified that (i) cancelable biometric templates produced by MRLRA are sensitive to the user-specific tokens used to construct the random matrix in MRLRA; (ii) MRLRA can reduce the noise of biometric templates; and (iii) even when the same tokens are used, the performance of cancelable biometric templates produced by MRLRA does not deteriorate much.
Keywords: approximation theory; biometrics (access control); data privacy; formal verification; matrix algebra; MRLRA approach; authentication; hybridization; matrix random low-rank approximation approach; privacy-preserving biometrics; verification; Approximation methods; Authentication; Biometrics (access control); Databases; Face; Feature extraction; Vectors; Cancelable biometric templates; Matrix random low-rank approximation; Privacy-preserving (ID#: 15-5774)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013085&isnumber=7013076
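The abstract's recipe, hybridizing a feature vector with a token-keyed random matrix and keeping a low-rank approximation, can be sketched with a truncated SVD. The stacking scheme, rank, and distance score below are assumptions for illustration, not the authors' exact construction.

```python
import numpy as np

def cancelable_template(feature, token_seed, rank=4):
    """Hybridize feature with a token-keyed random matrix, then keep
    only a rank-`rank` approximation (truncated SVD)."""
    rng = np.random.default_rng(token_seed)
    R = rng.standard_normal((8, feature.size))       # user-specific matrix
    H = np.vstack([np.tile(feature, (8, 1)), R])     # hybridization (assumed)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s[rank:] = 0                                     # truncate to low rank
    return (U * s) @ Vt

rng = np.random.default_rng(0)
enrolled = rng.standard_normal(32)
probe = enrolled + 0.05 * rng.standard_normal(32)    # same user, noisy

t_enr = cancelable_template(enrolled, token_seed=42)
t_same = cancelable_template(probe, token_seed=42)        # same token
t_revoked = cancelable_template(enrolled, token_seed=99)  # new token

dist = lambda a, b: np.linalg.norm(a - b)
print(dist(t_enr, t_same))     # small: genuine match preserved
print(dist(t_enr, t_revoked))  # large: revoked template unlinkable
```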
Zheng Yan; Mingjun Wang; Peng Zhang, “A Scheme to Secure Instant Community Data Access Based on Trust and Contexts,” Computer and Information Technology (CIT), 2014 IEEE International Conference on, vol., no., pp. 646, 651, 11-13 Sept. 2014. doi:10.1109/CIT.2014.136
Abstract: Mobile ad hoc networks provide a generic platform for instant social networking (ISN), such as instant communities (ICs). For a sensitive conversation in an instant community, it is important to set up a secure communication channel among trustworthy members in order to avoid malicious eavesdropping or to narrow the scope of member communication. Previous work has not considered how to control access to social communication data based on trust and other attributes, and suffers from a weakness in terms of complexity. In this paper, we propose a scheme to secure instant community data access based on trust levels, contexts, and time in a fine-grained manner by applying Attribute-Based Encryption. Any community member can select other members with at least a minimum level of trust for secure ISN communications. The advantages, security, and performance of the proposed scheme are evaluated and justified through extensive analysis, security proof, and implementation. The results show the efficiency and effectiveness of our scheme.
Keywords: cryptography; mobile ad hoc networks; mobile computing; social networking (online); trusted computing; ISN; attribute-based encryption; data access security; instant social networking; mobile ad hoc networks; trust levels; Access control; Communities; Complexity theory; Encryption; Integrated circuits; Privacy preserving; data mining; data perturbation; k-anonymity (ID#: 15-5775)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984726&isnumber=6984594
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Cyber-Physical System Security and Privacy, 2014 Part 1 |
Cyber-physical systems are, broadly, systems in which computers control physical entities. They exist in areas as diverse as automobiles, manufacturing, energy, transportation, chemistry, and computer appliances. In this bibliography, the primary focus of the published research is smart grid technologies—the use of cyber-physical systems to coordinate the generation, transmission, and use of electrical power and its sources. Because of its strategic importance and the consequences of intrusion, the smart grid is of particular importance to the Science of Security. The work presented here was published in 2014.
Armin Wasicek, Patricia Derler, Edward A. Lee. “Aspect-oriented Modeling of Attacks in Automotive Cyber-Physical Systems.” DAC '14 Proceedings of the 51st Annual Design Automation Conference, June 2014, Pages 1-6. doi:10.1145/2593069.2593095
Abstract: This paper introduces aspect-oriented modeling (AOM) as a powerful, model-based design technique to assess the security of Cyber-Physical Systems (CPS). Particularly in safety-critical CPS such as automotive control systems, the protection against malicious design and interaction faults is paramount to guaranteeing correctness and reliable operation. Essentially, attack models are associated with the CPS in an aspect-oriented manner to evaluate the system under attack. This modeling technique requires minimal changes to the model of the CPS. Using application-specific metrics, the designer can gain insights into the behavior of the CPS under attack.
Keywords: Aspect-oriented Modeling, Cyber-Physical Systems, Security (ID#: 15-5832)
URL: http://doi.acm.org/10.1145/2593069.2593095
Sven Wohlgemuth. “Is Privacy Supportive for Adaptive ICT Systems?” iiWAS '14 Proceedings of the 16th International Conference on Information Integration and Web-based Applications & Services, December 2014, Pages 559-570. doi:10.1145/2684200.2684363
Abstract: Adaptive ICT systems promise to improve resilience by re-using and sharing ICT services and information related to electronic identities and the real-time requirements of business networking applications. The aim is to improve the welfare and security of a society, e.g. a "smart" city. Even though adaptive ICT systems technically enable everyone to participate both as service consumer and provider without running the required technical infrastructure oneself, uncertain knowledge about the enforcement of legal, business, and social requirements impedes taking advantage of adaptive ICT systems. IT risks to confidentiality and accountability are undecidable due to a lack of control over the current trust infrastructure, and IT risks to integrity and availability are likewise undecidable due to a lack of transparency. The reasons are insufficient quantification of IT risk as well as inadequate knowledge of cause-and-effect relationships and accountability. This work introduces adaptive identity management to improve control and transparency for trustworthy spontaneous information exchange, the critical activity of adaptive ICT systems.
Keywords: Adaptive ICT System, Game Theory, IT Risk Management, Identity Management, Multilateral IT Security, Privacy, Resilience, Security (ID#: 15-5833)
URL: http://doi.acm.org/10.1145/2684200.2684363
Jin Dong, Seddik M. Djouadi, James J. Nutaro, Teja Kuruganti. “Secure Control Systems with Application to Cyber-Physical Systems.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 9-12. doi:10.1145/2602087.2602094
Abstract: Control systems are computer-based systems with networked units consisting of sensors, actuators, control processing units, and communication devices. The role of a control system is to interact with, monitor, and control physical processes. Reactive power control is a fundamental issue in ensuring the security of the power network. It is claimed that Synchronous Condensers (SCs) have been used at both distribution and transmission voltage levels to improve stability and to maintain voltages within desired limits under changing load conditions and contingency situations. The performance of a PI controller under various tripping faults is analyzed for SC systems. Most of the effort in protecting these systems has gone into protection against random failures, i.e., reliability. However, besides failures, these systems are subject to various signal attacks, for which new analyses are discussed here. When a breach does occur, it is necessary to react in a time commensurate with the physical dynamics of the system as it responds to the attack. Failure to act swiftly enough may result in undesirable, and possibly irreversible, physical effects. Therefore, it is meaningful to evaluate the security of a cyber-physical system, especially to protect it from cyber-attack. Illustrative numerical examples are provided, together with an application to SC systems.
Keywords: SCADA systems, cyber-physical systems, secure control, security (ID#: 15-5834)
URL: http://doi.acm.org/10.1145/2602087.2602094
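For readers unfamiliar with the control loop under test, a discrete-time PI controller on a toy first-order plant looks like the following; the plant model and gains are illustrative stand-ins, not the paper's synchronous condenser model.

```python
# Discrete PI control of a toy first-order plant: v[t+1] = a*v[t] + b*u[t].
a, b = 0.9, 0.5          # illustrative plant dynamics
kp, ki = 1.2, 0.4        # illustrative PI gains
setpoint = 1.0           # e.g. desired voltage (per-unit)

v, integral = 0.0, 0.0
for t in range(30):
    error = setpoint - v
    integral += error
    u = kp * error + ki * integral      # PI control law
    v = a * v + b * u                   # plant response
    if t % 5 == 0:
        print(f"t={t:2d}  v={v:.3f}")
# v settles at the setpoint; the integral term removes steady-state error.
```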
Andrei Costin, Aurélien Francillon. “Short Paper: A Dangerous ‘Pyrotechnic Composition’: Fireworks, Embedded Wireless and Insecurity-by-Design.” WiSec '14 Proceedings of the 2014 ACM Conference on Security and Privacy In Wireless & Mobile Networks, July 2014, Pages 57-62. doi:10.1145/2627393.2627401
Abstract: Fireworks are used around the world to salute popular events such as festivals, weddings, and public or private celebrations. Besides their entertaining effects, fireworks are essentially colored explosives, which are sometimes directly used as weapons. Modern fireworks systems rely heavily on 'wireless pyrotechnic firing systems'. These 'embedded cyber-physical systems' (ECPS) are able to remotely control the ignition of pyrotechnic compositions. The failure to properly secure these computer subsystems may have disastrous, if not deadly, consequences. They rely on standardized wireless communications, off-the-shelf embedded hardware, and custom firmware. In this short paper, we describe our experience in discovering and exploiting a wireless firing system in a short amount of time without any prior knowledge of such systems. In summary, we demonstrate our methodology, starting from analysis of the firmware, through the discovery of vulnerabilities, and finally the demonstration of a real-world attack. Finally, we stress that the security of pyrotechnic firing systems should be taken seriously, which could be achieved through improved safety compliance requirements and controls.
Keywords: embedded, exploitation, firing systems, security, vulnerabilities, wireless (ID#: 15-5835)
URL: http://doi.acm.org/10.1145/2627393.2627401
Marco Balduzzi, Alessandro Pasta, Kyle Wilhoit. “A Security Evaluation of AIS Automated Identification System.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 436-445. doi:10.1145/2664243.2664257
Abstract: AIS, Automatic Identification System, is an application of cyber-physical systems (CPS) to smart transportation at sea. Being primarily used for collision avoidance and traffic monitoring by ship captains and maritime authorities, AIS is a mandatory installation for over 300,000 vessels worldwide since 2002. Other promoted benefits are accident investigation, aids to navigation and search and rescue (SAR) operations. In this paper, we present a unique security evaluation of AIS, by introducing threats affecting both the implementation in online providers and the protocol specification. Using a novel software-based AIS transmitter that we designed, we show that our findings affect all transponders deployed globally on vessels and other maritime stations like lighthouses, buoys, AIS gateways, vessel traffic services and aircraft involved in SAR operations. Our concerns have been acknowledged by online providers and international standards organizations, and we are currently and actively working together to improve the overall security.
Keywords: (not provided) (ID#: 15-5836)
URL: http://doi.acm.org/10.1145/2664243.2664257
Shivam Bhasin, Jean-Luc Danger, Tarik Graba, Yves Mathieu, Daisuke Fujimoto, Makoto Nagata. “Physical Security Evaluation at an Early Design-Phase: A Side-Channel Aware Simulation Methodology.” ES4CPS '14 Proceedings of International Workshop on Engineering Simulations for Cyber-Physical Systems, March 2014, Pages 13. doi:10.1145/2559627.2559628
Abstract: Cyber-Physical Systems (CPS) are often deployed in critical domains like health and traffic management; therefore, security is one of the major driving factors in the development of CPS. In this paper, we focus on cryptographic hardware embedded in CPS and propose a simulation methodology to evaluate the security of these cryptographic hardware cores. Designers are often concerned about attacks like Side-Channel Analysis (SCA), which target the physical implementation of cryptography to compromise its security. SCA exploits the physical "leakage" of a well-chosen intermediate variable correlated with the secret. Certain countermeasures, like dual-rail logic or masking, can be deployed to resist SCA. However, to design an effective countermeasure or to fix the vulnerable sources in a circuit, it is of prime importance for a designer to know the main leaking sources in the device. In practice, the security of a circuit is evaluated only after the chip is fabricated, followed by a certification process. If the circuit has security flaws, it must pass through all the design phases again, from RTL to fabrication, which increases time-to-market. In such a scenario, it is very helpful if a designer can determine the vulnerabilities early in the design cycle and fix them. In this paper, we present an evaluation of different strategies to verify the SCA robustness of a cryptographic circuit at different design steps, from the RTL to the final layout. We compare evaluations based on digital and electrical simulations in terms of speed and accuracy in a side-channel context. We show that a low-level digital simulation can be fast and sufficiently accurate for side-channel analysis.
Keywords: Design-Time security Evaluation, Side-Channel Analysis (ID#: 15-5837)
URL: http://doi.acm.org/10.1145/2559627.2559628
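For readers outside the hardware-security community, the canonical attack such simulated traces feed into is correlation power analysis (CPA) against a Hamming-weight leakage model. The sketch below recovers a key byte from simulated traces; the random S-box (standing in for AES's), noise level, and trace count are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)           # toy random S-box (stands in for AES's)
HW = np.array([bin(x).count("1") for x in range(256)])  # Hamming weights

true_key = 0x3C
n_traces = 2000
plaintexts = rng.integers(0, 256, n_traces)

# Simulated traces: Hamming weight of the S-box output plus Gaussian noise.
traces = HW[SBOX[plaintexts ^ true_key]] + rng.normal(0, 1.0, n_traces)

def corr(guess):
    """Correlate the leakage model under one key guess with the traces."""
    model = HW[SBOX[plaintexts ^ guess]]
    return abs(np.corrcoef(model, traces)[0, 1])

scores = [corr(g) for g in range(256)]
print(f"recovered key: {int(np.argmax(scores)):#x}")   # 0x3c
```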
Lujo Bauer, Florian Kerschbaum. “What are the Most Important Challenges for Access Control in New Computing Domains, such as Mobile, Cloud and Cyber-Physical Systems?” SACMAT '14 Proceedings of the 19th ACM Symposium on Access Control Models and Technologies, June 2014, Pages 127-128. doi:10.1145/2613087.2613090
Abstract: We are seeing a significant shift in the types and characteristics of computing devices that are commonly used. Today, more smartphones are sold than personal computers. Cloud systems are also an area of rapid growth, and our everyday lives are pervaded by sensors like smart meters and electronic tickets. The days when most computing resources were managed directly by a computer's operating system are over—data and computation are distributed, and devices are typically always connected via the Internet. In light of this shift, it is important to revisit the basic security properties we desire of computing systems and the mechanisms that we use to provide them. A building block of most of the security we enjoy in today's systems is access control. This panel will examine the challenges we face in adapting the access control models, techniques, and tools produced thus far to today's and tomorrow's computing environments. Key characteristics of these new systems that may require our approach to access control to change are that in many (e.g., cloud) systems users do not directly control their data; that a vast population of users operating mobile and other new devices has very little education in their use; and that cyber-physical systems permeate our environment to the point where they are often invisible to their users. Access control comprises enforcement systems, specification languages, and policy-management tools or approaches. In each of these areas the shifting computing landscape leaves us examining how current technology can be applied to new contexts or looking for new technology to fill the gap. Enforcement of access-control policy based on a trusted operating system, for example, does not cleanly translate to massively distributed, heterogeneous computing environments; to environments with many devices that are minimally administered or administered with minimal expertise; or to potentially untrusted clouds that hold sensitive data and computations that belong to entities other than the cloud owner. What technologies or system components should be the building blocks of enforcement in these settings?
Keywords: access control, challenges, panel (ID#: 15-5838)
URL: http://doi.acm.org/10.1145/2613087.2613090
Mayur Naik. “Large-Scale Configurable Static Analysis.” SOAP '14 Proceedings of the 3rd ACM SIGPLAN International Workshop on the State of the Art in Java Program Analysis, June 2014, Pages 1-1. doi:10.1145/2614628.2614635
Abstract: Program analyses developed over the last three decades have demonstrated the ability to prove non-trivial properties of real-world programs. This ability in turn has applications to emerging software challenges in security, software-defined networking, cyber-physical systems, and beyond. The diversity of such applications necessitates adapting the underlying program analyses to client needs, in aspects of scalability, applicability, and accuracy. Today's program analyses, however, do not provide useful tuning knobs. This talk presents a general computer-assisted approach to effectively adapt program analyses to diverse clients. The approach has three key ingredients. First, it poses optimization problems that expose a large set of choices to adapt various aspects of an analysis, such as its cost, the accuracy of its result, and the assumptions it makes about missing information. Second, it solves those optimization problems by new search algorithms that efficiently navigate large search spaces, reason in the presence of noise, interact with users, and learn across programs. Third, it comprises a program analysis platform that facilitates users to specify and compose analyses, enables search algorithms to reason about analyses, and allows using large-scale computing resources to parallelize analyses.
Keywords: (not provided) (ID#: 15-5839)
URL: http://doi.acm.org/10.1145/2614628.2614635
Anis Ben Aissa, Latifa Ben Arfa Rabai, Robert K. Abercrombie, Ali Mili, Frederick T. Sheldon. “Quantifying Availability in SCADA Environments Using the Cyber Security Metric MFC.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 81-84. doi:10.1145/2602087.2602103
Abstract: Supervisory Control and Data Acquisition (SCADA) systems are distributed networks dispersed over large geographic areas that aim to monitor and control industrial processes from remote areas and/or a centralized location. They are used in the management of critical infrastructures such as electric power generation, transmission, and distribution; water and sewage; industrial manufacturing; and oil and gas production. The availability of SCADA systems is tantamount to assuring safety, security, and profitability. SCADA systems are the backbone of the national cyber-physical critical infrastructure. Herein, we explore the definition and quantification of an econometric measure of availability as it applies to SCADA systems; our metric is a specialization of the generic measure of mean failure cost.
Keywords: MFC, SCADA, availability, dependability, security measures, security requirements, threats (ID#: 15-5840)
URL: http://doi.acm.org/10.1145/2602087.2602103
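The mean failure cost (MFC) metric being specialized here is, in the authors' earlier formulation, a chain of matrix products that links stakeholder stakes to threat probabilities through requirement and component failure dependencies. The dimensions and values below are invented placeholders, not the paper's data.

```python
import numpy as np

# Mean failure cost: MFC = ST @ DP @ IM @ PT.
ST = np.array([[900., 300.],     # stakes ($/h): 2 stakeholders x 2 requirements
               [100., 800.]])
DP = np.array([[0.8, 0.2, 0.0],  # P(requirement fails | component fails)
               [0.1, 0.5, 0.9]]) # 2 requirements x 3 components
IM = np.array([[0.6, 0.0],       # P(component fails | threat materializes)
               [0.3, 0.5],       # 3 components x 2 threats
               [0.0, 0.8]])
PT = np.array([0.01, 0.05])      # P(threat materializes) per hour

mfc = ST @ DP @ IM @ PT          # expected loss rate per stakeholder
for i, cost in enumerate(mfc):
    print(f"stakeholder {i}: mean failure cost = ${cost:.2f}/h")
```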
Teklemariam Tsegay Tesfay, Jean-Pierre Hubaux, Jean-Yves Le Boudec, Philippe Oechslin. “Cyber-Secure Communication Architecture for Active Power Distribution Networks.” SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 545-552. doi:10.1145/2554850.2555082
Abstract: Active power distribution networks require sophisticated monitoring and control strategies for efficient energy management and automatic adaptive reconfiguration of the power infrastructure. Such requirements are realised by deploying a large number of various electronic automation and communication field devices, such as Phasor Measurement Units (PMUs) or Intelligent Electronic Devices (IEDs), and a reliable two-way communication infrastructure that facilitates transfer of sensor data and control signals. In this paper, we perform a detailed threat analysis in a typical active distribution network's automation system. We also propose mechanisms by which we can design a secure and reliable communication network for an active distribution network that is resilient to insider and outsider malicious attacks, natural disasters, and other unintended failures. The proposed security solution also guarantees that an attacker is not able to install a rogue field device by exploiting an emergency situation during islanding.
Keywords: PKI, active distribution network, authentication, islanding, smart grid, smart grid security, unauthorised access (ID#: 15-5841)
URL: http://doi.acm.org/10.1145/2554850.2555082
Mahdi Azimi, Ashkan Sami, Abdullah Khalili. “A Security Test-Bed for Industrial Control Systems.” MoSEMInA 2014 Proceedings of the 1st International Workshop on Modern Software Engineering Methods for Industrial Automation, May 2014, Pages 26-31. doi:10.1145/2593783.2593790
Abstract: Industrial Control Systems (ICS) such as Supervisory Control And Data Acquisition (SCADA), Distributed Control Systems (DCS), and Distributed Automation Systems (DAS) control and monitor critical infrastructures. In recent years, the proliferation of cyber-attacks on ICS has revealed that a large number of security vulnerabilities exist in such systems. Numerous security solutions have been proposed to remove the vulnerabilities and improve the security of ICS. However, to the best of our knowledge, none of them presents or develops a security test-bed, which is vital for evaluating the security of ICS tools and products. In this paper, a test-bed is proposed for evaluating the security of industrial applications by providing different metrics for static testing, dynamic testing, and network testing in industrial settings. Using these metrics and the results of the three tests, industrial applications can be compared with each other from a security point of view. Experimental results on several real-world applications indicate that the proposed test-bed can be successfully employed to evaluate and compare the security level of industrial applications.
Keywords: Dynamic Test, Industrial Control Systems, Network Test, Security, Static Test, Test-bed (ID#: 15-5842)
URL: http://doi.acm.org/10.1145/2593783.2593790
Bogdan D. Czejdo, Michael D. Iannacone, Robert A. Bridges, Erik M. Ferragut, John R. Goodall. “Integration of External Data Sources with Cyber Security Data Warehouse.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 49-52. doi:10.1145/2602087.2602098
Abstract: In this paper we discuss problems related to the integration of external knowledge and data components with a cyber security data warehouse to improve situational understanding of enterprise networks. More specifically, network assessment and trend analysis can be enhanced by knowledge about the most current vulnerabilities and external network events. The cyber security data warehouse can be modeled as a hierarchical graph of aggregations that captures data at multiple scales. Nodes of the graph, which are summarization tables, can be linked to external sources of information. We discuss problems related to timely information about vulnerabilities and how to integrate a vulnerability ontology with cyber security network data.
Keywords: aggregation, anomaly detection, cyber security, natural language processing, network intrusion, situational understanding, vulnerability, vulnerability ontology (ID#: 15-5843)
URL: http://doi.acm.org/10.1145/2602087.2602098
Dina Hadžiosmanović, Robin Sommer, Emmanuele Zambon, Pieter H. Hartel. “Through the Eye of the PLC: Semantic Security Monitoring for Industrial Processes.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 126-135. doi:10.1145/2664243.2664277
Abstract: Off-the-shelf intrusion detection systems prove an ill fit for protecting industrial control systems, as they do not take their process semantics into account. Specifically, current systems fail to detect recent process control attacks that manifest as unauthorized changes to the configuration of a plant's programmable logic controllers (PLCs). In this work we present a detector that continuously tracks updates to corresponding process variables to then derive variable-specific prediction models as the basis for assessing future activity. Taking a specification-agnostic approach, we passively monitor plant activity by extracting variable updates from the devices' network communication. We evaluate the capabilities of our detection approach with traffic recorded at two operational water treatment plants serving a total of about one million people in two urban areas. We show that the proposed approach can detect direct attacks on process control, and we further explore its potential to identify more sophisticated indirect attacks on field device measurements as well.
Keywords: (not provided) (ID#: 15-5844)
URL: http://doi.acm.org/10.1145/2664243.2664277
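The detector's core idea, per-variable models learned from benign traffic that future updates must satisfy, can be miniaturized as follows. The two model classes shown (seen-value sets and a step-size band) are simplifications invented for illustration; the paper derives richer variable-specific prediction models.

```python
import numpy as np

class VariableMonitor:
    """Learn a per-variable model from benign PLC variable updates, then
    flag deviations: a set-of-seen-values model for enumeration-like
    variables and a step-size band for continuous ones (simplified)."""

    def fit(self, series):
        self.values = set(series)
        diffs = np.diff(series)
        self.max_step = np.abs(diffs).max() if len(diffs) else 0.0
        self.last = series[-1]

    def check(self, new_value):
        in_range = new_value in self.values
        smooth = abs(new_value - self.last) <= 1.5 * self.max_step
        self.last = new_value
        return in_range or smooth   # benign if either model accepts it

# Train on a benign setpoint series, then test an unauthorized rewrite.
mon = VariableMonitor()
mon.fit([50, 52, 51, 53, 52, 54, 53])
print(mon.check(54))   # True: consistent with learned behaviour
print(mon.check(90))   # False: abrupt unauthorized setpoint change
```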
Ting Liu, Yuhong Gui, Yanan Sun, Yang Liu, Yao Sun, Feng Xiao. “SEDE: State Estimation-Based Dynamic Encryption Scheme for Smart Grid Communication.” SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 539-544. doi:10.1145/2554850.2555033
Abstract: The vision of the smart grid relies heavily on communication technologies, as they provide a desirable infrastructure for real-time measurement, transmission, decision, and control. But various attacks, such as eavesdropping, information tampering, and malicious control-command injection, which already hamper communication on the Internet, would pose a great threat to the security and stability of smart grids. In this paper, a State Estimation-based Dynamic Encryption (SEDE) scheme is proposed to secure communication in the smart grid. Several power-system states are employed as common secrets to generate a symmetric key at both sides; the states are measured at the terminals and calculated at the control center using state estimation. The advantages of SEDE are that 1) the common secrets used to generate the symmetric key are never exchanged over the network, thanks to state estimation, which markedly improves the security of SEDE; 2) measurement and state estimation are essential functions of the terminals and control center in a power system; and 3) the functions applied to encrypt and decrypt data, such as XOR, hashing, and rounding, are simple and easy to implement. Thus, SEDE is an inherent, lightweight, and high-security encryption scheme for the smart grid. In the experiments, SEDE is simulated on a 4-bus power system to demonstrate the process of state estimation, key generation, and error correction.
Keywords: dynamic encryption, security, smart grid, state estimation (ID#: 15-5845)
URL: http://doi.acm.org/10.1145/2554850.2555033
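Since the abstract names the primitives (rounding, hashing, XOR), the key-agreement step can be sketched directly. The quantization precision and toy XOR cipher below are illustrative choices; the real scheme adds error correction for states that round inconsistently at the two sides, which this sketch omits.

```python
import hashlib

def derive_key(state_estimates, precision=2):
    """Round shared power-system states (so both sides agree despite
    small estimation error) and hash them into a symmetric key."""
    quantized = tuple(round(s, precision) for s in state_estimates)
    return hashlib.sha256(repr(quantized).encode()).digest()

def xor_crypt(data, key):
    """Toy XOR stream cipher keyed by the state-derived key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Terminal measures states; control center independently estimates them.
measured  = [1.0213, 0.9871, 1.0042]     # e.g. bus voltages (per-unit)
estimated = [1.0208, 0.9869, 1.0047]     # state estimator's view

k1, k2 = derive_key(measured), derive_key(estimated)
assert k1 == k2                          # rounding absorbs the mismatch

msg = b"close breaker 7"
print(xor_crypt(xor_crypt(msg, k1), k2)) # b'close breaker 7'
```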
Amel Bennaceur, Arosha K. Bandara, Michael Jackson, Wei Liu, Lionel Montrieux, Thein Than Tun, Yijun Yu, Bashar Nuseibeh. “Requirements-Driven Mediation for Collaborative Security.” SEAMS 2014 Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, June 2014, Pages 37-42. doi:10.1145/2593929.2593938
Abstract: Security is concerned with the protection of assets from intentional harm. Secure systems provide capabilities that enable such protection to satisfy some security requirements. In a world increasingly populated with mobile and ubiquitous computing technology, the scope and boundary of security systems can be uncertain and can change. A single functional component, or even multiple components individually, are often insufficient to satisfy complex security requirements on their own. Adaptive security aims to enable systems to vary their protection in the face of changes in their operational environment. Collaborative security, which we propose in this paper, aims to exploit the selection and deployment of multiple, potentially heterogeneous, software-intensive components to collaborate in order to meet security requirements in the face of changes in the environment, changes in assets under protection and their values, and the discovery of new threats and vulnerabilities. However, the components that need to collaborate may not have been designed and implemented to interact with one another collaboratively. To address this, we propose a novel framework for collaborative security that combines adaptive security, collaborative adaptation and an explicit representation of the capabilities of the software components that may be needed in order to achieve collaborative security. We elaborate on each of these framework elements, focusing in particular on the challenges and opportunities afforded by (1) the ability to capture, represent, and reason about the capabilities of different software components and their operational context, and (2) the ability of components to be selected and mediated at runtime in order to satisfy the security requirements. We illustrate our vision through a collaborative robotic implementation, and suggest some areas for future work.
Keywords: Security requirements, collaborative adaptation, mediation (ID#: 15-5846)
URL: http://doi.acm.org/10.1145/2593929.2593938
Liliana Pasquale, Carlo Ghezzi, Claudio Menghi, Christos Tsigkanos, Bashar Nuseibeh. “Topology Aware Adaptive Security.” SEAMS 2014 Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, June 2014, Pages 43-48. doi:10.1145/2593929.2593939
Abstract: Adaptive security systems aim to protect valuable assets in the face of changes in their operational environment. They do so by monitoring and analysing this environment, and deploying security functions that satisfy some protection (security, privacy, or forensic) requirements. In this paper, we suggest that a key characteristic for engineering adaptive security is the topology of the operational environment, which represents a physical and/or a digital space - including its structural relationships, such as containment, proximity, and reachability. For adaptive security, topology expresses a rich representation of context that can provide a system with both structural and semantic awareness of important contextual characteristics. These include the location of assets being protected or the proximity of potentially threatening agents that might harm them. Security-related actions, such as the physical movement of an actor from a room to another in a building, may be viewed as topological changes. The detection of a possible undesired topological change (such as an actor possessing a safe’s key entering the room where the safe is located) may lead to the decision to deploy a particular security control to protect the relevant asset. This position paper advocates topology awareness for more effective engineering of adaptive security. By monitoring changes in topology at runtime one can identify new or changing threats and attacks, and deploy adequate security controls accordingly. The paper elaborates on the notion of topology and provides a vision and research agenda on its role for systematically engineering adaptive security systems.
Keywords: Topology, adaptation, digital forensics, privacy, security (ID#: 15-5847)
URL: http://doi.acm.org/10.1145/2593929.2593939
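The paper's notion of topology-triggered adaptation can be made concrete with a small sketch: rooms as graph nodes, movements as topological changes, and a policy check on each change. All names and the adjacency map below are invented for illustration.

```python
# Toy topology monitor: a policy violation fires when an actor holding
# the safe's key becomes co-located with, or adjacent to, the safe.
ADJACENT = {"lobby": {"corridor"},
            "corridor": {"lobby", "office", "vault"},
            "office": {"corridor"},
            "vault": {"corridor"}}

state = {"safe": "vault", "alice": "office"}
holds_safe_key = {"alice"}

def on_move(actor, room):
    """Treat each movement as a topological change and re-check policy."""
    state[actor] = room
    near_safe = room == state["safe"] or state["safe"] in ADJACENT[room]
    if actor in holds_safe_key and near_safe:
        print(f"ALERT: deploy security control: {actor} with safe key near safe")

on_move("alice", "lobby")      # no alert: vault not reachable in one step
on_move("alice", "corridor")   # ALERT: vault is adjacent
```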
Steven D. Fraser, Djenana Campara, Michael C. Fanning, Gary McGraw, Kevin Sullivan. “Privacy and Security in a Networked World.” SPLASH '14 Proceedings of the companion publication of the 2014 ACM SIGPLAN conference on Systems, Programming, and Applications: Software for Humanity, October 2014, Pages 43-45. doi:10.1145/2660252.2661294
Abstract: As news stories continue to demonstrate, ensuring adequate security and privacy in a networked "always on" world is a challenge; and while open source software can mitigate problems, it is not a panacea. This panel will bring together experts from industry and academia to debate, discuss, and offer opinions. Questions might include: What are the "costs" of "good enough" security and privacy for developers and customers? What is the appropriate trade-off between the price of providing security and the cost of poor security? How can the consequences of poor design and implementation be managed? Can systems be enabled to fail "security-safe"? What are the trade-offs for increased adoption of privacy and security best practices? How can the "costs" of privacy and security -- both tangible and intangible -- be reduced?
Keywords: cost, design, privacy, security, soft issues (ID#: 15-5848)
URL: http://doi.acm.org/10.1145/2660252.2661294
Qi Zhu, Peng Deng. “Design Synthesis and Optimization for Automotive Embedded Systems.” ISPD '14 Proceedings of the 2014 on International Symposium on Physical Design, March 2014, Pages 141-148. doi:10.1145/2560519.2565873
Abstract: Embedded software and electronics are major contributors of values in vehicles, and play a dominant role in vehicle innovations. The design of automotive embedded systems has become more and more challenging, with the rapid increase of system complexity and more requirements on various design objectives. Methodologies such as model-based design are being adopted to improve design quality and productivity through the usage of functional models. However, there is still a significant lack of design automation tools, in particular synthesis and optimization tools, that can turn complex functional specifications to correct and optimal software implementations on distributed embedded platforms. In this paper, we discuss some of the major technical challenges and the problems to be solved in automotive embedded systems design, especially for the synthesis and optimization of embedded software.
Keywords: automotive embedded systems, design automation, software synthesis and optimization (ID#: 15-5849)
URL: http://doi.acm.org/10.1145/2560519.2565873
Chen Liu, Chengmo Yang, Yuanqi Shen. “Leveraging Microarchitectural Side Channel Information to Efficiently Enhance Program Control Flow Integrity.” CODES '14 Proceedings of the 2014 International Conference on Hardware/Software Codesign and System Synthesis, October 2014, Article No. 5. doi:10.1145/2656075.2656092
Abstract: Stack buffer overflow is a serious security threat to program execution. A malicious attacker may overwrite the return address of a procedure to alter its control flow and hence change its functionality. While a number of hardware and/or software based protection schemes have been developed, these countermeasures introduce sizable overhead in performance and energy, thus limiting their applicability to embedded systems. To reduce such overhead, our goal is to develop a low-cost scheme to "filter out" potential stack buffer overflow attacks. Our observation is that attacks to control flow will trigger certain microarchitectural events, such as mis-predictions in the return address stack or misses in the instruction cache. We therefore propose a hardware-based scheme to monitor these events. Only upon detecting any suspicious behavior, a more precise but costly diagnosis scheme will be invoked to thoroughly check control flow integrity. Meanwhile, to further reduce the rate of false positives of the security filter, we propose three enhancements to the return address stack, instruction prefetch engine and instruction cache, respectively. The results show that these enhancements effectively reduce more than 95% of false positives with almost no false negatives introduced.
Keywords: instruction cache, return address stack, security, stack buffer overflow (ID#: 15-5850)
URL: http://doi.acm.org/10.1145/2656075.2656092
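A software analogue of the proposed filter-then-diagnose design: a shadow return stack plays the role of the return address stack, and only a mismatch invokes the costly check. This sketch models the idea only; the paper's mechanism is implemented in hardware microarchitecture.

```python
# A cheap "filter" (shadow return stack) triggers an expensive integrity
# check only on a mismatch, mirroring how RAS mispredictions gate the
# costly control-flow diagnosis in the paper's hardware scheme.
shadow_stack = []

def on_call(return_addr):
    shadow_stack.append(return_addr)

def on_return(actual_target):
    expected = shadow_stack.pop()
    if actual_target != expected:        # filter fires: suspicious event
        full_control_flow_diagnosis(actual_target, expected)

def full_control_flow_diagnosis(actual, expected):
    """Stand-in for the precise (costly) integrity check."""
    print(f"CFI violation: return to {actual:#x}, expected {expected:#x}")

on_call(0x400123)          # normal call/return pair: filter stays quiet
on_return(0x400123)

on_call(0x400456)          # an overflow overwrote the return address...
on_return(0xdeadbeef)      # ...so the full diagnosis is invoked
```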
Jakob Axelsson, Avenir Kobetski. “Architectural Concepts for Federated Embedded Systems.” ECSAW '14 Proceedings of the 2014 European Conference on Software Architecture Workshops, August 2014, Article No. 25. doi:10.1145/2642803.2647716
Abstract: Federated embedded systems (FES) is an approach for systems-of-systems engineering in the domain of cyber-physical systems. It is based on the idea to allow dynamic addition of plug-in software in the embedded system of a product, and through communication between the plug-ins in different products, it becomes possible to build services on the level of a federation of products. In this paper, architectural concerns for FES are elicited, and are used as rationale for a number of decisions in the architecture of products that are enabled for FES, as well as in the application architecture of a federation. A concrete implementation of a FES from the automotive domain is also described, as a validation of the architectural concepts presented.
Keywords: Systems-of-systems, cyber-physical systems, federated embedded systems, system architecture (ID#: 15-5851)
URL: http://doi.acm.org/10.1145/2642803.2647716
Jurgo Preden. “Generating Situation Awareness in Cyber-Physical Systems: Creation and Exchange of Situational Information.” CODES '14 Proceedings of the 2014 International Conference on Hardware/Software Codesign and System Synthesis, October 2014, Article No. 21. doi:10.1145/2656075.2661647
Abstract: Cyber-physical systems depend on good situation awareness to cope with changes in the physical world and in the configuration of the system while fulfilling their goal functions. Being aware of the situation in the physical world enables a cyber-physical system to adapt its behaviour according to the actual state of the world as perceived by the system. Understanding the situation of the cyber-physical system itself enables adaptation of the system's behaviour according to its current capabilities and state, e.g., providing fewer features, or features with limited functionality, in case some of the system components are not functional. In order to build resilient cyber-physical systems, we need to build systems that are able to consider both of these aspects in their operation.
Keywords: cyber physical system, situation awareness (ID#: 15-5852)
URL: http://doi.acm.org/10.1145/2656075.2661647
Kaliappa Ravindran, Ramesh Sethu. “Model-Based Design of Cyber-Physical Software Systems for Smart Worlds: A Software Engineering Perspective.” MoSEMInA 2014 Proceedings of the 1st International Workshop on Modern Software Engineering Methods for Industrial Automation, May 2014, Pages 62-71. doi:10.1145/2593783.2593785
Abstract: The paper discusses the design of cyber-physical systems software around intelligent physical worlds (IPW). An IPW is the embodiment of control software functions wrapped around the external world processes, exhibiting self-adaptive behavior over a limited operating region of the system. This is in contrast with traditional models where the physical world is basically dumb. Self-adaptation of an IPW is feasible when certain system properties hold: function separability and piecewise linearity of system behavioral models. The IPW interacts with an intelligent computational world (ICW) to work over a wide range of operating conditions, by patching itself with suitable control parameters and with rules and procedures relevant to a changed condition. The modular decomposition of a complex adaptive system into IPW and ICW has many advantages: lowering overall software complexity, simplifying system verification, and supporting easier evolution of system features. The paper illuminates the concept of IPW with a software engineering-oriented case study of an industrial application: an automotive system.
Keywords: Cyber-physical system, Hierarchical control, Self-managing system, Software module reuse, System feature evolution (ID#: 15-5853)
URL: http://doi.acm.org/10.1145/2593783.2593785
Nikola Trcka, Mark Moulin, Shaunak Bopardikar, Alberto Speranzon. “A Formal Verification Approach to Revealing Stealth Attacks on Networked Control Systems.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 67-76. doi:10.1145/2566468.2566484
Abstract: We develop methods to determine if networked control systems can be compromised by stealth attacks, and derive design strategies to secure these systems. A stealth attack is a form of a cyber-physical attack where the adversary compromises the information between the plant and the controller, with the intention to drive the system into a bad state and at the same time stay undetected. We define the discovery problem as a formal verification problem, where generated counterexamples (if any) correspond to actual attack vectors. The analysis is entirely performed in Simulink, using Simulink Design Verifier as the verification engine. A small case study is presented to illustrate the results, and a branch-and-bound algorithm is proposed to perform optimal system securing.
Keywords: control system, cyber-physical security, formal verification (ID#: 15-5854)
URL: http://doi.acm.org/10.1145/2566468.2566484
Jakub Szefer, Pramod Jamkhedkar, Diego Perez-Botero, Ruby B. Lee. “Cyber Defenses for Physical Attacks and Insider Threats in Cloud Computing.” ASIA CCS '14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 519-524. doi:10.1145/2590296.2590310
Abstract: In cloud computing, most of the computations and data in the data center do not belong to the cloud provider. This leaves owners of applications and data concerned about cyber and physical attacks which may compromise the confidentiality, integrity, or availability of their applications or data. While much work has looked at protection from software (cyber) threats, very little has looked at physical attacks and physical security in data centers. In this work, we present a novel set of cyber defense strategies for physical attacks in data centers. We capitalize on the fact that physical attackers are constrained by the physical layout and other features of a data center, which impose a time delay before an attacker, even an insider, can reach a server to launch a physical attack. We describe how a number of cyber defense strategies can be activated when an attack is detected, some of which can even take effect before the actual attack occurs. The defense strategies provide improved security and are more cost-effective than always-on protections, given that physical attacks will, on average, happen rarely, but can be very damaging when they do occur.
Keywords: cloning, cloud computing, data center security, insider threats, migration, physical attacks (ID#: 15-5855)
URL: http://doi.acm.org/10.1145/2590296.2590310
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Cyber-Physical System Security and Privacy, 2014 Part 2 |
Cyber-physical systems are generally systems in which computers control physical entities. They exist in areas as diverse as automobiles, manufacturing, energy, transportation, chemistry, and computer appliances. In this bibliography, the primary focus of the published research is smart grid technologies: the use of cyber-physical systems to coordinate the generation, transmission, and use of electrical power and its sources. Because of its strategic importance and the consequences of intrusion, the smart grid is of particular importance to the Science of Security. The work presented here was published in 2014.
Francisco Javier Acosta Padilla, Frederic Weis, Johann Bourcier. “Towards a Model@Runtime Middleware for Cyber Physical Systems.” MW4NG '14 Proceedings of the 9th Workshop on Middleware for Next Generation Internet Computing, December 2014, Article No. 6. doi:10.1145/2676733.2676741
Abstract: Cyber Physical Systems (CPS), or Internet of Things systems, are typically formed by a myriad of small interconnected devices. This underlying hardware infrastructure raises new challenges in how we administer the software layer of these systems. Indeed, the limited computing power and battery life of each node, combined with the highly distributed nature of these systems, greatly add complexity to the management of the distributed software layer. In this paper we propose a new middleware dedicated to CPS that enables the management of software deployment and the dynamic reconfiguration of these systems. Our middleware is inspired by component-based systems and the model@runtime paradigm, which we have adapted to the context of Cyber Physical Systems. We have conducted an initial evaluation on a typical Cyber Physical Systems hardware infrastructure, which demonstrates the feasibility of providing a model@runtime middleware for these systems.
Keywords: MDE, adaptive systems, cyber physical systems, middleware, models (ID#: 15-5856)
URL: http://doi.acm.org/10.1145/2676733.2676741
Mohammad Ashiqur Rahman, Ehab Al-Shaer, Rakesh B. Bobba. “Moving Target Defense for Hardening the Security of the Power System State Estimation.” MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 59-68. doi:10.1145/2663474.2663482
Abstract: State estimation plays a critically important role in ensuring the secure and reliable operation of the electric grid. Recent works have shown that the state estimation process is vulnerable to stealthy attacks where an adversary can alter certain measurements to corrupt the solution of the process, but evade the existing bad data detection algorithms and remain invisible to the system operator. Since the state estimation result is used to compute optimal power flow and perform contingency analysis, incorrect estimation can undermine economic and secure system operation. However, an adversary needs sufficient resources as well as necessary knowledge to achieve a desired attack outcome. The knowledge that is required to launch an attack mainly includes the measurements considered in state estimation, the connectivity among the buses, and the power line admittances. Uncertainty in information limits the potential attack space for an attacker, and this advantage of uncertainty enables us to apply moving target defense (MTD) strategies to develop a proactive defense mechanism for state estimation. In this paper, we propose an MTD mechanism for securing state estimation with several characteristics: it (i) increases the attacker's knowledge uncertainty, (ii) reduces the window of attack opportunity, and (iii) increases the attack cost. In this mechanism, we apply controlled randomization to the power grid system properties, mainly to the set of measurements that are considered in state estimation, and to the topology, especially the line admittances. We thoroughly analyze the performance of the proposed mechanism on the standard IEEE 14- and 30-bus test systems.
Keywords: false data injection attack, moving target defense, power grid, state estimation (ID#: 15-5857)
URL: http://doi.acm.org/10.1145/2663474.2663482
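The stealthy-attack condition and the effect of randomizing admittances can be seen in a few lines of linear algebra. The toy sketch below (Python with NumPy; random matrices stand in for a real grid model, and it is not the authors' implementation) crafts a false-data injection a = Hc that passes residual-based bad-data detection under the known measurement matrix H, then shows the same attack raising a nonzero residual once the defender has secretly perturbed the admittances.

import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3                        # measurements, state variables (toy sizes)
H = rng.normal(size=(m, n))        # measurement matrix the attacker knows
x = rng.normal(size=n)             # true grid state

def residual_norm(z, H):
    # residual of the least-squares state estimate, as in bad-data detection
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

z = H @ x
a = H @ rng.normal(size=n)         # stealthy injection crafted against known H
print(residual_norm(z + a, H))     # ~0: attack passes bad-data detection

H_mtd = H + 0.1 * rng.normal(size=(m, n))   # defender's secret admittance perturbation
z_mtd = H_mtd @ x
print(residual_norm(z_mtd + a, H_mtd))      # > 0: the same attack now raises an alarm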
Fahad Javed, Usman Ali, Muhammad Nabeel, Qasim Khalid, Naveed Arshad, Jahangir Ikram. “SmartDSM: A Layered Model for Development of Demand Side Management in Smart Grids.” SE4SG 2014 Proceedings of the 3rd International Workshop on Software Engineering Challenges for the Smart Grid, June 2014, Pages 15-20. doi:10.1145/2593845.2593848
Abstract: Growing power demand and carbon emissions are motivating utility providers to introduce smart power systems. One of the most promising technologies for delivering cheaper and smarter electricity is demand side management (DSM). A DSM solution controls the devices at user premises to achieve the overall goals of lower cost for consumer and utility. To achieve this, various technologies from different domains come into play, from power electronics to sensor networks to machine learning and distributed systems design. The eventual system is a large, distributed software system over heterogeneous environments and systems. Whereas various algorithms for planning the DSM schedule have been proposed, no concerted effort has been made to propose models and architectures for developing such a complex software system. This lack of models leaves a haphazard landscape for researchers and practitioners, leading to confused requirements and overlapping domain concerns, which the authors observed while developing a DSM system for their lab and faculty housing. To this end, this paper presents a model for developing software systems that deliver DSM. In addition to the model, we present a roadmap of software engineering research to aid the development of future DSM systems, based on our observations and insights from the developed DSM systems.
Keywords: Smart grids, demand side management, model driven design, software engineering (ID#: 15-5858)
URL: http://doi.acm.org/10.1145/2593845.2593848
Rafael Oliveira Vasconcelos, Igor Vasconcelos, Markus Endler. “A Middleware for Managing Dynamic Software Adaptation.” ARM '14 Proceedings of the 13th Workshop on Adaptive and Reflective Middleware, December 2014, Article No. 5. doi:10.1145/2677017.2677022
Abstract: The design and development of adaptive systems brings new challenges, since the dynamism of such systems is a multifaceted concern that ranges from mechanisms enabling adaptation at the software level to the (self-)management of the entire system through adaptation plans or a system administrator, for instance. Networked and mobile embedded systems are examples of systems where dynamic adaptation becomes even more necessary, as the applications must be capable of discovering the computing resources in their nearby environment. While most current research is concerned with low-level adaptation techniques (i.e., how to dynamically deploy new components or change parameters), we focus on managing distributed dynamic adaptation and facilitating the development of adaptation plans. In this paper, we present a middleware tailored for mobile embedded systems that supports distributed dynamic software adaptation, in transactional and non-transactional fashion, among mobile devices. We also present the results of an initial evaluation.
Keywords: adaptability, dynamic adaptation, middleware, mobile communication, self-adaptive systems (ID#: 15-5859)
URL: http://doi.acm.org/10.1145/2677017.2677022
Wei Gong, Yunhao Liu, Amiya Nayak, Cheng Wang. “Wise Counting: Fast and Efficient Batch Authentication for Large-Scale RFID Systems.” MobiHoc '14 Proceedings of the 15th ACM International Symposium on Mobile Ad Hoc Networking and Computing, August 2014, Pages 347-356. doi:10.1145/2632951.2632963
Abstract: Radio Frequency Identification (RFID) technology is widely used in many applications, such as asset monitoring, e-passports, and electronic payment, and is becoming one of the most effective solutions in cyber-physical systems. Since identification alone does not guarantee that a tag corresponds to a genuine identity, authentication of tag information is needed in most RFID systems. Meanwhile, as the number of tags has grown rapidly in recent years, per-tag methods suffer from severely low efficiency and thus give way to probabilistic batch authentication. Most previous methods, however, share a common drawback from a statistical perspective: they fail to exploit correlation information, i.e., they do not comprehensively utilize all the information in authentication data structures. In addition, those schemes do not scale well when multiple tag sets need to be verified simultaneously. In this paper, we propose a fast and efficient batch authentication scheme, Wise Counting (WIC), for large-scale RFID systems. We are the first to formally introduce the general batch authentication problem with multiple tag sets, and we give a highly efficient counterfeit estimation scheme. By employing a novel hierarchical authentication structure, we show that WIC can quickly and efficiently authenticate both a single tag set and multiple tag sets in an easy, intuitive way. Through detailed theoretical analysis and extensive simulations, we validate the design of WIC and demonstrate its large superiority over state-of-the-art approaches.
Keywords: RFID tags, batch authentication, counterfeits estimation, hierarchical data structure (ID#: 15-5860)
URL: http://doi.acm.org/10.1145/2632951.2632963
Ze Ni, Avenir Kobetski, Jakob Axelsson. “Design and Implementation of a Dynamic Component Model for Federated AUTOSAR Systems.” DAC '14 Proceedings of the 51st Annual Design Automation Conference, June 2014, Pages 1-6. doi:10.1145/2593069.2593121
Abstract: The automotive industry has recently agreed upon the embedded software standard AUTOSAR, which structures an application into reusable components that can be deployed using a configuration scheme. However, this configuration takes place at design time, with no provision for dynamically installing components to reconfigure the system. In this paper, we present the design and implementation of a dynamic component model that extends AUTOSAR with the possibility of adding plug-in components at runtime. This opens up shorter deployment times for new functions, opportunities for vehicles to participate in federated embedded systems, and involvement of third-party software developers.
Keywords: AUTOSAR, Dynamically Reconfigurable Software, Federated Embedded Systems, Software Components (ID#: 15-5861)
URL: http://doi.acm.org/10.1145/2593069.2593121
Stefan Wagner. “Scrum for Cyber-Physical Systems: A Process Proposal.” RCoSE 2014 Proceedings of the 1st International Workshop on Rapid Continuous Software Engineering, June 2014, Pages 51-56. doi:10.1145/2593812.2593819
Abstract: Agile development processes and especially Scrum are changing the state of the practice in software development. Many companies in the classical IT sector have adopted them to successfully tackle various challenges, from rapidly changing environments to increasingly complex software systems. Companies developing software for embedded or cyber-physical systems, however, are still hesitant to adopt such processes. Despite successful applications of Scrum and other agile methods to cyber-physical systems, there is still no complete process that maps their specific challenges to practices in Scrum. We propose to fill this gap by treating all design artefacts in such a development in the same way: in software development, the final design is already the product; in hardware and mechanics, it is the starting point of production. We sketch the Scrum extension Scrum CPS by showing how Scrum could be used to develop all design artefacts for a cyber-physical system. Hardware and mechanical parts that might not be available yet are simulated. With this approach, we can directly and iteratively build the final software and, in parallel, produce detailed models for hardware and mechanics production. We plan to further detail Scrum CPS and apply it first in a series of student projects to gather more experience before testing it in an industrial case study.
Keywords: Agile, Cyber-physical, Scrum (ID#: 15-5862)
URL: http://doi.acm.org/10.1145/2593812.2593819
Kasper Luckow, Corina S. Păsăreanu, Matthew B. Dwyer, Antonio Filieri, Willem Visser. “Exact and Approximate Probabilistic Symbolic Execution for Nondeterministic Programs.” ASE '14 Proceedings of the 29th ACM/IEEE International Conference on Automated Software Engineering, September 2014, Pages 575-586. doi:10.1145/2642937.2643011
Abstract: Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
Keywords: nondeterministic programs, probabilistic software analysis, symbolic execution (ID#: 15-5863)
URL: http://doi.acm.org/10.1145/2642937.2643011
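The scheduler-synthesis objective, resolving nondeterminism so as to maximize the probability of reaching a target event, can be stated concretely as optimization over a Markov decision process. The sketch below is plain value iteration in Python, standing in for the paper's symbolic-execution-based computation of path probabilities; the tiny MDP and its transition probabilities are invented for illustration.

def max_reach_prob(states, actions, P, target, iters=100):
    """P[s][a] is a list of (next_state, prob); returns dict s -> max reach probability."""
    v = {s: (1.0 if s in target else 0.0) for s in states}
    for _ in range(iters):
        for s in states:
            if s in target:
                continue
            # the scheduler resolves nondeterminism by picking the best action
            v[s] = max((sum(p * v[t] for t, p in P[s][a]) for a in actions(s)),
                       default=0.0)
    return v

# Tiny example: from s0 the scheduler may choose action 'a' or 'b'.
P = {"s0": {"a": [("goal", 0.5), ("fail", 0.5)],
            "b": [("goal", 0.9), ("fail", 0.1)]},
     "goal": {}, "fail": {}}
v = max_reach_prob(["s0", "goal", "fail"], lambda s: P[s].keys(), P, {"goal"})
print(v["s0"])   # 0.9 -- the synthesized scheduler always picks 'b'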
Philipp Diebold, Constanza Lampasona, Sergey Zverlov, Sebastian Voss. “Practitioners' and Researchers' Expectations on Design Space Exploration for Multicore Systems in the Automotive and Avionics Domains: A Survey.” EASE '14 Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering, May 2014, Article No. 1. doi:10.1145/2601248.2601250
Abstract: Background: The mobility domains are moving towards the adoption of multicore technology. Appropriate methods, techniques, and tools need to be developed or adapted in order to fulfill the existing requirements. This is a case for design space exploration methods and tools. Objective: Our goal was to understand the importance of different design space exploration goals with respect to their relevance, frequency of use, and tool support required in the development of multicore systems from the point of view of the ARAMiS project members. Our aim was to use the results to guide further work in the project. Method: We conducted a survey regarding the current state of the art in design space exploration in industry and research and collected the expectations of project members regarding design space exploration goals. Results: The results show that design space exploration is an important topic in industry as well as in research. It is used very often with different important goals to optimize the system. Conclusions: Current tools provide only partial solutions for design space exploration. Our results can be used for improving them and guiding their development according to the priorities explained in this contribution.
Keywords: automotive, avionics, design space exploration, industry, multicore, research, survey (ID#: 15-5864)
URL: http://doi.acm.org/10.1145/2601248.2601250
Sandeep Neema, Gabor Simko, Tihamer Levendovszky, Joseph Porter, Akshay Agrawal, Janos Sztipanovits. “Formalization of Software Models for Cyber-Physical Systems.” FormaliSE 2014 Proceedings of the 2nd FME Workshop on Formal Methods in Software Engineering, June 2014, Pages 45-51. doi:10.1145/2593489.2593495
Abstract: The involvement of formal methods is indispensable for modern software engineering. This especially holds for Cyber-Physical Systems (CPS). In order to deal with the complexity and heterogeneity of the design, model-based engineering is widely used. The complexity of detailed verification of the final source code makes it imperative to introduce formal methods earlier in the design process. Because of the widespread use of customized modeling languages (domain-specific modeling languages, DSMLs), it is crucial to formally specify the DSML and verify that the model meets fundamental correctness criteria. This is achieved by specifying the behavioral and structural semantics of the modeling language. Significant model-driven tools have emerged incorporating advanced model checking methods that can provide some assurance regarding the quality and correctness of the models. However, the code generated from these models using automatic code generators remains suspect, since the correctness of the code generators cannot be assumed as a given and remains intractable to prove. Therefore, we propose a pragmatic approach that, instead of verifying the explicit implementation of the code generator, verifies the correctness of the generated code with respect to a specific set of user-defined properties, establishing that the code generators are property-preserving. To make the verification workflow conducive to domain engineers, who are not often trained in formal methods, we include a mechanism for high-level specification of temporal properties using pattern-based verification templates. The presented toolchain leverages state-of-the-art verification tools, and a small case study illustrates the approach.
Keywords: Cyber-Physical Systems, Model-Integrated Computing, Semantic Specification (ID#: 15-5865)
URL: http://doi.acm.org/10.1145/2593489.2593495
Ivan Ruchkin, Dionisio De Niz, David Garlan, Sagar Chaki. “Contract-Based Integration of Cyber-Physical Analyses.” EMSOFT '14 Proceedings of the 14th International Conference on Embedded Software, October 2014, Article No. 23. doi:10.1145/2656045.2656052
Abstract: Developing cyber-physical systems involves multiple engineering domains, e.g., timing, logical correctness, thermal resilience, and mechanical stress. In today's industrial practice, these domains rely on multiple analyses to obtain and verify critical system properties. Domain differences make the analyses abstract away interactions among themselves, potentially invalidating the results. Specifically, one challenge is to ensure that an analysis is never applied to a model that violates the assumptions of the analysis. Since such violation can originate from the updating of the model by another analysis, analyses must be executed in the correct order. Another challenge is to apply diverse analyses soundly and scalably over models of realistic complexity. To address these challenges, we develop an analysis integration approach that uses contracts to specify dependencies between analyses, determine their correct orders of application, and specify and verify applicability conditions in multiple domains. We implement our approach and demonstrate its effectiveness, scalability, and extensibility through a verification case study for thread and battery cell scheduling.
Keywords: analysis, analysis contracts, battery scheduling, cyber-physical systems, model checking, real-time scheduling, thermal runaway, virtual integration (ID#: 15-5866)
URL: http://doi.acm.org/10.1145/2656045.2656052
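One ingredient of the contracts idea, determining a correct order of analysis application, reduces to dependency analysis over what each analysis reads and writes. The Python sketch below derives an execution order topologically; the analysis names and attribute sets are invented for illustration, and the real approach also specifies and verifies applicability conditions, which this sketch omits.

from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each analysis declares the model attributes it reads and writes (illustrative).
analyses = {
    "frequency_scaling":  {"reads": {"wcet"},             "writes": {"cpu_freq"}},
    "schedulability":     {"reads": {"cpu_freq", "wcet"}, "writes": {"schedule"}},
    "battery_scheduling": {"reads": {"schedule"},         "writes": {"cell_plan"}},
}

# a depends on b whenever a reads something that b writes.
deps = {name: set() for name in analyses}
for a, ca in analyses.items():
    for b, cb in analyses.items():
        if a != b and ca["reads"] & cb["writes"]:
            deps[a].add(b)

print(list(TopologicalSorter(deps).static_order()))
# -> ['frequency_scaling', 'schedulability', 'battery_scheduling']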
Tomas Bures, Petr Hnetynka, Frantisek Plasil. “Strengthening Architectures of Smart CPS by Modeling Them as Runtime Product-Lines.” CBSE '14 Proceedings of the 17th International ACM Sigsoft Symposium on Component-Based Software Engineering, June 2014, Pages 91-96. doi:10.1145/2602458.2602478
Abstract: Smart Cyber-Physical Systems (CPS) are complex distributed decentralized systems of cooperating mobile and stationary devices which closely interact with the physical environment. Although Component-Based Development (CBD) might seem as a viable solution to target the complexity of smart CPS, existing component models scarcely cope with the open-ended and very dynamic nature of smart CPS. This is especially true for design-time modeling using hierarchical explicit architectures, which traditionally provide an excellent means of coping with complexity by providing multiple levels of abstractions and explicitly specifying communication links between component instances. In this paper we propose a modeling method (materialized in the SOFA NG component model) which conveys the benefits of explicit architectures of hierarchical components to the design of smart CPS. Specifically, we base our method on modeling systems as reference architectures of Software Product Lines (SPL). Contrary to traditional SPL, which is a fully design-time approach, we create SPL configurations at runtime. We do so in a decentralized way by translating the configuration process to the process of establishing component ensembles (i.e. dynamic cooperation groups of components) of our DEECo component model.
Keywords: component model, component-based development, cyber-physical systems, software architecture, software components (ID#: 15-5867)
URL: http://doi.acm.org/10.1145/2602458.2602478
Ashish Tiwari, Bruno Dutertre, Dejan Jovanović, Thomas de Candia, Patrick D. Lincoln, John Rushby, Dorsa Sadigh, Sanjit Seshia. “Safety Envelope for Security.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 85-94. doi:10.1145/2566468.2566483
Abstract: We present an approach for detecting sensor spoofing attacks on a cyber-physical system. Our approach consists of two steps. In the first step, we construct a safety envelope of the system. Under nominal conditions (that is, when there are no attacks), the system always stays inside its safety envelope. In the second step, we build an attack detector: a monitor that executes synchronously with the system and raises an alarm whenever the system state falls outside the safety envelope. We synthesize safety envelopes using a modified machine learning procedure applied to data collected from the system when it is not under attack. We present experimental results that show the effectiveness of our approach, and also validate several novel features we introduced in our learning procedure.
Keywords: hybrid systems, invariants, safety envelopes, security (ID#: 15-5868)
URL: http://doi.acm.org/10.1145/2566468.2566483
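The two steps in the abstract can be mimicked with a deliberately simple envelope. In the sketch below (Python with NumPy), per-dimension intervals with a margin, learned from attack-free traces, stand in for the paper's modified machine-learning procedure; the monitor then alarms on any state outside the box.

import numpy as np

def learn_envelope(nominal_traces, margin=0.05):
    """Step 1: fit a padded per-dimension interval envelope from attack-free data."""
    data = np.vstack(nominal_traces)          # rows = states observed under no attack
    lo, hi = data.min(axis=0), data.max(axis=0)
    pad = margin * (hi - lo)
    return lo - pad, hi + pad

def monitor(state, envelope):
    """Step 2: synchronous check; False means the state left the envelope (alarm)."""
    lo, hi = envelope
    return bool(np.all(state >= lo) and np.all(state <= hi))

env = learn_envelope([np.random.default_rng(1).normal(0.0, 1.0, (500, 2))])
print(monitor(np.array([0.3, -0.5]), env))   # inside the envelope -> True
print(monitor(np.array([9.0,  0.0]), env))   # spoofed outlier -> False (alarm)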
Zhi Li, Lu Chen. “System-Level Testing of Cyber-Physical Systems Based on Problem Concerns.” EAST 2014 Proceedings of the 2014 3rd International Workshop on Evidential Assessment of Software Technologies, May 2014, Pages 60-62. doi:10.1145/2627508.2627511
Abstract: In this paper we propose a problem-oriented approach to system-level testing of cyber-physical systems based on Jackson’s notion of problem concerns. Some close associations between problem concerns and potential faults in the problem space are made, which necessitates system-level testing. Finally, a research agenda has been put forward with the goal of building a repository of system faults and mining particular problem concerns for system-level testing.
Keywords: Problem Frames, problem concerns, system-level testing (ID#: 15-5869)
URL: http://doi.acm.org/10.1145/2627508.2627511
Carlos Barreto, Alvaro A. Cárdenas, Nicanor Quijano, Eduardo Mojica-Nava. “CPS: Market Analysis of Attacks Against Demand Response in the Smart Grid.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 136-145. doi:10.1145/2664243.2664284
Abstract: Demand response systems assume an electricity retail-market with strategic electricity consuming agents. The goal in these systems is to design load shaping mechanisms to achieve efficiency of resources and customer satisfaction. Recent research efforts have studied the impact of integrity attacks in simplified versions of the demand response problem, where neither the load consuming agents nor the adversary are strategic. In this paper, we study the impact of integrity attacks considering strategic players (a social planner or a consumer) and a strategic attacker. We identify two types of attackers: (1) a malicious attacker who wants to damage the equipment in the power grid by producing sudden overloads, and (2) a selfish attacker that wants to defraud the system by compromising and then manipulating control (load shaping) signals. We then explore the resiliency of two different demand response systems to these fraudsters and malicious attackers. Our results provide guidelines for system operators deciding which type of demand-response system they want to implement, how to secure them, and directions for detecting these attacks.
Keywords: (not provided) (ID#: 15-5870)
URL: http://doi.acm.org/10.1145/2664243.2664284
Bader Alwasel, Stephen D. Wolthusen. “Reconstruction of Structural Controllability over Erdős-Rényi Graphs via Power Dominating Sets.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 57-60. doi:10.1145/2602087.2602095
Abstract: Controllability, or informally the ability to force a system into a desired state in a finite time or number of steps, is a fundamental problem studied extensively in control systems theory, with structural controllability recently gaining renewed interest. In distributed control systems, possible control relations are limited by the underlying network (graph) transmitting the control signals from a single controller or set of controllers. Attackers may seek to disrupt these relations or compromise intermediate nodes, thereby gaining partial or total control. For a defender to regain full or partial control, it is therefore critical to rapidly reconstruct the control graph as far as possible. Failing to achieve this may allow the attacker to cause further disruptions and may, as in the case of electric power networks, also violate real-time constraints, leading to catastrophic loss of control. However, as this problem is known to be computationally hard, approximations are required, particularly for larger graphs. We therefore propose a reconstruction algorithm for (directed) control graphs of bounded tree width embedded in Erdős-Rényi random graphs, based on recent work by Aazami and Stilp as well as Guo et al.
Keywords: power dominating sets, recovery from attacks, robustness of control systems and networks, structural controllability (ID#: 15-5871)
URL: http://doi.acm.org/10.1145/2602087.2602095
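Power domination itself is easy to state in code: a selected vertex observes itself and its neighbors, and observation then propagates through any observed vertex with exactly one unobserved neighbor. The Python sketch below runs a naive greedy heuristic on a random G(n, p) graph; it illustrates the problem the paper reconstructs solutions for, not the Aazami/Stilp or Guo et al. algorithms it builds on.

import random

def propagate(observed, adj):
    # zero-injection rule: an observed vertex with one unobserved neighbor forces it
    changed = True
    while changed:
        changed = False
        for v in list(observed):
            unobs = [u for u in adj[v] if u not in observed]
            if len(unobs) == 1:
                observed.add(unobs[0]); changed = True
    return observed

def closure(seed, adj):
    obs = set()
    for v in seed:                 # domination step: each PDS vertex sees its neighborhood
        obs.add(v); obs.update(adj[v])
    return propagate(obs, adj)

def greedy_pds(adj):
    pds, obs = set(), set()
    while len(obs) < len(adj):     # add the vertex that observes the most, until done
        best = max(adj, key=lambda v: len(closure(pds | {v}, adj)))
        pds.add(best)
        obs = closure(pds, adj)
    return pds

random.seed(7)
n, p = 40, 0.08                    # Erdős-Rényi G(n, p)
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v); adj[v].add(u)
print("PDS size:", len(greedy_pds(adj)))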
Der-Yeuan Yu, Aanjhan Ranganathan, Thomas Locher, Srdjan Capkun, David Basin. “Short Paper: Detection of GPS Spoofing Attacks in Power Grids.” WiSec '14 Proceedings of the 2014 ACM Conference on Security and Privacy in Wireless & Mobile Networks. July 2014, Pages 99-104. doi:10.1145/2627393.2627398
Abstract: Power companies are deploying a multitude of sensors to monitor the energy grid. Measurements at different locations should be aligned in time to obtain the global state of the grid, and the industry therefore uses GPS as a common clock source. However, these sensors are exposed to GPS time spoofing attacks that cause misaligned aggregated measurements, leading to inaccurate monitoring that affects power stability and line fault contingencies. In this paper, we analyze the resilience of phasor measurement sensors, which record voltages and currents, to GPS spoofing performed by an adversary external to the system. We propose a solution that leverages the characteristics of multiple sensors in the power grid to limit the feasibility of such attacks. In order to increase the robustness of wide-area power grid monitoring, we evaluate mechanisms that allow collaboration among GPS receivers to detect spoofing attacks. We apply multilateration techniques to allow a set of GPS receivers to locate a false GPS signal source. Using simulations, we show that receivers sharing a local clock can locate nearby spoofing adversaries with sufficient confidence.
Keywords: clock synchronization, gps spoofing, power grids (ID#: 15-5872)
URL: http://doi.acm.org/10.1145/2627393.2627398
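The multilateration step can be illustrated with synthetic numbers: receivers at known positions sharing one clock record arrival times of the (spoofed) signal, and a nonlinear least-squares fit recovers the emitter position and emission time. A solution a kilometer or two away, rather than at satellite range, points to a terrestrial spoofer. All values below are invented, and the sketch is not the authors' evaluation setup.

import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0                            # speed of light, m/s
rx = np.array([[0, 0], [4000, 0], [0, 4000], [4000, 4000]], float)  # receiver positions (m)
spoofer = np.array([1200.0, 2500.0])         # ground truth, to be recovered
t0 = 1e-3                                    # unknown emission time (s)
toa = t0 + np.linalg.norm(rx - spoofer, axis=1) / C   # shared-clock arrival times

def resid(theta):
    # theta = (x, y, d0) with d0 = C * emission_time, keeping all unknowns in meters
    p, d0 = theta[:2], theta[2]
    return np.linalg.norm(rx - p, axis=1) + d0 - C * toa

sol = least_squares(resid, x0=[2000.0, 2000.0, 0.0])
print("estimated source:", sol.x[:2])        # ~(1200, 2500): a nearby, terrestrial emitter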
Ayan Banerjee, Sandeep K. S. Gupta. “Model Based Code Generation for Medical Cyber Physical Systems.” MMA '14 Proceedings of the 1st Workshop on Mobile Medical Applications, November 2014, Pages 22-27. doi:10.1145/2676431.2676646
Abstract: Deployment of medical devices on the human body in unsupervised environments makes their operation safety-critical. Software errors such as unbounded memory access or unreachable critical alarms can have life-threatening consequences in these medical cyber-physical systems (MCPSes), where software in medical devices monitors and controls human physiology. Further, implementing complex control strategies in inherently resource-constrained medical devices requires careful evaluation of the runtime characteristics of the software. Such stringent requirements cause errors in manual implementation, which can only be detected by static analysis tools, possibly inflicting the high cost of redesign. To avoid such inefficiencies, this paper proposes an automatic code generator with assurance of safety from errors such as out-of-bound memory access, unreachable code, and race conditions. The proposed code generator was evaluated against manually written code from BSNBench, a software benchmark for sensors, in terms of possible optimizations using conditional X propagation. The generated code was found to be 9.3% more optimized than the BSNBench code. The generated code was also tested using the static analysis tool Frama-C and showed no errors.
Keywords: code synthesis, model based code generation, sensor networks, software errors, static analysis for sensors (ID#: 15-5873)
URL: http://doi.acm.org/10.1145/2676431.2676646
Sabine Theis, Thomas Alexander, Matthias Wille. “The Nexus of Human Factors in Cyber-Physical Systems: Ergonomics of Eyewear for Industrial Applications.” ISWC '14 Adjunct Proceedings of the 2014 ACM International Symposium on Wearable Computers: Adjunct Program, September 2014, Pages 217-220. doi:10.1145/2641248.2645639
Abstract: Smart eyewear devices may serve as advanced interfaces between cyber-physical systems (CPS) and workers by integrating digital information into the visual field. We have addressed ergonomic issues related to the use of a ruggedized head-mounted display (HMD) (Liteye 750A, see-through and look-around mode) and a conventional screen during a half-day working shift (N=60). We found only minor physiological effects of the HMD, resulting in an inflexible head posture and higher muscle activity over time in the left M. splenius capitis, along with low performance in its look-around mode.
Keywords: cyber-physical systems (CPS), wearable computing (ID#: 15-5874)
URL: http://dl.acm.org/citation.cfm?id=2645639
Radha Poovendran. “Passivity Framework for Modeling, Mitigating, and Composing Attacks on Networked Systems.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 29-30. doi:10.1145/2566468.2566470
Abstract: Cyber-physical systems (CPS) consist of a tight coupling between cyber (sensing and computation) and physical (actuation and control) components. As a result of this coupling, CPS are vulnerable to both known and emerging cyber attacks, which can degrade the safety, availability, and reliability of the system. A key step towards guaranteeing CPS operation in the presence of threats is developing quantitative models of attacks and their impact on the system and express them in the language of CPS. Traditionally, such models have been introduced within the framework of formal methods and verification. In this talk, we present a control-theoretic modeling framework. We demonstrate that the control-theoretic approach can capture the adaptive and time-varying strategic interaction between the adversary and the targeted system. Furthermore, control theory provides a common language in which to describe both the physical dynamics of the system, as well as the impact of the attack and defense. In particular, we provide a passivity-based approach for modeling and mitigating jamming and wormhole attacks. We demonstrate that passivity enables composition of multiple attack and defense mechanisms, allowing characterization of the overall performance of the system under attack. Our view is that the formal methods and the control-based approaches are complementary.
Keywords: cyber physical systems, network security, passivity (ID#: 15-5875)
URL: http://doi.acm.org/10.1145/2566468.2566470
Ye Li, Richard West, Eric Missimer. “A Virtualized Separation Kernel for Mixed Criticality Systems.” VEE '14 Proceedings of the 10th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, March 2014, Pages 201-212. doi:10.1145/2674025.2576206
Abstract: Multi- and many-core processors are becoming increasingly popular in embedded systems. Many of these processors now feature hardware virtualization capabilities, such as the ARM Cortex A15, and x86 processors with Intel VT-x or AMD-V support. Hardware virtualization offers opportunities to partition physical resources, including processor cores, memory and I/O devices amongst guest virtual machines. Mixed criticality systems and services can then co-exist on the same platform in separate virtual machines. However, traditional virtual machine systems are too expensive because of the costs of trapping into hypervisors to multiplex and manage machine physical resources on behalf of separate guests. For example, hypervisors are needed to schedule separate VMs on physical processor cores. In this paper, we discuss the design of the Quest-V separation kernel, which partitions services of different criticalities in separate virtual machines, or sandboxes. Each sandbox encapsulates a subset of machine physical resources that it manages without requiring intervention of a hypervisor. Moreover, a hypervisor is not needed for normal operation, except to bootstrap the system and establish communication channels between sandboxes.
Keywords: chip-level distributed system, separation kernel (ID#: 15-5876)
URL: http://doi.acm.org/10.1145/2674025.2576206
David Formby, Sang Shin Jung, John Copeland, Raheem Beyah. “An Empirical Study of TCP Vulnerabilities in Critical Power System Devices.” SEGS '14 Proceedings of the 2nd Workshop on Smart Energy Grid Security, November 2014, Pages 39-44. doi:10.1145/2667190.2667196
Abstract: Implementations of the TCP/IP protocol suite have been patched for decades to reduce the threat of TCP sequence number prediction attacks. TCP, in particular, has been adopted in many devices in the power grid as a transport layer for their applications, since it provides reliability. Even though this threat has been well known for almost three decades, the lesson does not hold in power grid networks: weak TCP sequence number generation can still be found in many devices used throughout the power grid. Although our analysis covers only one substation, we believe this is without loss of generality, given 1) the pervasiveness of the flaws throughout the substation devices and 2) the prominence of the vendors. In this paper, we show how predictable TCP initial sequence numbers (ISNs) still are and how strongly TCP ISN generation correlates with time. We collected power grid network traffic from a live substation for six months, and we measured TCP ISN differences and the time differences between TCP connection establishments. In the live substation, we found that devices from three unique vendors (135 devices, 68%), out of a total of eight vendors (196 devices) running TCP, show strongly predictable patterns of TCP ISN generation.
Keywords: dnp3, power grid, scada, tcp sequence number, tcp sequence prediction (ID#: 15-5877)
URL: http://doi.acm.org/10.1145/2667190.2667196
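The core measurement, that ISN deltas track time deltas, takes only a correlation. The sketch below fabricates traffic from a hypothetical device whose ISN is clock-derived (roughly 250,000 counts per second, an assumed rate, not a figure from the paper) and computes the correlation an analyst would look for; it is not the authors' dataset.

import numpy as np

rng = np.random.default_rng(2)
conn_times = np.sort(rng.uniform(0, 3600, 200))          # connection timestamps (s)
isns = (250_000 * conn_times + rng.normal(0, 500, 200)) % 2**32   # clock-derived ISNs

dt   = np.diff(conn_times)                                # time between connections
disn = np.diff(isns) % 2**32                              # wraparound-safe ISN deltas
r = np.corrcoef(dt, disn)[0, 1]
print(f"correlation(time delta, ISN delta) = {r:.4f}")    # ~1.0 -> ISNs are predictable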
Gerold Hoelzl, Alois Ferscha, Peter Halbmayer, Welma Pereira. “Goal Oriented Smart Watches for Cyber Physical Superorganisms.” UbiComp '14 Adjunct Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, September 2014, Pages 1071-1076. doi:10.1145/2638728.2659395
Abstract: We didn't start the fire. It has been burning ever since technology became integrated into wearable things, which can be traced back to the early 1500s. These earliest forms of wearable technology were manifested as pocket watches. Of course technology changed and evolved, but again it might be the watch, now in the form of a wrist-worn smart watch, that could carve the way towards an always-on, large-scale, planet-spanning body sensor network. The challenge is how to handle this enormous scale of upcoming smart watches and the data they produce. This work highlights a strategy for making use of the massive number of smart watches in building goal-oriented, dynamically evolving network structures that autonomously adapt to changes in the smart watch ecosystem, as cells do in the human organism.
Keywords: (Not provided) (ID#: 15-5878)
URL: http://doi.acm.org/10.1145/2638728.2659395
Zhenqi Huang, Yu Wang, Sayan Mitra, Geir E. Dullerud. “On the Cost of Differential Privacy in Distributed Control Systems.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 105-114. doi:10.1145/2566468.2566474
Abstract: Individuals sharing information can improve the cost or performance of a distributed control system. But sharing may also violate privacy. We develop a general framework for studying the cost of differential privacy in systems where a collection of agents, with coupled dynamics, communicate for sensing their shared environment while pursuing individual preferences. First, we propose a communication strategy that relies on adding carefully chosen random noise to agent states and show that it preserves differential privacy. Of course, the higher the standard deviation of the noise, the higher the cost of privacy. For linear distributed control systems with quadratic cost functions, the standard deviation becomes independent of the number of agents and decays with the maximum eigenvalue of the dynamics matrix. Furthermore, for stable dynamics, the noise to be added is independent of the number of agents as well as the time horizon up to which privacy is desired. Finally, we show that the cost of ε-differential privacy up to time T, for a linear stable system with N agents, is upper bounded by O(T³/(Nε²)).
Keywords: cyber-physical security, differential privacy, distributed control (ID#: 15-5879)
URL: http://doi.acm.org/10.1145/2566468.2566474
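The communication strategy in the abstract can be caricatured in a few lines: each agent shares a Laplace-perturbed copy of its state, and the coupled update uses only the noisy broadcasts. The dynamics, gain, sensitivity bound, and noise scale below are illustrative placeholders, not the paper's derived optimal values.

import numpy as np

rng = np.random.default_rng(3)
N, T, eps = 10, 50, 0.5
sensitivity = 1.0                      # assumed bound on one agent's influence on a report
scale = sensitivity / eps              # Laplace scale for eps-DP per shared report

x = rng.normal(size=N)                 # agent states
A = 0.9                                # stable scalar dynamics (illustrative)
for _ in range(T):
    shared = x + rng.laplace(0.0, scale, size=N)   # privatized broadcasts
    x = A * x + 0.05 * (shared.mean() - x)         # coupling acts only on noisy data

print("final state spread:", x.std())  # the privacy noise shows up as extra control cost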
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Differential Privacy, 2014 Part 1 |
The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. The work here looks at big data and cyber physical systems, as well as theoretic approaches. Citations are for articles published in 2014.
Xiaojing Liao; Formby, D.; Day, C.; Beyah, R.A., “Towards Secure Metering Data Analysis via Distributed Differential Privacy,” Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, vol., no., pp. 780-785, 23-26 June 2014. doi:10.1109/DSN.2014.82
Abstract: The future electrical grid, i.e., smart grid, will utilize appliance-level control to provide sustainable power usage and flexible energy utilization. However, load trace monitoring for appliance-level control poses privacy concerns with inferring private information. In this paper, we introduce a privacy-preserving and fine-grained power load data analysis mechanism for appliance-level peak-time load balance control in the smart grid. The proposed technique provides rigorous provable privacy and an accuracy guarantee based on distributed differential privacy. We simulate the scheme as privacy modules in the smart meter and the concentrator, and evaluate its performance under a real-world power usage dataset, which validates the efficiency and accuracy of the proposed scheme.
Keywords: data analysis; data privacy; domestic appliances; load (electric); power engineering computing; smart meters; smart power grids; appliance-level control; appliance-level peak-time load balance control; concentrator; distributed differential privacy; electrical grid; fine-grained power load data analysis mechanism; flexible energy utilization; load trace monitoring; metering data analysis; performance evaluation; privacy-preserving load data analysis mechanism; smart grid; smart meter; sustainable power usage; Accuracy; Home appliances; Noise; Power demand; Privacy; Smart grids; Smart meters (ID#: 15-5909)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903641&isnumber=6903544
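Distributed differential privacy for an aggregate is often built on the infinite divisibility of the Laplace distribution: if each of n meters adds the difference of two Gamma(1/n, b) draws to its reading, the total noise at the concentrator is distributed exactly as Laplace(b), enough for ε-differential privacy of the sum with b = sensitivity/ε. The sketch below illustrates that standard construction; it is not necessarily the exact mechanism of this paper.

import numpy as np

rng = np.random.default_rng(4)
n, eps, sensitivity = 100, 1.0, 1.0    # meters, privacy budget, max change from one reading
b = sensitivity / eps                  # target Laplace scale at the concentrator

readings = rng.uniform(0.0, 1.0, n)    # per-household loads (toy values, kW)
# Each meter adds its own small noise share; no single party holds the full noise.
shares = rng.gamma(1.0 / n, b, n) - rng.gamma(1.0 / n, b, n)
noisy_sum = np.sum(readings + shares)  # the concentrator only ever sees the noisy total

print(f"true sum {readings.sum():.2f}, private sum {noisy_sum:.2f}")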
Ren Hongde; Wang Shuo; Li Hui, “Differential Privacy Data Aggregation Optimizing Method and Application to Data Visualization,” Electronics, Computer and Applications, 2014 IEEE Workshop on, vol., no., pp. 54-58, 8-9 May 2014. doi:10.1109/IWECA.2014.6845555
Abstract: This article explores the challenges of data privacy in the big data era, with specific focus on the differential privacy of social media data and its geospatial realization within a Cloud-based research environment. Using the differential privacy method, this paper distorts the data by adding noise to protect data privacy. Furthermore, the article presents the IDP k-means Aggregation Optimizing Method to decrease the overlap and superposition in massive data visualization. Finally, the paper combines the IDP k-means Aggregation Optimizing Method with the differential privacy method to protect data privacy. The outcome of this research is a set of underpinning formal models of differential privacy that reflect the geospatial-tool challenges posed by location-based information, and the implementation of a suite of Cloud-based tools illustrating how these tools support an extensive range of data privacy demands.
Keywords: Big Data; cloud computing; data privacy; data visualisation; IDP k-means aggregation optimizing method; cloud-based research environment; differential privacy data aggregation; differential privacy method; formal models; geospatial realization; geospatial tools; location-based information; social media data; Algorithm design and analysis; Visualization; Data Visualization; aggregation optimizing; differential privacy; massive data (ID#: 15-5910)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845555&isnumber=6845536
Barthe, G.; Gaboardi, M.; Gallego Arias, E.J.; Hsu, J.; Kunz, C.; Strub, P.-Y., “Proving Differential Privacy in Hoare Logic,” Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, vol., no., pp. 411-424, 19-22 July 2014. doi:10.1109/CSF.2014.36
Abstract: Differential privacy is a rigorous, worst-case notion of privacy-preserving computation. Informally, a probabilistic program is differentially private if the participation of a single individual in the input database has a limited effect on the program's distribution on outputs. More technically, differential privacy is a quantitative 2-safety property that bounds the distance between the output distributions of a probabilistic program on adjacent inputs. Like many 2-safety properties, differential privacy lies outside the scope of traditional verification techniques. Existing approaches to enforce privacy are based on intricate, non-conventional type systems, or customized relational logics. These approaches are difficult to implement and often cumbersome to use. We present an alternative approach that verifies differential privacy by standard, non-relational reasoning on non-probabilistic programs. Our approach transforms a probabilistic program into a non-probabilistic program which simulates two executions of the original program. We prove that if the target program is correct with respect to a Hoare specification, then the original probabilistic program is differentially private. We provide a variety of examples from the differential privacy literature to demonstrate the utility of our approach. Finally, we compare our approach with existing verification techniques for privacy.
Keywords: data privacy; formal logic; Hoare logic; Hoare specification; differential privacy literature; many 2-safety properties; nonprobabilistic programs; nonrelational reasoning; privacy-preserving computation; quantitative 2-safety property; verification techniques; worst-case notion; Data privacy; Databases; Privacy; Probabilistic logic; Safety; Standards; Synchronization; differential privacy; hoare logic; privacy; probabilistic hoare logic; relational hoare logic; verification (ID#: 15-5911)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957126&isnumber=6957090
Yilin Shen; Hongxia Jin, “Privacy-Preserving Personalized Recommendation: An Instance-Based Approach via Differential Privacy,” Data Mining (ICDM), 2014 IEEE International Conference on, vol., no., pp. 540-549, 14-17 Dec. 2014. doi:10.1109/ICDM.2014.140
Abstract: Recommender systems are becoming increasingly popular and widely applied. The release of users' private data is required to provide users with accurate recommendations, yet this has been shown to put users at risk. Unfortunately, existing privacy-preserving methods are either developed under trusted-server settings with impractical private recommender systems or lack strong privacy guarantees. In this paper, we develop the first lightweight and provably private solution for personalized recommendation under untrusted-server settings. In this novel setting, users' private data is obfuscated before leaving their private devices, giving users greater control over their data and service providers less responsibility for privacy protections. More importantly, our approach enables existing recommender systems (with no changes needed) to directly use perturbed data, rendering our solution very desirable in practice. We develop our data perturbation approach on differential privacy, the state-of-the-art privacy model with lightweight computation and strong but provable privacy guarantees. In order to achieve useful and feasible perturbations, we first design a novel relaxed admissible mechanism enabling the injection of flexible instance-based noise. Using this novel mechanism, our data perturbation approach, incorporating noise calibration and learning techniques, obtains perturbed user data with both theoretical privacy and utility guarantees. Our empirical evaluation on large-scale real-world datasets not only shows its high recommendation accuracy but also illustrates the negligible computational overhead on both personal computers and smart phones. As such, we are able to meet two contradictory goals: privacy preservation and recommendation accuracy. This practical technology helps gain user adoption with strong privacy protection and benefits companies with high-quality personalized services on perturbed user data.
Keywords: calibration; data privacy; personal computing; recommender systems; trusted computing; computational overhead; data perturbation; differential privacy; high quality personalized services; noise calibration; perturbed user data; privacy preservation; privacy protections; privacy-preserving methods; privacy-preserving personalized recommendation; private recommender systems; provable privacy guarantees; recommendation accuracy; smart phones; strong privacy protection; theoretical privacy; untrusted server settings; user adoption; user private data; utility guarantees; Aggregates; Data privacy; Noise; Privacy; Sensitivity; Servers; Vectors; Data Perturbation; Differential Privacy; Learning and Optimization; Probabilistic Analysis; Recommender System (ID#: 15-5912)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023371&isnumber=7023305
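In the untrusted-server setting, perturbation must happen on the device. As a stand-in for the paper's relaxed admissible mechanism, the sketch below uses classic randomized response on binary preference bits and shows the server-side debiasing; parameters and data are synthetic.

import numpy as np

rng = np.random.default_rng(5)
eps = 1.0
p = np.exp(eps) / (np.exp(eps) + 1)        # keep the true bit with probability p

true_bits = rng.integers(0, 2, 10_000)     # e.g. "liked this item" flags, one per user
keep = rng.random(10_000) < p
reported = np.where(keep, true_bits, 1 - true_bits)   # perturbed before leaving the device

# E[reported] = (2p - 1) * q + (1 - p), so the server can debias the aggregate:
est = (reported.mean() - (1 - p)) / (2 * p - 1)
print(f"true rate {true_bits.mean():.3f}, LDP estimate {est:.3f}")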
Jing Zhao; Taeho Jung; Yu Wang; Xiangyang Li, “Achieving Differential Privacy of Data Disclosure in the Smart Grid,” INFOCOM, 2014 Proceedings IEEE, vol., no., pp. 504-512, April 27 2014 - May 2 2014. doi:10.1109/INFOCOM.2014.6847974
Abstract: The smart grid introduces new privacy implications for individuals and their families due to fine-grained usage data collection. For example, smart metering data could reveal highly accurate real-time home appliance energy loads, which may be used to infer the human activities inside the house. One effective way to hide actual appliance loads from outsiders is Battery-based Load Hiding (BLH), in which a battery is installed for each household and smartly controlled to store and supply power to the appliances. Even though this technique has been demonstrated useful and can prevent certain types of attacks, none of the existing BLH works provides provably privacy-preserving mechanisms. In this paper, we investigate the privacy of smart meters via differential privacy. We first analyze the existing BLH methods and show that they cannot guarantee differential privacy in the BLH problem. We then propose a novel randomized BLH algorithm which successfully assures differential privacy, and further propose the Multitasking-BLH-Exp3 algorithm, which adaptively updates the BLH algorithm based on the context and the constraints. Results from extensive simulations show the efficiency and effectiveness of the proposed method over existing BLH methods.
Keywords: data acquisition; domestic appliances; smart meters; smart power grids; BLH methods; battery-based load hiding; data disclosure; fine-grained usage data collection; multitasking-BLH-Exp3 algorithm; privacy-preserving mechanisms; real-time home appliance energy load; smart grid; smart metering data; smart meters via differential privacy; Batteries; Data privacy; Energy consumption; Home appliances; Noise; Privacy; Smart meters; Data Disclosure; Differential Privacy; Smart Grid; Smart Meter (ID#: 15-5913)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847974&isnumber=6847911
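Battery-based load hiding is mechanically simple even before any privacy analysis: the battery absorbs or supplies the difference between the true appliance load and a masked, grid-visible level. The toy loop below quantizes the visible load to fixed steps subject to battery limits; the differentially private (randomized) choice of target level, which is the paper's actual contribution, is not reproduced here, and all parameters are illustrative.

import numpy as np

rng = np.random.default_rng(6)
quantum, capacity = 0.5, 2.0                 # kW step and kWh battery size (toy values)
soc = 1.0                                    # battery state of charge, kWh
demand = np.clip(rng.normal(1.0, 0.4, 96), 0, None)   # 15-minute appliance loads, kW

visible = []
for d in demand:
    target = quantum * np.round(d / quantum) # quantized grid-visible level
    delta = (target - d) * 0.25              # energy into (+) or out of (-) battery, kWh
    if 0.0 <= soc + delta <= capacity:       # respect battery limits
        soc += delta
        visible.append(target)
    else:
        visible.append(d)                    # battery saturated: the true load leaks

print("distinct visible levels:", sorted(set(np.round(visible, 2)))[:6])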
Hsu, J.; Gaboardi, M.; Haeberlen, A.; Khanna, S.; Narayan, A.; Pierce, B.C.; Roth, A., “Differential Privacy: An Economic Method for Choosing Epsilon,” Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, vol., no., pp. 398-410, 19-22 July 2014. doi:10.1109/CSF.2014.35
Abstract: Differential privacy is becoming a gold standard notion of privacy; it offers a guaranteed bound on loss of privacy due to release of query results, even under worst-case assumptions. The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. However, the question of when differential privacy works in practice has received relatively little attention. In particular, there is still no rigorous method for choosing the key parameter ε, which controls the crucial tradeoff between the strength of the privacy guarantee and the accuracy of the published results. In this paper, we examine the role of these parameters in concrete applications, identifying the key considerations that must be addressed when choosing specific values. This choice requires balancing the interests of two parties with conflicting objectives: the data analyst, who wishes to learn something about the data, and the prospective participant, who must decide whether to allow their data to be included in the analysis. We propose a simple model that expresses this balance as formulas over a handful of parameters, and we use our model to choose ε on a series of simple statistical studies. We also explore a surprising insight: in some circumstances, a differentially private study can be more accurate than a non-private study for the same cost, under our model. Finally, we discuss the simplifying assumptions in our model and outline a research agenda for possible refinements.
Keywords: data analysis; data privacy; Epsilon; data analyst; differential privacy; differentially private algorithms; economic method; privacy guarantee; Accuracy; Analytical models; Cost function; Data models; Data privacy; Databases; Privacy; Differential Privacy (ID#: 15-5914)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957125&isnumber=6957090
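As background for this and the surrounding citations, the guarantee that the parameter ε quantifies is the standard differential-privacy inequality: for every pair of databases D and D′ differing in one individual's record, and every set S of possible outputs of the mechanism M,

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S].

Smaller ε means the output distribution barely changes when any one record changes, hence stronger privacy and, typically, noisier published results; that tension is exactly the trade-off the paper's economic model prices.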
Weina Wang; Lei Ying; Junshan Zhang, “On the Relation Between Identifiability, Differential Privacy, and Mutual-Information Privacy,” Communication, Control, and Computing (Allerton), 2014 52nd Annual Allerton Conference on, vol., no., pp. 1086, 1092, Sept. 30 2014 - Oct. 3 2014. doi:10.1109/ALLERTON.2014.7028576
Abstract: This paper investigates the relation between three different notions of privacy: identifiability, differential privacy and mutual-information privacy. Under a privacy-distortion framework, where the distortion is defined to be the expected Hamming distance between the input and output databases, we establish some fundamental connections between these three privacy notions. Given a maximum distortion D, let ε*i(D) denote the smallest (best) identifiability level, and ε*d(D) the smallest differential privacy level. Then we characterize ε*i(D) and ε*d(D), and prove that ε*i(D) - εX ≤ ε*d(D) ≤ ε*i(D) for D in some range, where εX is a constant depending on the distribution of the original database X, and diminishes to zero when the distribution of X is uniform. Furthermore, we show that identifiability and mutual-information privacy are consistent in the sense that given a maximum distortion D in some range, there is a mechanism that optimizes the identifiability level and also achieves the best mutual-information privacy.
Keywords: data privacy; database management systems; Hamming distance; differential privacy level; identifiability level; input databases; maximum distortion; mutual-information privacy; output databases; privacy-distortion framework; Data analysis; Data privacy; Databases; Mutual information; Privacy; Random variables (ID#: 15-5915)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028576&isnumber=7028426
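Read as a worked bound, the abstract's inequality

ε*i(D) − εX ≤ ε*d(D) ≤ ε*i(D)

says that the best achievable differential-privacy level ε*d(D) tracks the best identifiability level ε*i(D) to within the additive constant εX; since εX vanishes for a uniform database distribution, the two privacy notions coincide exactly in that case.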
Shrivastva, K.M.P.; Rizvi, M.A.; Singh, S., “Big Data Privacy Based on Differential Privacy a Hope for Big Data,” Computational Intelligence and Communication Networks (CICN), 2014 International Conference on, vol., no., pp. 776, 781, 14-16 Nov. 2014. doi:10.1109/CICN.2014.167
Abstract: In the information age, the data generated by electronic, information, and communication technology devices and processes (sensors, cloud services, individual archives, social networks, internet activities, and enterprise systems) are growing exponentially. The most challenging issue is how to effectively manage this large and heterogeneous data; "big data" is the term coined for it. Due to its extraordinary scale, privacy and security are among the critical challenges of big data, and at every stage of managing it there is a chance that privacy may be disclosed. Many techniques have been suggested and implemented for privacy preservation of large data sets, such as anonymization-based and encryption-based approaches, but due to the characteristics of big data (large volume, high velocity, and unstructured data) none of these techniques is fully suitable. In this paper we analyze and discuss in depth how an existing approach, differential privacy, is suitable for big data. We first discuss differential privacy and then analyze its suitability for big data.
Keywords: Big Data; cryptography; data privacy; anonymization based data set; big data privacy; big data security; differential privacy; electronic devices; encryption based data set; information age; information and communication technology devices; privacy preservation; Big data; Data privacy; Databases; Encryption; Noise; Privacy; Anonymization; Big data privacy; Differential privacy; Privacy approaches (ID#: 15-5916)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7065587&isnumber=7065338
Quan Geng; Viswanath, P., “The Optimal Mechanism in Differential Privacy,” Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 2371, 2375, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6875258
Abstract: Differential privacy is a framework to quantify to what extent individual privacy in a statistical database is preserved while releasing useful aggregate information about the database. In this work we study the fundamental tradeoff between privacy and utility in differential privacy. We derive the optimal ε-differentially private mechanism for single real-valued query function under a very general utility-maximization (or cost-minimization) framework. The class of noise probability distributions in the optimal mechanism has staircase-shaped probability density functions, which can be viewed as a geometric mixture of uniform probability distributions. In the context of ℓ1 and ℓ2 utility functions, we show that the standard Laplacian mechanism, which has been widely used in the literature, is asymptotically optimal in the high privacy regime, while in the low privacy regime, the staircase mechanism performs exponentially better than the Laplacian mechanism. We conclude that the gains of the staircase mechanism are more pronounced in the moderate-low privacy regime.
Keywords: Laplace equations; minimisation; statistical databases; statistical distributions; ℓ1 utility functions; ℓ2 utility functions; Laplacian mechanism; aggregate information; cost-minimization framework; differential privacy; geometric mixture; high privacy regime; low privacy regime; noise probability distributions; optimal ε-differentially private mechanism; real-valued query function; staircase-shaped probability density functions; statistical database; uniform probability distributions; utility-maximization framework; Data privacy; Databases; Laplace equations; Noise; Privacy; Probability density function; Probability distribution (ID#: 15-5917)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875258&isnumber=6874773
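The Laplacian mechanism that the paper uses as its baseline is simple to state in code. A minimal sketch, assuming numpy and a real-valued query with known L1 sensitivity; the staircase mechanism the paper advocates for the low-privacy regime replaces the Laplace density with a staircase-shaped one:

import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    # Noise of scale sensitivity/epsilon gives epsilon-differential privacy
    # for a real-valued query with the given L1 sensitivity.
    rng = rng or np.random.default_rng()
    return true_answer + rng.laplace(0.0, sensitivity / epsilon)

# Example: privately release a count (sensitivity 1) with epsilon = 0.5.
noisy_count = laplace_mechanism(1024, sensitivity=1.0, epsilon=0.5)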
Zuxing Li; Oechtering, T.J., “Differential Privacy in Parallel Distributed Bayesian Detections,” Information Fusion (FUSION), 2014 17th International Conference on, vol., no., pp. 1, 7, 7-10 July 2014. doi:(not provided)
Abstract: In this paper, the differential privacy problem in parallel distributed detections is studied in the Bayesian formulation. The privacy risk is evaluated by the minimum detection cost for the fusion node to infer the private random phenomenon. Different from the privacy-unconstrained distributed Bayesian detection problem, the optimal operation point of a remote decision maker can be on the boundary of the privacy-unconstrained operation region or in the intersection of privacy constraint hyperplanes. Therefore, for a remote decision maker in the optimal privacy-constrained distributed detection design, it is sufficient to consider a deterministic linear likelihood combination test or a randomized decision strategy of two linear likelihood combination tests which achieves the optimal operation point in each case. Such an insight indicates that the existing algorithm can be reused by incorporating the privacy constraint. The trade-off between detection and privacy metrics will be illustrated in a numerical example.
Keywords: Bayes methods; data privacy; decision making; deterministic algorithms; parallel algorithms; random processes; Bayesian formulation; deterministic linear likelihood combination test; differential privacy problem; fusion node; minimum detection cost; optimal privacy-constrained distributed detection design; parallel distributed detections; privacy constraint hyperplanes; privacy risk; privacy-unconstrained distributed Bayesian detection problem; privacy-unconstrained operation region; private random phenomenon; randomized decision strategy; remote decision maker; Data privacy; Integrated circuits; Measurement; Optimization; Phase frequency detector; Privacy (ID#: 15-5918)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916169&isnumber=6915967
Yu Wang; Zhenqi Huang; Mitra, S.; Dullerud, G.E., “Entropy-Minimizing Mechanism for Differential Privacy of Discrete-Time Linear Feedback Systems,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 2130, 2135, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039713
Abstract: The concept of differential privacy stems from the study of private queries of datasets. In this work, we apply this concept to metric spaces to study a mechanism that randomizes a deterministic query by adding mean-zero noise to keep differential privacy. For one-shot queries, we show that ε-differential privacy of an n-dimensional input implies a lower bound n - n ln(ε/2) on the entropy of the randomized output, and this lower bound is achieved by adding Laplacian noise. We then consider the ε-differential privacy of a discrete-time linear feedback system in which noise is added to the system output at each time step. The adversary estimates the system states from the output history. We show that, to keep the system ε-differentially private, the output entropy is bounded below, and this lower bound is achieved by an explicit mechanism.
Keywords: discrete time systems; feedback; linear systems; ε-differential privacy; Laplacian noise; deterministic query; discrete-time linear feedback systems; entropy-minimizing mechanism; mean-zero noise; metric space; n-dimensional input; one-shot query; private query; randomized output; system output; system states; Entropy; History; Measurement; Noise; Privacy; Probability distribution; Random variables (ID#: 15-5919)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039713&isnumber=7039338
Shang Shang; Wang, T.; Cuff, P.; Kulkarni, S., “The Application of Differential Privacy for Rank Aggregation: Privacy and Accuracy,” Information Fusion (FUSION), 2014 17th International Conference on, vol., no., pp. 1, 7, 7-10 July 2014. doi:(not provided)
Abstract: The potential risk of privacy leakage prevents users from sharing their honest opinions on social platforms. This paper addresses the problem of privacy preservation if the query returns the histogram of rankings. The framework of differential privacy is applied to rank aggregation. The error probability of the aggregated ranking is analyzed as a result of noise added in order to achieve differential privacy. Upper bounds on the error rates for any positional ranking rule are derived under the assumption that profiles are uniformly distributed. Simulation results are provided to validate the probabilistic analysis.
Keywords: data privacy; probability; social networking (online); differential privacy; error probability; honest opinions; positional ranking rule; privacy leakage; privacy preservation; probabilistic analysis; rank aggregation; ranking histogram; social platforms; Algorithm design and analysis; Error analysis; Histograms; Noise; Privacy; Upper bound; Vectors; Accuracy; Privacy; Rank Aggregation (ID#: 15-5920)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916096&isnumber=6915967
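A minimal sketch of the paper's setting, assuming numpy: release a Laplace-noised histogram of rankings and aggregate with a positional rule. The Borda-style weights and the conservative sensitivity bound of 2·n_items (one voter's changed ranking moves at most 2·n_items histogram cells by one) are illustrative assumptions, not the paper's exact construction.

import numpy as np

def private_positional_aggregate(rankings, n_items, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    # hist[i, p] counts how many voters ranked item i at position p.
    hist = np.zeros((n_items, n_items))
    for r in rankings:                      # r is a permutation of range(n_items)
        for pos, item in enumerate(r):
            hist[item, pos] += 1
    # One voter's ranking touches n_items cells, so swapping a voter moves
    # at most 2 * n_items cells by one (a conservative L1 sensitivity bound).
    noisy = hist + rng.laplace(0.0, 2.0 * n_items / epsilon, size=hist.shape)
    weights = np.arange(n_items, 0, -1)     # Borda-style positional weights
    return np.argsort(-(noisy @ weights))   # item indices, best first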
Sarwate, A.D.; Sankar, L., “A Rate-Distortion Perspective on Local Differential Privacy,” Communication, Control, and Computing (Allerton), 2014 52nd Annual Allerton Conference on, vol., no., pp. 903, 908, Sept. 30 2014 - Oct. 3 2014. doi:10.1109/ALLERTON.2014.7028550
Abstract: Local differential privacy is a model for privacy in which an untrusted statistician collects data from individuals who mask their data before revealing it. While randomized response has been shown to be a good strategy when the statistician's goal is to estimate a parameter of the population, we consider instead the problem of locally private data publishing, in which the data collector must publish a version of the data it has collected. We model utility by a distortion measure and consider privacy mechanisms that act via a memoryless channel operating on the data. If we consider the source distribution to be unknown but within a class of distributions, we arrive at a robust rate-distortion model for the privacy-distortion tradeoff. We show that under Hamming distortions, the differential privacy risk is lower bounded for all nontrivial distortions, and that the lower bound grows logarithmically in the alphabet size.
Keywords: data privacy; statistical analysis; Hamming distortion; local differential privacy risk; locally private data publishing; memoryless channel; privacy mechanism; privacy-distortion tradeoff; rate-distortion; Data models; Data privacy; Databases; Distortion measurement; Mutual information; Privacy; Rate-distortion (ID#: 15-5921)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028550&isnumber=7028426
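Randomized response, which the abstract mentions as the classical local-privacy strategy, is easy to sketch for a single bit. A minimal Python illustration with illustrative names; the estimator debiases the noisy reports by inverting the known flipping probability:

import math, random

def randomized_response(bit, epsilon):
    # Report the true bit with probability e^eps / (1 + e^eps); flip otherwise.
    # This gives epsilon-local differential privacy for one binary value.
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit

def debiased_proportion(reports, epsilon):
    # Unbiased estimate of the true share of 1s from the noisy reports.
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)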
Qihong Yu; Ruonan Rao, “An Improved Approach of Data Integration Based on Differential Privacy,” Progress in Informatics and Computing (PIC), 2014 International Conference on, vol., no., pp. 395, 399, 16-18 May 2014. doi:10.1109/PIC.2014.6972364
Abstract: Multiset operations and data transmission are the key operations for privacy-preserving data integration because they involve interaction among participants. Building on existing research, this paper proposes an approach that combines anonymous multiset operations with distributed noise generation and applies it to data integration. Analysis shows that the improved approach provides security for data integration and has lower overhead than existing approaches.
Keywords: data integration; data privacy; anonymous multiset operation; data integration approach; data transmission; differential privacy; distributed noise generation; privacy preserving; Data integration; Data privacy; Data warehouses; Distributed databases; Encryption; Noise; data integration; multiset operation; noise generation (ID#: 15-5922)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6972364&isnumber=6972283
Niknami, N.; Abadi, M.; Deldar, F., “SpatialPDP: A Personalized Differentially Private Mechanism for Range Counting Queries over Spatial Databases,” Computer and Knowledge Engineering (ICCKE), 2014 4th International eConference on, vol., no., pp. 709, 715, 29-30 Oct. 2014. doi:10.1109/ICCKE.2014.6993414
Abstract: Spatial databases are rapidly growing due to the large amount of geometric data obtained from geographic information systems, geomarketing, traffic control, and so on. Range counting queries are among the most common queries over spatial databases. They allow us to describe a region in a geometric space and then retrieve some statistics about geometric objects falling within it. Quadtree-based spatial indices are usually used by spatial databases to speed up range counting queries. Privacy protection is a major concern when answering these queries. The reason is that an adversary observing changes in query answers could infer the presence or absence of a particular geometric object in a spatial database. Differential privacy addresses this problem by guaranteeing that the presence or absence of a geometric object has little effect on the query answers. However, the existing differentially private algorithms for spatial databases ignore the fact that different subregions of a geometric space may require different amounts of privacy protection. As a result, the same privacy budget is applied to all subregions, leading to a significant increase in error for subregions with low privacy protection requirements or a major reduction in privacy for subregions with high privacy protection requirements. In this paper, we address these shortcomings by presenting SpatialPDP, a personalized differentially private mechanism for range counting queries over spatial databases. It uses a so-called personalized geometric budgeting strategy to allocate different privacy budgets to subregions with different privacy protection requirements. Our experimental results show that SpatialPDP can achieve a reasonable trade-off between error measure and differential privacy, in accordance with the privacy requirements of different subregions.
Keywords: data privacy; quadtrees; question answering (information retrieval); visual databases; SpatialPDP; differential privacy; error measure; geographic information system; geomarketing; geometric data; geometric objects; personalized differentially private mechanism; personalized geometric budgeting strategy; privacy budget; privacy protection requirement; private algorithms; quadtree-based spatial indices; query answers; range counting query; spatial databases; traffic control; Data privacy; Measurement uncertainty; Noise; Noise measurement; Privacy; Spatial databases; personalized geometric budgeting; personalized privacy; spatial database (ID#: 15-5923)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6993414&isnumber=6993332
Hill, R.; Hansen, M.; Janssen, E.; Sanders, S.A.; Heiman, J.R.; Li Xiong, “A Quantitative Approach for Evaluating the Utility of a Differentially Private Behavioral Science Dataset,” Healthcare Informatics (ICHI), 2014 IEEE International Conference on, vol., no., pp. 276, 284, 15-17 Sept. 2014. doi:10.1109/ICHI.2014.45
Abstract: Social scientists who collect large amounts of medical data value the privacy of their survey participants. As they follow participants through longitudinal studies, they develop unique profiles of these individuals. A growing challenge for these researchers is to maintain the privacy of their study participants while sharing their data to facilitate research. Differential privacy is a new mechanism which promises improved privacy guarantees for statistical databases. We evaluate the utility of a differentially private dataset. Our results align with the theory of differential privacy and show that when the number of records in the database is sufficiently larger than the number of cells covered by a database query, the number of statistical tests with results close to those performed on the original data increases.
Keywords: data privacy; medical information systems; statistical analysis; database query; differential privacy; medical data; private behavioral science dataset; statistical database; statistical test; Data privacy; Databases; Histograms; Logistics; Noise; Privacy; Sensitivity; Behavioral Science; Data Privacy; Differential Privacy
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7052500&isnumber=7052453
Le Ny, J.; Touati, A.; Pappas, G.J., “Real-Time Privacy-Preserving Model-Based Estimation of Traffic Flows,” Cyber-Physical Systems (ICCPS), 2014 ACM/IEEE International Conference on, vol., no., pp. 92, 102, 14-17 April 2014. doi:10.1109/ICCPS.2014.6843714
Abstract: Road traffic information systems rely on data streams provided by various sensors, e.g., loop detectors, cameras, or GPS, containing potentially sensitive location information about private users. This paper presents an approach to enhance real-time traffic state estimators using fixed sensors with a privacy-preserving scheme providing formal guarantees to the individuals traveling on the road network. Namely, our system implements differential privacy, a strong notion of privacy that protects users against adversaries with arbitrary side information. In contrast to previous privacy-preserving schemes for trajectory data and location-based services, our procedure relies heavily on a macroscopic hydrodynamic model of the aggregated traffic in order to limit the impact on estimation performance of the privacy-preserving mechanism. The practicality of the approach is illustrated with a differentially private reconstruction of a day of traffic on a section of I-880 North in California from raw single-loop detector data.
Keywords: data privacy; real-time systems; road traffic; state estimation; traffic information systems; data streams; real-time privacy-preserving model-based estimation; real-time traffic state estimators; road network; road traffic information systems; traffic flow estimation; Data privacy; Density measurement; Detectors; Privacy; Roads; Vehicles; Differential privacy; intelligent transportation systems; privacy-preserving data assimilation (ID#: 15-5924)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843714&isnumber=6843703
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Differential Privacy, 2014 Part 2 |
The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. The work here looks at big data and cyber physical systems, as well as theoretic approaches. Citations are for articles published in 2014.
Anjum, Adeel; Anjum, Adnan, “Differentially Private K-Anonymity,” Frontiers of Information Technology (FIT), 2014 12th International Conference on, vol., no., pp. 153, 158, 17-19 Dec. 2014. doi:10.1109/FIT.2014.37
Abstract: Research in privacy-preserving data publication can be broadly categorized into two classes. Syntactic privacy definitions have been a focus of the research community for many years, with much research dedicated to developing algorithms and notions of syntactic privacy that thwart re-identification attacks. Sweeney and Samarati proposed a well-known syntactic privacy definition, coined K-anonymity, for thwarting linking attacks that use quasi-identifiers. Thanks to its conceptual simplicity, K-anonymity has been widely implemented as a practicable definition of syntactic privacy, and owing to algorithmic advances for K-anonymous versions of micro-data, it has attained considerable popularity. Semantic privacy definitions do not take adversarial background knowledge into account but instead force the sanitization algorithms (mechanisms) to satisfy a strong semantic property by way of random processes. Though semantic privacy definitions are theoretically immune to any kind of adversarial attack, their applicability in real-life scenarios has come under criticism. In order to make the semantic definitions more practical, the research community has focused its attention on combining the practicality of syntactic privacy with the strength of semantic approaches [7] so that we may in the near future benefit from both research tracks.
Keywords: Data models; Data privacy; Noise measurement; Partitioning algorithms; Privacy; Semantics; Syntactics; Data Privacy; Differential Privacy; K-anonymity; Semantic Privacy; Syntactic Privacy (ID#: 15-6083)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7118391&isnumber=7118353
Zhigang Zhou; Hongli Zhang; Qiang Zhang; Yang Xu; Panpan Li, “Privacy-Preserving Granular Data Retrieval Indexes for Outsourced Cloud Data,” Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 601, 606, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7036873
Abstract: Storage as a service has become an important paradigm in cloud computing for its great flexibility and economic savings. Since data owners no longer physically possess the storage of their data, it also brings many new challenges for data security and management. Several techniques have been investigated for enabling such services, including encryption and fine-grained access control. However, these techniques only answer the “Yes or No” question of whether a user has permission to access the corresponding data. In this paper, we investigate how to provide different granular information views to different users. Our mechanism first constructs the relationship between keywords and data files based on a Galois connection. We then build data retrieval indexes with a variable threshold, so that granular data retrieval service can be supported by adjusting the threshold for different users. Moreover, to prevent privacy disclosure, we propose a differentially private release scheme based on the proposed index technique. We prove the privacy-preserving guarantee of the proposed mechanism, and extensive experiments further demonstrate its validity.
Keywords: cloud computing; data privacy; granular computing; information retrieval; outsourcing; Galois connection; access permissions; data files; data management; data owners; data security; differentially private release scheme; granular data retrieval service; granular information; outsourced cloud data; privacy disclosure prevention; privacy-preserving granular data retrieval indexes; privacy-preserving guarantee; storage-as-a-service; variable threshold; Access control; Cloud computing; Data privacy; Indexes; Lattices; Privacy; cloud computing; data indexes; differential privacy; fuzzy retrieval; granular data retrieval (ID#: 15-6084)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036873&isnumber=7036769
Saravanan, M.; Thoufeeq, A.M.; Akshaya, S.; Jayasre Manchari, V.L., “Exploring New Privacy Approaches in a Scalable Classification Framework,” Data Science and Advanced Analytics (DSAA), 2014 International Conference on, vol., no., pp. 209, 215, Oct. 30 2014 - Nov. 1 2014. doi:10.1109/DSAA.2014.7058075
Abstract: Recent advancements in Information and Communication Technologies (ICT) enable many organizations to collect, store, and control massive amounts of various types of details about individuals from their regular transactions (credit card, mobile phone, smart meter, etc.). While using this wealth of information for personalized recommendations provides enormous opportunities for data mining (or machine learning) tasks, there is a need to address the challenge of preserving individuals' privacy while running predictive analytics on big data. Privacy Preserving Data Mining (PPDM) in these applications is particularly challenging, because it involves processing large volumes of complex, heterogeneous, and dynamic details of individuals. Ensuring that privacy-protected data remains useful in intended applications, such as building accurate data mining models or enabling complex analytic tasks, is essential. Differential privacy has been tried with a few of the PPDM methods and is immune to attacks with auxiliary information. In this paper, we propose a distributed implementation based on the MapReduce computing model for the C4.5 decision tree algorithm and run extensive experiments on three different datasets using a Hadoop cluster. The novelty of this work is to experiment with two different privacy methods: the first uses perturbed data in a decision tree algorithm for prediction in privacy-preserving data sharing, and the second applies raw data to a privacy-preserving decision tree algorithm for private data analysis. In addition, we propose a combination of the methods as a hybrid technique to maintain accuracy (utility) and privacy at an acceptable level. The proposed privacy approaches have two potential benefits in the context of data mining tasks: they allow service providers to outsource data mining tasks without exposing the raw data, and they allow data providers to share data access with third parties while limiting privacy risks.
Keywords: data mining; data privacy; decision trees; learning (artificial intelligence); C4.5 decision tree algorithm; Hadoop Cluster; ICT; big data; differential privacy; information and communication technologies; machine learning; map reduce computing model; personalized recommendation; privacy preserving data mining; private data analysis; scalable classification; Big data; Classification algorithms; Data privacy; Decision trees; Noise; Privacy; Scalability; Hybrid data privacy; Map Reduce Framework; Privacy Approaches; Privacy Preserving data Mining; Scalability (ID#: 15-6085)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058075&isnumber=7058031
Paverd, A.; Martin, A.; Brown, I., “Privacy-Enhanced Bi-Directional Communication in the Smart Grid Using Trusted Computing,” Smart Grid Communications (SmartGridComm), 2014 IEEE International Conference on, vol., no., pp. 872, 877, 3-6 Nov. 2014. doi:10.1109/SmartGridComm.2014.7007758
Abstract: Although privacy concerns in smart metering have been widely studied, relatively little attention has been given to privacy in bi-directional communication between consumers and service providers. Full bi-directional communication is necessary for incentive-based demand response (DR) protocols, such as demand bidding, in which consumers bid to reduce their energy consumption. However, this can reveal private information about consumers. Existing proposals for privacy-enhancing protocols do not support bi-directional communication. To address this challenge, we present a privacy-enhancing communication architecture that incorporates all three major information flows (network monitoring, billing and bi-directional DR) using a combination of spatial and temporal aggregation and differential privacy. The key element of our architecture is the Trustworthy Remote Entity (TRE), a node that is singularly trusted by mutually distrusting entities. The TRE differs from a trusted third party in that it uses Trusted Computing approaches and techniques to provide a technical foundation for its trustworthiness. An automated formal analysis of our communication architecture shows that it achieves its security and privacy objectives with respect to a previously-defined adversary model. This is therefore the first application of privacy-enhancing techniques to bi-directional smart grid communication between mutually distrusting agents.
Keywords: data privacy; energy consumption; incentive schemes; invoicing; power engineering computing; power system measurement; protocols; smart meters; smart power grids; trusted computing; TRE; automated formal analysis; bidirectional DR information flow; billing information flow; differential privacy; energy consumption reduction; incentive-based demand response protocol; network monitoring information flow; privacy-enhanced bidirectional smart grid communication architecture; privacy-enhancing protocol; smart metering; spatial aggregation; temporal aggregation; trusted computing; trustworthy remote entity; Bidirectional control; Computer architecture; Monitoring; Privacy; Protocols; Security; Smart grids (ID#: 15-6086)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7007758&isnumber=7007609
Jun Yang; Yun Li, “Differentially Private Feature Selection,” Neural Networks (IJCNN), 2014 International Joint Conference on, vol., no., pp. 4182, 4189, 6-11 July 2014. doi:10.1109/IJCNN.2014.6889613
Abstract: Privacy-preserving data analysis has gained significant interest across several research communities. Current research mainly focuses on privacy-preserving classification and regression. However, feature selection is also an essential component of data analysis, which can be used to reduce data dimensionality and to discover knowledge, such as inherent variables in the data. In this paper, in order to efficiently mine sensitive data, a privacy-preserving feature selection algorithm is proposed and analyzed in theory, based on local learning and differential privacy. We also conduct experiments on benchmark data sets. The experimental results show that our algorithm can preserve data privacy to some extent.
Keywords: data analysis; data mining; data privacy; learning (artificial intelligence); data dimensionality reduction; differential privacy; differentially private feature selection; feature selection; knowledge discovery; local learning; privacy preserving feature selection algorithm; privacy-preserving classification; privacy-preserving data analysis; privacy-preserving regression; Accuracy; Algorithm design and analysis; Computational modeling; Data privacy; Logistics; Privacy; Vectors (ID#: 15-6087)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889613&isnumber=6889358
Koufogiannis, F.; Shuo Han; Pappas, G.J., “Computation of Privacy-Preserving Prices in Smart Grids,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 2142, 2147, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039715
Abstract: Demand management through pricing is a modern approach that can improve the efficiency of modern power networks. However, computing optimal prices requires access to data that individuals consider private. We present a novel approach for computing prices while providing privacy guarantees under the differential privacy framework. Differentially private prices are computed through a distributed utility maximization problem with each individual perturbing their own utility function. Privacy concerning temporal localization and monitoring of an individual's activity is enforced in the process. The proposed scheme provides formal privacy guarantees and its performance-privacy trade-off is evaluated quantitatively.
Keywords: power system control; pricing; smart power grids; computation; demand management; differential privacy framework; distributed utility maximization problem; formal privacy; modern power networks; performance-privacy trade-off; pricing; privacy-preserving prices; smart grids; temporal localization; utility function; Electricity; Monitoring; Optimization; Power demand; Pricing; Privacy; Smart grids (ID#: 15-6088)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039715&isnumber=7039338
Wentian Lu; Miklau, G.; Gupta, V., “Generating Private Synthetic Databases for Untrusted System Evaluation,” Data Engineering (ICDE), 2014 IEEE 30th International Conference on, vol., no., pp. 652, 663, March 31 2014 - April 4 2014. doi:10.1109/ICDE.2014.6816689
Abstract: Evaluating the performance of database systems is crucial when database vendors or researchers are developing new technologies. But such evaluation tasks rely heavily on actual data and query workloads that are often unavailable to researchers due to privacy restrictions. To overcome this barrier, we propose a framework for the release of a synthetic database which accurately models selected performance properties of the original database. We improve on prior work on synthetic database generation by providing a formal, rigorous guarantee of privacy. Accuracy is achieved by generating synthetic data using a carefully selected set of statistical properties of the original data which balance privacy loss with relevance to the given query workload. An important contribution of our framework is an extension of standard differential privacy to multiple tables.
Keywords: data privacy; database management systems; statistical analysis; trusted computing; balance privacy loss; database researchers; database vendors; differential privacy; privacy guarantee; privacy restrictions; private synthetic database generation; query workloads; statistical properties; synthetic data generation; untrusted system evaluation; Aggregates; Data privacy; Databases; Noise; Privacy; Sensitivity; Standards (ID#: 15-6089)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816689&isnumber=6816620
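A generic baseline for differentially private synthetic data, which the paper refines with workload-aware statistics and multi-table extensions, is to sample records from a noised histogram. A minimal sketch assuming numpy and a small categorical domain; the paper's performance-property-preserving generator is considerably more involved:

import numpy as np

def synthetic_from_noisy_histogram(values, domain, epsilon, n_out, rng=None):
    rng = rng or np.random.default_rng()
    counts = np.array([(values == v).sum() for v in domain], dtype=float)
    # Adding or removing one record changes one cell by 1, so Laplace noise
    # of scale 1/epsilon suffices for epsilon-differential privacy.
    noisy = np.maximum(counts + rng.laplace(0.0, 1.0 / epsilon, size=counts.shape), 0.0)
    if noisy.sum() == 0.0:
        noisy += 1.0  # degenerate fallback: uniform over the domain
    return rng.choice(np.asarray(domain), size=n_out, p=noisy / noisy.sum())

# Example: synthesize 1000 records from a private view of a categorical column.
data = np.array([0, 1, 1, 2, 0, 1])
synthetic = synthetic_from_noisy_histogram(data, [0, 1, 2], epsilon=1.0, n_out=1000)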
Riboni, D.; Bettini, C., “Differentially-Private Release of Check-in Data for Venue Recommendation,” Pervasive Computing and Communications (PerCom), 2014 IEEE International Conference on, vol., no., pp. 190, 198, 24-28 March 2014. doi:10.1109/PerCom.2014.6813960
Abstract: Recommender systems suggesting venues offer very useful services to people on the move and a great business opportunity for advertisers. These systems suggest venues by matching the current context of the user with the venue features, and consider the popularity of venues, based on the number of visits (“check-ins”) that they received. Check-ins may be explicitly communicated by users to geo-social networks, or implicitly derived by analysing location data collected by mobile services. In general, the visibility of explicit check-ins is limited to friends in the social network, while the visibility of implicit check-ins is limited to the service provider. Exposing check-ins to unauthorized users is a privacy threat since recurring presence in given locations may reveal political opinions, religious beliefs, or sexual orientation, as well as absence from other locations where the user is supposed to be. Hence, on one side, mobile app providers host valuable information that recommender system providers would like to buy and use to improve their systems, and on the other, we recognize serious privacy issues in releasing that information. In this paper, we solve this dilemma by providing formal privacy guarantees to users and trusted mobile providers while preserving the utility of check-in information for recommendation purposes. Our technique is based on the use of differential privacy methods integrated with a pre-filtering process, and protects against both an untrusted recommender system and its users, who may try to infer the venues and sensitive locations visited by other users. Extensive experiments with a large dataset of real users' check-ins show the effectiveness of our methods.
Keywords: data privacy; mobile computing; recommender systems; social networking (online); advertisers; business opportunity; check-in data; differential privacy methods; differentially-private release; explicit check-ins; formal privacy; geo-social networks; implicit check-ins; location data analysis; mobile app providers; mobile services; political opinions; prefiltering process; religious beliefs; sexual orientation; untrusted recommender system; venue recommendation; Context; Data privacy; Mobile communication; Pervasive computing; Privacy; Recommender systems; Sensitivity (ID#: 15-6090)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6813960&isnumber=6813930
Patil, A.; Singh, S., “Differential Private Random Forest,” Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on, vol., no., pp. 2623, 2630, 24-27 Sept. 2014. doi:10.1109/ICACCI.2014.6968348
Abstract: Organizations, be they private or public, often collect personal information about the individuals who are their customers or clients. This personal information is private and sensitive and has to be secured from data mining algorithms that an adversary may apply to gain access to it. In this paper we consider the problem of securing such private and sensitive information when it is used in a random forest classifier, in the framework of differential privacy. We incorporate the concept of differential privacy into the classical random forest algorithm. Experimental results show that quality functions such as information gain, the max operator, and the Gini index give almost equal accuracy regardless of their sensitivity to noise. Also, the accuracy of the classical random forest and the differentially private random forest is almost equal for different dataset sizes. The proposed algorithm works for datasets with categorical as well as continuous attributes.
Keywords: data mining; data privacy; learning (artificial intelligence); Gini index; data mining algorithm; differential privacy; differential private random forest; information gain; max operator; personal information; private information; sensitive information; Accuracy; Data privacy; Indexes; Noise; Privacy; Sensitivity; Vegetation (ID#: 15-6091)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6968348&isnumber=6968191
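Differentially private decision-tree learners typically choose splits with the exponential mechanism, scored by a quality function such as the information gain, max operator, or Gini index named in the abstract. A minimal sketch; whether the paper uses exactly this selection step is an assumption, and the scoring function and its sensitivity are supplied by the caller:

import math, random

def exponential_mechanism(candidates, quality, epsilon, sensitivity):
    # Select a candidate with probability proportional to
    # exp(epsilon * quality / (2 * sensitivity)): the standard exponential mechanism.
    weights = [math.exp(epsilon * quality(c) / (2.0 * sensitivity)) for c in candidates]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]

# Example: pick a split attribute from hypothetical information-gain scores.
split = exponential_mechanism(["age", "income", "zip"],
                              quality=lambda a: {"age": 0.3, "income": 0.5, "zip": 0.1}[a],
                              epsilon=0.5, sensitivity=1.0)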
Bassily, R.; Smith, A.; Thakurta, A., “Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds,” Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, vol., no., pp. 464, 473, 18-21 Oct. 2014. doi:10.1109/FOCS.2014.56
Abstract: Convex empirical risk minimization is a basic tool in machine learning and statistics. We provide new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex. Our algorithms run in polynomial time, and in some cases even match the optimal nonprivate running time (as measured by oracle complexity). We give separate algorithms (and lower bounds) for (ε, 0) and (ε, δ)-differential privacy; perhaps surprisingly, the techniques used for designing optimal algorithms in the two cases are completely different. Our lower bounds apply even to very simple, smooth function families, such as linear and quadratic functions. This implies that algorithms from previous work can be used to obtain optimal error rates, under the additional assumption that the contributions of each data point to the loss function is smooth. We show that simple approaches to smoothing arbitrary loss functions (in order to apply previous techniques) do not yield optimal error rates. In particular, optimal algorithms were not previously known for problems such as training support vector machines and the high-dimensional median.
Keywords: computational complexity; convex programming; learning (artificial intelligence); minimisation; (ε, δ)-differential privacy; (ε, 0)-differential privacy; Lipschitz loss function; arbitrary loss function smoothing; machine learning; optimal nonprivate running time; oracle complexity; polynomial time; private convex empirical risk minimization; smooth function families; statistics; Algorithm design and analysis; Convex functions; Noise measurement; Optimization; Privacy; Risk management; Support vector machines (ID#: 15-6092)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979031&isnumber=6978973
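To make the setting concrete, here is a minimal noisy-gradient-descent sketch for private empirical risk minimization, assuming numpy, a caller-supplied average-gradient function whose per-example contribution is clipped to the Lipschitz bound, and a Gaussian noise calibration in the spirit of (ε, δ)-privacy under composition; the paper's optimal algorithms and exact constants differ:

import numpy as np

def noisy_gradient_descent(grad_fn, theta0, steps, lr, lipschitz,
                           epsilon, delta, n, rng=None):
    rng = rng or np.random.default_rng()
    theta = theta0.copy()
    # Assumed noise calibration: grows with sqrt(steps), shrinks with n * epsilon.
    sigma = lipschitz * np.sqrt(steps * np.log(1.0 / delta)) / (n * epsilon)
    for _ in range(steps):
        g = grad_fn(theta)  # average loss gradient; per-example norm <= lipschitz
        theta = theta - lr * (g + rng.normal(0.0, sigma, size=theta.shape))
    return theta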
Le Ny, J.; Mohammady, M., “Differentially Private MIMO Filtering for Event Streams and Spatio-Temporal Monitoring,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 2148, 2153, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039716
Abstract: Many large-scale systems such as intelligent transportation systems, smart grids or smart buildings collect data about the activities of their users to optimize their operations. In a typical scenario, signals originate from many sensors capturing events involving these users, and several statistics of interest need to be continuously published in real-time. Moreover, in order to encourage user participation, privacy issues need to be taken into consideration. This paper considers the problem of providing differential privacy guarantees for such multi-input multi-output systems operating continuously. We show in particular how to construct various extensions of the zero-forcing equalization mechanism, which we previously proposed for single-input single-output systems. We also describe an application to privately monitoring and forecasting occupancy in a building equipped with a dense network of motion detection sensors, which is useful for example to control its HVAC system.
Keywords: MIMO systems; filtering theory; sensors; HVAC system; differential privacy; differentially private MIMO filtering; event streams; intelligent transportation systems; large-scale systems; motion detection sensors; single-input single-output systems; smart buildings; smart grids; spatio temporal monitoring; zero-forcing equalization mechanism; Buildings; MIMO; Monitoring; Noise; Privacy; Sensitivity; Sensors (ID#: 15-6093)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039716&isnumber=7039338
Chunchun Wu; Zuying Wei; Fan Wu; Guihai Chen, “DIARY: A Differentially Private and Approximately Revenue Maximizing Auction Mechanism for Secondary Spectrum Markets,” Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 625, 630, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7036877
Abstract: There is an urgent need to resolve the contradiction between limited spectrum resources and the increasing demand from ever-growing wireless networks. Spectrum redistribution is a powerful way to mitigate spectrum scarcity. In contrast to existing truthful mechanisms for spectrum redistribution, which aim to maximize spectrum utilization and social welfare, we propose DIARY in this paper, which not only achieves approximate revenue maximization but also guarantees bid privacy via differential privacy. Extensive simulations show that DIARY has substantial competitive advantages over existing mechanisms.
Keywords: data privacy; electronic commerce; radio networks; radio spectrum management; telecommunication industry; DIARY; approximately revenue maximization auction mechanism; differential privacy; differentially private mechanism; ever-growing wireless network; limited spectrum resource; secondary spectrum market; social welfare; spectrum redistribution; spectrum scarcity; spectrum utilization maximization; Cost accounting; Information systems; Interference; Privacy; Resource management; Security; Vectors (ID#: 15-6094)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036877&isnumber=7036769
Tiwari, P.K.; Chaturvedi, S., “Publishing Set Valued Data via M-Privacy,” Advances in Engineering and Technology Research (ICAETR), 2014 International Conference on, vol., no., pp. 1, 6, 1-2 Aug. 2014. doi:10.1109/ICAETR.2014.7012814
Abstract: It is very important to achieve data security in distributed databases, and as the use of distributed databases grows, the security issues surrounding them become more complex. M-privacy is an effective technique that may be used to secure distributed databases. Set-valued data provides huge opportunities for a variety of data mining tasks, yet most present data publishing techniques for set-valued data rely on horizontal-division-based privacy models. The differential privacy method, by contrast, provides a higher privacy guarantee and is independent of an adversary's background information and computational capability. Set-valued data has high dimensionality, so no single existing data publishing approach for differential privacy can deliver both utility and scalability. This work provides detailed information about this threat and some assistance in resolving it. We first introduce the concept of m-privacy, which guarantees that the anonymized data satisfies a given privacy check against any group of up to m colluding data providers. We then present a heuristic approach that exploits the monotonicity of confidentiality constraints for efficiently inspecting m-privacy given a cluster of records. Next, we present a data-provider-aware anonymization approach with adaptive m-privacy inspection strategies to effectively guarantee high utility and m-privacy of the anonymized data. Finally, we propose secure multi-party computation protocols for set-valued data publishing with m-privacy.
Keywords: data mining; data privacy; distributed databases; adaptive m-privacy inspection strategies; anonymous data; computational capability; confidentiality constraints monotonicity; data mining tasks; data provider-aware anonymization approach; data security; distributed database security; environment information; heuristic approach; horizontal division based privacy models; privacy check; privacy guarantee; privacy method; secured multiparty calculation protocols; set-valued data publishing techniques; threat; Algorithm design and analysis; Computational modeling; Data privacy; Distributed databases; Privacy; Publishing; Taxonomy; data mining; privacy; set-valued dataset (ID#: 15-6095)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7012814&isnumber=7012782
Shuo Han; Topcu, U.; Pappas, G.J., “Differentially Private Convex Optimization with Piecewise Affine Objectives,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 2160, 2166, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039718
Abstract: Differential privacy is a recently proposed notion of privacy that provides strong privacy guarantees without any assumptions on the adversary. The paper studies the problem of computing a differentially private solution to convex optimization problems whose objective function is piecewise affine. Such problems are motivated by applications in which the affine functions that define the objective function contain sensitive user information. We propose several privacy preserving mechanisms and provide an analysis on the trade-offs between optimality and the level of privacy for these mechanisms. Numerical experiments are also presented to evaluate their performance in practice.
Keywords: data privacy; optimisation; affine functions; convex optimization problems; differentially private convex optimization; differentially private solution; piecewise affine objectives; privacy guarantees; privacy preserving mechanisms; sensitive user information; Convex functions; Data privacy; Databases; Linear programming; Optimization; Privacy; Sensitivity (ID#: 15-6096)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039718&isnumber=7039338
Jia Dong Zhang; Ghinita, G.; Chi Yin Chow, “Differentially Private Location Recommendations in Geosocial Networks,” Mobile Data Management (MDM), 2014 IEEE 15th International Conference on, vol. 1, no., pp. 59, 68, 14-18 July 2014. doi:10.1109/MDM.2014.13
Abstract: Location-tagged social media have an increasingly important role in shaping behavior of individuals. With the help of location recommendations, users are able to learn about events, products or places of interest that are relevant to their preferences. User locations and movement patterns are available from geosocial networks such as Foursquare, mass transit logs or traffic monitoring systems. However, disclosing movement data raises serious privacy concerns, as the history of visited locations can reveal sensitive details about an individual's health status, alternative lifestyle, etc. In this paper, we investigate mechanisms to sanitize location data used in recommendations with the help of differential privacy. We also identify the main factors that must be taken into account to improve accuracy. Extensive experimental results on real-world datasets show that a careful choice of differential privacy technique leads to satisfactory location recommendation results.
Keywords: data privacy; recommender systems; social networking (online); differentially private location recommendations; geosocial networks; location data sanitation; Data privacy; History; Indexes; Markov processes; Privacy; Trajectory; Vegetation (ID#: 15-6097)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916904&isnumber=6916883
Shuo Han; Topcu, U.; Pappas, G.J., “Differentially Private Distributed Protocol for Electric Vehicle Charging,” Communication, Control, and Computing (Allerton), 2014 52nd Annual Allerton Conference on, vol., no., pp. 242, 249, Sept. 30 2014 - Oct. 3 2014. doi:10.1109/ALLERTON.2014.7028462
Abstract: In distributed electric vehicle (EV) charging, an optimization problem is solved iteratively between a central server and the charging stations by exchanging coordination signals that are publicly available to all stations. The coordination signals depend on user demand reported by charging stations and may reveal private information of the users at the stations. From the public signals, an adversary can potentially decode private user information and put user privacy at risk. This paper develops a distributed EV charging algorithm that preserves differential privacy, which is a notion of privacy recently introduced and studied in theoretical computer science. The algorithm is based on the so-called Laplace mechanism, which perturbs the public signal with Laplace noise whose magnitude is determined by the sensitivity of the public signal with respect to changes in user information. The paper derives the sensitivity and analyzes the suboptimality of the differentially private charging algorithm. In particular, we obtain a bound on suboptimality by viewing the algorithm as an implementation of stochastic gradient descent. In the end, numerical experiments are performed to investigate various aspects of the algorithm when being used in practice, including the number of iterations and tradeoffs between privacy level and suboptimality.
Keywords: electric vehicles; gradient methods; protocols; stochastic programming; Laplace mechanism; Laplace noise; central server; differential private charging algorithm; differential private distributed protocol; distributed EV charging algorithm; distributed electric vehicle charging station; optimization problem; public signal sensitivity; stochastic gradient descent; theoretical computer science; user demand; Charging stations; Data privacy; Databases; Optimization; Privacy; Sensitivity; Vehicles (ID#: 15-6098)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028462&isnumber=7028426
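One iteration of such a protocol is easy to sketch: the server aggregates reported demand, perturbs the public coordination signal with Laplace noise scaled to one user's influence, and updates the broadcast price. A minimal Python illustration with hypothetical names and a plain dual-ascent update; the paper's algorithm and privacy accounting are more careful:

import numpy as np

def private_price_update(price, demands, capacity, step, sensitivity,
                         eps_per_iter, rng=None):
    rng = rng or np.random.default_rng()
    total = sum(demands)
    # Perturb the public signal; `sensitivity` bounds how much one user's
    # reported demand can change the aggregate.
    noisy_total = total + rng.laplace(0.0, sensitivity / eps_per_iter)
    # Dual ascent: raise the price when (noisy) demand exceeds capacity.
    return max(0.0, price + step * (noisy_total - capacity))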
Jianwei Chen; Huadong Ma, “Privacy-Preserving Aggregation for Participatory Sensing with Efficient Group Management,” Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 2757, 2762, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7037225
Abstract: Participatory sensing applications can learn aggregate statistics over personal data to produce useful knowledge about the world. Since personal data may be privacy-sensitive, the aggregator should gain only the desired statistics without learning anything about the personal data. To guarantee differential privacy of personal data under an untrusted aggregator, existing approaches encrypt the noisy personal data and allow the aggregator to obtain a noisy sum. However, these approaches suffer from high computation overhead, a lack of efficient group management to support dynamic joins and leaves, or an inability to handle node failures. In this paper, we propose a novel privacy-preserving aggregation scheme to address these issues in participatory sensing applications. We first design an efficient group management protocol to deal with participants' dynamic joins and leaves: when a participant joins or leaves, only three participants need to update their encryption keys. Moreover, we leverage a future-ciphertext buffering mechanism to deal with node failures, which, combined with the group management protocol, keeps communication overhead low. The analysis indicates that our scheme achieves the desired properties, and the performance evaluation demonstrates the scheme's efficiency in terms of communication and computation overhead.
Keywords: cryptographic protocols; data privacy; ciphertext buffering mechanism; group management protocol; noisy personal data; participatory sensing; personal data privacy; privacy-preserving aggregation scheme; untrusted aggregator; Aggregates; Fault tolerance; Fault tolerant systems; Noise; Noise measurement; Privacy; Sensors (ID#: 15-6099)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7037225&isnumber=7036769
Jongho Won; Ma, C.Y.T.; Yau, D.K.Y.; Rao, N.S.V., “Proactive Fault-Tolerant Aggregation Protocol for Privacy-Assured Smart Metering,” INFOCOM, 2014 Proceedings IEEE, vol., no., pp. 2804, 2812, April 27 2014 - May 2 2014. doi:10.1109/INFOCOM.2014.6848230
Abstract: Smart meters are integral to demand response in emerging smart grids, by reporting the electricity consumption of users to serve application needs. But reporting real-time usage information for individual households raises privacy concerns. Existing techniques to guarantee differential privacy (DP) of smart meter users either are not fault tolerant or achieve (possibly partial) fault tolerance at high communication overheads. In this paper, we propose a fault-tolerant protocol for smart metering that can handle general communication failures while ensuring DP with significantly improved efficiency and lower errors compared with the state of the art. Our protocol handles fail-stop faults proactively by using a novel design of future ciphertexts, and distributes trust among the smart meters by sharing secret keys among them. We prove the DP properties of our protocol and analyze its advantages in fault tolerance, accuracy, and communication efficiency relative to competing techniques. We illustrate our analysis by simulations driven by real-world traces of electricity consumption.
Keywords: fault tolerance; smart meters; ciphertexts; communication efficiency; electricity consumption; fail-stop faults; privacy-assured smart metering; proactive fault-tolerant aggregation protocol; secret key sharing; Bandwidth; Fault tolerance; Fault tolerant systems; Noise; Privacy; Protocols; Smart meters (ID#: 15-6100)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848230&isnumber=6847911
Pritha, P.V.G.R.; Suresh, N., “Implementation of Hummingbird 1s Cryptographic Algorithm for Low Cost RFID Tags Using LabVIEW,” Information Communication and Embedded Systems (ICICES), 2014 International Conference on, vol., no., pp. 1, 4, 27-28 Feb. 2014. doi:10.1109/ICICES.2014.7034182
Abstract: Hummingbird is a novel ultra-lightweight cryptographic encryption scheme for RFID applications requiring privacy-preserving identification and mutual authentication protocols, motivated by the well-known Enigma machine. Hummingbird has a precise response time, and its small block size reduces power consumption requirements. The algorithm is shown to resist common attacks such as linear and differential cryptanalysis. The properties of privacy-preserving identification and mutual authentication are investigated together, and the scheme is implemented using LabVIEW software.
Keywords: cryptographic protocols; data privacy; radiofrequency identification; virtual instrumentation; Hummingbird 1s cryptographic algorithm; LabVIEW software; RFID tags; differential cryptanalysis; enigma machine; linear cryptanalysis; mutual authentication protocols; privacy-preserving identification; ultra-light weight cryptographic encryption scheme; Algorithm design and analysis; Authentication; Ciphers; Encryption; Radiofrequency identification; Software; lightweight cryptography scheme; mutual authentication protocols; privacy-preserving identification (ID#: 15-6101)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7034182&isnumber=7033740
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Game Theoretic Security, 2014 |
Game theory has historically been the province of social sciences such as economics, political science, and psychology. It has since developed into an umbrella term for the logical side of science that includes both human and non-human actors such as computers. Game theory has been used extensively in wireless networks research to develop an understanding of stable operating points for networks made of autonomous or selfish nodes; the nodes are treated as the players, and utility functions are often chosen to correspond to achieved connection rate or similar technical metrics. In security, the game framework is used to anticipate and analyze the concurrent interactions of intruders and administrators within a network. The research cited here was presented in 2013 and 2014.
Jinghao Shi, Zhangyu Guan, Chunming Qiao, Tommaso Melodia, Dimitrios Koutsonikolas, Geoffrey Challen. “Crowdsourcing Access Network Spectrum Allocation Using Smartphones.” HotNets-XIII Proceedings of the 13th ACM Workshop on Hot Topics in Networks, October 2014, Pages 17. doi:10.1145/2670518.2673866
Abstract: The hundreds of millions of deployed smartphones provide an unprecedented opportunity to collect data to monitor, debug, and continuously adapt wireless networks to improve performance. In contrast with previous mobile devices, such as laptops, smartphones are always on but mostly idle, making them available to perform measurements that help other nearby active devices make better use of available network resources. We present the design of PocketSniffer, a system delivering wireless measurements from smartphones both to network administrators for monitoring and debugging purposes and to algorithms performing realtime network adaptation. By collecting data from smartphones, PocketSniffer supports novel adaptation algorithms designed around common deployment scenarios involving both cooperative and self-interested clients and networks. We present preliminary results from a prototype and discuss challenges to realizing this vision.
Keywords: Smartphones, crowdsourcing, monitoring (ID#: 15-5880)
URL: http://doi.acm.org/10.1145/2670518.2673866
Gilles Barthe, Cédric Fournet, Benjamin Grégoire, Pierre-Yves Strub, Nikhil Swamy, Santiago Zanella-Béguelin. “Probabilistic Relational Verification for Cryptographic Implementations.” POPL '14 Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, January 2014, Pages 193-205. doi:10.1145/2535838.2535847
Abstract: Relational program logics have been used for mechanizing formal proofs of various cryptographic constructions. With an eye towards scaling these successes towards end-to-end security proofs for implementations of distributed systems, we present RF*, a relational extension of F*, a general-purpose higher-order stateful programming language with a verification system based on refinement types. The distinguishing feature of RF* is a relational Hoare logic for a higher-order, stateful, probabilistic language. Through careful language design, we adapt the F* typechecker to generate both classic and relational verification conditions, and to automatically discharge their proofs using an SMT solver. Thus, we are able to benefit from the existing features of F*, including its abstraction facilities for modular reasoning about program fragments. We evaluate RF* experimentally by programming a series of cryptographic constructions and protocols, and by verifying their security properties, ranging from information flow to unlinkability, integrity, and privacy. Moreover, we validate the design of RF* by formalizing in Coq a core probabilistic λ-calculus and a relational refinement type system and proving the soundness of the latter against a denotational semantics of the probabilistic λ-calculus.
Keywords: probabilistic programming, program logics (ID#: 15-5881)
URL: http://doi.acm.org/10.1145/2535838.2535847
Patrick McDaniel, Trent Jaeger, Thomas F. La Porta, Nicolas Papernot, Robert J. Walls, Alexander Kott, Lisa Marvel, Ananthram Swami, Prasant Mohapatra, Srikanth V. Krishnamurthy, Iulian Neamtiu. “Security and Science of Agility.” MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 13-19. doi:10.1145/2663474.2663476
Abstract: Moving target defenses alter the environment in response to adversarial action and perceived threats. Such defenses are a specific example of a broader class of system management techniques called system agility. In its fullest generality, agility is any reasoned modification to a system or environment in response to a functional, performance, or security need. This paper details a recently launched 10-year Cyber-Security Collaborative Research Alliance effort focused in-part on the development of a new science of system agility, of which moving target defenses are a central theme. In this context, the consortium seeks to address the questions of when, what, and how to employ changes to improve the security of an environment, as well as consider how to measure and weigh the effectiveness of different approaches to agility. We discuss several fundamental challenges in developing and using MTD maneuvers, and outline several broad classes of mechanisms that can be used to implement them. We conclude by detailing specific MTD mechanisms used to adaptively quarantine vulnerable code in Android applications, and consider ways of comparing cost and payout of its use.
Keywords: agility, moving target defenses (ID#: 15-5882)
URL: http://doi.acm.org/10.1145/2663474.2663476
Prabhu Natarajan, Trong Nghia Hoang, Yongkang Wong, Kian Hsiang Low, Mohan Kankanhalli. “Scalable Decision-Theoretic Coordination and Control for Real-time Active Multi-Camera Surveillance.” ICDSC '14 Proceedings of the International Conference on Distributed Smart Cameras, November 2014, Article No. 38. doi:10.1145/2659021.2659042
Abstract: This paper presents an overview of our novel decision-theoretic multi-agent approach for controlling and coordinating multiple active cameras in surveillance. In this approach, a surveillance task is modeled as a stochastic optimization problem, where the active cameras are controlled and coordinated to achieve the desired surveillance goal in the presence of uncertainties. We enumerate the practical issues in active camera surveillance and discuss how these issues are addressed in our decision-theoretic approach. We focus on two novel surveillance tasks: maximizing the number of targets observed by the active cameras with guaranteed image resolution, and improving the fairness of observation across multiple targets. We then give an overview of our novel decision-theoretic frameworks, based on Markov Decision Processes and Partially Observable Markov Decision Processes, for coordinating active cameras in uncertain and partially occluded environments.
Keywords: Active camera networks, Multi-camera coordination and control, Smart camera networks, Surveillance and security (ID#: 15-5883)
URL: http://doi.acm.org/10.1145/2659021.2659042
Koen Claessen, Michał H. Pałka. “Splittable Pseudorandom Number Generators Using Cryptographic Hashing.” Haskell '13 Proceedings of the 2013 ACM SIGPLAN Symposium on Haskell, September 2013, Pages 47-58. doi:10.1145/2503778.2503784
Abstract: We propose a new splittable pseudorandom number generator (PRNG) based on a cryptographic hash function. Splittable PRNGs, in contrast to linear PRNGs, allow the creation of two (seemingly) independent generators from a given random number generator. Splittable PRNGs are very useful for structuring purely functional programs, as they avoid the need for threading around state. We show that the currently known and used splittable PRNGs are either not efficient enough, have inherent flaws, or lack formal arguments about their randomness. In contrast, our proposed generator can be implemented efficiently, and comes with a formal statement and proofs that quantify how 'random' the generated results are. The provided proofs give strong randomness guarantees under assumptions commonly made in cryptography.
Keywords: haskell, provable security, splittable pseudorandom number generators (ID#: 15-5884)
URL: http://dl.acm.org/citation.cfm?doid=2503778.2503784
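A hash-based splittable PRNG can be sketched in a few lines of Python: the generator state is a seed plus a path of left/right split decisions, and output blocks are SHA-256 digests over seed, path, and a counter. This shows only the interface idea; the paper's actual construction and its randomness proofs differ.

import hashlib

class SplitMix:
    def __init__(self, seed: bytes, path: bytes = b""):
        self.seed, self.path, self.counter = seed, path, 0

    def split(self):
        """Return two (seemingly) independent child generators."""
        return (SplitMix(self.seed, self.path + b"L"),
                SplitMix(self.seed, self.path + b"R"))

    def next_block(self) -> bytes:
        """Return 32 pseudorandom bytes derived by hashing seed, path, counter."""
        h = hashlib.sha256(self.seed + b"|" + self.path + b"|" + str(self.counter).encode())
        self.counter += 1
        return h.digest()

left, right = SplitMix(b"seed").split()
assert left.next_block() != right.next_block()   # distinct paths, distinct streams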
Fatemeh Vafaee. “Learning the Structure of Large-Scale Bayesian Networks Using Genetic Algorithm.” GECCO '14 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, July 2014, Pages 855-862. doi:10.1145/2576768.2598223
Abstract: Bayesian networks are probabilistic graphical models representing conditional dependencies among a set of random variables. Due to their concise representation of the joint probability distribution, Bayesian networks are becoming increasingly popular models for knowledge representation and reasoning in various problem domains. However, learning the structure of a Bayesian network is an NP-hard problem, since the number of structures grows super-exponentially as the number of variables increases. This work therefore proposes a new hybrid structure learning algorithm that uses mutual dependencies to reduce the search space complexity and recruits the genetic algorithm to effectively search over the reduced space of possible structures. The proposed method is best suited for problems with a medium to large number of variables and a limited dataset. It is shown that the proposed method achieves higher model accuracy than a series of popular structure learning algorithms, particularly when the data size is small.
Keywords: Bayesian networks, genetic algorithms, structure learning (ID#: 15-5885)
URL: http://doi.acm.org/10.1145/2576768.2598223
Yitao Duan. “Distributed Key Generation for Encrypted Deduplication: Achieving the Strongest Privacy.” CCSW '14 Proceedings of the 6th edition of the ACM Workshop on Cloud Computing Security, November 2014, Pages 57-68. doi:10.1145/2664168.2664169
Abstract: Large-scale cloud storage systems often attempt to achieve two seemingly conflicting goals: (1) the systems need to reduce the copies of redundant data to save space, a process called deduplication; and (2) users demand encryption of their data to ensure privacy. Conventional encryption makes deduplication on ciphertexts ineffective, as it destroys data redundancy. A line of work that originated with Convergent Encryption [27] and evolved into Message-Locked Encryption [13] and the latest DupLESS architecture [12] strives to solve this problem. DupLESS relies on a key server to help the clients generate encryption keys that result in convergent ciphertexts. In this paper, we first introduce a new security notion appropriate for the setting of deduplication and show that it is strictly stronger than all relevant notions. We then provide a rigorous proof of security against this notion, in the random oracle model, for the DupLESS architecture, a proof that is lacking in the original paper. Our proof shows that using an additional secret, other than the data itself, for generating encryption keys achieves the best possible security under the current deduplication paradigm. We also introduce a distributed protocol that eliminates the need for the key server. This not only provides better protection but also allows less managed systems, such as P2P systems, to enjoy the high security level. Implementation and evaluation show that the scheme is both robust and practical.
Keywords: cloud computing security, deduplication, deterministic encryption (ID#: 15-5886)
URL: http://doi.acm.org/10.1145/2664168.2664169
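Plain convergent encryption, the baseline this line of work builds on, is easy to sketch: the encryption key is a hash of the plaintext itself, so identical files produce identical ciphertexts and can be deduplicated. The SHA-256 XOR keystream below is purely illustrative; it is not the paper's distributed key-generation protocol, and a real deployment would use a standard cipher.

import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Deterministic keystream from SHA-256 over key || counter (illustration only)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(plaintext: bytes):
    key = hashlib.sha256(plaintext).digest()   # key derived from the data itself
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))
    return key, ct

k1, c1 = convergent_encrypt(b"same file contents")
k2, c2 = convergent_encrypt(b"same file contents")
assert c1 == c2   # equal plaintexts yield equal ciphertexts, hence dedupable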
Javier Cámara, Gabriel A. Moreno, David Garlan. “Stochastic Game Analysis and Latency Awareness for Proactive Self-Adaptation.” SEAMS 2014 Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, June 2014, Pages 155-164. doi:10.1145/2593929.2593933
Abstract: Although different approaches to decision-making in self-adaptive systems have shown their effectiveness in the past by factoring in predictions about the system and its environment (e.g., resource availability), no proposal considers the latency associated with the execution of tactics upon the target system. However, different adaptation tactics can take different amounts of time until their effects can be observed. In reactive adaptation, ignoring adaptation tactic latency can lead to suboptimal adaptation decisions (e.g., activating a server that takes more time to boot than the transient spike in traffic that triggered its activation). In proactive adaptation, taking adaptation latency into account is necessary to get the system into the desired state to deal with an upcoming situation. In this paper, we introduce a formal analysis technique based on model checking of stochastic multiplayer games (SMGs) that enables us to quantify the potential benefits of employing different types of algorithms for self-adaptation. In particular, we apply this technique to show the potential benefit of considering adaptation tactic latency in proactive adaptation algorithms. Our results show that factoring tactic latency into decision making improves the outcome of adaptation. We also present an algorithm for proactive adaptation that considers tactic latency, and show that it achieves higher utility than an algorithm that is optimal under the assumption of no latency.
Keywords: Latency, Proactive adaptation, Stochastic multiplayer games (ID#: 15-5887)
URL: http://doi.acm.org/10.1145/2593929.2593933
Chunyao Song, Tingjian Ge. “Aroma: A New Data Protection Method with Differential Privacy and Accurate Query Answering.” CIKM '14 Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, November 2014, Pages 1569-1578. doi:10.1145/2661829.2661886
Abstract: We propose a new local data perturbation method called Aroma. We first show that Aroma is sound in its privacy protection. For that, we devise a realistic privacy game, called the exposure test. We prove that the αβ algorithm, a previously proposed method that is most closely related to Aroma, performs poorly under the exposure test and fails to provide sufficient privacy in practice. Moreover, any data protection method that satisfies ε-differential privacy will succeed in the test. By proving that Aroma satisfies ε-differential privacy, we show that Aroma offers strong privacy protection. We then demonstrate the utility of Aroma by proving that its estimator has significantly smaller errors than the previous state-of-the-art algorithms such as αβ, AM, and FRAPP. We carry out a systematic empirical study using real-world data to evaluate Aroma, which shows its clear advantages over previous methods.
Keywords: data perturbation, differential privacy, query (ID#: 15-5888)
URL: http://doi.acm.org/10.1145/2661829.2661886
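As a point of reference for local perturbation mechanisms, classic randomized response satisfies epsilon-differential privacy for binary data; the Python sketch below shows it together with the standard unbiased estimator. Aroma itself is a different and more accurate mechanism, so this is background for the abstract above, not the paper's method.

import math, random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (1 + e^eps), else flip it."""
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit

def estimate_mean(reports, epsilon):
    """Unbiased estimate of the true population mean from perturbed bits."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return (sum(reports) / len(reports) + p - 1) / (2 * p - 1)

reports = [randomized_response(b, epsilon=1.0) for b in [1, 0, 1, 1, 0, 1] * 1000]
print(estimate_mean(reports, epsilon=1.0))   # close to the true mean of 2/3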
Florian Hahn, Florian Kerschbaum. “Searchable Encryption with Secure and Efficient Updates.” CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 310-320. doi:10.1145/2660267.2660297
Abstract: Searchable (symmetric) encryption allows encryption while still enabling search for keywords. Its immediate application is cloud storage, where a client outsources its files while the (cloud) service provider should be able to search and selectively retrieve them. Searchable encryption is an active area of research, and a number of schemes with different efficiency and security characteristics have been proposed in the literature. Any scheme for practical adoption should be efficient (i.e., have sub-linear search time), dynamic (i.e., allow updates), and semantically secure to the greatest possible extent. Unfortunately, efficient, dynamic searchable encryption schemes suffer from various drawbacks: either they deteriorate from semantic security to the security of deterministic encryption under updates, they require storing information on the client and for deleted files and keywords, or they have very large index sizes. All of this is a problem, since we can expect the majority of data to be later added or changed. Since these schemes are also less efficient than deterministic encryption, they are currently an unfavorable choice for encryption in the cloud. In this paper we present the first searchable encryption scheme whose updates leak no more information than the access pattern, that still has asymptotically optimal search time and a linear, very small, and asymptotically optimal index size, and that can be implemented without storage on the client (except the key). Our construction is based on the novel idea of learning the index for efficient access from the access pattern itself. Furthermore, we implement our system and show that it is highly efficient for cloud storage.
Keywords: dynamic searchable encryption, searchable encryption, secure index, update (ID#: 15-5889)
URL: http://doi.acm.org/10.1145/2660267.2660297
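A toy searchable-encryption index conveys the basic idea of searching outsourced data without revealing keywords: the client derives a deterministic per-keyword token (here an HMAC) and the server stores token-to-document mappings. This simple design leaks the search pattern and has none of the paper's secure-update properties; the key and document names are invented for illustration.

import hmac, hashlib

def token(key: bytes, keyword: str) -> bytes:
    """Deterministic search token for a keyword under the client's secret key."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

key = b"client-secret-key"

# The client builds the encrypted index and outsources it to the server:
index = {token(key, "invoice"): ["doc1", "doc7"],
         token(key, "payroll"): ["doc2"]}

# To search, the client sends only the token; the server looks it up blindly:
print(index.get(token(key, "invoice"), []))   # -> ['doc1', 'doc7']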
Itay Berman, Iftach Haitner, Aris Tentes. “Coin Flipping of Any Constant Bias Implies One-Way Functions.” STOC '14 Proceedings of the 46th Annual ACM Symposium on Theory of Computing, May 2014, Pages 398-407. doi:10.1145/2591796.2591845
Abstract: We show that the existence of a coin-flipping protocol safe against any non-trivial constant bias (e.g., .499) implies the existence of one-way functions. This improves upon a recent result of Haitner and Omri [FOCS '11], who proved this implication for protocols with bias (√2 − 1)/2 − o(1) ≈ .207. Unlike the result of Haitner and Omri, our result also holds for weak coin-flipping protocols.
Keywords: coin-flipping protocols, minimal hardness assumptions, one-way functions (ID#: 15-5890)
URL: http://doi.acm.org/10.1145/2591796.2591845
Hossain Shahriar, Hisham M. Haddad. “Content Provider Leakage Vulnerability Detection in Android Applications.” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 359. doi:10.1145/2659651.2659716
Abstract: Although much research effort has focused on Android malware detection, very little attention has been given to implementation-level vulnerabilities. This paper focuses on the Content Provider Leakage vulnerability, which can be exploited by viewing or editing sensitive data through malware. We present a new technique for detecting content provider leakage vulnerabilities, proposing Kullback-Leibler Divergence (KLD) as the detection measure. In particular, our contribution includes the development of a set of elements and the mapping of those elements to programming principles for the secure implementation of content provider classes. These elements are captured from existing implementations to form the initial population set. The population set is used to measure the divergence of a newly implemented application with a content provider, to identify potential vulnerabilities. We also apply a back-off smoothing technique to compute the KLD value. We implement a Java prototype tool to evaluate a set of content provider implementations and show the effectiveness of the proposed approach. The initial results show that, by choosing an appropriate threshold level, KLD is an effective method for detecting content provider leakage vulnerabilities.
Keywords: Android Application, Content Provider Vulnerability, Kullback-Leibler Divergence, SQL Injection, Secure Programming (ID#: 15-5891)
URL: http://doi.acm.org/10.1145/2659651.2659716
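The detection idea, measuring how far a new implementation's feature distribution diverges from a reference population, can be sketched as a smoothed Kullback-Leibler divergence in Python. The feature names, counts, threshold interpretation, and the additive smoothing constant below are invented stand-ins for the paper's element set and back-off smoothing technique.

import math

def smoothed_kld(p_counts, q_counts, alpha=0.5):
    """KL(P || Q) over a shared vocabulary, with additive smoothing to avoid zeros."""
    vocab = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values()) + alpha * len(vocab)
    q_total = sum(q_counts.values()) + alpha * len(vocab)
    kld = 0.0
    for w in vocab:
        p = (p_counts.get(w, 0) + alpha) / p_total
        q = (q_counts.get(w, 0) + alpha) / q_total
        kld += p * math.log(p / q)
    return kld

reference = {"exported=false": 40, "permission-check": 35, "raw-query": 5}
candidate = {"exported=false": 1, "raw-query": 20}
print(smoothed_kld(candidate, reference))   # large divergence -> inspect further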
Yunhua He, Limin Sun, Zhi Li, Hong Li, Xiuzhen Cheng. “An Optimal Privacy-Preserving Mechanism for Crowdsourced Traffic Monitoring.” FOMC '14 Proceedings of the 10th ACM International Workshop on Foundations of Mobile Computing, August 2014, Pages 11-18. doi:10.1145/2634274.2634275
Abstract: Crowdsourced traffic monitoring employs ubiquitous smartphone users to upload their GPS samples for traffic estimation and prediction. The accuracy of traffic estimation and prediction depends on the number of uploaded samples; but more samples from a user increases the probability of the user being tracked or identified, which raises a significant privacy concern. In this paper, we propose a privacy-preserving upload mechanism that can meet users' diverse privacy requirements while guaranteeing the traffic estimation quality. In this mechanism, the user upload decision process is formalized as a mutual objective optimization problem (user location privacy and traffic service quality) based on an incomplete information game model, in which each player can autonomously decide whether to upload or not to balance the live traffic service quality and its own location privacy for utility maximization. We theoretically prove the incentive compatibility of our proposed mechanism, which can motivate users to follow the game rules. The effectiveness of the proposed mechanism is verified by a simulation study based on real world traffic data.
Keywords: crowdsourcing, game theory, location privacy (ID#: 15-5892)
URL: http://doi.acm.org/10.1145/2634274.2634275
Kevin M. Carter, James F. Riordan, Hamed Okhravi. “A Game Theoretic Approach to Strategy Determination for Dynamic Platform Defenses.” MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 21-30. doi:10.1145/2663474.2663478
Abstract: Moving target defenses based on dynamic platforms have been proposed as a way to make systems more resistant to attacks by changing the properties of the deployed platforms. Unfortunately, little work has been done on discerning effective strategies for the utilization of these systems, instead relying on two generally false premises: simple randomization leads to diversity and platforms are independent. In this paper, we study the strategic considerations of deploying a dynamic platform system by specifying a relevant threat model and applying game theory and statistical analysis to discover optimal usage strategies. We show that preferential selection of platforms based on optimizing platform diversity approaches the statistically optimal solution and significantly outperforms simple randomization strategies. Counter to popular belief, this deterministic strategy leverages fewer platforms than may be generally available, which increases system security.
Keywords: game theory, moving target, system diversity (ID#: 15-5893)
URL: http://doi.acm.org/10.1145/2663474.2663478
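A minimal Python sketch of diversity-driven platform selection, as opposed to naive randomization: pick the next platform that maximizes its minimum diversity distance from recently used ones. The platform names, pairwise scores, and window size are hypothetical, and the paper derives its strategies game-theoretically rather than with this greedy rule.

PLATFORMS = ["linux-x86", "linux-arm", "bsd-x86", "windows-x86"]

# Hypothetical pairwise diversity scores (shared vulnerabilities -> low score).
DIST = {("linux-x86", "linux-arm"): 2, ("linux-x86", "bsd-x86"): 3,
        ("linux-x86", "windows-x86"): 5, ("linux-arm", "bsd-x86"): 4,
        ("linux-arm", "windows-x86"): 5, ("bsd-x86", "windows-x86"): 4}

def dist(a, b):
    return 0 if a == b else DIST.get((a, b)) or DIST[(b, a)]

def next_platform(history, window=2):
    """Greedy choice: platform farthest (in min distance) from the recent window."""
    recent = history[-window:]
    return max(PLATFORMS, key=lambda p: min(dist(p, r) for r in recent))

print(next_platform(["linux-x86", "linux-arm"]))   # -> 'windows-x86'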
Eli A. Meirom, Shie Mannor, Ariel Orda. “Network Formation Games with Heterogeneous Players and the Internet Structure.” EC '14 Proceedings of the Fifteenth ACM Conference on Economics and Computation, June 2014, Pages 735-752. doi:10.1145/2600057.2602862
Abstract: We study the structure and evolution of the Internet's Autonomous System (AS) interconnection topology as a game with heterogeneous players. In this network formation game, the utility of a player depends on the network structure, e.g., the distances between nodes and the cost of links. We analyze static properties of the game, such as the prices of anarchy and stability and provide explicit results concerning the generated topologies. Furthermore, we discuss dynamic aspects, demonstrating linear convergence rate and showing that only a restricted subset of equilibria is feasible under realistic dynamics. We also consider the case where utility (or monetary) transfers are allowed between the players.
Keywords: dynamic network formation games, game theory, inter-as topology; as heterogeneity, internet evolution (ID#: 15-5894)
URL: http://doi.acm.org/10.1145/2600057.2602862
Jianye Hao, Eunsuk Kang, Daniel Jackson, Jun Sun. “Adaptive Defending Strategy for Smart Grid Attacks.” SEGS '14 Proceedings of the 2nd Workshop on Smart Energy Grid Security, November 2014, Pages 23-30. doi:10.1145/2667190.2667195
Abstract: One active area of research in smart grid security focuses on applying game-theoretic frameworks to analyze interactions between a system and an attacker and formulate effective defense strategies. In previous work, a Nash equilibrium (NE) solution is chosen as the optimal defense strategy [7, 9], which implies that the attacker has complete knowledge of the system and would also employ the corresponding NE strategy. In practice, however, the attacker may have limited knowledge and resources, and thus employ an attack which is less than optimal, allowing the defender to devise more efficient strategies. We propose a novel approach called an adaptive Markov Strategy (AMS) for defending a system against attackers with unknown, dynamic behaviors. The algorithm for computing an AMS is theoretically guaranteed to converge to a best response strategy against any stationary attacker, and also converge to a Nash equilibrium if the attacker is sufficiently intelligent to employ the AMS to launch the attack. To evaluate the effectiveness of an AMS in smart grid systems, we study a class of data integrity attacks that involve injecting false voltage information into a substation, with the goal of causing load shedding (and potentially a blackout). Our preliminary results show that the amount of load shedding costs can be significantly reduced by employing an AMS over a NE strategy.
Keywords: adaptive learning, data injection, markov games, smart grid security (ID#: 15-5895)
URL: http://doi.acm.org/10.1145/2667190.2667195
Euijin Choo, Jianchun Jiang, Ting Yu. “COMPARS: Toward an Empirical Approach for Comparing the Resilience of Reputation Systems.” CODASPY '14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 87-98. doi:10.1145/2557547.2557565
Abstract: Reputation is a primary mechanism for trust management in decentralized systems. Many reputation-based trust functions have been proposed in the literature. However, picking the right trust function for a given decentralized system is a non-trivial task. One has to consider and balance a variety of factors, including computation and communication costs, scalability and resilience to manipulations by attackers. Although the former two are relatively easy to evaluate, the evaluation of resilience of trust functions is challenging. Most existing work bases evaluation on static attack models, which is unrealistic as it fails to reflect the adaptive nature of adversaries (who are often real human users rather than simple computing agents). In this paper, we highlight the importance of the modeling of adaptive attackers when evaluating reputation-based trust functions, and propose an adaptive framework -- called COMPARS -- for the evaluation of resilience of reputation systems. Given the complexity of reputation systems, it is often difficult, if not impossible, to exactly derive the optimal strategy of an attacker. Therefore, COMPARS takes a practical approach that attempts to capture the reasoning process of an attacker as it decides its next action in a reputation system. Specifically, given a trust function and an attack goal, COMPARS generates an attack tree to estimate the possible outcomes of an attacker's action sequences up to certain points in the future. Through attack trees, COMPARS simulates the optimal attack strategy for a specific reputation function f, which will be used to evaluate the resilience of f. By doing so, COMPARS allows one to conduct a fair and consistent comparison of different reputation functions.
Keywords: evaluation framework, reputation system, resilience, trust functions (ID#: 15-5896)
URL: http://doi.acm.org/10.1145/2557547.2557565
Ryan M. Rogers, Aaron Roth. “Asymptotically Truthful Equilibrium Selection in Large Congestion Games.” EC '14 Proceedings of the Fifteenth ACM Conference on Economics and Computation, June 2014, Pages 771-782. doi:10.1145/2600057.2602856
Abstract: Studying games in the complete information model makes them analytically tractable. However, large n player interactions are more realistically modeled as games of incomplete information, where players may know little to nothing about the types of other players. Unfortunately, games in incomplete information settings lose many of the nice properties of complete information games: the quality of equilibria can become worse, the equilibria lose their ex-post properties, and coordinating on an equilibrium becomes even more difficult. Because of these problems, we would like to study games of incomplete information, but still implement equilibria of the complete information game induced by the (unknown) realized player types. This problem was recently studied by Kearns et al. [Kearns et al. 2014], and solved in large games by means of introducing a weak mediator: their mediator took as input reported types of players, and output suggested actions which formed a correlated equilibrium of the underlying game. Players had the option to play independently of the mediator, or ignore its suggestions, but crucially, if they decided to opt-in to the mediator, they did not have the power to lie about their type. In this paper, we rectify this deficiency in the setting of large congestion games. We give, in a sense, the weakest possible mediator: it cannot enforce participation, verify types, or enforce its suggestions. Moreover, our mediator implements a Nash equilibrium of the complete information game. We show that it is an (asymptotic) ex-post equilibrium of the incomplete information game for all players to use the mediator honestly, and that when they do so, they end up playing an approximate Nash equilibrium of the induced complete information game. In particular, truthful use of the mediator is a Bayes-Nash equilibrium in any Bayesian game for any prior.
Keywords: algorithms, differential privacy, mechanism design (ID#: 15-5897)
URL: http://doi.acm.org/10.1145/2600057.2602856
Minzhe Guo, Prabir Bhattacharya. “Diverse Virtual Replicas for Improving Intrusion Tolerance in Cloud.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 41-44. doi:10.1145/2602087.2602116
Abstract: Intrusion tolerance is important for services in the cloud to continue functioning while under attack. Byzantine fault-tolerant replication is considered a fundamental component of intrusion tolerant systems. However, the monoculture of replicas can render the theoretical properties of Byzantine fault-tolerant systems ineffective, even when proactive recovery techniques are employed. This paper exploits the design diversity available from off-the-shelf operating system products and studies how to diversify the configurations of virtual replicas for improving the resilience of the service in the presence of attacks. A game-theoretic model is proposed for studying the optimal diversification strategy for the system defender, and an efficient algorithm is designed to approximate the optimal defense strategies in large games.
Keywords: diversity, intrusion tolerance, virtual replica (ID#: 15-5898)
URL: http://doi.acm.org/10.1145/2602087.2602116
Gilles Barthe, François Dupressoir, Pierre-Alain Fouque, Benjamin Grégoire, Jean-Christophe Zapalowicz. “Synthesis of Fault Attacks on Cryptographic Implementations.” CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1016-1027. doi:10.1145/2660267.2660304
Abstract: Fault attacks are attacks in which an adversary with physical access to a cryptographic device, say a smartcard, tampers with the execution of an algorithm to retrieve secret material. Since the seminal Bellcore attack on modular exponentiation, there has been extensive work to discover new fault attacks against cryptographic schemes and develop countermeasures against such attacks. Originally focused on high-level algorithmic descriptions, these efforts increasingly focus on concrete implementations. While lowering the abstraction level leads to new fault attacks, it also makes their discovery significantly more challenging. In order to face this trend, it is therefore desirable to develop principled, tool-supported approaches that allow a systematic analysis of the security of cryptographic implementations against fault attacks. We propose, implement, and evaluate a new approach for finding fault attacks against cryptographic implementations. Our approach is based on identifying implementation-independent mathematical properties, or fault conditions. We choose fault conditions so that it is possible to recover secret data purely by computing on sufficiently many data points that satisfy them. Fault conditions capture the essence of a large number of attacks from the literature, including lattice-based attacks on RSA. Moreover, they provide a basis for discovering automatically new attacks: using fault conditions, we specify the problem of finding faulted implementations as a program synthesis problem. Using a specialized form of program synthesis, we discover multiple faulted attacks on RSA and ECDSA. Several of the attacks found by our tool are new, and of independent interest.
Keywords: automated proofs, fault attacks, program synthesis, program verification (ID#: 15-5899)
URL: http://doi.acm.org/10.1145/2660267.2660304
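The seminal Bellcore fault attack cited in the abstract can be demonstrated end to end with toy RSA-CRT parameters: if one CRT half of a signature is computed incorrectly, gcd(s'^e - m, N) exposes a secret prime factor. The sketch below uses deliberately small primes and requires Python 3.8+ for pow(x, -1, m); it illustrates the classic attack, not the paper's synthesis tool.

import math

p, q, e = 1009, 1013, 65537
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))
m = 123456 % N

# Correct mod-p half of the CRT signature, faulty mod-q half:
s_p = pow(m, d % (p - 1), p)
s_q = (pow(m, d % (q - 1), q) + 1) % q   # injected fault

# Garner/CRT recombination yields the faulty signature:
s_fault = (s_q + q * (((s_p - s_q) * pow(q, -1, p)) % p)) % N

# s_fault^e matches m mod p but not mod q, so the gcd reveals p:
recovered = math.gcd(pow(s_fault, e, N) - m, N)
assert recovered == p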
Christian Kroer, Tuomas Sandholm. “Extensive-Form Game Abstraction With Bounds.” EC '14 Proceedings of the Fifteenth ACM Conference on Economics and Computation, June 2014, Pages 621-638. doi:10.1145/2600057.2602905
Abstract: Abstraction has emerged as a key component in solving extensive-form games of incomplete information. However, lossless abstractions are typically too large to solve, so lossy abstraction is needed. All prior lossy abstraction algorithms for extensive-form games either 1) had no bounds on solution quality or 2) depended on specific equilibrium computation approaches, limited forms of abstraction, and only decreased the number of information sets rather than nodes in the game tree. We introduce a theoretical framework that can be used to give bounds on solution quality for any perfect-recall extensive-form game. The framework uses a new notion for mapping abstract strategies to the original game, and it leverages a new equilibrium refinement for analysis. Using this framework, we develop the first general lossy extensive-form game abstraction method with bounds. Experiments show that it finds a lossless abstraction when one is available and lossy abstractions when smaller abstractions are desired. While our framework can be used for lossy abstraction, it is also a powerful tool for lossless abstraction if we set the bound to zero. Prior abstraction algorithms typically operate level by level in the game tree. We introduce the extensive-form game tree isomorphism and action subset selection problems, both important problems for computing abstractions on a level-by-level basis. We show that the former is graph isomorphism complete, and the latter NP-complete. We also prove that level-by-level abstraction can be too myopic and thus fail to find even obvious lossless abstractions.
Keywords: abstraction, equilibrium finding, extensive-form game (ID#: 15-5900)
URL: http://doi.acm.org/10.1145/2600057.2602905
Carlos Barreto, Alvaro A. Cárdenas, Nicanor Quijano, Eduardo Mojica-Nava. “CPS: Market Analysis of Attacks Against Demand Response in the Smart Grid.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 136-145. doi:10.1145/2664243.2664284
Abstract: Demand response systems assume an electricity retail-market with strategic electricity consuming agents. The goal in these systems is to design load shaping mechanisms to achieve efficiency of resources and customer satisfaction. Recent research efforts have studied the impact of integrity attacks in simplified versions of the demand response problem, where neither the load consuming agents nor the adversary are strategic. In this paper, we study the impact of integrity attacks considering strategic players (a social planner or a consumer) and a strategic attacker. We identify two types of attackers: (1) a malicious attacker who wants to damage the equipment in the power grid by producing sudden overloads, and (2) a selfish attacker that wants to defraud the system by compromising and then manipulating control (load shaping) signals. We then explore the resiliency of two different demand response systems to these fraudsters and malicious attackers. Our results provide guidelines for system operators deciding which type of demand-response system they want to implement, how to secure them, and directions for detecting these attacks.
Keywords: (not provided) (ID#: 15-5901)
URL: http://doi.acm.org/10.1145/2664243.2664284
Hongxin Hu, Gail-Joon Ahn, Ziming Zhao, Dejun Yang. “Game Theoretic Analysis of Multiparty Access Control in Online Social Networks.” SACMAT '14 Proceedings of the 19th ACM Symposium on Access Control Models and Technologies, June 2014, Pages 93-102. doi:10.1145/2613087.2613097
Abstract: Existing online social networks (OSNs) only allow a single user to restrict access to her/his data but cannot provide any mechanism to enforce privacy concerns over data associated with multiple users. This situation leaves privacy conflicts largely unresolved and leads to the potential disclosure of users' sensitive information. To address such an issue, a MultiParty Access Control (MPAC) model was recently proposed, including a systematic approach to identify and resolve privacy conflicts for collaborative data sharing in OSNs. In this paper, we take another step to further study the problem of analyzing the strategic behavior of rational controllers in multiparty access control, where each controller aims to maximize her/his own benefit by adjusting her/his privacy setting in collaborative data sharing in OSNs. We first formulate this problem as a multiparty control game and show the existence of unique Nash Equilibrium (NE) which is critical because at an NE, no controller has any incentive to change her/his privacy setting. We then present algorithms to compute the NE and prove that the system can converge to the NE in only a few iterations. A numerical analysis is also provided for different scenarios that illustrate the interplay of controllers in the multiparty control game. In addition, we conduct user studies of the multiparty control game to explore the gap between game theoretic approaches and real human behaviors.
Keywords: game theory, multiparty access control, social networks (ID#: 15-5902)
URL: http://doi.acm.org/10.1145/2613087.2613097
Florian Kerschbaum, Axel Schroepfer. “Optimal Average-Complexity Ideal-Security Order-Preserving Encryption.” CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 275-286. doi:10.1145/2660267.2660277
Abstract: Order-preserving encryption enables performing many classes of queries, including range queries, on encrypted databases. Popa et al. recently presented an ideal-secure order-preserving encryption (or encoding) scheme, but their cost of insertions (encryption) is very high. In this paper we present an order-preserving encryption scheme that is also ideal-secure but significantly more efficient. Our scheme is inspired by Reed's work on the average height of random binary search trees. We show that our scheme improves the average communication complexity from O(n log n) to O(n) under uniform distribution. Our scheme also integrates efficiently with adjustable encryption as used in CryptDB. In our experiments for database inserts we achieve a performance increase of up to 81% in LANs and 95% in WANs.
Keywords: adjustable encryption, efficiency, ideal security, in-memory column database, indistinguishability, order-preserving encryption (ID#: 15-5903)
URL: http://doi.acm.org/10.1145/2660267.2660277
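A toy mutable order-preserving encoding illustrates why such schemes support range queries: each new plaintext is assigned the midpoint of the gap between its neighbors' codes, so ciphertext order matches plaintext order. Real schemes, including the one above, handle gap exhaustion by re-encoding; this sketch simply raises an error, and its parameters are invented.

import bisect

class ToyOPE:
    def __init__(self, code_space=2**32):
        self.plain = []   # sorted plaintexts seen so far
        self.codes = []   # their codes, kept in the same order
        self.max_code = code_space

    def encrypt(self, x):
        i = bisect.bisect_left(self.plain, x)
        if i < len(self.plain) and self.plain[i] == x:
            return self.codes[i]                      # deterministic reuse
        lo = self.codes[i - 1] if i > 0 else 0
        hi = self.codes[i] if i < len(self.codes) else self.max_code
        if hi - lo < 2:
            raise RuntimeError("gap exhausted: re-encoding needed")
        code = (lo + hi) // 2                         # midpoint of the free gap
        self.plain.insert(i, x)
        self.codes.insert(i, code)
        return code

ope = ToyOPE()
assert ope.encrypt(5) < ope.encrypt(42)   # ciphertext order mirrors plaintext order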
Rui Zhuang, Scott A. DeLoach, Xinming Ou. “Towards a Theory of Moving Target Defense.” MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 31-40. doi:10.1145/2663474.2663479
Abstract: The static nature of cyber systems gives attackers the advantage of time. Fortunately, a new approach, called the Moving Target Defense (MTD) has emerged as a potential solution to this problem. While promising, there is currently little research to show that MTD systems can work effectively in real systems. In fact, there is no standard definition of what an MTD is, what is meant by attack surface, or metrics to define the effectiveness of such systems. In this paper, we propose an initial theory that will begin to answer some of those questions. The paper defines the key concepts required to formally talk about MTD systems and their basic properties. It also discusses three essential problems of MTD systems, which include the MTD Problem (or how to select the next system configuration), the Adaptation Selection Problem, and the Timing Problem. We then formalize the MTD Entropy Hypothesis, which states that the greater the entropy of the system's configuration, the more effective the MTD system.
Keywords: computer security, moving target defense, network security, science of security (ID#: 15-5904)
URL: http://doi.acm.org/10.1145/2663474.2663479
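The MTD Entropy Hypothesis ties defense effectiveness to the entropy of the configuration distribution. A minimal illustration in Python: Shannon entropy of two hypothetical deployment policies over four configurations, where the uniform policy is maximally unpredictable. The probability values are invented for illustration.

import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # rotate configurations uniformly
skewed  = [0.85, 0.05, 0.05, 0.05]   # mostly one configuration

print(shannon_entropy(uniform))   # 2.0 bits (hardest to predict)
print(shannon_entropy(skewed))    # ~0.85 bits (much easier to predict)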
Rattikorn Hewett, Sudeeptha Rudrapattana, Phongphun Kijsanayothin. “Cyber-Security Analysis of Smart Grid SCADA Systems with Game Models.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 109-112. doi:10.1145/2602087.2602089
Abstract: Smart grid SCADA (Supervisory Control and Data Acquisition) systems are key drivers to monitor, control and manage critical processes for the delivery and transmission of electricity in smart grids. Security attacks on such systems can have devastating effects on the functionality of the smart grid, leading to electrical blackouts, economic losses or even fatalities. This paper presents an analytical game-theoretic approach to analyzing the security of SCADA smart grids by constructing a model of a sequential, nonzero-sum, two-player game between an attacker and a security administrator. The distinction of our work is the proposed development of game payoff formulae. A decision analysis can then be obtained by applying the backward induction technique to the game tree derived from the proposed payoffs. The paper describes the development of the game payoffs and illustrates its analysis on a real-world scenario of Sybil and node-compromise attacks at the sensor level of smart grid SCADA systems.
Keywords: SCADA, SCADA security, game theory, payoffs, sequential games, utility function (ID#: 15-5905)
URL: http://doi.acm.org/10.1145/2602087.2602089
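Backward induction, the solution technique applied to the game tree here, can be shown in a few lines of Python over a two-player sequential game. The payoff numbers are invented; each tuple is (attacker utility, defender utility), and the recursion returns the subgame-perfect outcome.

def backward_induction(node, player=0):
    """node is either a payoff tuple (leaf) or a list of child nodes (decision point)."""
    if isinstance(node, tuple):
        return node
    # Evaluate children with the other player to move, then pick the mover's best.
    outcomes = [backward_induction(child, 1 - player) for child in node]
    return max(outcomes, key=lambda payoff: payoff[player])

# Attacker moves first (chooses a branch), then the defender responds:
game = [                      # attacker's two actions
    [(3, -4), (1, -1)],       #   defender's responses if attacker plays A
    [(2, -2), (0, 0)],        #   defender's responses if attacker plays B
]
print(backward_induction(game))   # -> (1, -1), the subgame-perfect outcome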
Umesh Vazirani, Thomas Vidick. “Robust Device Independent Quantum Key Distribution.” ITCS '14 Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, January 2014, Pages 35-36. doi:10.1145/2554797.2554802
Abstract: Quantum cryptography is based on the discovery that the laws of quantum mechanics allow levels of security that are impossible to replicate in a classical world. Can such levels of security be guaranteed even when the quantum devices on which the protocol relies are untrusted? This fundamental question in quantum cryptography dates back to the early nineties when the challenge of achieving device independent quantum key distribution, or DIQKD, was first formulated [9]. We answer this challenge affirmatively by exhibiting a robust protocol for DIQKD and rigorously proving its security. The protocol achieves a linear key rate while tolerating a constant noise rate in the devices. The security proof assumes only that the devices can be modeled by the laws of quantum mechanics and are spatially isolated from each other and any adversary's laboratory. In particular, we emphasize that the devices may have quantum memory. All previous proofs of security relied either on the use of many independent pairs of devices, or on the absence of noise. To prove security for a DIQKD protocol it is necessary to establish at least that the generated key is truly random even in the presence of a quantum adversary. This is already a challenge, one that was recently resolved. DIQKD is substantially harder, since now the protocol must also guarantee that the key is completely secret from the quantum adversary's point of view, and the entire protocol is robust against noise; this in spite of the substantial amounts of classical information leaked to the adversary throughout the protocol, as part of the error estimation and information reconciliation procedures. Our proof of security builds upon a number of techniques, including randomness extractors that are secure against quantum storage as well as ideas originating in the coding strategy used in the proof of the Holevo-Schumacher-Westmoreland theorem which we apply to bound correlations across multiple rounds in a way not unrelated to information-theoretic proofs of the parallel repetition property for multiplayer games. Our main result can be understood as a new bound on monogamy of entanglement in the type of complex scenario that arises in a key distribution protocol. Precise statements of our results and detailed proofs can be found at arXiv:1210.1810.
Keywords: certified randomness, chsh game, device-independence, monogamy, quantum key distribution (ID#: 15-5906)
URL: http://doi.acm.org/10.1145/2554797.2554802
George Theodorakopoulos, Reza Shokri, Carmela Troncoso, Jean-Pierre Hubaux, Jean-Yves Le Boudec. “Prolonging the Hide-and-Seek Game: Optimal Trajectory Privacy for Location-Based Services.” WPES '14 Proceedings of the 13th Workshop on Privacy in the Electronic Society, November 2014, Pages 73-82. doi:10.1145/2665943.2665946
Abstract: Human mobility is highly predictable. Individuals tend to visit only a few locations with high frequency, and to move among them in a certain sequence reflecting their habits and daily routine. This predictability has to be taken into account in the design of location privacy preserving mechanisms (LPPMs) in order to effectively protect users when they expose their whereabouts to location-based services (LBSs) continuously. In this paper, we describe a method for creating LPPMs tailored to a user's mobility profile, taking into account her privacy and quality of service requirements. By construction, our LPPMs take into account the sequential correlation across the user's exposed locations, providing the maximum possible trajectory privacy, i.e., privacy for the user's past, present, and expected future locations. Moreover, our LPPMs are optimal against a strategic adversary, i.e., an attacker that implements the strongest inference attack knowing both the LPPM operation and the user's mobility profile. The optimality of the LPPMs in the context of trajectory privacy is a novel contribution, and it is achieved by formulating the LPPM design problem as a Bayesian Stackelberg game between the user and the adversary. An additional benefit of our formal approach is that the design parameters of the LPPM are chosen by the optimization algorithm.
Keywords: bayesian stackelberg game, location privacy, location transition privacy, optimal location obfuscation, privacy-utility tradeoff, trajectory privacy (ID#: 15-5907)
URL: http://doi.acm.org/10.1145/2665943.2665946
Martin Chapman, Gareth Tyson, Peter McBurney, Michael Luck, Simon Parsons. “Playing Hide-and-Seek: An Abstract Game for Cyber Security.” ACySE '14 Proceedings of the 1st International Workshop on Agents and CyberSecurity, May 2014, Article No. 3. doi:10.1145/2602945.2602946
Abstract: In order to begin to solve many of the problems in the domain of cyber security, they must first be transformed into abstract representations, free of complexity and paralysing technical detail. We believe that for many classic security problems, a viable transformation is to consider them as an abstract game of hide-and-seek. The tools required in this game -- such as strategic search and an appreciation of an opponent's likely strategies -- are very similar to the tools required in a number of cyber security applications, and thus developments in strategies for this game can certainly benefit the domain. In this paper we consider hide-and-seek as a formal game, and consider in depth how it is allegorical to the cyber domain, particularly in the problems of attack attribution and attack pivoting. Using this as motivation, we consider the relative performance of several hide and seek strategies using an agent-based simulation model, and present our findings as an initial insight into how to proceed with the solution of real cyber issues.
Keywords: agent-based modelling, cyber security, hide-and-seek games, search games (ID#: 15-5908)
URL: http://doi.acm.org/10.1145/2602945.2602946
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Measurement and Metrics: Testing, 2014 |
Measurement and metrics are hard problems in the Science of Security. The research cited here looks at methods and techniques of testing valid measurement. This work was presented in 2014.
Awad, F.; Taqieddin, E.; Mowafi, M.; Banimelhem, O.; AbuQdais, A., "A Simulation Testbed to Jointly Exploit Multiple Image Compression Techniques for Wireless Multimedia Sensor Networks," Wireless Communications Systems (ISWCS), 2014 11th International Symposium on, vol., no., pp. 905, 911, 26-29 Aug. 2014. doi:10.1109/ISWCS.2014.6933482
Abstract: As the demand for large-scale wireless multimedia sensor networks increases, so does the need for well-designed protocols that optimize the utilization of available network resources. This requires experimental testing for realistic performance evaluation and design tuning. However, experimental testing of large-scale wireless networks using hardware testbeds is usually very hard to perform due to the need to collect and monitor the performance metrics data for multiple sensor nodes all at the same time, especially each node's energy consumption data. On the other hand, pure simulation testing may not accurately replicate real-life scenarios, especially those parameters related to wireless signal behavior in special environments. Therefore, this work attempts to close the gap between experimental and simulation testing. This paper presents a scalable simulation testbed that mimics our previously designed small-scale hardware testbed for wireless multimedia sensor networks by tuning the simulation parameters to match the real-life measurements obtained via experimental testing. The proposed simulation testbed embeds the JPEG and JPEG2000 image compression algorithms and potentially allows for network-controlled image compression and transmission decisions. The simulation results show a very close match to the small-scale experimental testing as well as to the hypothetical large-scale extensions that were based on the experimental results.
Keywords: data compression; energy consumption; image coding; multimedia communication; protocols; wireless sensor networks; JPEG; JPEG2000 image compression algorithms; hardware testbeds; hypothetical large-scale extensions; jointly exploit multiple image compression techniques; large-scale wireless multimedia sensor networks; multiple sensor nodes; network resources; network-controlled image compression; node energy consumption data; performance metrics data collection; performance metrics data monitoring; scalable simulation testbed; small-scale hardware testbed; transmission decisions; well-designed protocols; wireless signal behavior; Energy consumption; Hardware; Image coding; Multimedia communication; Routing; Transform coding; Wireless sensor networks; Imote2; JPEG; JPEG2000; Simulation; Testbed; WMSN (ID#: 15-6045)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933482&isnumber=6933305
Kowtko, M.A., "Biometric Authentication for Older Adults," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, vol., no., pp. 1, 6, 2-2 May 2014. doi:10.1109/LISAT.2014.6845213
Abstract: In recent times, cyber-attacks and cyber warfare have threatened network infrastructures across the globe. The world has reacted by increasing security measures through the use of stronger passwords, strict access control lists, and new authentication means; however, while these measures are designed to improve security and Information Assurance (IA), they may create accessibility challenges for older adults and people with disabilities. Studies have shown that the memory performance of older adults declines with age. Therefore, it becomes increasingly difficult for older adults to remember random strings of characters or passwords that are 12 or more characters long. How are older adults challenged by security measures (passwords, CAPTCHA, etc.), and how does this affect their ability to engage in online activities or with mobile platforms? While username/password authentication, CAPTCHA, and security questions do provide adequate protection, they are still vulnerable to cyber-attacks. Passwords can be compromised by brute-force, dictionary, and social-engineering-style attacks. CAPTCHA, a type of challenge-response test, was developed to ensure that user inputs were not manipulated by machine-based attacks. Unfortunately, CAPTCHA is now being defeated by new vulnerabilities and exploits: insecure implementations in code or server interaction have circumvented CAPTCHA, and new viruses and malware utilize character recognition to bypass it [1]. Security questions, another challenge-response test that attempts to authenticate users, can also be compromised through social engineering attacks and spyware. Since these common security measures are increasingly being compromised, many security professionals are turning toward biometric authentication, that is, any form of human biological measurement or metric that can be used to identify and authenticate an authorized user of a secure system. Biometric authentication can include fingerprint, voice, iris, facial, keystroke, and hand geometry [2]. Biometric authentication is also less affected by traditional cyber-attacks. However, is biometric authentication completely secure? This research will examine the security challenges and attacks that may put biometric authentication at risk. Recently, medical professionals in the telehealth industry have begun to investigate the effectiveness of biometrics. In the United States alone, the population of older adults has increased significantly, with nearly 10,000 adults per day reaching the age of 65 or older [3]. Although people are living longer, that does not mean that they are living healthier; studies have shown the U.S. healthcare system is being inundated by older adults. As security within the healthcare industry increases, many believe that biometric authentication is the answer. However, there are potential problems, especially in the older adult population. The largest problem is authentication of older adults with medical complications: cataracts, stroke, congestive heart failure, hard veins, and other ailments may challenge biometric authentication. Since biometrics often rely on metrics and measurements of biological features, any one of these conditions, among others, could affect the verification of users. This research will analyze older adults and the impact of such conditions on biometric authentication and the verification process.
Keywords: authorisation; biometrics (access control); invasive software; medical administrative data processing; mobile computing; CAPTCHA; Cataracts; IA; TeleHealth industry; US healthcare system; access control lists; authentication means; biometric authentication; challenge-response test; congestive heart failure; cyber warfare; cyber-attacks; dictionary; hard veins; healthcare industry; information assurance; machine-based attacks; medical professionals; mobile platforms; network infrastructures; older adults; online activities; security measures; security professionals; social engineering style attacks; spyware; stroke; username-password authentication; Authentication; Barium; CAPTCHAs; Computers; Heart; Iris recognition; Biometric Authentication; CAPTCHA; Cyber-attacks; Information Security; Older Adults; Telehealth (ID#: 15-6046)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845213&isnumber=6845183
Axelrod, C.W., "Reducing Software Assurance Risks for Security-Critical and Safety-Critical Systems," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, vol., no., pp. 1, 6, 2-2 May 2014. doi:10.1109/LISAT.2014.6845212
Abstract: According to the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)), the US Department of Defense (DoD) recognizes that there is a “persistent lack of a consistent approach ... for the certification of software assurance tools, testing and methodologies” [1]. As a result, the ASD(R&E) is seeking “to address vulnerabilities and weaknesses to cyber threats of the software that operates ... routine applications and critical kinetic systems ...” The mitigation of these risks has been recognized as a significant issue to be addressed in both the public and private sectors. In this paper we examine deficiencies in various software-assurance approaches and suggest ways in which they can be improved. We take a broad look at current approaches, identify their inherent weaknesses and propose approaches that serve to reduce risks. Some technical, economic and governance issues are: (1) Development of software-assurance technical standards (2) Management of software-assurance standards (3) Evaluation of tools, techniques, and metrics (4) Determination of update frequency for tools, techniques (5) Focus on most pressing threats to software systems (6) Suggestions as to risk-reducing research areas (7) Establishment of models of the economics of software-assurance solutions, and testing and certifying software. We show that, in order to improve current software assurance policy and practices, particularly with respect to security, there has to be a major overhaul in how software is developed, especially with respect to the requirements and testing phases of the SDLC (Software Development Lifecycle). We also suggest that the current preventative approaches are inadequate and that greater reliance should be placed upon avoidance and deterrence. We also recommend that those developing and operating security-critical and safety-critical systems exchange best-of-breed software assurance methods to prevent the vulnerability of components leading to compromise of entire systems of systems. The recent catastrophic loss of a Malaysia Airlines airplane is then presented as an example of possible compromises of physical and logical security of on-board communications and management and control systems.
Keywords: program testing; safety-critical software; software development management; software metrics; ASD(R&E);Assistant Secretary of Defense for Research and Engineering; Malaysia Airlines airplane; SDLC;US Department of Defense; US DoD; component vulnerability prevention; control systems; critical kinetic systems; cyber threats; economic issues; governance issues; logical security; management systems; on-board communications; physical security; private sectors; public sectors; risk mitigation; safety-critical systems; security-critical systems; software assurance risk reduction; software assurance tool certification; software development; software development lifecycle; software methodologies; software metric evaluation; software requirements; software system threats; software technique evaluation; software testing; software tool evaluation; software-assurance standard management; software-assurance technical standard development; technical issues; update frequency determination; Measurement; Organizations; Security; Software systems; Standards; Testing; cyber threats; cyber-physical systems; governance; risk; safety-critical systems; security-critical systems; software assurance; technical standards; vulnerabilities; weaknesses (ID#: 15-6047)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845212&isnumber=6845183
Yihai Zhu; Jun Yan; Yufei Tang; Sun, Y.L.; Haibo He, "Coordinated Attacks Against Substations and Transmission Lines in Power Grids," Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 655, 661, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7036882
Abstract: Vulnerability analysis on the power grid has been widely conducted from the substation-only and transmission-line-only perspectives. In other words, it is considered that attacks can occur on substations or transmission lines separately. In this paper, we naturally extend the two existing perspectives and introduce the joint-substation-transmission-line perspective, under which attacks can concurrently occur on substations and transmission lines. Vulnerabilities here refer to those multiple-component combinations that can yield large damage to the power grid; one such combination consists of substations, transmission lines, or both. The new perspective promises to uncover more power grid vulnerabilities. In particular, we conduct the vulnerability analysis on the IEEE 39 bus system. Compared with known substation-only/transmission-line-only vulnerabilities, joint-substation-transmission-line vulnerabilities account for the largest percentage. Among three-component vulnerabilities, for instance, joint-substation-transmission-line vulnerabilities account for 76.06%; substation-only and transmission-line-only vulnerabilities account for 10.96% and 12.98%, respectively. In addition, we adopt two existing metrics, degree and load, to study the joint-substation-transmission-line attack strategy. Generally speaking, the joint-substation-transmission-line attack strategy based on the load metric has better attack performance than comparison schemes.
Keywords: power grids; power transmission reliability; substations; IEEE 39 bus system; coordinated attacks; joint-substation-transmission-line perspective; joint-substation-transmission-line vulnerabilities; load metric; multiple-component combinations; power grid vulnerabilities; vulnerability analysis; Benchmark testing; Measurement; Power grids; Power system faults; Power system protection; Power transmission lines; Substations; Attack; Cascading failures; Power grid security; Vulnerability analysis (ID#: 15-6048)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036882&isnumber=7036769
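The degree and load metrics named in this abstract are standard graph centralities, so the ranking step can be illustrated directly. A minimal Python sketch (assuming the networkx library, a toy topology, and betweenness-style load centrality as a stand-in for the authors' exact load definition):

    import networkx as nx

    # Toy grid: nodes are substations, edges are transmission lines.
    G = nx.Graph([("s1", "s2"), ("s2", "s3"), ("s3", "s4"), ("s2", "s4"), ("s4", "s5")])

    degree = dict(G.degree())     # degree metric per substation
    load = nx.load_centrality(G)  # load metric (betweenness-style)

    # Joint candidate set: substations and transmission lines scored together.
    # A line's load is approximated by the edge betweenness of that edge.
    edge_load = nx.edge_betweenness_centrality(G)

    candidates = [(n, load[n]) for n in G.nodes()] + list(edge_load.items())
    for target, score in sorted(candidates, key=lambda t: t[1], reverse=True)[:3]:
        print(target, round(score, 3))

Ranking substations and lines in one pool, as above, is the essence of the joint perspective: the top multiple-component combinations may mix both kinds of targets.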
Duncan, I.; De Muijnck-Hughes, J., "Security Pattern Evaluation," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, vol., no., pp. 428, 429, 7-11 April 2014. doi:10.1109/SOSE.2014.61
Abstract: Current Security Pattern evaluation techniques are demonstrated to be incomplete with respect to quantitative measurement and comparison. A proposal for a dynamic testbed system is presented as a potential mechanism for evaluating patterns within a constrained environment.
Keywords: pattern classification; security of data; dynamic testbed system; security pattern evaluation; Complexity theory; Educational institutions; Measurement; Security; Software; Software reliability; Testing; evaluation; metrics; security patterns; testing (ID#: 15-6049)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830943&isnumber=6825948
Sanchez, A.B.; Segura, S.; Ruiz-Cortes, A., "A Comparison of Test Case Prioritization Criteria for Software Product Lines," Software Testing, Verification and Validation (ICST), 2014 IEEE Seventh International Conference on, vol., no., pp. 41, 50, March 31 2014 - April 4 2014. doi:10.1109/ICST.2014.15
Abstract: Software Product Line (SPL) testing is challenging due to the potentially huge number of derivable products. To alleviate this problem, numerous contributions have been proposed to reduce the number of products to be tested while still having a good coverage. However, not much attention has been paid to the order in which the products are tested. Test case prioritization techniques reorder test cases to meet a certain performance goal. For instance, testers may wish to order their test cases in order to detect faults as soon as possible, which would translate in faster feedback and earlier fault correction. In this paper, we explore the applicability of test case prioritization techniques to SPL testing. We propose five different prioritization criteria based on common metrics of feature models and we compare their effectiveness in increasing the rate of early fault detection, i.e. a measure of how quickly faults are detected. The results show that different orderings of the same SPL suite may lead to significant differences in the rate of early fault detection. They also show that our approach may contribute to accelerate the detection of faults of SPL test suites based on combinatorial testing.
Keywords: fault diagnosis; program testing; SPL test suites; SPL testing; combinatorial testing; fault detection; software product line testing; test case prioritization criteria comparison; test case prioritization techniques; Analytical models; Complexity theory; Fault detection; Feature extraction; Measurement; Security; Testing; Software product lines; automated analysis; feature models. (ID#: 15-6050)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823864&isnumber=6823846
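The "rate of early fault detection" that this abstract measures is conventionally captured by the APFD metric (Average Percentage of Faults Detected); the authors may use a variant, so the following Python sketch is a generic illustration rather than their exact computation:

    def apfd(ordering, faults_detected_by):
        """APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2n), where TFi is the
        1-based position in `ordering` of the first test exposing fault i."""
        n, m = len(ordering), len(faults_detected_by)
        first_positions = []
        for fault_tests in faults_detected_by.values():
            first_positions.append(
                min(ordering.index(t) + 1 for t in fault_tests if t in ordering))
        return 1 - sum(first_positions) / (n * m) + 1 / (2 * n)

    # Three derived products (test cases) and two seeded faults.
    faults = {"f1": {"p3"}, "f2": {"p1", "p3"}}
    print(apfd(["p1", "p2", "p3"], faults))  # f1 found last  -> APFD 0.5
    print(apfd(["p3", "p1", "p2"], faults))  # both found first -> APFD ~0.83

The two calls show exactly the paper's point: the same suite in a different order yields a very different rate of early fault detection.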
Zabasta, A.; Casaliccio, E.; Kunicina, N.; Ribickis, L., "A Numerical Model for Evaluation Power Outages Impact on Water Infrastructure Services Sustainability," Power Electronics and Applications (EPE'14-ECCE Europe), 2014 16th European Conference on, vol., no., pp. 1, 10, 26-28 Aug. 2014. doi:10.1109/EPE.2014.6910703
Abstract: The security, stability, and reliability of critical infrastructures (CI) (electricity, heat, water, and information and communication technology networks) are closely related to the phenomenon of interaction. As the amount of data transferred grows, dependence on telecommunications and internet services increases, and data integrity and security become very important for utility service providers and energy suppliers. In such circumstances, there is an increasing need for methods and tools that enable infrastructure managers to evaluate and predict their critical infrastructure operations as failures, emergencies, or service degradation occur in other related infrastructures. Using a simulation model, a method is experimentally tested that explores how the average downtime of water supply network nodes depends on battery lifetime and battery replacement time cross-correlations, within the parameter set, when outages arise in the power infrastructure, also taking into account the impact of telecommunication nodes. The model studies the real case of the Latvian city of Ventspils. The proposed approach to the analysis of critical infrastructure interdependencies will be useful for the practical adoption of methods, models, and metrics by CI operators and stakeholders.
Keywords: critical infrastructures; polynomial approximation; power system reliability; power system security; power system stability; water supply; CI operators; average down time dependence; battery life time; battery replacement time cross-correlations; critical infrastructure operations; critical infrastructure security; critical infrastructures interdependencies; data integrity; data security; energy suppliers; infrastructure managers; interaction phenomenon; internet services; power infrastructure outages; stakeholders; telecommunication nodes; utility services providers; water supply network nodes; Analytical models; Batteries; Mathematical model; Measurement; Power supplies; Telecommunications; Unified modeling language; Estimation technique; Fault tolerance; Modelling; Simulation (ID#: 15-6051)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6910703&isnumber=6910682
Hemanidhi, A.; Chimmanee, S.; Sanguansat, P., "Network Risk Evaluation from Security Metric of Vulnerability Detection Tools," TENCON 2014 - 2014 IEEE Region 10 Conference, vol., no., pp. 1, 6, 22-25 Oct. 2014. doi:10.1109/TENCON.2014.7022358
Abstract: Network security is always a major concern in any organization. To ensure that the organization network is well protected from attackers, vulnerability assessment and penetration testing are implemented regularly. However, auditing and analyzing these testing results is a highly time-consuming procedure that depends on the administrator's expertise. Thus, security professionals prefer proactive, automatic vulnerability detection tools to identify vulnerabilities before they are exploited by an adversary. Although these vulnerability detection tools have proven very useful for security professionals, allowing much faster and more accurate audit and analysis, they have some important weaknesses as well. They only identify surface vulnerabilities and are unable to address the overall risk level of the scanned network. Also, they often use different standards for network risk level classification, habitually tied to particular organizations or vendors. Thus, these vulnerability detection tools are likely, more or less, to produce biased risk evaluations. This article presents a generic idea of a “Network Risk Metric” as an unbiased risk evaluation drawing on several vulnerability detection tools. In this paper, NetClarity (hardware-based), Nessus (software-based), and Retina (software-based) are implemented on two networks from an IT department of the Royal Thai Army (RTA). The proposed metric is applied to evaluate the overall network risk from these three vulnerability detection tools. The result is a more accurate risk evaluation for each network.
Keywords: business data processing; computer crime; computer network performance evaluation; computer network security; IT department; Nessus; NetClarity; RTA; Retina; Royal Thai Army; attackers; hardware-based; network risk evaluation; network risk level classification; network risk metric; network security; organization network; proactive-automatic vulnerability detection tools; security metric; security professionals; software-based; unbiased risk evaluation; vulnerabilities identification; vulnerability assessment; vulnerability penetration testing; Equations; Measurement; Retina; Security; Servers; Software; Standards organizations; Network Security; Risk Evaluation; Security Metrics; Vulnerability Detection (ID#: 15-6052)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7022358&isnumber=7021863
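The abstract does not reproduce the metric itself, but the general idea, putting each tool's per-host risk scores on a common scale before combining them into one figure, can be sketched as follows (the score ranges, tool names as keys, and equal weighting are all hypothetical):

    # Hypothetical native scales for each scanner's per-host risk score.
    TOOL_SCALE = {"netclarity": 10.0, "nessus": 10.0, "retina": 5.0}

    def network_risk(findings):
        """findings: {tool: [per-host risk scores on that tool's native scale]}.
        Normalize each tool to [0, 1], then average across hosts and tools."""
        per_tool = []
        for tool, scores in findings.items():
            scale = TOOL_SCALE[tool]
            per_tool.append(sum(s / scale for s in scores) / len(scores))
        return sum(per_tool) / len(per_tool)

    print(network_risk({"netclarity": [7.1, 4.0],
                        "nessus":     [9.3, 2.2],
                        "retina":     [3.5, 1.0]}))  # one figure in [0, 1]

Normalizing before averaging is what removes the per-vendor bias the article criticizes: no single tool's scale dominates the combined number.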
Shittu, R.; Healing, A.; Ghanea-Hercock, R.; Bloomfield, R.; Muttukrishnan, R., "OutMet: A New Metric for Prioritising Intrusion Alerts Using Correlation and Outlier Analysis," Local Computer Networks (LCN), 2014 IEEE 39th Conference on, vol., no., pp. 322, 330, 8-11 Sept. 2014. doi:10.1109/LCN.2014.6925787
Abstract: In a medium-sized network, an Intrusion Detection System (IDS) can produce thousands of alerts a day, many of which may be false positives. Among the vast number of triggered intrusion alerts, identifying those to prioritise is highly challenging. Alert correlation and prioritisation are both viable analytical methods commonly used to understand and prioritise alerts. However, to the authors' knowledge, very few dynamic prioritisation metrics exist. In this paper, a new prioritisation metric, OutMet, is proposed, based on measuring the degree to which an alert belongs to anomalous behaviour. OutMet combines alert correlation and prioritisation analysis. We illustrate the effectiveness of OutMet by testing its ability to prioritise alerts generated from a 2012 red-team cyber-range experiment that was carried out as part of the BT Saturn programme. In one of the scenarios, OutMet significantly reduced the false positives by 99.3%.
Keywords: computer network security; correlation methods; graph theory; BT Saturn programme; IDS; OutMet; alert correlation and prioritisation analysis; correlation analysis; dynamic prioritisation metrics; intrusion alerts; intrusion detection system; medium sized network; outlier analysis; red-team cyber-range experiment; Cities and towns; Complexity theory; Context; Correlation; Educational institutions; IP networks; Measurement; Alert Correlation; Attack Scenario; Graph Mining; IDS Logs; Intrusion Alert Analysis; Intrusion Detection; Pattern Detection (ID#: 15-6053)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6925787&isnumber=6925725
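OutMet's exact formula is not given in the abstract. As a generic illustration of prioritising alerts by their degree of outlierness, the sketch below scores a per-alert feature (here, the size of the correlated alert cluster it belongs to) with a robust z-score based on the median absolute deviation:

    from statistics import median

    def outlier_scores(values):
        """Robust z-scores: |x - median| / (1.4826 * MAD)."""
        med = median(values)
        mad = median(abs(v - med) for v in values) or 1e-9  # avoid div-by-zero
        return [abs(v - med) / (1.4826 * mad) for v in values]

    # One feature per alert, e.g. size of its correlated alert cluster.
    cluster_sizes = [3, 4, 3, 5, 4, 120, 3]
    for alert_id, score in enumerate(outlier_scores(cluster_sizes)):
        if score > 3.5:  # common cut-off for "anomalous"
            print(f"prioritise alert {alert_id} (score {score:.1f})")

Only alert 5 crosses the cut-off here; the rest, like the bulk of IDS output, would be deprioritised.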
Gaurav, C.; Chandramouleeswaran, D.; Khanam, R., "Progressive Testbed Application for Performance Analysis in Real Time Ad Hoc Networks Using SAP HANA," Advances in Computing and Communications (ICACC), 2014 Fourth International Conference on, vol., no., pp. 171, 174, 27-29 Aug. 2014. doi:10.1109/ICACC.2014.48
Abstract: This paper proposes and subsequently delineates the quantification of network security metrics using a software-defined networking approach in real time on a progressive testbed. This comprehensive testbed implements the computation of trust values, which lend sentient decision-making qualities to the participant nodes in a network and fortify it against threats like blackhole and flooding attacks. AODV and OLSR protocols were tested in real time under ideal and malicious environments using the testbed as the controlling point. With emphasis on reliability, interpretation of voluminous data, and monitoring attacks immediately with negligible time lag, the paper concludes by justifying the use of SAP HANA and UI5 for the testbed.
Keywords: ad hoc networks; routing protocols; telecommunication security; AODV protocol; OLSR protocol; SAP HANA; network security metrics; progressive testbed; real time ad hoc networks; sentient decision making; software defined networking; trust values; Ad hoc networks; Equations; Mathematical model; Measurement; Protocols; Routing; Security; Ad-Hoc Network; HANA- High Performance Analytic Appliance; Performance Analysis; Security Metrics; Trust Model; UI5 SAP User Interface Technology (ID#: 15-6054)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906017&isnumber=6905967
Renchi Yan; Teng Xu; Potkonjak, M., "Semantic Attacks on Wireless Medical Devices," SENSORS, 2014 IEEE, vol., no., pp. 482, 485, 2-5 Nov. 2014. doi:10.1109/ICSENS.2014.6985040
Abstract: Security of medical embedded systems is of vital importance. Wireless medical devices used in wireless health applications employ large numbers of sensors and are particularly susceptible to security attacks. They are often not physically secured and are usually used in hostile environments. We have developed a theoretical and statistical framework for creating semantic attacks in which data is altered in such a way that the consequences include incorrect medical diagnosis and treatment. Our approach maps a semantic attack to an instance of an optimization problem in which medical damage is maximized under constraints on the probability of detection and root-cause tracing. We use a popular medical shoe to demonstrate that the low energy and low cost of embedded medical devices increase the probability of successful attacks. We propose two types of semantic attacks, a pressure-based attack and a time-based attack, under two scenarios: a shoe with 99 pressure sensors and a shoe with 20 pressure sensors. We test the effects of the attacks and compare them. Our results indicate that it is surprisingly easy to attack several essential medical metrics and to alter the corresponding medical diagnosis.
Keywords: biomedical communication; data communication; intelligent sensors; optimisation; pressure sensors; security of data; wireless sensor networks; detection probability; low cost embedded medical devices; low energy embedded medical devices; medical embedded system security; medical shoe; optimization problem; pressure based attack; pressure sensors; root cause tracing; semantic attacks; sensor security attacks; time based attack; wireless health applications; wireless medical devices; Measurement; Medical diagnostic imaging; Medical services; Security; Semantics; Sensors; Wireless sensor networks (ID#: 15-6055)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6985040&isnumber=6984913
Riecker, M.; Thies, D.; Hollick, M., "Measuring the Impact of Denial-of-Service Attacks on Wireless Sensor Networks," Local Computer Networks (LCN), 2014 IEEE 39th Conference on, vol., no., pp. 296, 304, 8-11 Sept. 2014. doi:10.1109/LCN.2014.6925784
Abstract: Wireless sensor networks (WSNs) are especially susceptible to denial-of-service attacks due to the resource-constrained nature of motes. We follow a systematic approach to analyze the impacts of these attacks on the network behavior; therefore, we first identify a large number of metrics easily obtained and calculated without incurring too much overhead. Next, we statistically test these metrics to assess whether they exhibit significantly different values under attack when compared to those of the baseline operation. The metrics look into different aspects of the motes and the network, for example, MCU and radio activities, network traffic statistics, and routing related information. Then, to show the applicability of the metrics to different WSNs, we vary several parameters, such as traffic intensity and transmission power. We consider the most common topologies in wireless sensor networks such as central data collection and meshed multi-hop networks by using the collection tree and the mesh protocol. Finally, the metrics are grouped according to their capability of distinction into different classes. In this work, we focus on jamming and blackhole attacks. Our experiments reveal that certain metrics are able to detect a jamming attack on all motes in the testbed, irrespective of the parameter combination, and at the highest significance value. To illustrate these facts, we use a standard testbed consisting of the widely-employed TelosB motes.
Keywords: jamming; telecommunication network routing; telecommunication network topology; telecommunication security; wireless sensor networks; TelosB motes; blackhole attack; central data collection; collection tree; denial-of-service attack; jamming attack; mesh protocol; meshed multihop network; network behavior; network topology; network traffic statistics; routing related information; wireless sensor networks; Computer crime; Jamming; Measurement; Protocols; Routing; Topology; Wireless sensor networks; Denial-of-Service; Measurements; Wireless Sensor Networks (ID#: 15-6056)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6925784&isnumber=6925725
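The statistical step described here, testing whether a metric's values under attack differ significantly from baseline operation, can be sketched with a non-parametric test (SciPy's Mann-Whitney U; the abstract does not name the authors' exact test, and the sample values below are invented):

    from scipy.stats import mannwhitneyu

    # Per-interval radio duty-cycle samples (hypothetical numbers).
    baseline = [0.12, 0.11, 0.13, 0.12, 0.10, 0.12, 0.11]
    under_attack = [0.31, 0.29, 0.35, 0.33, 0.30, 0.32, 0.34]  # jamming raises it

    stat, p = mannwhitneyu(baseline, under_attack, alternative="two-sided")
    if p < 0.01:
        print(f"metric distinguishes attack from baseline (p = {p:.2g})")

A non-parametric test is a natural fit because resource-constrained motes produce small samples with no guarantee of normality.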
Kundi, M.; Chitchyan, R., "Position on Metrics for Security in Requirements Engineering," Requirements Engineering and Testing (RET), 2014 IEEE 1st International Workshop on, vol., no., pp. 29, 31, 26-26 Aug. 2014. doi:10.1109/RET.2014.6908676
Abstract: A number of well-established software quality metrics are in use in code testing. It is our position that, for many code-testing metrics for security, equivalent requirements-level metrics should be defined. Such requirements-level security metrics should be used to evaluate the quality of software security early on, in order to ensure that the resultant software system possesses the required security characteristics and quality.
Keywords: formal specification; program testing; security of data; software metrics; software quality; code-testing metrics; requirements engineering; requirements-level security metrics; software quality metrics; software security; Conferences; Security; Software measurement; Software systems; Testing (ID#: 15-6057)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6908676&isnumber=6908666
Rostami, M.; Wendt, J.B.; Potkonjak, M.; Koushanfar, F., "Quo Vadis, PUF?: Trends and Challenges of Emerging Physical-Disorder Based Security," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, vol., no., pp. 1, 6, 24-28 March 2014. doi:10.7873/DATE.2014.365
Abstract: The physical unclonable function (PUF) has emerged as a popular and widely studied security primitive based on the randomness of the underlying physical medium. To date, most of the research emphasis has been placed on finding new ways to measure randomness, hardware realization and analysis of a few initially proposed structures, and conventional secret-key based protocols. In this work, we present our subjective analysis of the emerging and future trends in this area that aim to change the scope, widen the application domain, and make a lasting impact. We emphasize the development of new PUF-based primitives and paradigms, robust protocols, public-key protocols, digital PUFs, new technologies, implementations, metrics and tests for evaluation/validation, as well as relevant attacks and countermeasures.
Keywords: cryptographic protocols; public key cryptography; PUF-based paradigms; PUF-based primitives; Quo Vadis; application domain; digital PUF; hardware realization; physical medium randomness measurement; physical unclonable function; physical-disorder-based security; public-key protocol; secret-key based protocols; security primitive; structure analysis; subjective analysis; Aging; Correlation; Hardware; NIST; Protocols; Public key (ID#: 15-6058)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800566&isnumber=6800201
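For readers new to the area, the most studied PUF structure, the arbiter PUF, is usually described by the additive delay model: the response is the sign of a weighted sum over a parity transform of the challenge. A minimal simulation of that textbook model (not any specific construction from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 64                      # challenge length
    w = rng.normal(size=n + 1)  # stage delay differences (device-specific)

    def parity_features(challenge):
        """Standard transform: phi_i = prod_{j>=i} (1 - 2*c_j), plus a bias term."""
        phi = np.cumprod((1 - 2 * challenge)[::-1])[::-1]
        return np.append(phi, 1.0)

    def puf_response(challenge, noise=0.0):
        delay = w @ parity_features(challenge) + rng.normal(scale=noise)
        return int(delay > 0)

    c = rng.integers(0, 2, size=n)
    # Reliability check: repeated evaluation under measurement noise.
    responses = [puf_response(c, noise=0.1) for _ in range(100)]
    ones = sum(responses)
    print("response:", responses[0], "reliability:", max(ones, 100 - ones) / 100)

The reliability figure at the end is exactly the kind of evaluation metric the abstract calls for: a useful PUF must answer the same challenge consistently despite noise.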
Singh, P.; Shivani, S.; Agarwal, S., "A Chaotic Map Based DCT-SVD Watermarking Scheme for Rightful Ownership Verification," Engineering and Systems (SCES), 2014 Students Conference on, vol., no., pp. 1, 4, 28-30 May 2014. doi:10.1109/SCES.2014.6880048
Abstract: A chaotic-map-based hybrid watermarking scheme incorporating the concepts of the Discrete Cosine Transform (DCT) and exploiting the stability of singular values is proposed here. Homogeneity analysis of the cover image is performed to identify appropriate sites for embedding, and thereafter a reference image is obtained from it. The singular values of the reference image are modified to embed the secret information. The chaotic-map-based scrambling enhances the security of the algorithm, as only the rightful owner possessing the secret key can retrieve the actual image. A comprehensive set of attacks has been applied and robustness tested with the Normalized Cross Correlation (NCC) and Peak Signal to Noise Ratio (PSNR) metric values. High values of these metrics signify the appropriateness of the proposed methodology.
Keywords: chaos; discrete cosine transforms; image retrieval; image watermarking; singular value decomposition; NCC; PSNR metric values; chaotic map based DCT-SVD hybrid watermarking scheme; chaotic map based scrambling; cover image; discrete cosine transform; homogeneity analysis; image retrieval; normalized cross correlation; peak signal to noise ratio metric values; reference image; rightful ownership verification; secret information; singular value decomposition; Discrete cosine transforms; Image coding; Measurement; PSNR; Robustness; Transform coding; Watermarking; Chaotic Map; Discrete cosine transformation (DCT); Homogeneity Analysis; Normalized Cross Correlation (NCC); Peak Signal to Noise Ratio (PSNR); Reference Image; Singular values (ID#: 15-6059)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880048&isnumber=6880039
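Both robustness metrics named in the abstract are standard and easy to state; a short numpy sketch of each, for 8-bit grayscale images:

    import numpy as np

    def psnr(original, attacked, peak=255.0):
        mse = np.mean((original.astype(float) - attacked.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

    def ncc(w_original, w_extracted):
        """Normalized cross-correlation between embedded and extracted watermarks."""
        a = w_original.astype(float).ravel()
        b = w_extracted.astype(float).ravel()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    img = np.random.randint(0, 256, (64, 64))
    noisy = np.clip(img + np.random.normal(0, 5, img.shape), 0, 255)
    print(f"PSNR: {psnr(img, noisy):.1f} dB")  # high PSNR -> imperceptible change

In watermarking evaluations the two pull in opposite directions: PSNR measures how little the embedding distorts the cover image, while NCC measures how much of the watermark survives an attack.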
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Polymorphic Worms, 2014 |
Polymorphic worms pose a serious threat to Internet security with their ability to rapidly propagate, exploit unknown vulnerabilities, and change their own representations on each new infection or encrypt their payloads using a different key per infection. Because the same worm can present many signature variations, fingerprinting it is very difficult, and signature-based defenses and traditional security layers miss these stealthy and persistent threats. The research presented here identifies alternative methods for identifying and responding to these worms. All citations are from 2014.
Ali Zand, Giovanni Vigna, Xifeng Yan, Christopher Kruegel. “Extracting Probable Command and Control Signatures for Detecting Botnets.” SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March, 2014, Pages 1657-1662. doi:10.1145/2554850.2554896
Abstract: Botnets, which are networks of compromised machines under the control of a single malicious entity, are a serious threat to online security. The fact that botnets, by definition, receive their commands from a single entity can be leveraged to fight them. To this end, one requires techniques that can detect command and control (C&C) traffic, as well as the servers that host C&C services. Given the knowledge of a C&C server's IP address, one can use this information to detect all hosts that attempt to contact such a server, and subsequently disinfect, disable, or block the infected machines. This information can also be used by law enforcement to take down the C&C server. In this paper, we present a new botnet C&C signature extraction approach that can be used to find C&C communication in traffic generated by executing malware samples in a dynamic analysis system. This approach works in two steps. First, we extract all frequent strings seen in the network traffic. Second, we use a function that assigns a score to each string. This score represents the likelihood that the string is indicative of C&C traffic. This function allows us to rank strings and focus our attention on those that likely represent good C&C signatures. We apply our technique to almost 2.6 million network connections produced by running more than 1.4 million malware samples. Using our technique, we were able to automatically extract a set of signatures that are able to identify C&C traffic. Furthermore, we compared our signatures with those used by existing tools, such as Snort and BotHunter.
Keywords: (not provided) (ID#: 15-5967)
URL: http://doi.acm.org/10.1145/2554850.2554896
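The two-step idea in this abstract (harvest frequent strings from network traffic, then score each by how strongly it indicates C&C) can be caricatured in a few lines of Python. The tokenisation and the likelihood-ratio-style score below are simplifications, not the authors' actual functions:

    import re
    from collections import Counter

    def frequent_strings(flows, min_count=2):
        counts = Counter(tok for flow in flows
                         for tok in set(re.findall(r"[/\w.]+", flow)))
        return {tok for tok, c in counts.items() if c >= min_count}

    def score(token, malware_flows, benign_flows):
        """Higher when the token is common in malware traffic but rare in benign."""
        m = sum(token in f for f in malware_flows) / len(malware_flows)
        b = sum(token in f for f in benign_flows) / len(benign_flows)
        return m / (b + 0.01)

    malware = ["GET /gate.php?id=42", "GET /gate.php?id=97", "POST /upload"]
    benign = ["GET /index.html", "GET /style.css", "POST /upload"]
    for tok in sorted(frequent_strings(malware),
                      key=lambda t: -score(t, malware, benign)):
        print(f"{score(tok, malware, benign):6.1f}  {tok}")

Even on this toy data the ranking behaves as intended: "/gate.php" scores far above "GET", which is frequent everywhere and therefore useless as a signature.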
Shahid Alam, Issa Traore, Ibrahim Sogukpinar. “Current Trends and the Future of Metamorphic Malware Detection.” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 411. doi:10.1145/2659651.2659670
Abstract: Dynamic binary obfuscation or metamorphism is a technique where a malware never keeps the same sequence of opcodes in the memory. This stealthy mutation technique helps a malware evade detection by today's signature-based anti-malware programs. This paper analyzes the current trends, provides future directions and reasons about some of the basic characteristics of a system for providing real-time detection of metamorphic malware. Our emphasis is on the most recent advancements and the potentials available in metamorphic malware detection, so we only cover some of the major academic research efforts carried out, including and after, the year 2006. The paper not only serves as a collection of recent references and information for easy comparison and analysis, but also as a motivation for improving the current and developing new techniques for metamorphic malware detection.
Keywords: End point security, Malware detection, Metamorphic malware, Obfuscations (ID#: 15-5968)
URL: http://doi.acm.org/10.1145/2659651.2659670
Hongyu Gao, Yi Yang, Kai Bu, Yan Chen, Doug Downey, Kathy Lee, Alok Choudhary. “Spam ain't as Diverse as It Seems: Throttling OSN Spam with Templates Underneath.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 76-85. doi:10.1145/2664243.2664251
Abstract: In online social networks (OSNs), spam originating from friends and acquaintances not only reduces the joy of Internet surfing but also causes damage to less security-savvy users. Prior countermeasures combat OSN spam from different angles. Due to the diversity of spam, there is hardly any existing method that can independently detect the majority or most of OSN spam. In this paper, we empirically analyze the textual pattern of a large collection of OSN spam. An inspiring finding is that the majority (63.0%) of the collected spam is generated with underlying templates. We therefore propose extracting templates of spam detected by existing methods and then matching messages against the templates toward accurate and fast spam detection. We implement this insight through Tangram, an OSN spam filtering system that performs online inspection on the stream of user-generated messages. Tangram automatically divides OSN spam into segments and uses the segments to construct templates to filter future spam. Experimental results show that Tangram is highly accurate and can rapidly generate templates to throttle newly emerged campaigns. Specifically, Tangram detects the most prevalent template-based spam with 95.7% true positive rate, whereas the existing template generation approach detects only 32.3%. The integration of Tangram and its auxiliary spam filter achieves an overall accuracy of 85.4% true positive rate and 0.33% false positive rate.
Keywords: online social networks, spam, spam campaigns (ID#: 15-5969)
URL: http://doi.acm.org/10.1145/2664243.2664251
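The template intuition can be illustrated with the standard library: given a handful of messages from one suspected campaign, the invariant segments survive pairwise matching. Tangram's actual segmentation is considerably more sophisticated than this difflib-based toy:

    from difflib import SequenceMatcher
    from functools import reduce

    def common_segments(a, b, min_len=5):
        sm = SequenceMatcher(None, a, b)
        return [a[m.a:m.a + m.size] for m in sm.get_matching_blocks()
                if m.size >= min_len]

    def template(messages):
        """Segments shared by every message in the (suspected) campaign."""
        return reduce(lambda segs, msg: [s for s in segs if s in msg],
                      messages[1:], common_segments(messages[0], messages[1]))

    campaign = [
        "omg check out this amazing deal at http://evil.example/a1",
        "hey! check out this amazing deal at http://evil.example/zz9",
    ]
    print(template(campaign))  # invariant template segments

Future messages containing the recovered segments can then be filtered immediately, which is how a template throttles a newly emerged campaign.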
Blake Anderson, Curtis Storlie, Micah Yates, Aaron McPhall. “Automating Reverse Engineering with Machine Learning Techniques.” AISec '14 Proceedings of the 2014 Workshop on Artificial Intelligent and Security Workshop, November 2014, Pages 103-112. doi:10.1145/2666652.2666665
Abstract: Malware continues to be an ongoing threat, with millions of unique variants created every year. Unlike the majority of this malware, Advanced Persistent Threat (APT) malware is created to target a specific network or set of networks and has a precise objective, e.g. exfiltrating sensitive data. While 0-day malware detectors are a good start, they do not help the reverse engineers better understand the threats attacking their networks. Understanding the behavior of malware is often a time sensitive task, and can take anywhere between several hours to several weeks. Our goal is to automate the task of identifying the general function of the subroutines in the function call graph of the program to aid the reverse engineers. Two approaches to model the subroutine labels are investigated, a multiclass Gaussian process and a multiclass support vector machine. The output of these methods is the probability that the subroutine belongs to a certain class of functionality (e.g., file I/O, exploit, etc.). Promising initial results, illustrating the efficacy of this method, are presented on a sample of 201 subroutines taken from two malicious families.
Keywords: computer security, gaussian processes, machine learning, malware, multiple kernel learning, support vector machines (ID#: 15-5970)
URL: http://doi.acm.org/10.1145/2666652.2666665
Yiming Jing, Ziming Zhao, Gail-Joon Ahn, Hongxin Hu. “Morpheus: Automatically Generating Heuristics to Detect Android Emulators.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 216-225. doi:10.1145/2664243.2664250
Abstract: Emulator-based dynamic analysis has been widely deployed in Android application stores. While it has been proven effective in vetting applications on a large scale, it can be detected and evaded by recent Android malware strains that carry detection heuristics. Using such heuristics, an application can check the presence or contents of certain artifacts and infer the presence of emulators. However, there exists little work that systematically discovers those heuristics that would be eventually helpful to prevent malicious applications from bypassing emulator-based analysis. To cope with this challenge, we propose a framework called Morpheus that automatically generates such heuristics. Morpheus leverages our insight that an effective detection heuristic must exploit discrepancies observable by an application. To this end, Morpheus analyzes the application sandbox and retrieves observable artifacts from both Android emulators and real devices. Afterwards, Morpheus further analyzes the retrieved artifacts to extract and rank detection heuristics. The evaluation of our proof-of-concept implementation of Morpheus reveals more than 10,000 novel detection heuristics that can be utilized to detect existing emulator-based malware analysis tools. We also discuss the discrepancies in Android emulators and potential countermeasures.
Keywords: Android, emulator, malware (ID#: 15-5971)
URL: http://doi.acm.org/10.1145/2664243.2664250
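Two of the best-known artifacts that such heuristics probe are sketched below, queried over adb from the host side. These are hand-picked classic checks of the kind Morpheus discovers automatically, not output from the tool itself:

    import subprocess

    def getprop(name):
        out = subprocess.run(["adb", "shell", "getprop", name],
                             capture_output=True, text=True)
        return out.stdout.strip()

    # Classic emulator tells: QEMU flag set, or a "generic" build fingerprint.
    checks = {
        "ro.kernel.qemu": lambda v: v == "1",
        "ro.build.fingerprint": lambda v: "generic" in v,
    }
    for prop, is_suspicious in checks.items():
        value = getprop(prop)
        verdict = "emulator?" if is_suspicious(value) else "ok"
        print(f"{prop} = {value!r} -> {verdict}")

The paper's contribution is precisely that such checks need not be hand-picked: comparing artifacts dumped from emulators and real devices surfaces thousands of discrepancies like these.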
Jannik Pewny, Felix Schuster, Lukas Bernhard, Thorsten Holz, Christian Rossow. “Leveraging Semantic Signatures for Bug Search in Binary Programs.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 406-415. doi:10.1145/2664243.2664269
Abstract: Software vulnerabilities still constitute a high security risk and there is an ongoing race to patch known bugs. However, especially in closed-source software, there is no straightforward way (in contrast to source code analysis) to find buggy code parts, even if the bug was publicly disclosed. To tackle this problem, we propose a method called Tree Edit Distance Based Equational Matching (TEDEM) to automatically identify binary code regions that are "similar" to code regions containing a reference bug. We aim to find bugs both in the same binary as the reference bug and in completely unrelated binaries (even compiled for different operating systems). Our method even works on proprietary software systems, which lack source code and symbols. The analysis task is split into two phases. In a preprocessing phase, we condense the semantics of a given binary executable by symbolic simplification to make our approach robust against syntactic changes across different binaries. Second, we use tree edit distances as a basic block-centric metric for code similarity. This allows us to find instances of the same bug in different binaries and even spotting its variants (a concept called vulnerability extrapolation). To demonstrate the practical feasibility of the proposed method, we implemented a prototype of TEDEM that can find real-world security bugs across binaries and even across OS boundaries, such as in MS Word and the popular messengers Pidgin (Linux) and Adium (Mac OS).
Keywords: (not provided) (ID#: 15-5972)
URL: http://doi.acm.org/10.1145/2664243.2664269
Yaniv David, Eran Yahav. “Tracelet-Based Code Search in Executables.” PLDI '14 Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2014, Pages 349-360. doi:10.1145/2594291.2594343
Abstract: We address the problem of code search in executables. Given a function in binary form and a large code base, our goal is to statically find similar functions in the code base. Towards this end, we present a novel technique for computing similarity between functions. Our notion of similarity is based on decomposition of functions into tracelets: continuous, short, partial traces of an execution. To establish tracelet similarity in the face of low-level compiler transformations, we employ a simple rewriting engine. This engine uses constraint solving over alignment constraints and data dependencies to match registers and memory addresses between tracelets, bridging the gap between tracelets that are otherwise similar. We have implemented our approach and applied it to find matches in over a million binary functions. We compare tracelet matching to approaches based on n-grams and graphlets and show that tracelet matching obtains dramatically better precision and recall.
Keywords: static binary analysis, x86, x86-64 (ID#: 15-5973)
URL: http://doi.acm.org/10.1145/2594291.2594343
Yinzhi Cao, Xiang Pan, Yan Chen, Jianwei Zhuge. “JShield: Towards Real-Time and Vulnerability-Based Detection of Polluted Drive-By Download Attacks.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 466-475. doi:10.1145/2664243.2664256
Abstract: Drive-by download attacks, which exploit vulnerabilities of web browsers to control client computers, have become a major venue for attackers. To detect such attacks, researchers have proposed many approaches such as anomaly-based [22, 23] and vulnerability-based [44, 50] detections. However, anomaly-based approaches are vulnerable to data pollution, and existing vulnerability-based approaches cannot accurately describe the vulnerability condition of all the drive-by download attacks. In this paper, we propose a vulnerability-based approach, namely JShield, which uses a novel opcode vulnerability signature, a deterministic finite automaton (DFA) with a variable pool at the opcode level, to match drive-by download vulnerabilities. We investigate all the JavaScript engine vulnerabilities of web browsers from 2009 to 2014, as well as those of portable document format (PDF) readers from 2007 to 2014. JShield is able to match all of those vulnerabilities; furthermore, the overall evaluation shows that JShield is so lightweight that it adds only 2.39 percent overhead to original execution, measured as the median over the top 500 Alexa web sites.
Keywords: (not provided) (ID#: 15-5974)
URL: http://doi.acm.org/10.1145/2664243.2664256
Smita Naval, Vijay Laxmi, Neha Gupta, Manoj Singh Gaur, Muttukrishnan Rajarajan. “Exploring Worm Behaviors using DTW.” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 379. doi:10.1145/2659651.2659737
Abstract: Worms are becoming a potential threat to Internet users across the globe. The financial damage due to computer worms has increased significantly in the past few years. Analyzing these hazardous worm attacks has become a crucial issue to be addressed. Given that worm analysts would prefer to analyze classes of worms rather than individual files, grouping worms significantly reduces their task. In this paper, we propose a dynamic host-based worm categorization approach to segregate worms. The resulting groups indicate that worm samples exhibit different behaviors according to their infection and anti-detection vectors. Our proposed approach utilizes system-call traces and computes a distance matrix using the Dynamic Time Warping (DTW) algorithm to form these groups. In conjunction with that, the proposed approach also discriminates between worm and benign executables. The constructed model is further evaluated with unknown instances of real-world worms.
Keywords: Behavior Monitoring, DTW, System-calls (ID#: 15-5975)
URL: http://doi.acm.org/10.1145/2659651.2659737
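The distance computation at the core of this approach is the classic dynamic-programming DTW. A compact generic implementation over two system-call sequences, with a 0/1 mismatch cost standing in for whatever local cost the paper uses:

    def dtw(seq_a, seq_b, cost=lambda a, b: 0 if a == b else 1):
        """Dynamic Time Warping distance between two discrete sequences."""
        INF = float("inf")
        n, m = len(seq_a), len(seq_b)
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i][j] = cost(seq_a[i - 1], seq_b[j - 1]) + min(
                    D[i - 1][j],      # insertion
                    D[i][j - 1],      # deletion
                    D[i - 1][j - 1])  # match/substitution
        return D[n][m]

    trace1 = ["open", "read", "read", "socket", "send"]
    trace2 = ["open", "read", "socket", "socket", "send"]
    print(dtw(trace1, trace2))  # small distance -> likely same behavioral family

DTW's warping tolerates the repeated or stretched calls that polymorphic variants introduce, which is why it suits system-call traces better than a rigid positional comparison; pairwise distances like this fill the distance matrix the paper clusters on.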
Battista Biggio, Konrad Rieck, Davide Ariu, Christian Wressnegger, Igino Corona, Giorgio Giacinto, Fabio Roli. “Poisoning Behavioral Malware Clustering.” AISec '14 Proceedings of the 2014 Workshop on Artificial Intelligent and Security Workshop, November 2014, Pages 27-36. doi:10.1145/2666652.2666666
Abstract: Clustering algorithms have become a popular tool in computer security to analyze the behavior of malware variants, identify novel malware families, and generate signatures for antivirus systems. However, the suitability of clustering algorithms for security-sensitive settings has been recently questioned by showing that they can be significantly compromised if an attacker can exercise some control over the input data. In this paper, we revisit this problem by focusing on behavioral malware clustering approaches, and investigate whether and to what extent an attacker may be able to subvert these approaches through a careful injection of samples with poisoning behavior. To this end, we present a case study on Malheur, an open-source tool for behavioral malware clustering. Our experiments not only demonstrate that this tool is vulnerable to poisoning attacks, but also that it can be significantly compromised even if the attacker can only inject a very small percentage of attacks into the input data. As a remedy, we discuss possible countermeasures and highlight the need for more secure clustering algorithms.
Keywords: adversarial machine learning, clustering, computer security, malware detection, security evaluation, unsupervised learning (ID#: 15-5976)
URL: http://doi.acm.org/10.1145/2666652.2666666
Shahid Alam, Ibrahim Sogukpinar, Issa Traore, Yvonne Coady. “In-Cloud Malware Analysis and Detection: State of the Art.” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 473. doi:10.1145/2659651.2659730
Abstract: With the advent of the Internet of Things, we are facing another wave of malware attacks that encompass intelligent embedded devices. Because of limited energy resources, running a complete malware detector on these devices is quite challenging. There is a need to devise new techniques to detect malware on these devices. Malware detection is one of the services that can be provided as an in-cloud service. This paper reviews current systems of this kind, discusses their pros and cons, and recommends an improved in-cloud malware analysis and detection system. We introduce a new three-layered hybrid system with a lightweight anti-malware engine. These features can provide faster malware detection response time, shield the client from malware, and reduce the bandwidth between the client and the cloud, compared to other such systems. The paper serves as a motivation for improving current techniques and developing new ones for in-cloud malware analysis and detection.
Keywords: Cloud computing, In-cloud services, Malware analysis, Malware detection (ID#: 15-5977)
URL: http://doi.acm.org/10.1145/2659651.2659730
M. Zubair Rafique, Ping Chen, Christophe Huygens, Wouter Joosen. “Evolutionary Algorithms for Classification of Malware Families Through Different Network Behaviors.” GECCO '14 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, July 2014, Pages 1167-1174. doi:10.1145/2576768.2598238
Abstract: The staggering increase of malware families and their diversity poses a significant threat and creates a compelling need for automatic classification techniques. In this paper, we first analyze the role of network behavior as a powerful technique to automatically classify malware families and their polymorphic variants. Afterwards, we present a framework to efficiently classify malware families by modeling their different network behaviors (such as HTTP, SMTP, UDP, and TCP). We propose protocol-aware and state-space modeling schemes to extract features from malware network behaviors. We analyze the applicability of various evolutionary and non-evolutionary algorithms for our malware family classification framework. To evaluate our framework, we collected a real-world dataset of 6,000 unique and active malware samples belonging to 20 different malware families. We provide a detailed analysis of network behaviors exhibited by these prevalent malware families. The results of our experiments show that evolutionary algorithms, like the sUpervised Classifier System (UCS), can effectively classify malware families through different network behaviors in real time. To the best of our knowledge, the current work is the first malware classification framework based on an evolutionary classifier that uses different network behaviors.
Keywords: machine learning, malware classification, network behaviors (ID#: 15-5978)
URL: http://doi.acm.org/10.1145/2576768.2598238
Luke Deshotels, Vivek Notani, Arun Lakhotia. “DroidLegacy: Automated Familial Classification of Android Malware.” PPREW'14 Proceedings of ACM SIGPLAN on Program Protection and Reverse Engineering Workshop, January 2014, Article No. 3. doi:10.1145/2556464.2556467
Abstract: We present an automated method for extracting familial signatures for Android malware, i.e., signatures that identify malware produced by piggybacking potentially different benign applications with the same (or similar) malicious code. The APK classes that constitute malware code in a repackaged application are separated from the benign code and the Android API calls used by the malicious modules are extracted to create a signature. A piggybacked malicious app can be detected by first decomposing it into loosely coupled modules and then matching the Android API calls called by each of the modules against the signatures of the known malware families. Since the signatures are based on Android API calls, they are related to the core malware behavior, and thus are more resilient to obfuscations. In triage, AV companies need to automatically classify large number of samples so as to optimize assignment of human analysts. They need a system that gives low false negatives even if it is at the cost of higher false positives. Keeping this goal in mind, we fine tuned our system and used standard 10 fold cross validation over a dataset of 1,052 malicious APKs and 48 benign APKs to verify our algorithm. Results show that we have 94% accuracy, 97% precision, and 93% recall when separating benign from malware. We successfully classified our entire malware dataset into 11 families with 98% accuracy, 87% precision, and 94% recall.
Keywords: Android malware, class dependence graphs, familial classification, malware detection, module generation, piggybacked malware, signature generation, static analysis (ID#: 15-5979)
URL: http://doi.acm.org/10.1145/2556464.2556467
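The matching step can be sketched as scoring a decomposed module's Android API-call set against per-family signature sets. The family signatures below are hypothetical, and plain Jaccard similarity stands in for whatever measure the authors use:

    FAMILY_SIGNATURES = {  # hypothetical familial API-call signatures
        "DroidKungFu": {"Runtime.exec", "DexClassLoader.loadClass",
                        "TelephonyManager.getDeviceId"},
        "Geinimi":     {"SmsManager.sendTextMessage", "HttpURLConnection.connect",
                        "TelephonyManager.getDeviceId"},
    }

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    def classify_module(api_calls, threshold=0.5):
        best = max(FAMILY_SIGNATURES,
                   key=lambda f: jaccard(api_calls, FAMILY_SIGNATURES[f]))
        sim = jaccard(api_calls, FAMILY_SIGNATURES[best])
        return (best, sim) if sim >= threshold else ("benign/unknown", sim)

    module = {"SmsManager.sendTextMessage", "TelephonyManager.getDeviceId",
              "HttpURLConnection.connect", "Log.d"}
    print(classify_module(module))  # ('Geinimi', 0.75)

Because the sets contain Android API names rather than bytecode, this kind of match survives the renaming and repackaging obfuscations the abstract mentions.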
Ashish Saini, Ekta Gandotra, Divya Bansal, Sanjeev Sofat. “Classification of PE Files using Static Analysis.” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 429. doi:10.1145/2659651.2659679
Abstract: Malware is one of the most terrible and major security threats facing the Internet today. Anti-malware vendors are challenged to identify, classify and counter new malwares due to the obfuscation techniques being used by malware authors. In this paper, we present a simple, fast and scalable method of differentiating malwares from cleanwares on the basis of features extracted from Windows PE files. The features used in this work are Suspicious Section Count and Function Call Frequency. After automatically extracting features of executables, we use machine learning algorithms available in WEKA library to classify them into malwares and cleanwares. Our experimental results provide an accuracy of over 98% for a data set of 3,087 executable files including 2,460 malwares and 627 cleanwares. Based on the results obtained, we conclude that the Function Call Frequency feature derived from the static analysis method plays a significant role in distinguishing malware files from benign ones.
Keywords: Classification, Machine Learning, Static Malware Analysis (ID#: 15-5980)
URL: http://doi.acm.org/10.1145/2659651.2659679
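One of the two features, Suspicious Section Count, can be approximated with the pefile library. The abstract does not define "suspicious", so this sketch flags non-standard section names and high-entropy (likely packed) sections as a stand-in:

    import sys
    import pefile  # pip install pefile

    STANDARD = {b".text", b".data", b".rdata", b".rsrc", b".reloc", b".idata"}

    def suspicious_section_count(path, entropy_cutoff=7.0):
        pe = pefile.PE(path)
        count = 0
        for section in pe.sections:
            name = section.Name.rstrip(b"\x00")
            # Non-standard name or near-random content both suggest packing.
            if name not in STANDARD or section.get_entropy() > entropy_cutoff:
                count += 1
        return count

    if __name__ == "__main__":
        print(suspicious_section_count(sys.argv[1]))

Features like this are cheap to extract statically, which is what makes the paper's approach fast and scalable compared to dynamic analysis.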
Ekta Gandotra, Divya Bansal, Sanjeev Sofat. “Integrated Framework for Classification of Malwares.” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 417. doi:10.1145/2659651.2659738
Abstract: Malware is one of the most terrible and major security threats facing the Internet today. It is evolving, becoming more sophisticated and using new ways to target computers and mobile devices. The traditional defences like antivirus softwares typically rely on signature based methods and are unable to detect previously unseen malwares. Machine learning approaches have been adopted to classify malwares based on the features extracted using static or dynamic analysis. Both type of malware analysis have their pros and cons. In this paper, we propose a classification framework which uses integration of both static and dynamic features for distinguishing malwares from clean files. A real world corpus of recent malwares is used to validate the proposed approach. The experimental results, based on a dataset of 998 malwares and 428 cleanware files provide an accuracy of 99.58% indicating that the hybrid approach enhances the accuracy rate of malware detection and classification over the results obtained when these features are considered separately.
Keywords: Classification, Dynamic Analysis, Machine Learning, Malware, Static Analysis (ID#: 15-5981)
URL: http://doi.acm.org/10.1145/2659651.2659738
Jing Qiu, Babak Yadegari, Brian Johannesmeyer, Saumya Debray, Xiaohong Su. “A Framework for Understanding Dynamic Anti-Analysis Defenses.” PPREW-4 Proceedings of the 4th Program Protection and Reverse Engineering Workshop, December 2014, Article No. 2. doi:10.1145/2689702.2689704
Abstract: Malicious code often use a variety of anti-analysis and anti-tampering defenses to hinder analysis. Researchers trying to understand the internal logic of the malware have to penetrate these defenses. Existing research on such anti-analysis defenses tends to study them in isolation, thereby failing to see underlying conceptual similarities between different kinds of anti-analysis defenses. This paper proposes an information-flow-based framework that encompasses a wide variety of anti-analysis defenses. We illustrate the utility of our approach using two different instances of this framework: self-checksumming-based anti-tampering defenses and timing-based emulator detection. Our approach can provide insights into the underlying structure of various anti-analysis defenses and thereby help devise techniques for neutralizing them.
Keywords: Anti-analysis Defense, Self-checksumming, Taint analysis, Timing defense (ID#: 15-5983)
URL: http://doi.acm.org/10.1145/2689702.2689704
Mordechai Guri, Gabi Kedma, Buky Carmeli, Yuval Elovici. “Limiting Access to Unintentionally Leaked Sensitive Documents Using Malware Signatures.” SACMAT '14 Proceedings of the 19th ACM Symposium on Access Control Models and Technologies, June 2014, Pages 129-140. doi:10.1145/2613087.2613103
Abstract: Organizations are repeatedly embarrassed when their sensitive digital documents go public or fall into the hands of adversaries, often as a result of unintentional or inadvertent leakage. Such leakage has been traditionally handled either by preventive means, which are evidently not hermetic, or by punitive measures taken after the main damage has already been done. Yet, the challenge of preventing a leaked file from spreading further among computers and over the Internet is not resolved by existing approaches. This paper presents a novel method, which aims at reducing and limiting the potential damage of a leakage that has already occurred. The main idea is to tag sensitive documents within the organization's boundaries by attaching a benign detectable malware signature (DMS). While the DMS is masked inside the organization, if a tagged document is somehow leaked out of the organization's boundaries, common security services such as Anti-Virus (AV) programs, firewalls or email gateways will detect the file as a real threat and will consequently delete or quarantine it, preventing it from spreading further. This paper discusses various aspects of the DMS, such as signature type and attachment techniques, along with proper design considerations and implementation issues. The proposed method was implemented and successfully tested on various file types including documents, spreadsheets, presentations, images, executable binaries and textual source code. The evaluation results have demonstrated its effectiveness in limiting the spread of leaked documents.
Keywords: anti-virus program, data leakage, detectable malware signature, sensitive document (ID#: 15-5984)
URL: http://doi.acm.org/10.1145/2613087.2613103
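The tagging step itself is mechanically simple, as the sketch below shows. The signature bytes are a placeholder (the EICAR anti-virus test file is the canonical benign example of a string AV engines detect), and both the format-aware attachment techniques and the in-organisation masking the paper describes are omitted:

    # Placeholder: in a real DMS deployment this would be a string that AV
    # engines flag (the EICAR test file is the canonical benign example).
    DMS_SIGNATURE = b"<benign-detectable-signature-bytes>"

    def tag_document(path):
        """Append the DMS so that, outside the organisation, AV gateways
        quarantine the file; inside, the signature is masked/whitelisted."""
        with open(path, "ab") as f:
            f.write(b"\n" + DMS_SIGNATURE)

    def is_tagged(path):
        with open(path, "rb") as f:
            return DMS_SIGNATURE in f.read()

The inversion is the interesting design choice: instead of trying to stop the leak, the organisation turns the rest of the world's AV infrastructure into its containment mechanism.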
Mu Zhang, Yue Duan, Heng Yin, Zhiruo Zhao. “Semantics-Aware Android Malware Classification Using Weighted Contextual API Dependency Graphs.” CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1105-1116. doi:10.1145/2660267.2660359
Abstract: The drastic increase of Android malware has led to a strong interest in developing methods to automate the malware analysis process. Existing automated Android malware detection and classification methods fall into two general categories: 1) signature-based and 2) machine learning-based. Signature-based approaches can be easily evaded by bytecode-level transformation attacks. Prior learning-based works extract features from application syntax, rather than program semantics, and are also subject to evasion. In this paper, we propose a novel semantic-based approach that classifies Android malware via dependency graphs. To battle transformation attacks, we extract a weighted contextual API dependency graph as program semantics to construct feature sets. To fight against malware variants and zero-day malware, we introduce graph similarity metrics to uncover homogeneous application behaviors while tolerating minor implementation differences. We implement a prototype system, DroidSIFT, in 23 thousand lines of Java code. We evaluate our system using 2200 malware samples and 13500 benign samples. Experiments show that our signature detection can correctly label 93% of malware instances; our anomaly detector is capable of detecting zero-day malware with a low false negative rate (2%) and an acceptable false positive rate (5.15%) for a vetting purpose.
Keywords: android, anomaly detection, graph similarity, malware classification, semantics-aware, signature detection (ID#: 15-5985)
URL: http://doi.acm.org/10.1145/2660267.2660359
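The matching step at the heart of such graph-based classifiers, scoring how similar two API dependency graphs are, can be sketched with a weighted Jaccard measure over edge sets. The API pairs and weights below are hypothetical, and DroidSIFT's actual graph construction and similarity metric are substantially more elaborate.

    # Toy weighted-Jaccard similarity over API-dependency edges.
    def weighted_jaccard(g1: dict, g2: dict) -> float:
        edges = set(g1) | set(g2)
        num = sum(min(g1.get(e, 0.0), g2.get(e, 0.0)) for e in edges)
        den = sum(max(g1.get(e, 0.0), g2.get(e, 0.0)) for e in edges)
        return num / den if den else 1.0

    sample = {("getDeviceId", "sendTextMessage"): 2.0}
    signature = {("getDeviceId", "sendTextMessage"): 1.5,
                 ("getLine1Number", "sendTextMessage"): 1.0}
    print(weighted_jaccard(sample, signature))  # 0.5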
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Searchable Encryption 2014 |
The phrase “searchable encryption” refers to techniques for protecting privacy while concurrently allowing searches within data, particularly in the cloud. The research presented here addresses several approaches. All of the work cited was presented in 2014.
Florian Hahn, Florian Kerschbaum; “Searchable Encryption with Secure and Efficient Updates,” CCS ’14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 310-320. doi:10.1145/2660267.2660297
Abstract: Searchable (symmetric) encryption allows encryption while still enabling search for keywords. Its immediate application is cloud storage, where a client outsources its files while the (cloud) service provider should search and selectively retrieve those. Searchable encryption is an active area of research and a number of schemes with different efficiency and security characteristics have been proposed in the literature. Any scheme for practical adoption should be efficient (i.e., have sub-linear search time), dynamic (i.e., allow updates), and semantically secure to the greatest possible extent. Unfortunately, efficient, dynamic searchable encryption schemes suffer from various drawbacks. Either they deteriorate from semantic security to the security of deterministic encryption under updates, they require storing information on the client for deleted files and keywords, or they have very large index sizes. All of this is a problem, since we can expect the majority of data to be later added or changed. Since these schemes are also less efficient than deterministic encryption, they are currently an unfavorable choice for encryption in the cloud. In this paper we present the first searchable encryption scheme whose updates leak no more information than the access pattern, that still has asymptotically optimal search time and a linear, very small, asymptotically optimal index size, and that can be implemented without storage on the client (except the key). Our construction is based on the novel idea of learning the index for efficient access from the access pattern itself. Furthermore, we implement our system and show that it is highly efficient for cloud storage.
Keywords: dynamic searchable encryption, searchable encryption, secure index, update (ID#: 15-6102)
URL: http://doi.acm.org/10.1145/2660267.2660297
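As background for readers new to the area, the basic token-and-index mechanics of searchable symmetric encryption can be sketched with the Python standard library: the client derives a deterministic per-keyword token with HMAC and the server stores an inverted index keyed by those tokens. This toy leaks exactly the access pattern on search, but it has none of the update security or index-size guarantees of the scheme above.

    import hmac, hashlib

    KEY = b"client-secret-key"  # held only by the client

    def token(keyword: str) -> bytes:
        return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

    # Client builds the index keyed by opaque tokens, then outsources it.
    docs = {1: ["cloud", "storage"], 2: ["cloud", "search"]}
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(token(w), []).append(doc_id)

    # The server resolves a query from the token alone, never the keyword.
    print(index.get(token("cloud"), []))  # -> [1, 2]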
Gabriel Ghinita, Razvan Rughinis; “An Efficient Privacy-Preserving System for Monitoring Mobile Users: Making Searchable Encryption Practical,” CODASPY ’14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 321-332. doi:10.1145/2557547.2557559
Abstract: Monitoring location updates from mobile users has important applications in several areas, ranging from public safety and national security to social networks and advertising. However, sensitive information can be derived from movement patterns, so protecting the privacy of mobile users is a major concern. Users may only be willing to disclose their locations when some condition is met, for instance in proximity of a disaster area, or when an event of interest occurs nearby. Currently, such functionality is achieved using searchable encryption. Such cryptographic primitives provide provable guarantees for privacy, and allow decryption only when the location satisfies some predicate. Nevertheless, they rely on expensive pairing-based cryptography (PBC), and direct application to the domain of location updates leads to impractical solutions. We propose secure and efficient techniques for private processing of location updates that complement the use of PBC and lead to significant gains in performance by reducing the amount of required pairing operations. We also implement two optimizations that further improve performance: materialization of results to expensive mathematical operations, and parallelization. Extensive experimental results show that the proposed techniques significantly improve performance compared to the baseline, and reduce the searchable encryption overhead to a level that is practical in a computing environment with reasonable resources, such as the cloud.
Keywords: location privacy, pairing-based cryptography (ID#: 15-6103)
URL: http://doi.acm.org/10.1145/2557547.2557559
Dalia Khader; “Attribute Based Search in Encrypted Data: ABSE,” WISCS ’14 Proceedings of the 2014 ACM Workshop on Information Sharing & Collaborative Security, November 2014, Pages 31-40. doi:10.1145/2663876.2663878
Abstract: Searchable encryption enables users to delegate search functionalities to third-parties without giving them the ability to decrypt. Existing schemes assume that the sender knows the identity of the receiver. In this paper we relax this assumption by proposing the first Attribute Based Searchable Encryption Scheme (ABSE). An ABSE is a type of public key encryption with keyword search that allows the user encrypting the data to specify a policy that determines, among the users of the system, who is eligible to decrypt and search the data. Each user of the system owns a set of attributes and the policy is a function of these attributes expressed as a predicate. Only members who own sufficient attributes to satisfy that policy can send the server a valid search query. In our work we introduce the concept of a secure ABSE by defining the functionalities and the relevant security notions such as correctness, chosen keyword attacks, and attribute forgeability attacks. Our definitions are based on provable security formalizations. We further propose a secure construction of an ABSE based on bilinear maps. We illustrate the use of our proposed scheme in a shared storage for medical records.
Keywords: attribute based systems, public key cryptography, searchable encryption (ID#: 15-6104)
URL: http://doi.acm.org/10.1145/2663876.2663878
Mehmet Kuzu, Mohammad Saiful Islam, Murat Kantarcioglu; “Efficient Privacy-Aware Search over Encrypted Databases,” CODASPY ’14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 249-256. doi:10.1145/2557547.2557570
Abstract: In recent years, the database as a service (DAS) model, where data management is outsourced to cloud service providers, has become more prevalent. Although the DAS model offers lower cost and flexibility, it necessitates the transfer of potentially sensitive data to untrusted cloud servers. To ensure confidentiality, encryption of sensitive data before its transfer to the cloud emerges as an important option. Encrypted storage provides protection, but it complicates data processing, including crucial selective record retrieval. To achieve selective retrieval over encrypted collections, a considerable number of searchable encryption schemes have been proposed in the literature, with distinct privacy guarantees. Among the available approaches, oblivious RAM based ones offer optimal privacy. However, they are computationally intensive and do not scale well to very large databases. On the other hand, almost all efficient schemes leak some information, especially the data access pattern, to the remote servers. Unfortunately, recent evidence on access pattern leakage indicates that an adversary’s background knowledge could be used to infer the contents of the encrypted data and may potentially endanger individual privacy. In this paper, we introduce a novel construction for practical and privacy-aware selective record retrieval over encrypted databases. Our approach leaks an obfuscated access pattern to enable efficient retrieval while ensuring individual privacy. The applied obfuscation is based on differential privacy, which provides rigorous individual privacy guarantees against adversaries with arbitrary background knowledge.
Keywords: differential privacy, searchable encryption, security (ID#: 15-6105)
URL: http://doi.acm.org/10.1145/2557547.2557570
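The flavor of the obfuscation, though not the paper's actual construction, can be conveyed by a toy that pads each retrieval with Laplace noise so the server-visible access pattern no longer equals the true match count. The epsilon value and the one-sided padding rule here are illustrative simplifications.

    import math, random

    def laplace_noise(b: float) -> float:
        # The difference of two i.i.d. exponentials is Laplace(0, b).
        return random.expovariate(1.0 / b) - random.expovariate(1.0 / b)

    def padded_fetch_count(true_matches: int, epsilon: float = 0.5) -> int:
        # Noise the count, but never fetch fewer records than actually match.
        noisy = true_matches + laplace_noise(1.0 / epsilon)
        return max(true_matches, math.ceil(noisy))

    print(padded_fetch_count(12))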
Zhangjie Fu, Jiangang Shu, Xingming Sun, Daxing Zhang; “Semantic Keyword Search Based on Tree over Encrypted Cloud Data,” SCC ’14 Proceedings of the 2nd International Workshop on Security in Cloud Computing, June 2014, Pages 59-62. doi:10.1145/2600075.2600081
Abstract: Searchable encryption is a good solution for searching over encrypted cloud data in cloud computing. However, most existing searchable encryption schemes only support exact keyword search; they don’t support searching for different variants of the query word, which is a significant drawback that greatly affects data usability and user experience. In this paper, we formalize the problem of semantic keyword-based search over encrypted cloud data while preserving privacy. Semantic keyword-based search greatly improves the user experience by returning all the documents containing semantically close keywords related to the query word. In our solution, we use a stemming algorithm to construct the stem set, which reduces the dimension of the index. A symbol-based tree is also adopted in index construction to improve search efficiency. Rigorous privacy analysis and experiments on a real dataset show that our scheme is secure and efficient.
Keywords: cloud computing, searchable encryption, semantic search, stemming algorithm (ID#: 15-6106)
URL: http://doi.acm.org/10.1145/2600075.2600081
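The stem-set idea is easy to see in isolation: index stems rather than exact keywords, so one query token covers all morphological variants. The crude suffix stripper below stands in for a real stemmer such as Porter's, and the cited scheme would additionally encrypt the index and organize it as a symbol-based tree.

    def crude_stem(word: str) -> str:
        # Placeholder stemmer: strip a few common English suffixes.
        for suffix in ("ingly", "edly", "ing", "ed", "es", "s"):
            if word.endswith(suffix) and len(word) - len(suffix) >= 3:
                return word[: -len(suffix)]
        return word

    docs = {1: ["searching", "clouds"], 2: ["searched", "cloud"]}
    stem_index = {}
    for doc_id, words in docs.items():
        for w in words:
            stem_index.setdefault(crude_stem(w), set()).add(doc_id)

    print(stem_index[crude_stem("searches")])  # variants of "search" -> {1, 2}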
Boyang Wang, Yantian Hou, Ming Li, Haitao Wang, Hui Li; “Maple: Scalable Multi-Dimensional Range Search over Encrypted Cloud Data with Tree-Based Index,” ASIA CCS ’14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 111-122. doi:10.1145/2590296.2590305
Abstract: Cloud computing promises users massive scale outsourced data storage services with much lower costs than traditional methods. However, privacy concerns compel sensitive data to be stored on the cloud server in an encrypted form. This poses a great challenge for effectively utilizing cloud data, such as executing common SQL queries. A variety of searchable encryption techniques have been proposed to solve this issue; yet efficiency and scalability are still the two main obstacles for their adoption on real-world datasets, which are multi-dimensional in general. In this paper, we propose a tree-based public-key Multi-Dimensional Range Searchable Encryption (MDRSE) to overcome the above limitations. Specifically, we first formally define the leakage function and security of a tree-based MDRSE. Then, by leveraging an existing predicate encryption in a novel way, our tree-based MDRSE efficiently indexes and searches over encrypted cloud data with multi-dimensional tree structures (i.e., R-trees). Moreover, our scheme is able to protect single-dimensional privacy, which previous efficient solutions fail to achieve. Our scheme is selectively secure, and through extensive experimental evaluation on a large-scale real-world dataset, we show the efficiency and scalability of our scheme.
Keywords: encrypted cloud data, multiple dimension, range search, tree structures (ID#: 15-6107)
URL: http://doi.acm.org/10.1145/2590296.2590305
Florian Kerschbaum; “Client-Controlled Cloud Encryption,” CCS ’14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1542-1543. doi:10.1145/2660267.2660577
Abstract: Customers of cloud services demand control over their data. Next to threats to intellectual property, legal requirements and risks, such as data protection compliance or the possibility of a subpoena of the cloud service provider, also pose restrictions. A commonly proposed and implemented solution is to encrypt the data on the client and retain the key at the client. In this tutorial we will review: the available encryption methods, such as deterministic, order-preserving, homomorphic, and searchable (functional) encryption and secure multi-party computation; possible attacks on currently deployed systems, like dictionary and frequency attacks; and architectures integrating these solutions into SaaS and PaaS (DBaaS) applications.
Keywords: cloud, encryption, tutorial (ID#: 15-6108)
URL: http://doi.acm.org/10.1145/2660267.2660577
David McGrew; “Privacy vs. Efficacy in Cloud-based Threat Detection,” CCSW ’14 Proceedings of the 6th edition of the ACM Workshop on Cloud Computing Security, November 2014, Pages 3-4. doi:10.1145/2664168.2664183
Abstract: Advanced threats can be detected by monitoring information systems and networks, then applying advanced analytic techniques to the data thus gathered. It is natural to gather, store, and analyze this data in the Cloud, but doing so introduces significant privacy concerns. There are technologies that can protect privacy to some extent, but these technologies reduce the efficacy of threat analytics and forensics, and introduce computation and communication overhead. This talk considers the tension between privacy and efficacy in Cloud threat detection, and analyzes both pragmatic techniques such as data anonymization via deterministic encryption and differential privacy as well as interactive techniques such as private set intersection and searchable encryption, and highlights areas where further research is needed.
Keywords: cloud, privacy, threat monitoring (ID#: 15-6109)
URL: http://doi.acm.org/10.1145/2664168.2664183
Florian Kerschbaum, Axel Schroepfer; “Optimal Average-Complexity Ideal-Security Order-Preserving Encryption,” CCS ’14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 275-286. doi:10.1145/2660267.2660277
Abstract: Order-preserving encryption enables performing many classes of queries—including range queries—on encrypted databases. Popa et al. recently presented an ideal-secure order-preserving encryption (or encoding) scheme, but their cost of insertions (encryption) is very high. In this paper we present an also ideal-secure, but significantly more efficient order-preserving encryption scheme. Our scheme is inspired by Reed's work on the average height of random binary search trees. We show that our scheme improves the average communication complexity from O(n log n) to O(n) under uniform distribution. Our scheme also integrates efficiently with adjustable encryption as used in CryptDB. In our experiments for database inserts we achieve a performance increase of up to 81% in LANs and 95% in WANs.
Keywords: adjustable encryption, efficiency, ideal security, in-memory column database, indistinguishability, order-preserving encryption (ID#: 15-6110)
URL: http://doi.acm.org/10.1145/2660267.2660277
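The tree-based encoding idea behind such schemes can be sketched in miniature: each new plaintext is placed into an integer ciphertext interval so that ciphertext order matches plaintext order. The sketch below keeps the dictionary client-side and, unlike the cited scheme, simply fails when an interval is exhausted instead of re-encoding.

    class OPEncoder:
        def __init__(self, lo: int = 0, hi: int = 2**32):
            self.table = {}  # plaintext -> ciphertext, kept at the client
            self.lo, self.hi = lo, hi

        def encode(self, pt: int) -> int:
            if pt in self.table:
                return self.table[pt]
            lo, hi = self.lo, self.hi
            # Linear scan standing in for a binary-tree descent: narrow the
            # interval using the ciphertexts of neighboring plaintexts.
            for p, c in sorted(self.table.items()):
                if p < pt:
                    lo = max(lo, c + 1)
                else:
                    hi = min(hi, c - 1)
            if lo > hi:
                raise OverflowError("interval exhausted: re-encoding required")
            self.table[pt] = (lo + hi) // 2
            return self.table[pt]

    enc = OPEncoder()
    cts = [enc.encode(x) for x in (50, 20, 80, 60)]
    assert sorted(cts) == [enc.encode(x) for x in sorted((50, 20, 80, 60))]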
Andreas Schaad, Anis Bkakria, Florian Kerschbaum, Frederic Cuppens, Nora Cuppens-Boulahia, David Gross-Amblard; “Optimized and Controlled Provisioning of Encrypted Outsourced Data,” SACMAT ’14 Proceedings of the 19th ACM Symposium on Access Control Models and Technologies, June 2014, Pages 141-152. doi:10.1145/2613087.2613100
Abstract: Recent advances in encrypted outsourced databases support the direct processing of queries on encrypted data. Depending on functionality (i.e. operators) required in the queries the database has to use different encryption schemes with different security properties. Next to these functional requirements a security administrator may have to address security policies that may equally determine the used encryption schemes. We present an algorithm and tool set that determines an optimal balance between security and functionality as well as helps to identify and resolve possible conflicts. We test our solution on a database benchmark and business-driven security policies.
Keywords: encrypted database, encryption algorithm, policy configuration (ID#: 15-6111)
URL: http://doi.acm.org/10.1145/2613087.2613100
Yitao Duan; “Distributed Key Generation for Encrypted Deduplication: Achieving the Strongest Privacy,” CCSW ’14 Proceedings of the 6th edition of the ACM Workshop on Cloud Computing Security, November 2014, Pages 57-68. doi:10.1145/2664168.2664169
Abstract: Large-scale cloud storage systems often attempt to achieve two seemingly conflicting goals: (1) the systems need to reduce the copies of redundant data to save space, a process called deduplication; and (2) users demand encryption of their data to ensure privacy. Conventional encryption makes deduplication on ciphertexts ineffective, as it destroys data redundancy. A line of work, originated from Convergent Encryption [27], and evolved into Message Locked Encryption [13] and the latest DupLESS architecture [12], strives to solve this problem. DupLESS relies on a key server to help the clients generate encryption keys that result in convergent ciphertexts. In this paper, we first introduce a new security notion appropriate for the setting of deduplication and show that it is strictly stronger than all relevant notions. We then provide a rigorous proof of security against this notion, in the random oracle model, for the DupLESS architecture which is lacking in the original paper. Our proof shows that using additional secret, other than the data itself, for generating encryption keys achieves the best possible security under current deduplication paradigm. We also introduce a distributed protocol that eliminates the need for the key server. This not only provides better protection but also allows less managed systems such as P2P systems to enjoy the high security level. Implementation and evaluation show that the scheme is both robust and practical.
Keywords: cloud computing security, deduplication, deterministic encryption (ID#: 15-6112)
URL: http://doi.acm.org/10.1145/2664168.2664169
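Convergent encryption, the baseline these systems strengthen, fits in a short standard-library sketch: the key is derived from the file content itself, so identical plaintexts produce identical ciphertexts that a server can deduplicate. SHA-256 in counter mode stands in for a real cipher here, and the second secret that DupLESS and the cited protocol add to resist dictionary attacks on predictable files is omitted.

    import hashlib

    def _stream(key: bytes, length: int) -> bytes:
        # SHA-256 counter-mode keystream, standing in for e.g. AES-CTR.
        out = bytearray()
        ctr = 0
        while len(out) < length:
            out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return bytes(out[:length])

    def convergent_encrypt(message: bytes):
        key = hashlib.sha256(message).digest()  # key derived from content
        ct = bytes(m ^ s for m, s in zip(message, _stream(key, len(message))))
        return key, ct

    k1, c1 = convergent_encrypt(b"same file contents")
    k2, c2 = convergent_encrypt(b"same file contents")
    assert c1 == c2  # equal plaintexts dedupe to one ciphertext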
Warren He, Devdatta Akhawe, Sumeet Jain, Elaine Shi, Dawn Song; “ShadowCrypt: Encrypted Web Applications for Everyone,” CCS ’14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1028-1039. doi:10.1145/2660267.2660326
Abstract: A number of recent research and industry proposals discussed using encrypted data in web applications. We first present a systematization of the design space of web applications and highlight the advantages and limitations of current proposals. Next, we present ShadowCrypt, a previously unexplored design point that enables encrypted input/output without trusting any part of the web applications. ShadowCrypt allows users to transparently switch to encrypted input/output for text-based web applications. ShadowCrypt runs as a browser extension, replacing input elements in a page with secure, isolated shadow inputs and encrypted text with secure, isolated cleartext. ShadowCrypt’s key innovation is the use of Shadow DOM, an upcoming primitive that allows low-overhead isolation of DOM trees. Evaluation results indicate that ShadowCrypt has low overhead and is of practical use today. Finally, based on our experience with ShadowCrypt, we present a study of 17 popular web applications, across different domains, and the functionality impact and security advantages of encrypting the data they handle.
Keywords: privacy, shadow dom, web security (ID#: 15-6113)
URL: http://doi.acm.org/10.1145/2660267.2660326
Michael Herrmann, Alfredo Rial, Claudia Diaz, Bart Preneel; “Practical Privacy-Preserving Location-Sharing Based Services with Aggregate Statistics,” WiSec ’14 Proceedings of the 2014 ACM Conference on Security and Privacy in Wireless & Mobile Networks, July 2014, Pages 87-98. doi:10.1145/2627393.2627414
Abstract: Location-sharing-based services (LSBSs) allow users to share their location with their friends in a sporadic manner. In currently deployed LSBSs users must disclose their location to the service provider in order to share it with their friends. This default disclosure of location data introduces privacy risks. We define the security properties that a privacy-preserving LSBS should fulfill and propose two constructions. First, a construction based on identity based broadcast encryption (IBBE) in which the service provider does not learn the user’s location, but learns which other users are allowed to receive a location update. Second, a construction based on anonymous IBBE in which the service provider does not learn the latter either. As advantages with respect to previous work, in our schemes the LSBS provider does not need to perform any operations to compute the reply to a location data request, but only needs to forward IBBE ciphertexts to the receivers. We implement both constructions and present a performance analysis that shows their practicality. Furthermore, we extend our schemes such that the service provider, performing some verification work, is able to collect privacy-preserving aggregate statistics on the locations users share with each other.
Keywords: broadcast encryption, location privacy, vector commitments (ID#: 15-6114)
URL: http://doi.acm.org/10.1145/2627393.2627414
Aikaterina Latsiou, Panagiotis Rizomiliotis; “The Rainy Season of Cryptography,” PCI ’14 Proceedings of the 18th Panhellenic Conference on Informatics, October 2014, Pages 1-6. doi:10.1145/2645791.2645798
Abstract: Cloud Computing (CC) is the new trend in computing and resource management, an architectural shift towards thin clients and conveniently centralized provision of computing and networking resources. Worldwide cloud services revenue reached $148.8 billion in 2014. However, CC introduces security risks that the clients of the cloud have to deal with. More precisely, there are many security concerns related to outsourcing storage and computation to the cloud, and these are mainly attributed to the fact that the clients do not have direct control over the systems that process their data. In this paper, we investigate the new challenges that cryptography faces in the CC era. We introduce a security framework for analysing these challenges, and we describe the cryptographic techniques that have been proposed until now. Finally, we provide a list of open problems and we propose new directions for research.
Keywords: Cloud Computing, Cryptography, Outsourcing (ID#: 15-6115)
URL: http://doi.acm.org/10.1145/2645791.2645798
Hu Chun, Yousef Elmehdwi, Feng Li, Prabir Bhattacharya, Wei Jiang; “Outsourceable Two-Party Privacy-Preserving Biometric Authentication,” ASIA CCS ’14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 401-412. doi:10.1145/2590296.2590343
Abstract: Biometric authentication, a key component for many secure protocols and applications, is a process of authenticating a user by matching her biometric data against a biometric database stored at a server managed by an entity. If there is a match, the user can log into her account or obtain the services provided by the entity. Privacy-preserving biometric authentication (PPBA) considers a situation where the biometric data are kept private during the authentication process. That is, the user’s biometric data record is never disclosed to the entity, and the data stored in the entity’s biometric database are never disclosed to the user. Due to the reduction in operational costs and high computing power, it is beneficial for an entity to outsource not only its data but also computations, such as the biometric authentication process, to a cloud. However, due to well-documented security risks faced by a cloud, sensitive data like biometrics should be encrypted first and then outsourced to the cloud. When the biometric data are encrypted and cannot be decrypted by the cloud, the existing PPBA protocols are not applicable. Therefore, in this paper, we propose a two-party PPBA protocol for when the biometric data in consideration are fully encrypted and outsourced to a cloud. In the proposed protocol, the security of the biometric data is completely protected, since the encrypted biometric data are never decrypted during the authentication process. In addition, we formally analyze the security of the proposed protocol and provide extensive empirical results to show its runtime complexity.
Keywords: biometric authentication, cloud computing, security (ID#: 15-6116)
URL: http://doi.acm.org/10.1145/2590296.2590343
Hua Deng, Qianhong Wu, Bo Qin, Sherman S.M. Chow, Josep Domingo-Ferrer, Wenchang Shi; “Tracing and Revoking Leaked Credentials: Accountability in Leaking Sensitive Outsourced Data,” ASIA CCS ’14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 425-434. doi:10.1145/2590296.2590342
Abstract: Most existing proposals for access control over outsourced data mainly aim at guaranteeing that the data are only accessible to authorized requestors who have the access credentials. This paper proposes TRLAC, an a posteriori approach for tracing and revoking leaked credentials, to complement existing a priori solutions. The tracing procedure of TRLAC can trace, in a black-box manner, at least one traitor who illegally distributed a credential, without any help from the cloud service provider. Once the dishonest users have been found, a revocation mechanism can be called to deprive them of access rights. We formally prove the security of TRLAC, and empirically show that the introduction of the tracing feature incurs little cost to outsourcing.
Keywords: access control, accountability, broadcast encryption, cloud computing, data security, leakage, tracing (ID#: 15-6117)
URL: http://doi.acm.org/10.1145/2590296.2590342
Mohammad Saiful Islam, Mehmet Kuzu, Murat Kantarcioglu; “Inference Attack Against Encrypted Range Queries on Outsourced Databases,” CODASPY ’14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 235-246. doi:10.1145/2557547.2557561
Abstract: To mitigate security concerns of outsourced databases, quite a few protocols have been proposed that outsource data in encrypted format and allow encrypted query execution on the server side. Among the more practical protocols, the “bucketization” approach facilitates query execution at the cost of reduced efficiency by allowing some false positives in the query results. Precise Query Protocols (PQPs), on the other hand, enable the server to execute queries without incurring any false positives. Even though these protocols do not reveal the underlying data, they reveal the query access pattern to an adversary. In this paper, we introduce a general attack on PQPs based on access pattern disclosure in the context of secure range queries. Our empirical analysis on several real world datasets shows that the proposed attack is able to disclose a significant amount of sensitive data with high accuracy, provided that the attacker has a reasonable amount of background knowledge. We further demonstrate that a slight variation of such an attack can also be used on imprecise protocols (e.g., bucketization) to disclose a significant amount of sensitive information.
Keywords: database-as-a-service, encrypted range query, inference attack (ID#: 15-6118)
URL: http://doi.acm.org/10.1145/2557547.2557561
Matteo Maffei, Giulio Malavolta, Manuel Reinert, Dominique Schröder; “Brief Announcement: Towards Security and Privacy for Outsourced Data in the Multi-Party Setting,” PODC ’14 Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing, July 2014, Pages 144-146. doi:10.1145/2611462.2611508
Abstract: Cloud storage has rapidly acquired popularity among users, constituting a seamless solution for the backup, synchronization, and sharing of large amounts of data. This technology, however, puts user data in the direct control of cloud service providers, which raises increasing security and privacy concerns related to the integrity of outsourced data, the accidental or intentional leakage of sensitive information, the profiling of user activities and so on. We present GORAM, a cryptographic system that protects the secrecy and integrity of the data outsourced to an untrusted server and guarantees the anonymity and unlinkability of consecutive accesses to such data. GORAM allows the database owner to share outsourced data with other clients, selectively granting them read and write permissions. GORAM is the first system to achieve such a wide range of security and privacy properties for outsourced storage. Technically, GORAM builds on a combination of ORAM to conceal data accesses, attribute-based encryption to rule the access to outsourced data, and zero-knowledge proofs to prove read and write permissions in a privacy-preserving manner. We implemented GORAM and conducted an experimental evaluation to demonstrate its feasibility.
Keywords: GORAM, ORAM, cloud storage, oblivious ram, privacy-enhancing technologies (ID#: 15-6119)
URL: http://doi.acm.org/10.1145/2611462.2611508
Paul Weiser, Simon Scheider; “A Civilized Cyberspace for Geoprivacy,” GeoPrivacy ’14 Proceedings of the 1st ACM SIGSPATIAL International Workshop on Privacy in Geographic Information Collection and Analysis, November 2014, Article No. 5. doi:10.1145/2675682.2676396
Abstract: We argue that current technical and legal attempts aimed at protecting Geoprivacy are insufficient. We propose a novel 2-dimensional model of privacy, which we term “civilized cyberspace.” On one dimension there are engineering, social and legal tools while on the other there are different kinds of interaction with information. We argue why such a civilized cyberspace protects privacy without sacrificing personal freedom on the one hand and opportunities for businesses on the other. We also discuss its realization and propose a technology stack including a permission service for geoprocessing.
Keywords: geoprivacy, geoprocessing, licensing, privacy model (ID#: 15-6120)
URL: http://doi.acm.org/10.1145/2675682.2676396
Xiao Shaun Wang, Kartik Nayak, Chang Liu, T-H. Hubert Chan, Elaine Shi, Emil Stefanov, Yan Huang; “Oblivious Data Structures,” CCS ’14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 215-226. doi:10.1145/2660267.2660314
Abstract: We design novel, asymptotically more efficient data structures and algorithms for programs whose data access patterns exhibit some degree of predictability. To this end, we propose two novel techniques, a pointer-based technique and a locality-based technique. We show that these two techniques are powerful building blocks in making data structures and algorithms oblivious. Specifically, we apply these techniques to a broad range of commonly used data structures, including maps, sets, priority-queues, stacks, deques; and algorithms, including a memory allocator algorithm, max-flow on graphs with low doubling dimension, and shortest-path distance queries on weighted planar graphs. Our oblivious counterparts of the above outperform the best known ORAM scheme both asymptotically and in practice.
Keywords: cryptography, oblivious algorithms, security (ID#: 15-6121)
URL: http://doi.acm.org/10.1145/2660267.2660314
Jinsheng Zhang, Wensheng Zhang, Daji Qiao; “S-ORAM: a Segmentation-based Oblivious RAM,” ASIA CCS ’14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 147-158. doi:10.1145/2590296.2590323
Abstract: As outsourcing data to remote storage servers gets popular, protecting a user’s pattern in accessing these data has become a big concern. ORAM constructions are promising solutions to this issue, but their application in practice has been impeded by the high communication and storage overheads incurred. Towards addressing this challenge, this paper proposes a segmentation-based ORAM (S-ORAM). It adopts two segment-based techniques, namely, piece-wise shuffling and segment-based query, to improve the performance of shuffling and query by factoring block size into the design. Extensive security analysis proves that S-ORAM is a highly secure solution with a negligible failure probability of O(N^(-log N)). In terms of communication and storage overheads, S-ORAM outperforms the Balanced ORAM (B-ORAM) and the Path ORAM (P-ORAM), which are the state-of-the-art hash and index based ORAMs respectively, in both practical and theoretical evaluations. Particularly, under practical settings, the communication overhead of S-ORAM is 12 to 23 times less than B-ORAM when they have the same constant-size user-side storage, and S-ORAM consumes 80% less server-side storage and around 60% to 72% less bandwidth than P-ORAM when they have similar logarithmic-size user-side storage.
Keywords: access pattern, data outsourcing, oblivious RAM, privacy (ID#: 15-6122)
URL: http://doi.acm.org/10.1145/2590296.2590323
Loi Luu, Shweta Shinde, Prateek Saxena, Brian Demsky; “A Model Counter for Constraints over Unbounded Strings,” PLDI ’14 Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2014, Pages 565-576. doi:10.1145/2594291.2594331
Abstract: Model counting is the problem of determining the number of solutions that satisfy a given set of constraints. Model counting has numerous applications in the quantitative analyses of program execution time, information flow, combinatorial circuit designs as well as probabilistic reasoning. We present a new approach to model counting for structured data types, specifically strings in this work. The key ingredient is a new technique that leverages generating functions as a basic primitive for combinatorial counting. Our tool SMC which embodies this approach can model count for constraints specified in an expressive string language efficiently and precisely, thereby outperforming previous finite-size analysis tools. SMC is expressive enough to model constraints arising in real-world JavaScript applications and UNIX C utilities. We demonstrate the practical feasibility of performing quantitative analyses arising in security applications, such as determining the comparative strengths of password strength meters and determining the information leakage via side channels.
Keywords: (not provided) (ID#: 15-6123)
URL: http://doi.acm.org/10.1145/2594291.2594331
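A tiny worked instance shows the generating-function primitive in action: the number of strings over a k-symbol alphabet of length exactly n is the coefficient of x^n in 1/(1 - kx), namely k^n, so a constraint lo <= len(s) <= hi has the sum of k^n for n from lo to hi as its model count.

    def count_length_constrained(k: int, lo: int, hi: int) -> int:
        # Sum the x^lo .. x^hi coefficients of the generating function.
        return sum(k ** n for n in range(lo, hi + 1))

    # e.g. printable-ASCII passwords of length 8 to 10
    print(count_length_constrained(95, 8, 10))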
Suman Phangal, Mukesh Kumar; “A Dual Security Scheme Using DNA Key-Based DNA Cryptography,” ICTCS ’14 Proceedings of the 2014 International Conference on Information and Communication Technology for Competitive Strategies, November 2014, Article No. 37. doi:10.1145/2677855.2677882
Abstract: Cryptography is one of the most traditional and secure approaches to providing reliable transmission over the web. The presented work is an improvement over the traditional symmetric cryptography approach, incorporating the concept of DNA sequencing. In this work, a two-stage model is presented to improve DNA cryptography. This cryptography model uses a DNA sequence as the input key to the system and uses DNA-object-based substitution for cryptography. The work is applied to images. The analysis of the work is done using MSE and PSNR values. The obtained results show the effective generation of the encrypted image.
Keywords: Cryptography, DNA, MSE, PSNR, Secure (ID#: 15-6124)
URL: http://doi.acm.org/10.1145/2677855.2677882
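The two evaluation metrics named in the abstract have standard definitions, shown here for 8-bit image data (these are independent of the paper's DNA scheme):

    import math

    def mse(a, b):
        # Mean squared error between two equal-length pixel sequences.
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    def psnr(a, b, peak: float = 255.0) -> float:
        # Peak signal-to-noise ratio in dB; infinite for identical images.
        m = mse(a, b)
        return float("inf") if m == 0 else 10.0 * math.log10(peak ** 2 / m)

    print(psnr([0, 128, 255], [2, 126, 250]))  # about 37.7 dB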
Hamidreza Ghafghazi, Amr El Mougy, Hussein T. Mouftah, Carlisle Adams; “Classification of Technological Privacy Techniques for LTE-Based Public Safety Networks,” Q2SWinet ’14 Proceedings of the 10th ACM symposium on QoS and Security for Wireless and Mobile Networks, September 2014, Pages 41-50. doi:10.1145/2642687.2642693
Abstract: Public Protection and Disaster Relief (PPDR) organizations emphasize the need for dedicated and broadband Public Safety Networks (PSNs) with the capability of providing a high level of security for critical communications. Considering the preceding fact, Long Term Evolution (LTE) has been chosen as the leading candidate technology for PSNs. However, a study of privacy challenges and requirements in LTE-based PSNs has not yet emerged. This paper aims to highlight those challenges and further discusses possible scenarios in which privacy might be violated in this particular environment. Then, a classification of technological privacy techniques is proposed in order to protect and enhance privacy in LTE-based PSNs. The given classification is a useful means for comparison and assessment of applicable privacy preserving methods. Moreover, our classification highlights further requirements and open problems for which available privacy techniques are not sufficient.
Keywords: long term evolution, privacy, private information retrieval, public safety networks (ID#: 15-6125)
URL: http://doi.acm.org/10.1145/2642687.2642693
Se Eun Oh, Ji Young Chun, Limin Jia, Deepak Garg, Carl A. Gunter, Anupam Datta; “Privacy-Preserving Audit for Broker-Based Health Information Exchange,” CODASPY ’14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 313-320. doi:10.1145/2557547.2557576
Abstract: Developments in health information technology have encouraged the establishment of distributed systems known as Health Information Exchanges (HIEs) to enable the sharing of patient records between institutions. In many cases, the parties running these exchanges wish to limit the amount of information they are responsible for holding because of sensitivities about patient information. Hence, there is an interest in broker-based HIEs that keep limited information in the exchange repositories. However, it is essential to audit these exchanges carefully due to risks of inappropriate data sharing. In this paper, we consider some of the requirements and present a design for auditing broker-based HIEs in a way that controls the information available in audit logs and regulates their release for investigations. Our approach is based on formal rules for audit and the use of Hierarchical Identity-Based Encryption (HIBE) to support staged release of data needed in audits and a balance between automated and manual reviews. We test our methodology via an extension of a standard for auditing HIEs called the Audit Trail and Node Authentication Profile (ATNA) protocol.
Keywords: audit, formal logic, health information technology, hierarchical identity based encryption (ID#: 15-6126)
URL: http://doi.acm.org/10.1145/2557547.2557576
David Koll, Jun Li, Xiaoming Fu; “SOUP: An Online Social Network by the People, for the People,” Middleware ’14 Proceedings of the 15th International Middleware Conference, December 2014, Pages 193-204. doi:10.1145/2663165.2663324
Abstract: Concomitant with the tremendous growth of online social networking (OSN) platforms are increasing concerns from users about their privacy and the protection of their data. As user data management is usually centralized, OSN providers nowadays have the unprecedented privilege to access every user’s private data, which makes large-scale privacy leakage at a single site possible. One way to address this issue is to decentralize user data management and replicate user data at individual end-user machines across the OSN. However, such an approach must address new challenges. In particular, it must achieve high availability of the data of every user with minimal replication overhead and without assuming any permanent online storage. At the same time, it needs to provide mechanisms for encrypting user data, controlling access to the data, and synchronizing the replicas. Moreover, it has to scale with large social networks and be resilient and adaptive in handling both high churn of regular participants and attacks from malicious users. While recent works in this direction only show limited success, we introduce a new, decentralized OSN called the Self-Organized Universe of People (SOUP). SOUP employs a scalable, robust and secure mirror selection design and can effectively distribute and manage encrypted user data replicas throughout the OSN. An extensive evaluation by simulation and a real-world deployment show that SOUP addresses all aforementioned challenges.
Keywords: OSN, decentralized OSN, online social networks, privacy (ID#: 15-6127)
URL: http://doi.acm.org/10.1145/2663165.2663324
Jude C. Nelson, Larry L. Peterson; “Syndicate: Virtual Cloud Storage Through Provider Composition,” BigSystem ’14 Proceedings of the 2014 ACM International Workshop on Software-Defined Ecosystems, June 2014, Pages 1-8. doi:10.1145/2609441.2609639
Abstract: Syndicate is a storage service that builds a coherent storage abstraction from already-deployed commodity components, including cloud storage, edge caches, and dataset providers. It is unique in that it not only offers consistent semantics across multiple providers, but also offers a flexible programming model to applications so they can define their own provider-agnostic storage functionality. In doing so, Syndicate fully decouples applications from providers, allowing applications to choose them based on how well they enhance data locality and durability, instead of whether or not they provide requisite features. This paper presents the motivation and design of Syndicate, and gives the results of a preliminary evaluation showing that separating storage functionality from provider implementation is feasible in practice.
Keywords: service composition, software-defined storage, storage gateway (ID#: 15-6128)
URL: http://doi.acm.org/10.1145/2609441.2609639
Varunya Attasena, Nouria Harbi, Jérôme Darmont; “fVSS: A New Secure and Cost-Efficient Scheme for Cloud Data Warehouses,” DOLAP ’14 Proceedings of the 17th International Workshop on Data Warehousing and OLAP, November 2014, Pages 81-90. doi:10.1145/2666158.2666173
Abstract: Cloud business intelligence is an increasingly popular choice to deliver decision support capabilities via elastic, pay-per-use resources. However, data security issues are one of the top concerns when dealing with sensitive data. In this paper, we propose a novel approach for securing cloud data warehouses by flexible verifiable secret sharing, fVSS. Secret sharing encrypts and distributes data over several cloud service providers, thus enforcing data privacy and availability. fVSS addresses four shortcomings in existing secret sharing-based approaches. First, it allows refreshing the data warehouse when some service providers fail. Second, it allows on-line analysis processing. Third, it enforces data integrity with the help of both inner and outer signatures. Fourth, it helps users control the cost of cloud warehousing by balancing the load among service providers with respect to their pricing policies. To illustrate fVSS’ efficiency, we thoroughly compare it with existing secret sharing-based approaches with respect to security features, querying power and data storage and computing costs.
Keywords: OLAP, cloud computing, data availability, data integrity, data privacy, data warehouses, secret sharing (ID#: 15-6129)
URL: http://doi.acm.org/10.1145/2666158.2666173
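The primitive underneath fVSS is secret sharing across several providers. A plain Shamir (t, n) sketch over a prime field shows the mechanics; the verification signatures and provider cost balancing that fVSS adds are not shown.

    import random

    P = 2**31 - 1  # prime field modulus

    def share(secret: int, n: int, t: int):
        # Random degree-(t-1) polynomial with the secret as constant term.
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        total = 0
        for j, (xj, yj) in enumerate(shares):
            num = den = 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            total = (total + yj * num * pow(den, P - 2, P)) % P
        return total

    shares = share(123456789, n=5, t=3)   # one share per cloud provider
    assert reconstruct(shares[:3]) == 123456789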
Tomáš Pevný, Andrew D. Ker; “Steganographic Key Leakage Through Payload Metadata,” IH&MMSec ’14 Proceedings of the 2nd ACM Workshop on Information Hiding and Multimedia Security, June 2014, Pages 109-114. doi:10.1145/2600918.2600921
Abstract: The only steganalysis attack which can provide absolute certainty about the presence of payload is one which finds the embedding key. In this paper we consider refined versions of the key exhaustion attack exploiting metadata such as message length or decoding matrix size, which must be stored along with the payload. We show that simple implementation errors lead to leakage of key information and enable powerful inference attacks; furthermore, complete absence of information leakage seems difficult to avoid. This topic has been somewhat neglected in the literature for the last ten years, but must be considered in real-world implementations.
Keywords: bayesian inference, brute-force attack, key leakage, steganographic security (ID#: 15-6130)
URL: http://doi.acm.org/10.1145/2600918.2600921
Greig Paul, James Irvine; “Privacy Implications of Wearable Health Devices,” SIN ’14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Page 117. doi:10.1145/2659651.2659683
Abstract: With the recent rise in popularity of wearable personal health monitoring devices, a number of concerns regarding user privacy are raised, specifically with regard to how the providers of these devices make use of the data obtained from these devices, and the protections that user data enjoys. With waterproof monitors intended to be worn 24 hours per day, and companion smartphone applications able to offer analysis and sharing of activity data, we investigate and compare the privacy policies of four services, and the extent to which these services protect user privacy, as we find these services do not fall within the scope of existing legislation regarding the privacy of health data. We then present a set of criteria which would preserve user privacy, and avoid the concerns identified within the policies of the services investigated.
Keywords: Health monitoring, privacy, security, wearables (ID#: 15-6131)
URL: http://doi.acm.org/10.1145/2659651.2659683
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Security Measurement and Metric Methods, 2014 |
Measurement and metrics are hard problems in the Science of Security. The research cited here looks at methods and techniques for developing valid measurements. This work was presented in 2014.
Moeti, M.; Kalema, B.M., "Analytical Hierarchy Process Approach for the Metrics of Information Security Management Framework," Computational Intelligence, Communication Systems and Networks (CICSyN), 2014 Sixth International Conference on, pp. 89-94, 27-29 May 2014. doi:10.1109/CICSyN.2014.31
Abstract: Organizations' information technology systems are increasingly being attacked and exposed to risks that lead to the loss of valuable information and money. The vulnerable systems and applications are basically networks, databases, web services, internet-based services and communications, mobile technologies, and the people issues associated with them. The major objective of this study, therefore, was to identify metrics needed for the development of an information security management framework. From related literature, relevant metrics were identified using textual analysis and grouped into six categories: organizational, environmental, contingency management, security policy, internal control, and information and risk management. These metrics were validated in a framework by using the analytical hierarchy process (AHP) method. Results of the study indicated that environmental metrics play a critical role in information security management as compared to other metrics, whereas the information and risk management metrics were found to be less significant in the rankings. This study contributes to the information security management body of knowledge by providing a single empirically validated framework that can be used theoretically, to extend research in the domain of the study, and practically, by management making decisions related to security management.
Keywords: Internet; analytic hierarchy process; risk management; security of data; AHP; Internet-based services; Web services; analytical hierarchy process approach; databases; information security management framework metrics; mobile technologies; organizations information technology systems; risk management metrics; security management; Contingency management; Educational institutions; Information security; Measurement; Organizations; Risk management; analytical hierarchical process; information security metrics; integrated system theory; theories of information security (ID#: 15-6060)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7059150&isnumber=7058962
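In brief, AHP derives metric weights from a pairwise-comparison matrix A, where A[i][j] states how much more important metric i is than metric j; the priority vector is the principal eigenvector of A, commonly approximated by normalized row geometric means as below. The matrix values here are invented for illustration.

    import math

    def ahp_weights(A):
        # Normalized geometric mean of each row approximates the
        # principal eigenvector of the comparison matrix.
        gm = [math.prod(row) ** (1.0 / len(row)) for row in A]
        s = sum(gm)
        return [g / s for g in gm]

    # e.g. environmental vs. organizational vs. security-policy metrics
    A = [[1.0, 3.0, 5.0],
         [1/3, 1.0, 2.0],
         [1/5, 1/2, 1.0]]
    print(ahp_weights(A))  # first weight dominates, echoing the study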
Manandhar, K.; Xiaojun Cao; Fei Hu; Yao Liu, "Combating False Data Injection Attacks in Smart Grid Using Kalman Filter," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp. 16-20, 3-6 Feb. 2014. doi:10.1109/ICCNC.2014.6785297
Abstract: The security of the Smart Grid, being one of the very important aspects of the Smart Grid system, is studied in this paper. We first discuss different pitfalls in the security of the Smart Grid system, considering the communication infrastructure among the sensors, actuators, and control systems. Following that, we derive a mathematical model of the system and propose a robust security framework for the power grid. To effectively estimate the variables of a wide range of state processes in the model, we adopt the Kalman Filter in the framework. The Kalman Filter estimates and system readings are then fed into the χ2 detectors and the proposed Euclidean detectors, which can detect various attacks and faults in the power system, including False Data Injection Attacks. The χ2-detector is a proven-effective exploratory method used with the Kalman Filter for the measurement of the relationship between dependent variables and a series of predictor variables. The χ2-detector can detect system faults/attacks such as replay and DoS attacks. However, the study shows that the χ2-detectors are unable to detect statistically derived False Data Injection Attacks, while the Euclidean distance metrics can identify such sophisticated injection attacks.
Keywords: Kalman filters; computer network security; electric sensing devices; fault diagnosis; power engineering computing; power system faults; power system security; power system state estimation; smart power grids; X2-square detector; DoS attacks; Euclidean detector; Euclidean distance metrics; Kalman filter; actuators; communication infrastructure; control systems; false data injection attack detection; fault detection; mathematical model; power system; predictor variable series; sensors; smart power grid security; state process; Detectors; Equations; Kalman filters; Mathematical model; Security; Smart grids (ID#: 15-6061)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785297&isnumber=6785290
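A one-dimensional toy conveys the residual (chi-square) test this line of work applies: the filter predicts the next reading, and a measurement whose normalized innovation exceeds a threshold is flagged instead of being folded into the estimate. The noise parameters and identity dynamics below are illustrative; the paper's model is a full state-space formulation.

    def kalman_chi2(zs, q=1e-4, r=0.1, threshold=9.0):
        x, p = zs[0], 1.0                 # state estimate and covariance
        alarms = []
        for k, z in enumerate(zs[1:], 1):
            p = p + q                     # predict step (identity dynamics)
            innov = z - x                 # innovation (residual)
            s = p + r                     # innovation covariance
            if innov * innov / s > threshold:
                alarms.append(k)          # normalized residual too large: suspect injection
                continue                  # do not update with the bad reading
            g = p / s                     # Kalman gain
            x, p = x + g * innov, (1 - g) * p
        return alarms

    readings = [1.0, 1.02, 0.98, 1.01, 5.0, 1.0]  # one injected value
    print(kalman_chi2(readings))                   # -> [4]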
Karabat, C.; Topcu, B., "How to Assess Privacy Preservation Capability of Biohashing Methods?: Privacy Metrics," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp. 2217-2220, 23-25 April 2014. doi:10.1109/SIU.2014.6830705
Abstract: In this paper, we evaluate the privacy preservation capability of biometric hashing methods. Although there is some work in the literature on the privacy evaluation of biometric template protection methods, it fails to cover all biometric template protection methods. To the best of our knowledge, there is no work on privacy metrics and assessment for biometric hashing methods. In this work, we use several metrics under different threat scenarios to assess the privacy protection level of biometric hashing methods. The simulation results demonstrate that biometric hash vectors may leak private information, especially under advanced threat scenarios.
Keywords: authorisation; biometrics (access control); data protection; biometric hash vectors; biometric hashing methods; biometric template protection methods; privacy metrics; privacy preservation capability assessment; privacy preservation capability evaluation; privacy protection level assessment; private information leakage; threat scenarios; Conferences; Internet; Measurement; Privacy; Security; Signal processing; Simulation; biometric; biometric hash; metrics; privacy (ID#: 15-6062)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830705&isnumber=6830164
Hong, J.B.; Dong Seong Kim; Haqiq, A., "What Vulnerability Do We Need to Patch First?," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp. 684-689, 23-26 June 2014. doi:10.1109/DSN.2014.68
Abstract: Computing a prioritized set of vulnerabilities to patch is important for system administrators, who must determine the order in which to patch the vulnerabilities most critical to network security. One way to assess and analyze security to find vulnerabilities to be patched is to use attack representation models (ARMs). However, security solutions using ARMs are optimized only for the current state of the networked system. Therefore, the ARM must reanalyze the network security, causing multiple iterations of the same task to obtain the prioritized set of vulnerabilities to patch. To address this problem, we propose to use importance measures to rank network hosts and vulnerabilities, then combine these measures to prioritize the order of vulnerabilities to be patched. We show that a nearly equivalent prioritized set of vulnerabilities can be computed in comparison to an exhaustive search method in various network scenarios, while the performance of computing the set is dramatically improved.
Keywords: security of data; ARM; attack representation models; importance measures; network hosts; network security; networked system; prioritized set; security solutions; system administrators; vulnerability patch; Analytical models; Computational modeling; Equations; Mathematical model; Measurement; Scalability; Security; Attack Representation Model; Network Centrality; Security Analysis; Security Management; Security Metrics; Vulnerability Patch (ID#: 15-6063)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903625&isnumber=6903544
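The importance-measure idea can be miniaturized as follows: score each vulnerability by a host centrality times its severity and patch in descending order, rather than regenerating a full attack model after every change. The topology, centrality choice, and CVSS-like scores below are invented for illustration and are not the paper's exact measures.

    # Hypothetical network and vulnerability data.
    edges = {("web", "app"), ("app", "db"), ("web", "db"), ("vpn", "app")}
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1

    vulns = {"web": [("CVE-A", 9.8)], "app": [("CVE-B", 7.5), ("CVE-C", 6.1)],
             "db": [("CVE-D", 8.8)], "vpn": [("CVE-E", 5.0)]}
    order = sorted(((degree[h] * s, h, v) for h, vs in vulns.items()
                    for v, s in vs), reverse=True)
    print([v for _, _, v in order])  # patch CVE-B first under this toy measure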
Nascimento, Z.; Sadok, D.; Fernandes, S.; Kelner, J., "Multi-Objective Optimization of a Hybrid Model for Network Traffic Classification by Combining Machine Learning Techniques," Neural Networks (IJCNN), 2014 International Joint Conference on, pp. 2116-2122, 6-11 July 2014. doi:10.1109/IJCNN.2014.6889935
Abstract: Considerable effort has been made by researchers in the area of network traffic classification, since the Internet is constantly changing. This characteristic makes the task of traffic identification not a straightforward process. Besides that, encrypted data is being widely used by applications and protocols. There are several methods for classifying network traffic, such as known ports and Deep Packet Inspection (DPI), but they are not effective since many applications constantly randomize their ports and the payload could be encrypted. This paper proposes a hybrid model that makes use of a classifier based on computational intelligence, the Extreme Learning Machine (ELM), along with Feature Selection (FS) and Multi-objective Genetic Algorithms (MOGA) to classify computer network traffic without making use of the payload or port information. The proposed model presented good results when evaluated against the UNIBS data set, using four performance metrics: Recall, Precision, Flow Accuracy and Byte Accuracy, with most rates exceeding 90%. Besides that, it identified the best features and feature selection algorithm for the given problem, along with the best ELM parameters.
Keywords: Internet; computer network security; cryptography; feature selection; genetic algorithms; learning (artificial intelligence); pattern classification; protocols; telecommunication traffic; DPI; ELM parameters; Internet; MOGA; UNIBS data set; byte accuracy; computational intelligence; computer network traffic classification; deep packet inspection; encrypted data; extreme learning machine; feature selection algorithm; flow accuracy; hybrid model; machine learning techniques; multiobjective genetic algorithms; multiobjective optimization; payload encryption; precision; protocols; recall; traffic identification; Accuracy; Computational modeling; Genetic algorithms; Measurement; Optimization; Ports (Computers); Protocols (ID#: 15-6064)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889935&isnumber=6889358
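For readers unfamiliar with the classifier at the core of this model: an Extreme Learning Machine is a single-hidden-layer network whose input weights are random and whose output weights are solved in closed form. A minimal sketch follows (the paper's feature selection and MOGA tuning are omitted, and the toy data is invented):

```python
# Minimal Extreme Learning Machine (ELM) sketch: random hidden layer,
# closed-form (least-squares) output weights. The paper's feature
# selection and MOGA parameter search are omitted.
import numpy as np

class ELM:
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, Y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # random projection of flow features
        self.beta = np.linalg.pinv(H) @ Y     # solve output weights in one shot
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))            # 200 flows, 10 invented features
y = (X[:, 0] > 0).astype(int)                 # toy two-class labels
print((ELM().fit(X, np.eye(2)[y]).predict(X) == y).mean())  # training accuracy
```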
Hatzivasilis, G.; Papaefstathiou, I.; Manifavas, C.; Papadakis, N., "A Reasoning System for Composition Verification and Security Validation," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, vol., no., pp. 1, 4, March 30 2014-April 2 2014. doi:10.1109/NTMS.2014.6814001
Abstract: The procedure to prove that a system-of-systems is composable and secure is a very difficult task. Formal methods are mathematically-based techniques used for the specification, development and verification of software and hardware systems. This paper presents a model-based framework for dynamic embedded system composition and security evaluation. Event Calculus is applied for modeling the security behavior of a dynamic system and calculating its security level with the progress in time. The framework includes two main functionalities: composition validation and derivation of security and performance metrics and properties. Starting from an initial system state and given a series of further composition events, the framework derives the final system state as well as its security and performance metrics and properties. We implement the proposed framework in an epistemic reasoner, the rule engine JESS with an extension of DECKT for the reasoning process and the JAVA programming language.
Keywords: Java; embedded systems; formal specification; formal verification; reasoning about programs; security of data; software metrics; temporal logic; DECKT; JAVA programming language; composition validation; composition verification; dynamic embedded system composition; epistemic reasoner; event calculus; formal methods; model-based framework; performance metrics; reasoning system; rule engine JESS; security evaluation; security validation; system specification; system-of-systems; Cognition; Computational modeling; Embedded systems; Measurement; Protocols; Security; Unified modeling language (ID#: 15-6065)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814001&isnumber=6813963
Axelrod, C.W., "Reducing Software Assurance Risks for Security-Critical and Safety-Critical Systems," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, vol., no., pp. 1, 6, 2-2 May 2014. doi:10.1109/LISAT.2014.6845212
Abstract: According to the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)), the US Department of Defense (DoD) recognizes that there is a “persistent lack of a consistent approach ... for the certification of software assurance tools, testing and methodologies” [1]. As a result, the ASD(R&E) is seeking “to address vulnerabilities and weaknesses to cyber threats of the software that operates ... routine applications and critical kinetic systems ...” The mitigation of these risks has been recognized as a significant issue to be addressed in both the public and private sectors. In this paper we examine deficiencies in various software-assurance approaches and suggest ways in which they can be improved. We take a broad look at current approaches, identify their inherent weaknesses and propose approaches that serve to reduce risks. Some technical, economic and governance issues are: (1) Development of software-assurance technical standards (2) Management of software-assurance standards (3) Evaluation of tools, techniques, and metrics (4) Determination of update frequency for tools, techniques (5) Focus on most pressing threats to software systems (6) Suggestions as to risk-reducing research areas (7) Establishment of models of the economics of software-assurance solutions, and testing and certifying software We show that, in order to improve current software assurance policy and practices, particularly with respect to security, there has to be a major overhaul in how software is developed, especially with respect to the requirements and testing phases of the SDLC (Software Development Lifecycle). We also suggest that the current preventative approaches are inadequate and that greater reliance should be placed upon avoidance and deterrence. We also recommend that those developing and operating security-critical and safety-critical systems exchange best-of-breed software assurance methods to prevent the vulnerability of components leading to compromise of entire systems of systems. The recent catastrophic loss of a Malaysia Airlines airplane is then presented as an example of possible compromises of physical and logical security of on-board communications and management and control systems.
Keywords: program testing; safety-critical software; software development management; software metrics; ASD(R&E); Assistant Secretary of Defense for Research and Engineering; Malaysia Airlines airplane; SDLC; US Department of Defense; US DoD; component vulnerability prevention; control systems; critical kinetic systems; cyber threats; economic issues; governance issues; logical security; management systems; on-board communications; physical security; private sectors; public sectors; risk mitigation; safety-critical systems; security-critical systems; software assurance risk reduction; software assurance tool certification; software development; software development lifecycle; software methodologies; software metric evaluation; software requirements; software system threats; software technique evaluation; software testing; software tool evaluation; software-assurance standard management; software-assurance technical standard development; technical issues; update frequency determination; Measurement; Organizations; Security; Software systems; Standards; Testing; cyber threats; cyber-physical systems; governance; risk; safety-critical systems; security-critical systems; software assurance; technical standards; vulnerabilities; weaknesses (ID#: 15-6066)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845212&isnumber=6845183
Chulhee Lee; Jiheon Ok; Guiwon Seo, "Objective Video Quality Measurement Using Embedded VQMs," Heterogeneous Networking for Quality, Reliability, Security and Robustness (QShine), 2014 10th International Conference on, vol., no., pp. 129, 130, 18-20 Aug. 2014. doi:10.1109/QSHINE.2014.6928671
Abstract: Video quality monitoring has become an important issue as multimedia data is increasingly being transmitted over the Internet and wireless channels where transmission errors can frequently occur. Although no-reference models are suitable for such applications, current no-reference methods do not provide acceptable performance. In this paper, we propose an objective video quality assessment method using embedded video quality metrics (VQMs). In the proposed method, the video quality of encoded video data is computed at the transmitter during the encoding process. The computed VQMs are embedded in the compressed data. If there are no transmission errors, the video quality at the receiver is identical to that of the transmitting side. If there are transmission errors, the receiver adjusts the embedded VQMs by taking into account the effects of transmission errors. The proposed method is fast and provides good performance.
Keywords: data compression; video coding; embedded VQM; embedded video quality metric; multimedia data; objective video quality measurement; video data encoding; video quality monitoring; Bit rate; Neural networks; Quality assessment; Receivers; Transmitters; Video recording; Video sequences; embedded VQM; no-reference; quality monitoring; video quality assessment (ID#: 15-6067)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6928671&isnumber=6928645
Shangdong Liu; Jian Gong; Jianxin Chen; Yanbing Peng; Wang Yang; Weiwei Zhang; Jakalan, A., "A Flow Based Method to Detect Penetration," Advanced Infocomm Technology (ICAIT), 2014 IEEE 7th International Conference on, vol., no., pp. 184, 191, 14-16 Nov. 2014. doi:10.1109/ICAIT.2014.7019551
Abstract: With the rapid expansion of the Internet, network security has become more and more important. An IDS (Intrusion Detection System) is an important technology for coping with network attacks and comes in two main types: network based (NIDS) and host based (HIDS). In this paper, we propose the concept of NFPPB (Network Flow Patterns of Penetrating Behavior) against vulnerable network ports and design a NIDS algorithm to detect attackers' infiltration behaviors. Essentially, NFPPB is a set of metrics calculated from network activities exploiting the vulnerabilities of hosts. The paper investigates the selection, generation, and comparison of NFPPB metrics. Experiments show that the method is effective and highly efficient. Finally, the paper addresses future directions and points that need to be improved.
Keywords: computer network security; IDS; flow based method; intrusion detection system; network attacks; network flow patterns of penetrating behavior; network security; network vulnerable ports; Educational institutions; IP networks; Law; Measurement; Ports (Computers); Security; Flow Records; IDS; Infiltration Detection; Penetration Detection (ID#: 15-6068)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019551&isnumber=7019521
Feng Li; Chin-Tser Huang; Jie Huang; Wei Peng, "Feedback-Based Smartphone Strategic Sampling for BYOD Security," Computer Communication and Networks (ICCCN), 2014 23rd International Conference on, vol., no., pp. 1, 8, 4-7 Aug. 2014. doi:10.1109/ICCCN.2014.6911814
Abstract: Bring Your Own Device (BYOD) is an information technology (IT) policy that allows employees to use their own wireless devices to access the internal network at work. Mobile malware is a major security concern that impedes BYOD's further adoption in enterprises. Existing works identify the need for better BYOD security mechanisms that balance the strength of such mechanisms against the costs of implementing them. In this paper, based on the idea of a self-reinforced feedback loop, we propose a periodic smartphone sampling mechanism that significantly improves the effectiveness of BYOD security mechanisms without incurring further costs. We quantify the likelihood that "a BYOD smartphone is infected by malware" by two metrics, vulnerability and uncertainty, and base the iterative sampling process on these two metrics; the updated values of these metrics are fed back into future rounds of the mechanism to complete the feedback loop. We validate the efficiency and effectiveness of the proposed strategic sampling via simulations driven by publicly available, real-world collected traces.
Keywords: invasive software; iterative methods; mobile computing; sampling methods; smart phones; telecommunication security; BYOD security; BYOD smartphone; Bring Your Own Device; IT policy; feedback-based smartphone strategic sampling; information technology; iterative sampling process; mobile malware; periodic smartphone sampling mechanism; self-reinforced feedback loop; wireless device; Feedback loop; Malware; Measurement; Topology; Uncertainty; Wireless communication; Enterprise network; probabilistic algorithm; smartphone security; social network; strategic sampling; uncertainty metric; vulnerability metric (ID#: 15-6069)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911814&isnumber=6911704
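The core loop is easy to picture: sample the devices with the highest combined vulnerability-and-uncertainty score, then feed scan outcomes back into the scores. The sketch below is a guess at one such loop, not the authors' mechanism; the update constants (0.2, 0.5, 0.1) are arbitrary:

```python
# Hypothetical feedback-loop sampler: scan the devices we are most
# worried about and least sure about, then update both metrics.
def pick_devices(fleet, k=2):
    return sorted(fleet, key=lambda d: d["vuln"] * d["uncert"], reverse=True)[:k]

def feed_back(device, infected):
    if infected:
        device["vuln"] = min(1.0, device["vuln"] + 0.2)  # confirmed risk rises
    device["uncert"] *= 0.5                              # a fresh scan halves uncertainty

fleet = [{"id": i, "vuln": 0.5, "uncert": 1.0} for i in range(5)]
for _ in range(3):                                       # three sampling rounds
    chosen = {d["id"] for d in pick_devices(fleet)}
    for dev in fleet:
        if dev["id"] in chosen:
            feed_back(dev, infected=(dev["id"] == 0))    # pretend device 0 is infected
        else:
            dev["uncert"] = min(1.0, dev["uncert"] + 0.1)  # unscanned: uncertainty grows
print([(d["id"], round(d["vuln"] * d["uncert"], 2)) for d in fleet])
```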
Vaarandi, R.; Pihelgas, M., "Using Security Logs for Collecting and Reporting Technical Security Metrics," Military Communications Conference (MILCOM), 2014 IEEE, vol., no., pp. 294, 299, 6-8 Oct. 2014. doi:10.1109/MILCOM.2014.53
Abstract: During recent years, establishing proper metrics for measuring system security has received increasing attention. Security logs contain vast amounts of information which are essential for creating many security metrics. Unfortunately, security logs are known to be very large, making their analysis a difficult task. Furthermore, recent security metrics research has focused on generic concepts, and the issue of collecting security metrics with log analysis methods has not been well studied. In this paper, we will first focus on using log analysis techniques for collecting technical security metrics from security logs of common types (e.g., network IDS alarm logs, workstation logs, and NetFlow data sets). We will also describe a production framework for collecting and reporting technical security metrics which is based on novel open-source technologies for big data.
Keywords: Big Data; computer network security; big data; log analysis methods; log analysis techniques; open source technology; security logs; technical security metric collection; technical security metric reporting; Correlation; Internet; Measurement; Monitoring; Peer-to-peer computing; Security; Workstations; security log analysis; security metrics (ID#: 15-6070)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956774&isnumber=6956719
Scholtz, J.; Endert, A., "User-Centered Design Guidelines for Collaborative Software for Intelligence Analysis," Collaboration Technologies and Systems (CTS), 2014 International Conference on, vol., no., pp. 478, 482, 19-23 May 2014. doi:10.1109/CTS.2014.6867610
Abstract: In this position paper we discuss the necessity of using User-Centered Design (UCD) methods in order to design collaborative software for the intelligence community. We discuss a number of studies of collaboration in the intelligence community and use this information to provide some guidelines for collaboration software.
Keywords: groupware; police data processing; user centred design; UCD methods; collaborative software; intelligence analysis; intelligence community; user-centered design guidelines; Collaborative software; Communities; Guidelines; Integrated circuits; Measurement; Software; Intelligence community; collaboration; evaluation; metrics; user-centered design (ID#: 15-6071)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6867610&isnumber=6867522
Keramati, Marjan; Keramati, Mahsa, "Novel Security Metrics for Ranking Vulnerabilities in Computer Networks," Telecommunications (IST), 2014 7th International Symposium on, vol., no., pp. 883, 888, 9-11 Sept. 2014. doi:10.1109/ISTEL.2014.7000828
Abstract: With the daily appearance of new vulnerabilities and of new ways of intruding into networks, network hardening has become one of the most important tasks in network security, and it can be accomplished by patching vulnerabilities. Patching all vulnerabilities, however, may impose high costs on the network, so we should try to eliminate only the most perilous vulnerabilities. CVSS by itself can score vulnerabilities based on the amount of damage they incur in the network, but the main problem with CVSS is that it can only score individual vulnerabilities, without considering their relationships with other vulnerabilities in the network. To help fill this gap, in this paper we define some attack-graph- and CVSS-based security metrics that can help us prioritize vulnerabilities in the network by measuring the probability of exploiting them as well as the amount of damage they will impose on the network. The proposed security metrics are defined by considering the interactions between all vulnerabilities of the network, so our method can rank vulnerabilities based on the network in which they exist. Results of applying these security metrics to one well-known example network are also shown, demonstrating the effectiveness of our approach.
Keywords: computer network security; matrix algebra; probability; CVSS-based security metrics; common vulnerability scoring system; computer network; intruding network security; probability; ranking vulnerability; Availability; Communication networks; Complexity theory; Computer networks; Educational institutions; Measurement; Security; Attack Graph; CVSS; Exploit; Network hardening; Security Metric; Vulnerability (ID#: 15-6072)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7000828&isnumber=7000650
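One common way to combine CVSS with an attack graph, which this abstract suggests, is to treat each exploit as a probabilistic step and score paths by the product of per-step probabilities. The sketch below illustrates that generic idea, not the paper's exact metrics; the probabilities and topology are invented:

```python
# Illustrative attack-graph score: each edge carries a CVSS-derived
# exploit probability (e.g. exploitability subscore / 10); a path's
# probability is the product of its edges, a common attack-graph metric.
import networkx as nx

def path_exploit_probability(graph, source, target):
    """Max product of per-edge exploit probabilities over all attack paths."""
    best = 0.0
    for path in nx.all_simple_paths(graph, source, target):
        p = 1.0
        for u, v in zip(path, path[1:]):
            p *= graph[u][v]["prob"]
        best = max(best, p)
    return best

g = nx.DiGraph()
g.add_edge("attacker", "web", prob=0.86)   # e.g. CVSS exploitability 8.6
g.add_edge("web", "db", prob=0.55)
g.add_edge("attacker", "db", prob=0.10)
print(path_exploit_probability(g, "attacker", "db"))  # 0.473 via the web host
```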
Samuvelraj, G.; Nalini, N., "A Survey of Self Organizing Trust Method to Avoid Malicious Peers from Peer to Peer Network," Green Computing Communication and Electrical Engineering (ICGCCEE), 2014 International Conference on, vol., no., pp. 1, 4, 6-8 March 2014. doi:10.1109/ICGCCEE.2014.6921379
Abstract: Networks are subject to attacks from malicious sources. Sending data securely over the network is one of the most tedious processes. A peer-to-peer (P2P) network is a type of decentralized and distributed network architecture in which individual nodes in the network act as both servers and clients of resources. Peer-to-peer systems are incredibly flexible and can be used for a wide range of functions, but they are also prone to malicious attacks. To provide security over a peer-to-peer system, the self-organizing trust model has been proposed. Here, the trustworthiness of peers is calculated based on past interactions and recommendations, which are evaluated based on importance, recentness, and satisfaction parameters. In this way, good peers are able to form trust relationships in their proximity and avoid malicious peers.
Keywords: client-server systems; computer network security; fault tolerant computing; peer-to-peer computing; recommender systems; trusted computing; P2P network; client-server resources; decentralized network architecture; distributed network architecture; malicious attacks; malicious peers; malicious sources; peer to peer network; peer to peer systems; peer trustworthiness; satisfaction parameters; self organizing trust method; self-organizing trust model; Computer science; History; Measurement; Organizing; Peer-to-peer computing; Security; Servers; Metrics; Network Security; Peer to Peer; SORT (ID#: 15-6073)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921379&isnumber=6920919
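A SORT-style trust score aggregates past interactions weighted by importance, recentness, and satisfaction. The following sketch is one plausible reading of that recipe (exponential decay stands in for recentness; the half-life and history are invented), not the model's actual equations:

```python
# Plausible sketch of a trust score from past interactions. Each
# interaction carries a satisfaction value in [0, 1], an importance
# weight, and a timestamp; recentness is modelled as exponential decay.
import math
import time

def trust_score(interactions, half_life=86400.0):
    """Weighted average of satisfaction, decayed by interaction age."""
    now = time.time()
    num = den = 0.0
    for sat, importance, ts in interactions:
        decay = math.exp(-math.log(2) * (now - ts) / half_life)
        w = importance * decay
        num += w * sat
        den += w
    return num / den if den else 0.0   # unknown peers get zero trust

history = [(0.9, 1.0, time.time() - 3600),          # recent, good interaction
           (0.2, 0.5, time.time() - 7 * 86400)]     # old, bad interaction
print(f"trust = {trust_score(history):.2f}")        # dominated by the recent one
```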
Desouky, A.F.; Beard, M.D.; Etzkorn, L.H., "A Qualitative Analysis of Code Clones and Object Oriented Runtime Complexity Based on Method Access Points," Convergence of Technology (I2CT), 2014 International Conference for, vol., no., pp. 1, 5, 6-8 April 2014. doi:10.1109/I2CT.2014.7092292
Abstract: In this paper, we present a new object oriented complexity metric based on runtime method access points. Software engineering metrics have traditionally indicated the level of quality present in a software system. However, the analysis and measurement of quality has long been captured at compile time, rendering useful results, although potentially incomplete, since all source code is considered in metric computation, versus the subset of code that actually executes. In this study, we examine the runtime behavior of our proposed metric on an open source software package, Rhino 1.7R4. We compute and validate our metric by correlating it with code clones and bug data. Code clones are considered to make software more complex and harder to maintain. When cloned, a code fragment with an error quickly transforms into two (or more) errors, both of which can affect the software system in unique ways. Thus a larger number of code clones is generally considered to indicate poorer software quality. For this reason, we consider that clones function as an external quality factor, in addition to bugs, for metric validation.
Keywords: object-oriented programming; program verification; public domain software; security of data; software metrics; software quality; source code (software); Rhino 1.7R4; bug data; code clones; metric computation; metric validation; object oriented runtime complexity; open source software package; qualitative analysis; runtime method access points; software engineering metrics; source code; Cloning; Complexity theory; Computer bugs; Correlation; Measurement; Runtime; Software; Code Clones; Complexity; Object Behavior; Object Oriented Runtime Metrics; Software Engineering (ID#: 15-6074)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092292&isnumber=7092013
Snigurov, A.; Chakrian, V., "The DoS Attack Risk Calculation Based on the Entropy Method and Critical System Resources Usage," Infocommunications Science and Technology, 2014 First International Scientific-Practical Conference Problems of, vol., no., pp. 186, 187, 14-17 Oct. 2014. doi:10.1109/INFOCOMMST.2014.6992346
Abstract: The paper focuses on an algorithm for calculating denial-of-service risk using the entropy method and considering additional coefficients of critical system resource usage on the network node. Decisions on traffic routing or attack prevention can then be made based on the level of risk.
Keywords: computer network security; telecommunication traffic; DoS attack risk calculation; critical system resource usage; denial of service risk calculation; entropy method; traffic routing; Computer crime; Entropy; Information security; Random access memory; Routing; Sockets; Time measurement; DoS attack; entropy; information security risk; routing metrics (ID#: 15-6075)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6992346&isnumber=6992271
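The abstract gives enough to sketch the general shape of such a risk score: a Shannon-entropy term over observed traffic, scaled by a resource-usage coefficient. The code below is an illustrative guess, under the assumption that concentrated single-source traffic lowers source-address entropy; the paper's actual coefficients and formula may differ:

```python
# Illustrative DoS risk score: risk rises when the entropy of source
# addresses drops (traffic concentrates on one sender) and is scaled
# by a critical-resource usage coefficient (CPU / RAM / sockets).
import math
from collections import Counter

def shannon_entropy(samples):
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def dos_risk(src_addresses, resource_usage):
    """Lower entropy + higher resource usage => higher risk, in [0, 1]."""
    h = shannon_entropy(src_addresses)
    h_max = math.log2(len(set(src_addresses))) or 1.0   # guard the 1-source case
    return (1.0 - h / h_max) * resource_usage

# mostly one source hammering the node, 90% socket usage
flows = ["10.0.0.5"] * 95 + ["10.0.0.6", "10.0.0.7"] * 2 + ["10.0.0.8"]
print(f"risk = {dos_risk(flows, resource_usage=0.9):.2f}")
```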
Zabasta, A.; Casaliccio, E.; Kunicina, N.; Ribickis, L., "A Numerical Model for Evaluation Power Outages Impact on Water Infrastructure Services Sustainability," Power Electronics and Applications (EPE'14-ECCE Europe), 2014 16th European Conference on, vol., no., pp. 1, 10, 26-28 Aug. 2014. doi:10.1109/EPE.2014.6910703
Abstract: The security, stability, and reliability of critical infrastructures (CI) (electricity, heat, water, and information and communication technology networks) are closely related to interaction phenomena among them. Due to the increasing amount of data transferred and the growing dependence on telecommunications and internet services, data integrity and security are becoming very important aspects for utility service providers and energy suppliers. In such circumstances, there is an increasing need for methods and tools that enable infrastructure managers to evaluate and predict their critical infrastructure operations when failures, emergencies, or service degradation occur in other related infrastructures. Using a simulation model, the authors experimentally test a method for exploring how the average downtime of water supply network nodes depends on battery lifetime and battery replacement time cross-correlations, within a given parameter set, when outages in the power infrastructure arise, also taking into account the impact of telecommunication nodes. The model studies the real case of the Latvian city of Ventspils. The proposed approach to the analysis of critical infrastructure interdependencies will be useful for the practical adoption of methods, models, and metrics by CI operators and stakeholders.
Keywords: critical infrastructures; polynomial approximation; power system reliability; power system security; power system stability; water supply; CI operators; average down time dependence; battery life time; battery replacement time cross-correlations; critical infrastructure operations; critical infrastructure security; critical infrastructures interdependencies; data integrity; data security; energy suppliers; infrastructure managers; interaction phenomenon; internet services; power infrastructure outages; stakeholders; telecommunication nodes; utility services providers; water supply network nodes; Analytical models; Batteries; Mathematical model; Measurement; Power supplies; Telecommunications; Unified modeling language; Estimation technique; Fault tolerance; Modelling; Simulation (ID#: 15-6076)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6910703&isnumber=6910682
Shittu, R.; Healing, A.; Ghanea-Hercock, R.; Bloomfield, R.; Muttukrishnan, R., "Outmet: A New Metric for Prioritising Intrusion Alerts Using Correlation and Outlier Analysis," Local Computer Networks (LCN), 2014 IEEE 39th Conference on, vol., no., pp. 322, 330, 8-11 Sept. 2014. doi:10.1109/LCN.2014.6925787
Abstract: In a medium-sized network, an Intrusion Detection System (IDS) can produce thousands of alerts a day, many of which may be false positives. In the vast number of triggered intrusion alerts, identifying those to prioritise is highly challenging. Alert correlation and prioritisation are both viable analytical methods that are commonly used to understand and prioritise alerts. However, to the authors' knowledge, very few dynamic prioritisation metrics exist. In this paper, a new prioritisation metric, OutMet, is proposed, based on measuring the degree to which an alert belongs to anomalous behaviour. OutMet combines alert correlation and prioritisation analysis. We illustrate the effectiveness of OutMet by testing its ability to prioritise alerts generated from a 2012 red-team cyber-range experiment that was carried out as part of the BT Saturn programme. In one of the scenarios, OutMet significantly reduced the false positives by 99.3%.
Keywords: computer network security; correlation methods; graph theory; BT Saturn programme; IDS; OutMet; alert correlation and prioritisation analysis; correlation analysis; dynamic prioritisation metrics; intrusion alerts; intrusion detection system; medium sized network; outlier analysis; red-team cyber-range experiment; Cities and towns; Complexity theory; Context; Correlation; Educational institutions; IP networks; Measurement; Alert Correlation; Attack Scenario; Graph Mining; IDS Logs; Intrusion Alert Analysis; Intrusion Detection; Pattern Detection (ID#: 15-6077)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6925787&isnumber=6925725
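OutMet itself measures how far an alert deviates from normative graph patterns; the minimal sketch below substitutes a plain z-score outlier measure over invented cluster features, only to show how such a score drives triage order. It is an illustration, not the OutMet formula:

```python
# Stand-in for outlier-based alert prioritisation: score each alert
# cluster by how far its feature vector sits from the population
# (max |z-score|), so anomalous clusters rise to the top of the queue.
import numpy as np

def outlier_scores(features):
    mu = features.mean(axis=0)
    sigma = features.std(axis=0) + 1e-9   # avoid division by zero
    return np.abs((features - mu) / sigma).max(axis=1)

# rows: alert clusters; columns: invented features (size, distinct IPs, rate)
clusters = np.array([[5, 2, 0.1], [6, 3, 0.2], [4, 2, 0.1], [120, 40, 9.0]])
priority = outlier_scores(clusters).argsort()[::-1]
print("triage order:", priority)          # the bursty cluster comes first
```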
Cain, A.A.; Schuster, D., "Measurement of Situation Awareness Among Diverse Agents in Cyber Security," Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2014 IEEE International Inter-Disciplinary Conference on, vol., no., pp. 124, 129, 3-6 March 2014. doi:10.1109/CogSIMA.2014.6816551
Abstract: Development of innovative algorithms, metrics, visualizations, and other forms of automation is needed to enable network analysts to build situation awareness (SA) from large amounts of dynamic, distributed, and interacting data in cyber security. Several models of cyber SA can be classified as taking an individual or a distributed approach to modeling SA within a computer network. While these models suggest ways to integrate the SA contributed by multiple actors, implementing more advanced data center automation will require consideration of the differences and similarities between human teaming and human-automation interaction. The purpose of this paper is to offer guidance for quantifying the shared cognition of diverse agents in cyber security. The recommendations presented can inform the development of automated aids to SA as well as illustrate paths for future empirical research.
Keywords: cognition; security of data; SA; cyber security; data center automation; diverse agents; shared cognition; situation awareness measurement; Automation; Autonomous agents; Cognition; Computer security; Data models; Sociotechnical systems; Situation awareness; cognition; cyber security; information security; teamwork (ID#: 15-6078)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816551&isnumber=6816529
Sirsikar, S.; Salunkhe, J., "Analysis of Data Hiding Using Digital Image Signal Processing," Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference on, vol., no., pp. 134, 139, 9-11 Jan. 2014. doi:10.1109/ICESC.2014.28
Abstract: The data hiding process embeds data into digital media for the purpose of security. A digital image is one of the best media in which to store data: it provides a large capacity for hiding secret information, resulting in a stego-image imperceptible to human vision, and supports novel steganographic approaches based on data hiding methods such as pixel-value differencing. This method provides both high embedding capacity and outstanding imperceptibility for the stego-image. In this paper, different image processing techniques related to pixel-value differencing are described for data hiding, and pixel-value-differencing-based techniques are combined to produce a modified data hiding method. Hamming coding is an error-correcting method useful for hiding information, in which lost bits are detected and corrected. OPAP is used to minimize the embedding error, so the quality of the stego-image is improved without disturbing the secret data. The ZigZag method enhances the security and quality of the image. In the modified method, the Hamming, OPAP, and ZigZag methods are combined. In the adaptive method, the image is divided into blocks and then the data is hidden. The objective of the proposed work is to increase the stego-image quality as well as the capacity of the secret data. Result analysis is compared for BMP images only, with calculation of the evaluation metrics MSE, PSNR, and SSIM.
Keywords: image processing; steganography; BMP images; MSE; OPAP; PSNR; SSIM; ZigZag method; data hiding analysis; data hiding method; data hiding process; digital image signal processing; digital media; embedding capacity; error correcting method; human vision; pixel value differencing; pixel-value differencing; secret information hiding; security; steganographic approach; stego-image imperceptible; stego-image quality; Color; Cryptography; Digital images; Image quality; Measurement; PSNR; Data Hiding; Digital image; Pixel Value Differencing; Steganography (ID#: 15-6079)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6745360&isnumber=6745317
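The evaluation metrics named at the end (MSE and PSNR; SSIM needs windowed statistics and is omitted here) are standard and easy to reproduce. A small self-contained check follows, with synthetic images standing in for the paper's BMP test set:

```python
# Standard stego-image quality metrics: MSE and PSNR between a cover
# image and its stego version. Synthetic data stands in for BMP files.
import numpy as np

def mse(cover: np.ndarray, stego: np.ndarray) -> float:
    return float(np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2))

def psnr(cover: np.ndarray, stego: np.ndarray, max_val: float = 255.0) -> float:
    m = mse(cover, stego)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
stego = cover.copy()
stego[::8, ::8] ^= 1                     # flip LSBs of a sparse pixel grid
print(f"MSE={mse(cover, stego):.4f}  PSNR={psnr(cover, stego):.1f} dB")
```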
Younis, A.A.; Malaiya, Y.K., "Using Software Structure to Predict Vulnerability Exploitation Potential," Software Security and Reliability-Companion (SERE-C), 2014 IEEE Eighth International Conference on, vol., no., pp. 13, 18, June 30 2014-July 2 2014. doi:10.1109/SERE-C.2014.17
Abstract: Most of the attacks on computer systems are due to the presence of vulnerabilities in software. Recent trends show that the number of newly discovered vulnerabilities continues to be significant. Studies have also shown that the time gap between a vulnerability's public disclosure and the release of an automated exploit is getting smaller. Therefore, assessing vulnerability exploitability risk is critical, as it aids decision-makers in prioritizing among vulnerabilities, allocating resources, and choosing between alternatives. Several methods have recently been proposed in the literature to deal with this challenge. However, these methods are either subjective, require human involvement in assessing exploitability, or do not scale. In this research, our aim is to first identify the vulnerability exploitation risk problem. Then, we introduce a novel vulnerability exploitability metric based on software structure properties, viz.: attack entry points, vulnerability location, presence of dangerous system calls, and reachability. Based on our preliminary results, reachability and the presence of dangerous system calls appear to be good indicators of exploitability. Next, we propose using the suggested metric as a feature to construct a model, using machine learning techniques, for automatically predicting the risk of vulnerability exploitation. To build a vulnerability exploitation model, we propose using Support Vector Machines (SVMs). Once the predictor is built, given an unseen vulnerable function and its exploitability features, the model can predict whether the given function is exploitable or not.
Keywords: decision making; learning (artificial intelligence); reachability analysis; software metrics; support vector machines; SVM; attack entry points; computer systems; decision makers; machine learning; reachability; software structure; support vector machines; vulnerabilities exploitability risk; vulnerability exploitability metric; vulnerability exploitation model; vulnerability exploitation potential; vulnerability exploitation risk problem; vulnerability location; vulnerability public disclosure; Feature extraction; Predictive models; Security; Software; Software measurement; Support vector machines; Attack Surface; Machine Learning; Measurement; Risk Assessment; Software Security Metrics; Software Vulnerability (ID#: 15-6080)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901635&isnumber=6901618
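The proposed predictor is an SVM over structural features of vulnerable functions. A toy sketch of that pipeline follows; the feature encoding and training labels are invented for illustration and are not the authors' data:

```python
# Toy sketch of the proposed predictor: an SVM over structural features.
# Feature vector: [is_entry_point, reachable_from_entry, dangerous_syscalls]
# The six training rows and labels below are invented for illustration.
from sklearn.svm import SVC

X = [[1, 1, 3], [0, 1, 2], [0, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 0]]
y = [1, 1, 0, 1, 0, 0]          # 1 = exploit released, 0 = none observed

clf = SVC(kernel="rbf").fit(X, y)
unseen = [[0, 1, 4]]            # reachable function making many risky calls
print("predicted exploitable:", bool(clf.predict(unseen)[0]))
```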
Younis, A.A.; Malaiya, Y.K.; Ray, I., "Using Attack Surface Entry Points and Reachability Analysis to Assess the Risk of Software Vulnerability Exploitability," High-Assurance Systems Engineering (HASE), 2014 IEEE 15th International Symposium on, vol., no., pp. 1, 8, 9-11 Jan. 2014. doi:10.1109/HASE.2014.10
Abstract: An unpatched vulnerability can lead to security breaches. When a new vulnerability is discovered, it needs to be assessed so that it can be prioritized. A major challenge in software security is the assessment of the potential risk due to vulnerability exploitability. CVSS metrics have become a de facto standard that is commonly used to assess the severity of a vulnerability. The CVSS Base Score measures severity based on exploitability and impact measures. CVSS exploitability is measured based on three metrics: Access Vector, Authentication, and Access Complexity. However, CVSS exploitability measures assign subjective numbers based on the views of experts. Two of its factors, Access Vector and Authentication, are the same for almost all vulnerabilities. CVSS does not specify how the third factor, Access Complexity, is measured, and hence we do not know if it considers software properties as a factor. In this paper, we propose an approach that assesses the risk of vulnerability exploitability based on two software properties: attack surface entry points and reachability analysis. A vulnerability is reachable if it is located in one of the entry points or in a function that is called, either directly or indirectly, by the entry points. The likelihood of an entry point being used in an attack can be assessed by using the damage potential-effort ratio in the attack surface metric and the presence of system calls deemed dangerous. To illustrate the proposed method, five reported vulnerabilities of Apache HTTP server 1.3.0 have been examined at the source code level. The results show that the proposed approach, which uses more detailed information, can yield a risk assessment that differs from the CVSS Base Score.
Keywords: reachability analysis; risk management; security of data; software metrics; Apache HTTP server 1.3.0; CVSS base score; CVSS exploitability; CVSS metrics; access complexity; access vector; attack surface entry point; attack surface metric; authentication; damage potential-effort ratio; reachability analysis; risk assessment; security breach; severity measurement; software security; software vulnerability exploitability; Authentication; Complexity theory; Measurement; Servers; Software; Vectors; Attack Surface; CVSS Metrics; Measurement; Risk assessment; Software Security Metrics; Software Vulnerability (ID#: 15-6081)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6754581&isnumber=6754569
Akbar, M.; Sukmana, H.T.; Khairani, D., "Models and Software Measurement Using Goal/Question/Metric Method and CMS Matrix Parameter (Case Study Discussion Forum)," Cyber and IT Service Management (CITSM), 2014 International Conference on, vol., no., pp. 34, 38, 3-6 Nov. 2014. doi:10.1109/CITSM.2014.7042171
Abstract: CMSs as tools for making websites have been used extensively by communities. Currently, there are many CMS options available, especially CMS bulletin boards. The number of options is an obstacle for someone choosing a suitable CMS to fulfill their needs. Because of the lack of research comparing CMS bulletin boards, this research tries to compare them and find the best CMS bulletin board. This research uses metrics for modeling and software measurement to identify the characteristics of existing CMS bulletin boards, applying the Goal/Question/Metric (GQM) modelling method with CMS Matrix as the parameters. As CMS bulletin boards, in this study, we choose phpBB, MyBB, and SMF. The results of this study indicate that the SMF bulletin board has the best score compared to the MyBB and phpBB CMS bulletin boards.
Keywords: content management; software development management; software metrics; CMS bulletin board; CMS matrix parameter; GQM; MyBB; PhpBB; SMF; Website; goal-question-metric method; software measurement; Browsers; Databases; Operating systems; Security; Software measurement; CMS; Software Measurement (ID#: 15-6082)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042171&isnumber=7042158
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Security Scalability and Big Data, 2014 |
Scalability is a hard problem in the Science of Security. When applied to Big Data, the problems of scaling security systems are compounded. The work cited here addresses the problem and was presented in 2014.
Eberle, W.; Holder, L., "A Partitioning Approach to Scaling Anomaly Detection in Graph Streams," Big Data (Big Data), 2014 IEEE International Conference on, vol., no., pp. 17, 24, 27-30 Oct. 2014. doi:10.1109/BigData.2014.7004367
Abstract: Due to potentially complex relationships among heterogeneous data sets, recent research efforts have involved the representation of this type of complex data as a graph. For instance, in the case of computer network traffic, a graph representation of the traffic might consist of nodes representing computers and edges representing communications between the corresponding computers. However, computer network traffic is typically voluminous, or acquired in real-time as a stream of information. In previous work on static graphs, we have used a compression-based measure to find normative patterns, and then analyzed the close matches to the normative patterns to indicate potential anomalies. However, while our approach has demonstrated its effectiveness in a variety of domains, the issue of scalability has limited this approach when dealing with domains containing millions of nodes and edges. To address this issue, we propose a novel approach called Pattern Learning and Anomaly Detection on Streams, or PLADS, that is not only scalable to real-world data that is streaming, but also maintains reasonable levels of effectiveness in detecting anomalies. In this paper we present a partitioning and windowing approach that partitions the graph as it streams in over time and maintains a set of normative patterns and anomalies. We then empirically evaluate our approach using publicly available network data as well as a dataset that represents e-commerce traffic.
Keywords: data mining; data structures; graph theory; learning (artificial intelligence); pattern classification; security of data; PLADS approach; anomaly detection scaling; computer network traffic; data representation; e-commerce traffic representation; electronic commerce; graph stream; heterogeneous data set; information stream; normative pattern; partitioning approach; pattern learning and anomaly detection on streams; windowing approach; Big data; Computers; Image edge detection; Internet; Scalability; Telecommunication traffic; Graph-based; anomaly detection; knowledge discovery; streaming data (ID#: 15-5786)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004367&isnumber=7004197
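The partition-and-window mechanics described here can be sketched independently of the pattern-mining step. In the toy code below, a Counter of edges stands in for the paper's compression-based normative patterns; partition size and window length are arbitrary:

```python
# Toy partition-and-window stream handler: fixed-size partitions of the
# edge stream are kept in a sliding window; each full partition is handed
# to a "miner" (an edge-frequency Counter standing in for compression-
# based normative-pattern mining).
from collections import Counter, deque

class StreamPartitioner:
    def __init__(self, partition_size=4, window=3):
        self.partition_size = partition_size
        self.window = deque(maxlen=window)   # oldest partition falls off
        self.current = []

    def add_edge(self, src, dst):
        self.current.append((src, dst))
        if len(self.current) == self.partition_size:
            self.window.append(self.mine(self.current))
            self.current = []

    @staticmethod
    def mine(partition):
        return Counter(partition)            # stand-in for pattern mining

p = StreamPartitioner()
for edge in [("a", "b"), ("b", "c"), ("a", "b"), ("c", "d"),
             ("a", "b"), ("x", "y"), ("a", "b"), ("a", "b")]:
    p.add_edge(*edge)
print(list(p.window))   # the mined partitions currently in the window
```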
Sokolov, V.; Alekseev, I.; Mazilov, D.; Nikitinskiy, M., "A Network Analytics System in the SDN," Science and Technology Conference (Modern Networking Technologies) (MoNeTeC), 2014 First International, vol., no., pp. 1, 3, 28-29 Oct. 2014. doi:10.1109/MoNeTeC.2014.6995603
Abstract: The emergence of virtualization, the security problems of network services, and their lack of scalability and flexibility force network operators to look for "smarter" tools for network design and management. With the continuous growth in the number of subscribers, the volume of traffic, and competition in the telecommunications market, there is steady interest in finding new ways to identify weak points of the existing architecture, preventing the collapse of the network as well as evaluating and predicting the risks of problems in the network. To solve the problems of increasing the fail-safety and the efficiency of the network infrastructure, we offer to use analytical software in the SDN context.
Keywords: computer network management; computer network security; network analysers; software defined networking; virtualisation; SDN context; analytical software; fail-safety; force network operators; network analytics system; network design; network infrastructure; network management; network services; security problems; smarter tools; software-defined network; telecommunication market; virtualization; Bandwidth; Data models; Monitoring; Network topology; Ports (Computers); Protocols; Software systems; analysis; analytics; application programming interface; big data; dynamic network model; fail-safety; flow; flow table; heuristic; load balancing; monitoring; network statistics; network topology; openflow; protocol; sdn; smart tool; software system; software-defined networking; weighted graph (ID#: 15-5787)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6995603&isnumber=6995568
Chenhui Li; Baciu, G., "VALID: A Web Framework for Visual Analytics of Large Streaming Data," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 686, 692, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.89
Abstract: Visual analytics of increasingly large data sets has become a challenge for traditional in-memory and off-line algorithms as well as in the cognitive process of understanding features at various scales of resolution. In this paper, we present a new web-based framework for the dynamic visualization of large data. The framework is based on the idea that no physical device can ever catch up to the analytical demand and the physical requirements of large data. Thus, we adopt a data streaming generator model that serializes the original data into multiple streams of data that can be contained on current hardware. The scalability of the visual analytics of large data is therefore inherent in the streaming architecture supported by our platform. The platform is based on the traditional server-client model. However, the platform is enhanced by effective analytical methods that operate on data streams, such as binned points and bundling lines that reduce and enhance large streams of data for effective interactive visualization. We demonstrate the effectiveness of our framework on different types of large datasets.
Keywords: Big Data; Internet; client-server systems; data analysis; data visualisation; interactive systems; media streaming; Big Data visualization; VALID; Web framework; data streaming generator model; dynamic data visualization; interactive visualization; server-client model; streaming architecture; Conferences; Privacy; Security; big data; dynamic visualization; streaming data; visual analytics (ID#: 15-5788)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011313&isnumber=7011202
Haltas, F.; Uzun, E.; Siseci, N.; Posul, A.; Emre, B., "An Automated Bot Detection System through Honeypots for Large-Scale," Cyber Conflict (CyCon 2014), 2014 6th International Conference on, vol., no., pp. 255, 270, 3-6 June 2014. doi:10.1109/CYCON.2014.6916407
Abstract: One of the purposes of active cyber defense systems is identifying infected machines in enterprise networks that are presumably the root cause and main agent of various cyber-attacks. To achieve this, researchers have suggested many detection systems that rely on host-monitoring techniques and require deep packet inspection, or that are trained on malware samples by applying machine learning and clustering techniques. To our knowledge, most approaches either cannot be deployed easily in real enterprise networks, because their training systems must be trained on malware samples, or depend on host-based or deep-packet-inspection analysis, which requires a large amount of storage capacity for an enterprise. Beside this, honeypot systems are mostly used to collect malware samples for analysis purposes and to identify incoming attacks. Rather than keeping experimental results of bot detection techniques as theory and using honeypots only for analysis purposes, in this paper we present BFH (BotFinder through Honeypots), a novel automated bot-infected machine detection system, based on BotFinder, that identifies infected hosts in a real enterprise network with a learning approach. Our solution relies on NetFlow data and is capable of detecting bots that are infected by the most recent malware, whose samples are caught via 97 different honeypot systems. We train BFH with models created from malware samples provided and updated by the 97 honeypot systems. The BFH system automatically sends caught malware to a classification unit to construct family groups. Later, samples are automatically given to the training unit for modeling, and detection is performed over NetFlow data. Results are double-checked by using a month of full packet capture and through tools that identify rogue domains. Our results show that BFH is able to detect infected hosts with very few false positives and is successful in handling the most recent malware families, since it is fed by 97 honeypots; it also supports large networks with the scalability of a Hadoop infrastructure, as deployed in a large-scale enterprise network in Turkey.
Keywords: invasive software; learning (artificial intelligence); parallel processing; pattern clustering; BFH; Hadoop infrastructure; NetFlow data; active cyber defense systems; automated bot detection system; bot detection techniques; bot-infected machine detection system; botfinder through honeypots; clustering technique; cyber-attacks; deep packet inspection; enterprise networks; honeypot systems; host-monitoring techniques; learning approach; machine learning technique; malware; Data models; Feature extraction; Malware; Monitoring; Scalability; Training; Botnet; NetFlow analysis; honeypots; machine learning (ID#: 15-5789)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916407&isnumber=6916383
Irudayasamy, A.; Lawrence, A., "Enhanced Anonymization Algorithm to Preserve Confidentiality of Data in Public Cloud," Information Society (i-Society), 2014 International Conference on, vol., no., pp. 86, 91, 10-12 Nov. 2014. doi:10.1109/i-Society.2014.7009017
Abstract: Cloud computing offers immense computation power and storage volume, which permit users to deploy applications. Many confidential and sensitive applications, such as health services, are built on the cloud for financial and operational expediency. Generally, information in these applications is masked to safeguard the owner's confidential information, but such information can potentially be compromised when new information is added to it. Preserving confidentiality over distributed data sets is still a big challenge in the cloud environment, because most of this information is huge and spans many storage nodes. Prevailing methods suffer from reduced scalability and inefficiency, since they centralize information and access all data repeatedly whenever updates are made. In this paper, an efficient hash-centered quasi-identifier anonymization method is introduced to ensure the confidentiality of sensitive information and attain high utility over distributed data sets on the cloud. Quasi-identifiers, which signify the groups of anonymized data, are hashed for efficiency, and a procedure is framed to fulfill this methodology. When deployed, the results validate the effectiveness of confidentiality preservation on huge data sets, improving considerably over existing methods.
Keywords: cloud computing; cryptography; computation control; data confidentiality preservation; distributed data sets; enhanced anonymization algorithm; hash centered quasi-identifier anonymization method; prevailing methods; public cloud computing; sensitive information confidentiality; storage nodes; storing volume; Cloud computing; Computer science; Conferences; Distributed databases; Privacy; Societies; Taxonomy; Cloud Computing; Data anonymization; encryption; privacy; security (ID#: 15-5790)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7009017&isnumber=7008990
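The hashing idea is straightforward to illustrate: generalize the quasi-identifier columns, hash them, and count equivalence classes by hash. The sketch below is a simplified reading of that approach (the column names and the k-anonymity check are illustrative, not the paper's procedure):

```python
# Simplified hash-centred quasi-identifier grouping: records are reduced
# to their generalized quasi-identifier (QI) columns, hashed, and the
# resulting equivalence classes are counted for a k-anonymity check.
import hashlib
from collections import Counter

QI = ("age_band", "zip_prefix", "sex")   # illustrative QI columns

def qi_hash(record: dict) -> str:
    key = "|".join(str(record[c]) for c in QI)
    return hashlib.sha256(key.encode()).hexdigest()

def satisfies_k_anonymity(records, k=2):
    counts = Counter(qi_hash(r) for r in records)
    return all(c >= k for c in counts.values())

data = [
    {"age_band": "30-39", "zip_prefix": "462", "sex": "F", "dx": "flu"},
    {"age_band": "30-39", "zip_prefix": "462", "sex": "F", "dx": "asthma"},
    {"age_band": "40-49", "zip_prefix": "391", "sex": "M", "dx": "flu"},
]
print(satisfies_k_anonymity(data, k=2))   # False: the 40-49 class has 1 record
```

Because only hashes need to be exchanged, class sizes can be tallied per storage node and merged, which is where the claimed scalability over distributed data sets would come from.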
Hassan, S.; Abbas Kamboh, A.; Azam, F., "Analysis of Cloud Computing Performance, Scalability, Availability, & Security," Information Science and Applications (ICISA), 2014 International Conference on, vol., no., pp. 1, 5, 6-9 May 2014. doi:10.1109/ICISA.2014.6847363
Abstract: Cloud computing refers to connecting many computers through a communication channel like the internet. Through cloud computing we send, receive, and store data on the internet. Cloud computing also gives us the opportunity for parallel computing by using a large number of virtual machines. Nowadays, performance, scalability, availability, and security represent the big risks in cloud computing. In this paper we highlight the issues of security, availability, and scalability, and we identify how to make cloud-computing-based infrastructure more secure and more available. We also highlight the elastic behavior of cloud computing and discuss some of the characteristics involved in attaining its high performance.
Keywords: cloud computing; parallel processing; security of data; virtual machines; Internet; parallel computing; scalability; security; virtual machine; Availability; Cloud computing; Computer hacking; Scalability (ID#: 15-5791)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847363&isnumber=6847317
Grolinger, K.; Hayes, M.; Higashino, W.A.; L'Heureux, A.; Allison, D.S.; Capretz, M.A.M., "Challenges for MapReduce in Big Data," Services (SERVICES), 2014 IEEE World Congress on, vol., no., pp. 182, 189, June 27 2014-July 2 2014. doi:10.1109/SERVICES.2014.41
Abstract: In the Big Data community, MapReduce has been seen as one of the key enabling approaches for meeting continuously increasing demands on computing resources imposed by massive data sets. The reason for this is the high scalability of the MapReduce paradigm which allows for massively parallel and distributed execution over a large number of computing nodes. This paper identifies MapReduce issues and challenges in handling Big Data with the objective of providing an overview of the field, facilitating better planning and management of Big Data projects, and identifying opportunities for future research in this field. The identified challenges are grouped into four main categories corresponding to Big Data tasks types: data storage (relational databases and NoSQL stores), Big Data analytics (machine learning and interactive analytics), online processing, and security and privacy. Moreover, current efforts aimed at improving and extending MapReduce to address identified challenges are presented. Consequently, by identifying issues and challenges MapReduce faces when handling Big Data, this study encourages future Big Data research.
Keywords: Big Data; SQL; data analysis; data privacy; learning (artificial intelligence); parallel programming; relational databases; security of data; storage management; Big Data analytics; Big Data community; Big Data project management; Big Data project planning; MapReduce paradigm; NoSQL stores; data security; data storage; interactive analytics; machine learning; massive data sets; massively distributed execution; massively parallel execution; online processing; relational databases; Algorithm design and analysis; Big data; Data models; Data visualization; Memory; Scalability; Security; Big Data Analytics; Interactive Analytics; Machine Learning; MapReduce; NoSQL; Online Processing; Privacy; Security (ID#: 15-5792)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903263&isnumber=6903223
Balusamy, M.; Muthusundari, S., "Data Anonymization through Generalization Using Map Reduce on Cloud," Computer Communication and Systems (CCCS), 2014 International Conference on, pp. 039, 042, 20-21 Feb. 2014. doi:10.1109/ICCCS.2014.7068164
Abstract: Nowadays cloud computing provides a lot of computation power and storage capacity with which users can share their private data. Providing security for users' sensitive data is a challenging and difficult problem in a cloud environment. The k-anonymity approach has so far been used for providing privacy for users' sensitive data, but data on the cloud can grow greatly in a big-data manner. Existing work takes a top-down specialization approach to protect the privacy of users' sensitive data. When the scale of user data increases, the top-down specialization technique has difficulty preserving the sensitive data and providing security for user data. Here we propose a specialization approach through generalization to preserve sensitive data and provide security at scale in an efficient way with the help of MapReduce. Our approach finds a better solution than the existing approach, providing security for user data in a scalable and efficient way.
Keywords: cloud computing; data privacy; parallel processing; MapReduce; cloud environment; computation power; data anonymization; generalization; k-anonymity approach; private data sharing; security; storage capacity; top-town specialization approach; user sensitive data privacy; users data scalability; Cloud computing; Computers; Conferences; Data privacy; Privacy; Scalability; Security; Generalization; K-anonymity; Map-Reduce; Specialization; big data (ID#: 15-5793)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7068164&isnumber=7068154
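As a toy view of how generalization fits the map/reduce split, the following sketch (a single-process imitation with invented generalization rules, not the paper's algorithm) maps records to generalized quasi-identifier keys and reduces by counting class sizes:

```python
# Tiny in-process imitation of the map/reduce split for generalization:
# map each record to its generalized quasi-identifier key, reduce by
# counting, and check k-anonymity per key. Column choices are invented.
from collections import defaultdict

def map_phase(record):
    return (record["age"] // 10 * 10, record["zip"][:3]), 1   # generalize

def reduce_phase(pairs):
    counts = defaultdict(int)
    for key, one in pairs:
        counts[key] += one
    return counts

records = [{"age": 34, "zip": "46202"}, {"age": 37, "zip": "46210"},
           {"age": 52, "zip": "30301"}]
counts = reduce_phase(map_phase(r) for r in records)
print({k: (v, v >= 2) for k, v in counts.items()})   # class size, k=2 check
```

In a real MapReduce job the map and reduce phases would run on separate workers over partitions of the data, which is what lets the generalization step scale past a single node.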
Choi, A.J., "Internet of Things: Evolution towards a Hyper-Connected Society," Solid-State Circuits Conference (A-SSCC), 2014 IEEE Asian, vol., no., pp. 5, 8, 10-12 Nov. 2014. doi:10.1109/ASSCC.2014.7008846
Abstract: The Internet of Things is expected to encompass every aspect of our lives and to generate a paradigm shift towards a hyper-connected society. As more things are connected to the Internet, larger amounts of data are generated and processed into useful actions that can make our lives safer and easier. Since the IoT generates heavy traffic, it poses several challenges to next-generation networks. Therefore, IoT infrastructure should be designed for flexibility and scalability. In addition, cloud computing and big data analytics are being integrated; they allow the network to adapt much faster to service requirements with better operational efficiency and intelligence. The IoT should also be vertically optimized from device to application in order to provide ultra-low-power operation, cost-effectiveness, and service reliability while ensuring full security across the entire signal path. In this paper we address IoT challenges and technological requirements from the service provider perspective.
Keywords: Big Data; Internet; Internet of Things; cloud computing; computer network security; data analysis; data integration; next generation networks; reliability; IoT infrastructure; big data analytics; cost-effectiveness; hyper-connected society; next generation network; service reliability; ultra-low power operation; Business; Cloud computing; Intelligent sensors; Long Term Evolution; Security; IoT; flexiblity; scalability; security (ID#: 15-5794)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7008846&isnumber=7008838
Ge Ma; Zhen Chen; Junwei Cao; Zhenhua Guo; Yixin Jiang; Xiaobin Guo, "A Tentative Comparison on CDN and NDN," Systems, Man and Cybernetics (SMC), 2014 IEEE International Conference on, vol., no., pp. 2893, 2898, 5-8 Oct. 2014. doi:10.1109/SMC.2014.6974369
Abstract: With the rapid growth of Internet content, the future Internet is emerging as its main usage shifts from the traditional host-to-host model to a content dissemination model; video, for example, makes up more than half of Internet traffic. ISPs, content providers, and other third parties have widely deployed content delivery networks (CDNs) to support digital content distribution. Though CDN is an ad-hoc solution to the content dissemination problem, big challenges remain, such as a complicated control plane. By contrast, as a newly designed network architecture, named data networking (NDN) incorporates the content delivery function in its network layer; its stateful routing and forwarding plane can effectively detect and adapt to the dynamic, ever-changing Internet. In this paper, we explore the similarities and differences between CDN and NDN by evaluating their distribution efficiency, network security, and protocol overhead. In the implementation phase, we build separate testbeds with the same topology to measure their content delivery performance. Summarizing our main results, we find that: 1) NDN has advantages on many fronts, including security, scalability, and quality of service (QoS); 2) NDN makes full use of surrounding resources and is more adaptive to the dynamic, ever-changing Internet; and 3) though CDN is a commercial and mature architecture, in some scenarios NDN can outperform CDN under the same topology and caching storage. In short, NDN is positioned to play an even greater role in the evolution of the Internet, based on massive distribution and retrieval in the future.
Keywords: Internet; quality of service; routing protocols; telecommunication traffic; CDN; ISP; Internet content; Internet traffic; NDN; QoS; complicated control plane; content delivery function; content delivery network; content dissemination model; content dissemination problem; content provider; digital content distribution; distribution efficiency; future Internet; host-to-host model; named data networking; network architecture; network security; pretty prompt growth; protocol overhead; quality of service; stateful routing and forwarding plane; usage shifting; Conferences; Cybernetics; Architecture; comparison; evaluation; named data networking (ID#: 15-5795)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6974369&isnumber=6973862
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Signature-Based Defenses, 2014 |
Research into the use of malware signatures to inform defensive methods is a standard research exercise for the Science of Security community. The work cited here was published in 2014.
Maria B. Line, Ali Zand, Gianluca Stringhini, Richard Kemmerer. “Targeted Attacks against Industrial Control Systems: Is the Power Industry Prepared?” SEGS '14 Proceedings of the 2nd Workshop on Smart Energy Grid Security, November 2014, Pages 13-22. doi:10.1145/2667190.2667192
Abstract: Targeted cyber attacks are on the rise, and the power industry is an attractive target. Espionage and causing physical damage are likely goals of these targeted attacks. In the case of the power industry, the worst possible consequences are severe: large areas, including critical societal infrastructures, can suffer from power outages. In this paper, we try to measure the preparedness of the power industry against targeted attacks. To this end, we have studied well-known targeted attacks and created a taxonomy for them. Furthermore, we conduct a study, in which we interview six power distribution system operators (DSOs), to assess the level of cyber situation awareness among DSOs and to evaluate the efficiency and effectiveness of their currently deployed systems and practices for detecting and responding to targeted attacks. Our findings indicate that the power industry is very well prepared for traditional threats, such as physical attacks. However, cyber attacks, and especially sophisticated targeted attacks, where social engineering is one of the strategies used, have not been addressed appropriately so far. Finally, by understanding previous attacks and learning from them, we try to provide the industry with guidelines for improving their situation awareness and defense (both detection and response) capabilities.
Keywords: cyber situation awareness, incident management, industrial control systems, information security, interview study, power industry, preparedness, targeted attacks (ID#: 15-5954)
URL: http://doi.acm.org/10.1145/2667190.2667192
Qian Chen, Sherif Abdelwahed. “Towards Realizing Self-Protecting SCADA Systems.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 105-108. doi:10.1145/2602087.2602113
Abstract: SCADA (supervisory control and data acquisition) systems are prime cyber attack targets due to potential impacts on properties, economies, and human lives. Current security solutions, such as firewalls, access controls, and intrusion detection and response systems, can protect SCADA systems from cyber assaults (e.g., denial of service attacks, SQL injection attacks, and spoofing attacks), but they are far from perfect. A new technology is emerging to enable self-protection in SCADA systems. Self-protecting SCADA systems are typically an integration of system behavior monitoring, attack estimation and prevention, known and unknown attack detection, live forensics analysis, and system behavior regulation with appropriate responses. This paper first discusses the key components of a self-protecting SCADA system and then surveys the state-of-the-art research and techniques to the realization of such systems.
Keywords: autonomic computing, cybersecurity, self-protection (ID#: 15-5955)
URL: http://doi.acm.org/10.1145/2602087.2602113
Vijay Anand. “Intrusion Detection: Tools, Techniques and Strategies.” SIGUCCS '14 Proceedings of the 42nd Annual ACM SIGUCCS Conference on User Services, November 2014, Pages 69-73. doi:10.1145/2661172.2661186
Abstract: Intrusion detection is an important aspect of modern cyber-enabled infrastructure for identifying threats to digital assets. It encompasses tools, techniques, and strategies to recognize evolving threats, thereby contributing to a secure and trustworthy computing framework. There are two primary intrusion detection paradigms: signature pattern matching and anomaly detection. Signature pattern matching identifies known threat sequences of causal events and matches them against incoming events. If the pattern of incoming events matches the signature of an attack, there is a positive match, which can be labeled for further processing of countermeasures. Anomaly detection is based on the premise that an attack signature is unknown. Events can deviate from normal digital behavior or can inadvertently give out information in normal event processing. These stochastic events have to be evaluated by a variety of techniques, such as artificial intelligence and prediction models, before identifying potential threats to the digital assets in a cyber-enabled system. Once a pattern is identified in the evaluation process, after excluding false positives and negatives, it can be classified as a signature pattern. This paper highlights a setup in an educational environment for effectively flagging threats to the digital assets in the system using an intrusion detection framework. Intrusion detection frameworks come in two primary formats: network intrusion detection systems and host intrusion detection systems. In this paper we identify different publicly available intrusion detection tools and assess their effectiveness in a test environment. The paper also looks at the mix of tools that can be deployed to effectively flag threats as they evolve, and studies the effect of encryption on such a setup and on threat identification.
Keywords: anomaly, attacks, honeynet, honeypot, intrusion, pattern, sanitization, virtualized (ID#: 15-5956)
URL: http://doi.acm.org/10.1145/2661172.2661186
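The contrast between the two paradigms is easy to see in miniature. The sketch below is purely illustrative: the signatures, event names, and baseline frequencies are invented, and real systems such as Snort or Suricata use far richer rule languages and protocol awareness. It shows signature matching flagging a known causal sequence while a crude anomaly score rates how unusual each event is relative to a baseline.

```python
# Toy illustration of signature pattern matching vs. anomaly detection.
# All signatures and events are invented for demonstration.
SIGNATURES = {
    "brute_force": ["login_fail", "login_fail", "login_fail", "login_fail"],
    "recon_then_exploit": ["port_scan", "banner_grab", "exploit_attempt"],
}

def match_signatures(events):
    """Flag any known causal sequence appearing contiguously in the stream."""
    alerts = []
    for name, pattern in SIGNATURES.items():
        n = len(pattern)
        for i in range(len(events) - n + 1):
            if events[i:i + n] == pattern:
                alerts.append((name, i))
    return alerts

def anomaly_score(events, baseline_freq):
    """Crude anomaly check: rate how unusual each event is vs. a baseline."""
    return [(e, 1.0 - baseline_freq.get(e, 0.0)) for e in events]

stream = ["login_ok", "port_scan", "banner_grab", "exploit_attempt", "login_ok"]
print(match_signatures(stream))   # -> [('recon_then_exploit', 1)]
baseline = {"login_ok": 0.9, "login_fail": 0.05}
print(anomaly_score(stream, baseline))  # unseen events score near 1.0
```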
Vasilis G. Tasiopoulos, Sokratis K. Katsikas. “Bypassing Antivirus Detection with Encryption.” PCI '14 Proceedings of the 18th Panhellenic Conference on Informatics, October 2014, Pages 1-2. doi:10.1145/2645791.2645857
Abstract: Bypassing an antivirus is a common issue among ethical hackers and penetration testers. Several techniques have been—and are being—used to bypass antivirus software; an effective and efficient one is to encrypt the malware using special-purpose tools called crypters. In this paper, a novel crypter, based on the latest techniques and capable of bypassing antivirus software, is described. The crypter is built on a new architecture that enables it to produce a unique output every time it is used. Testing results indicate that the proposed crypter evades detection by all antivirus products in all runs.
Keywords: Antivirus, Crypter, Encryption, Malware (ID#: 15-5957)
URL: http://doi.acm.org/10.1145/2645791.2645857
Joshua Cazalas, J. Todd McDonald, Todd R. Andel, Natalia Stakhanova. “Probing the Limits of Virtualized Software Protection.” PPREW-4 Proceedings of the 4th Program Protection and Reverse Engineering Workshop, December 2014, Article No. 5. doi:10.1145/2689702.2689707
Abstract: Virtualization is becoming a prominent field of research not only in distributed systems, but also in software protection and obfuscation. Software virtualization has given rise to advanced techniques that may provide intellectual property protection and anti-cloning resilience. We present results of an empirical study that answers whether integrity of execution can be preserved for process-level virtualization protection schemes in the face of adversarial analysis. Our particular approach considers exploits that target the virtual execution environment itself and how it interacts with the underlying host operating system and hardware. We give initial results that indicate such protection mechanisms may be vulnerable at the level where the virtualized code interacts with the underlying operating system. The resolution of whether such attacks can undermine security will help create better detection and analysis methods for malware that also employ software virtualization. Our findings help frame research for additional mitigation techniques using hardware-based integration or hybrid virtualization techniques that can better defend legitimate uses of virtualized software protection.
Keywords: Software protection, obfuscation, process-level virtualization, tamper resistance, virtualized code (ID#: 15-5958)
URL: http://doi.acm.org/10.1145/2689702.2689707
Tsung-Hsuan Ho, Daniel Dean, Xiaohui Gu, William Enck. “PREC: Practical Root Exploit Containment for Android Devices.” CODASPY '14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 187-198. doi:10.1145/2557547.2557563
Abstract: Application markets such as the Google Play Store and the Apple App Store have become the de facto method of distributing software to mobile devices. While official markets dedicate significant resources to detecting malware, state-of-the-art malware detection can be easily circumvented using logic bombs or checks for an emulated environment. We present a Practical Root Exploit Containment (PREC) framework that protects users from such conditional malicious behavior. PREC can dynamically identify system calls from high-risk components (e.g., third-party native libraries) and execute those system calls within isolated threads. Hence, PREC can detect and stop root exploits with high accuracy while imposing low interference on benign applications. We have implemented PREC and evaluated our methodology on the 140 most popular benign applications and 10 root exploit malicious applications. Our results show that PREC can successfully detect and stop all the tested malware while reducing the false alarm rate by more than one order of magnitude over traditional malware detection algorithms. PREC is lightweight, which makes it practical for runtime on-device root exploit detection and containment.
Keywords: android, dynamic analysis, host intrusion detection, malware, root exploits (ID#: 15-5959)
URL: http://doi.acm.org/10.1145/2557547.2557563
Tobias Wüchner, Martín Ochoa, Alexander Pretschner. “Malware Detection with Quantitative Data Flow Graphs.” ASIA CCS '14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 271-282. doi:10.1145/2590296.2590319
Abstract: We propose a novel behavioral malware detection approach based on a generic system-wide quantitative data flow model. We base our data flow analysis on the incremental construction of aggregated quantitative data flow graphs. These graphs represent communication between different system entities such as processes, sockets, files or system registries. We demonstrate the feasibility of our approach through a prototypical instantiation and implementation for the Windows operating system. Our experiments yield encouraging results: in our data set of samples from common malware families and popular non-malicious applications, our approach has a detection rate of 96% and a false positive rate of less than 1.6%. In comparison with closely related data flow based approaches, we achieve similar detection effectiveness with considerably better performance: an average full system analysis takes less than one second.
Keywords: behavioral malware analysis, data flow tracking, intrusion detection, malware detection, quantitative data flows (ID#: 15-5960)
URL: http://doi.acm.org/10.1145/2590296.2590319
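A toy version of the aggregated quantitative data flow graph helps fix the idea. Everything below, including the entity names, byte counts, and flagging heuristic, is invented for illustration and is far simpler than the paper's model.

```python
# Minimal sketch of an aggregated quantitative data flow graph: nodes are
# system entities, edge weights accumulate observed bytes. Invented data;
# NOT the paper's feature set or detection logic.
from collections import defaultdict

class QDFG:
    def __init__(self):
        # (source_entity, dest_entity) -> total bytes observed
        self.edges = defaultdict(int)

    def observe(self, src, dst, nbytes):
        self.edges[(src, dst)] += nbytes   # incremental aggregation

    def fan_out(self, node):
        return sum(1 for (s, _d) in self.edges if s == node)

g = QDFG()
events = [
    ("process:evil.exe", "file:a.doc", 4096),
    ("process:evil.exe", "file:b.doc", 4096),
    ("process:evil.exe", "socket:10.0.0.1:443", 65536),
]
for src, dst, n in events:
    g.observe(src, dst, n)

# Toy heuristic: a process that touches many entities and pushes a large
# byte volume to a socket gets flagged for closer behavioral analysis.
suspect = (g.fan_out("process:evil.exe") >= 3
           and g.edges[("process:evil.exe", "socket:10.0.0.1:443")] > 32768)
print("flag for analysis:", suspect)
```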
Mikhail Kazdagli, Ling Huang, Vijay Reddi, Mohit Tiwari. “Morpheus: Benchmarking Computational Diversity in Mobile Malware.” HASP '14 Proceedings of the Third Workshop on Hardware and Architectural Support for Security and Privacy, June 2014, Article No. 3. doi:10.1145/2611765.2611767
Abstract: Computational characteristics of a program can potentially be used to distinguish malicious programs from benign ones. However, systematically evaluating malware detection techniques, especially when malware samples are hard to run correctly and can adapt their computational characteristics, is a hard problem. We introduce Morpheus, a benchmarking tool that includes both real mobile malware and a synthetic malware generator that can be configured to generate a computationally diverse malware sample set, to evaluate computational-signature-based malware detection. Morpheus also includes a set of computationally diverse benign applications that malware can be repackaged into, along with a recorded trace of over an hour of realistic human usage for each app that can be used to replay both benign and malicious executions. The current Morpheus prototype targets Android applications and malware samples. Using Morpheus, we quantify the computational diversity in malware behavior and expose opportunities for dynamic analyses that can detect mobile malware. Specifically, the use of obfuscation and encryption to thwart static analyses causes the malicious execution to be more distinctive—a potential opportunity for detection. We also present potential challenges, specifically, minimizing false positives that can arise due to the diversity of benign executions.
Keywords: mobile malware, performance counters, security (ID#: 15-5961)
URL: http://doi.acm.org/10.1145/2611765.2611767
Mingshen Sun, Min Zheng, John C. S. Lui, Xuxian Jiang. “Design and Implementation of an Android Host-Based Intrusion Prevention System.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 226-235. doi:10.1145/2664243.2664245
Abstract: Android has a dominating share in the mobile market, and there is a significant rise of mobile malware targeting Android devices. Android malware accounted for 97% of all mobile threats in 2013 [26]. To protect smartphones and prevent privacy leakage, companies have implemented various host-based intrusion prevention systems (HIPS) on their Android devices. In this paper, we first analyze the implementations, strengths, and weaknesses of three popular HIPS architectures. We demonstrate a severe loophole and weakness in an existing popular HIPS product that hackers can readily exploit. Then we present the design and implementation of a secure and extensible HIPS platform—"Patronus." Patronus not only provides intrusion prevention without the need to modify the Android system, it can also dynamically detect existing malware based on runtime information. We propose a two-phase dynamic detection algorithm for detecting running malware. Our experiments show that Patronus can prevent intrusive behaviors efficiently and detect malware accurately with very low performance overhead and power consumption.
Keywords: (not provided) (ID#: 15-5962)
URL: http://doi.acm.org/10.1145/2664243.2664245
Sean Whalen, Nathaniel Boggs, Salvatore J. Stolfo. “Model Aggregation for Distributed Content Anomaly Detection.” AISec '14 Proceedings of the 2014 ACM Workshop on Artificial Intelligence and Security, November 2014, Pages 61-71. doi:10.1145/2666652.2666660
Abstract: Cloud computing offers a scalable, low-cost, and resilient platform for critical applications. Securing these applications against attacks targeting unknown vulnerabilities is an unsolved challenge. Network anomaly detection addresses such zero-day attacks by modeling attributes of attack-free application traffic and raising alerts when new traffic deviates from this model. Content anomaly detection (CAD) is a variant of this approach that models the payloads of such traffic instead of higher level attributes. Zero-day attacks then appear as outliers to properly trained CAD sensors. In the past, CAD was unsuited to cloud environments due to the relative overhead of content inspection and the dynamic routing of content paths to geographically diverse sites. We challenge this notion and introduce new methods for efficiently aggregating content models to enable scalable CAD in dynamically-pathed environments such as the cloud. These methods eliminate the need to exchange raw content, drastically reduce network and CPU overhead, and offer varying levels of content privacy. We perform a comparative analysis of our methods using Random Forest, Logistic Regression, and Bloom Filter-based classifiers for operation in the cloud or other distributed settings such as wireless sensor networks. We find that content model aggregation offers statistically significant improvements over non-aggregate models with minimal overhead, and that distributed and non-distributed CAD have statistically indistinguishable performance. Thus, these methods enable the practical deployment of accurate CAD sensors in a distributed attack detection infrastructure.
Keywords: anomaly detection, machine learning, model aggregation (ID#: 15-5963)
URL: http://doi.acm.org/10.1145/2666652.2666660
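One reason Bloom-filter-based content models aggregate so cheaply is that two filters built with identical parameters merge with a single bitwise OR, so sites never exchange raw content. The sketch below illustrates this with invented parameters; it is not the paper's implementation, and real deployments would tune filter size and hash count to the expected content volume.

```python
# Sketch of Bloom-filter model aggregation: two filters with the same size
# and hash functions merge via bitwise OR. Parameters are illustrative.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= (1 << p)

    def contains(self, item):
        return all(self.bits & (1 << p) for p in self._positions(item))

    def merge(self, other):
        # Model aggregation: union of set-membership bits, no raw content.
        assert (self.m, self.k) == (other.m, other.k)
        merged = BloomFilter(self.m, self.k)
        merged.bits = self.bits | other.bits
        return merged

site_a, site_b = BloomFilter(), BloomFilter()
site_a.add("GET /index.html")          # content observed at site A
site_b.add("POST /api/login")          # content observed at site B
combined = site_a.merge(site_b)
print(combined.contains("GET /index.html"),
      combined.contains("POST /api/login"))   # True True
```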
Tamas K. Lengyel, Steve Maresca, Bryan D. Payne, George D. Webster, Sebastian Vogl, Aggelos Kiayias. “Scalability, Fidelity and Stealth in the DRAKVUF Dynamic Malware Analysis System.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 386-395. doi:10.1145/2664243.2664252
Abstract: Malware is one of the biggest security threats on the Internet today and deploying effective defensive solutions requires the rapid analysis of a continuously increasing number of malware samples. With the proliferation of metamorphic malware the analysis is further complicated as the efficacy of signature-based static analysis systems is greatly reduced. While dynamic malware analysis is an effective alternative, the approach faces significant challenges as the ever increasing number of samples requiring analysis places a burden on hardware resources. At the same time modern malware can both detect the monitoring environment and hide in unmonitored corners of the system. In this paper we present DRAKVUF, a novel dynamic malware analysis system designed to address these challenges by building on the latest hardware virtualization extensions and the Xen hypervisor. We present a technique for improving stealth by initiating the execution of malware samples without leaving any trace in the analysis machine. We also present novel techniques to eliminate blind-spots created by kernel-mode rootkits by extending the scope of monitoring to include kernel internal functions, and to monitor file-system accesses through the kernel's heap allocations. With extensive tests performed on recent malware samples we show that DRAKVUF achieves significant improvements in conserving hardware resources while providing a stealthy, in-depth view into the behavior of modern malware.
Keywords: dynamic malware analysis, virtual machine introspection (ID#: 15-5964)
URL: http://doi.acm.org/10.1145/2664243.2664252
David Barrera, Daniel McCarney, Jeremy Clark, Paul C. van Oorschot. “Baton: Certificate Agility for Android's Decentralized Signing Infrastructure.” WiSec '14 Proceedings of the 2014 ACM Conference on Security and Privacy in Wireless & Mobile Networks, July 2014, Pages 1-12. doi:10.1145/2627393.2627397
Abstract: Android's trust-on-first-use application signing model associates developers with a fixed code signing certificate, but lacks a mechanism to enable transparent key updates or certificate renewals. The model allows application updates to be recognized as authorized by a party with access to the original signing key. However, changing keys or certificates requires that end users manually uninstall/reinstall apps, losing all non-backed up user data. In this paper, we show that with appropriate OS support, developers can securely and without user intervention transfer signing authority to a new signing key. Our proposal, Baton, modifies Android's app installation framework enabling key agility while preserving backwards compatibility with current apps and current Android releases. Baton is designed to work consistently with current UID sharing and signature permission requirements. We discuss technical details of the Android-specific implementation, as well as the applicability of the Baton protocol to other decentralized environments.
Keywords: android, application signing, mobile operating systems (ID#: 15-5965)
URL: http://doi.acm.org/10.1145/2627393.2627397
Todd R. Andel, Lindsey N. Whitehurst, Jeffrey T. McDonald. “Software Security and Randomization through Program Partitioning and Circuit Variation.” MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 79-86. doi:10.1145/2663474.2663484
Abstract: The commodity status of Field Programmable Gate Arrays (FPGAs) has allowed computationally intensive algorithms, such as cryptographic protocols, to take advantage of faster hardware speed while simultaneously leveraging the reconfigurability and lower cost of software. Numerous security applications have been transitioned into FPGA implementations, allowing them to operate at real-time speeds, such as firewall and packet scanning on high-speed networks. However, the utilization of FPGAs to directly secure software vulnerabilities is seemingly non-existent. Protecting program integrity and confidentiality is crucial as malicious attacks through injected code are becoming increasingly prevalent. This paper lays the foundation for continuing research in how to protect software by partitioning critical sections using reconfigurable hardware. This approach is similar to a traditional coprocessor approach of scheduling opcodes for execution on specialized hardware as opposed to running on the native processor. However, the partitioned program model gives the programmer the ability to split portions of an application onto reconfigurable hardware at compile time. The fundamental underlying hypothesis is that synthesizing portions of programs onto hardware can mitigate potential software vulnerabilities. Further, this approach provides an avenue for randomization or diversity in software layout and circuit variation.
Keywords: circuit variation, program protection, reconfigurable hardware, secure software, software partitioning (ID#: 15-5966)
URL: http://doi.acm.org/10.1145/2663474.2663484
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Virtualization Privacy Auditing |
With the growth of cloud applications, the problems of security and privacy are growing. Determining whether security is working and privacy is being protected requires the ability to audit successfully. Such audits not only help gauge the protection, but also provide data to inform the development of metrics. The research presented here is current as of July 21, 2015.
Denzil Ferreira, Vassilis Kostakos, Alastair R. Beresford, Janne Lindqvist, Anind K. Dey. “Securacy: An Empirical Investigation of Android Applications' Network Usage, Privacy and Security.” WiSec '15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 11. doi:10.1145/2766498.2766506
Abstract: Smartphone users do not fully know what their apps do. For example, an application's network usage and underlying security configuration are invisible to users. In this paper we introduce Securacy, a mobile app that explores users' privacy and security concerns with Android apps. Securacy takes a reactive, personalized approach, highlighting app permission settings that the user has previously stated are concerning, and provides feedback on the use of secure and insecure network communication for each app. We began our design of Securacy by conducting a literature review and in-depth interviews with 30 participants to understand their concerns. We used this knowledge to build Securacy and evaluated its use by another set of 218 anonymous participants who installed the application from the Google Play store. Our results show that access to address book information is by far the biggest privacy concern. Over half (56.4%) of the connections made by apps are insecure, and the destination of the majority of network traffic is North America, regardless of the location of the user. Our app provides unprecedented insight into Android applications' communications behavior globally, indicating that the majority of apps currently use insecure network connections.
Keywords: applications, context, experience sampling, network, privacy (ID#: 15-5942)
URL: http://doi.acm.org/10.1145/2766498.2766506
Syed Rizvi, Jungwoo Ryoo, John Kissell, Bill Aiken. “A Stakeholder-Oriented Assessment Index for Cloud Security Auditing.” IMCOM '15 Proceedings of the 9th International Conference on Ubiquitous Information Management and Communication, January 2015, Article No. 55. doi:10.1145/2701126.2701226
Abstract: Cloud computing is an emerging computing model that provides numerous advantages to organizations (both service providers and customers) in terms of massive scalability, lower cost, and flexibility, to name a few. Despite these technical and economical advantages of cloud computing, many potential cloud consumers are still hesitant to adopt cloud computing due to security and privacy concerns. This paper describes some of the unique cloud computing security factors and subfactors that play a critical role in addressing cloud security and privacy concerns. To mitigate these concerns, we develop a security metric tool to provide information to cloud users about the security status of a given cloud vendor. The primary objective of the proposed metric is to produce a security index that describes the security level accomplished by an evaluated cloud computing vendor. The resultant security index will give confidence to different cloud stakeholders and is likely to help them in decision making, increase the predictability of the quality of service, and allow appropriate proactive planning if needed before migrating to the cloud. To show the practicality of the proposed metric, we provide two case studies based on the available security information about two well-known cloud service providers (CSP). The results of these case studies demonstrated the effectiveness of the security index in determining the overall security level of a CSP with respect to the security preferences of cloud users.
Keywords: cloud auditing, cloud security, data privacy, security metrics (ID#: 15-5943)
URL: http://doi.acm.org/10.1145/2701126.2701226
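The core arithmetic of such a security index is simple; the subtlety lies in choosing the factors and weights. A minimal sketch follows, with entirely invented factor names, weights, and scores (the paper defines its own factors, subfactors, and evaluation procedure).

```python
# Hedged sketch of a weighted security index. Factor names, weights, and
# scores are invented for illustration, not taken from the paper.
def security_index(scores, weights):
    """Weighted average of per-factor scores (0-10); weights sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[f] * w for f, w in weights.items())

weights = {"data_privacy": 0.3, "access_control": 0.3,
           "availability": 0.2, "compliance": 0.2}
csp_scores = {"data_privacy": 7.5, "access_control": 8.0,
              "availability": 9.0, "compliance": 6.0}
print(f"security index: {security_index(csp_scores, weights):.2f}")  # 7.65
```

A cloud customer could compute such an index for each candidate provider using the same weights, making the comparison reflect that customer's own security preferences.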
V. Padmapriya, J. Amudhavel, M. Thamizhselvi, K. Bakkiya, B. Sujitha, K. Prem Kumar. “A Scalable Service Oriented Consistency Model for Cloud Environment (SSOCM).” ICARCSET '15 Proceedings of the 2015 International Conference on Advanced Research in Computer Science Engineering & Technology (ICARCSET 2015), March 2015, Article No. 24. doi:10.1145/2743065.2743089
Abstract: The cloud computing paradigm spans the world and is used not only to gather users' information but also to allow users to share information among themselves. Existing systems have discussed trace-based verification and auditing consistency models on a worldwide scale, where strong consistency is very expensive to achieve; most consistency is achieved during security operations in the cloud domain, with violations. Consistency is easy to integrate with multiple servers and even to maintain under replication. In our proposed system, users can easily assess the quality of a cloud service and choose a precise consistency service provider (CSP) among various applicants. A theoretical study of the consistency model in cloud computing is conducted thoroughly. Finally, we devise a Heuristic Auditing Strategy (HAS) and draw on the Consistency, Availability, and Partition tolerance (CAP) theorem, so that users can assess the quality of a cloud service and choose the right consistency service provider (CSP) among various candidates.
Keywords: Consistency, auditing consistency, consistency service provider (CSP), Heuristic Auditing Strategy (HAS), Consistency Availability and Partition tolerance (CAP) (ID#: 15-5944)
URL: http://doi.acm.org/10.1145/2743065.2743089
Shanhe Yi, Cheng Li, Qun Li. “A Survey of Fog Computing: Concepts, Applications and Issues.” Mobidata '15 Proceedings of the 2015 Workshop on Mobile Big Data, June 2015, Pages 37-42. doi:10.1145/2757384.2757397
Abstract: Despite the increasing usage of cloud computing, there are still unsolved issues due to inherent problems of cloud computing, such as unreliable latency, lack of mobility support, and lack of location-awareness. Fog computing can address those problems by providing elastic resources and services to end users at the edge of the network, while cloud computing is more about providing resources distributed in the core network. This survey discusses the definition of fog computing and similar concepts, introduces representative application scenarios, and identifies various issues that arise when designing and implementing fog computing systems. It also highlights some opportunities and challenges, as directions for potential future work, in related techniques that need to be considered in the context of fog computing.
Keywords: cloud computing, edge computing, fog computing, mobile cloud computing, mobile edge computing, review (ID#: 15-5945)
URL: http://doi.acm.org/10.1145/2757384.2757397
Tianwei Zhang, Ruby B. Lee. “CloudMonatt: An Architecture for Security Health Monitoring and Attestation of Virtual Machines in Cloud Computing.” ISCA '15 Proceedings of the 42nd Annual International Symposium on Computer Architecture, June 2015, Pages 362-374. doi:10.1145/2749469.2750422
Abstract: Cloud customers need guarantees regarding the security of their virtual machines (VMs) operating within an Infrastructure as a Service (IaaS) cloud system. This is complicated by the customer not knowing where his VM is executing, and by the semantic gap between what the customer wants to know and what can be measured in the cloud. We present an architecture for monitoring a VM's security health, with the ability to attest this to the customer in an unforgeable manner. We show a concrete implementation of property-based attestation and a full prototype based on the OpenStack open source cloud software.
Keywords: (not provided) (ID#: 15-5946)
URL: http://doi.acm.org/10.1145/2749469.2750422
Yubin Xia, Yutao Liu, Cheng Tan, Mingyang Ma, Haibing Guan, Binyu Zang, Haibo Chen. “TinMan: Eliminating Confidential Mobile Data Exposure with Security Oriented Offloading.” EuroSys '15 Proceedings of the Tenth European Conference on Computer Systems, April 2015, Article No. 27. doi:10.1145/2741948.2741977
Abstract: The wide adoption of smart devices has stimulated a fast shift of security-critical data from desktop to mobile devices. However, recurrent device theft and loss expose mobile devices to various security threats and even physical attacks. This paper presents TinMan, a system that protects confidential data such as web site passwords and credit card numbers (we use the term cor, short for Confidential Record, to represent these data) from being leaked or abused even under device theft. TinMan separates accesses of cor from the rest of an app's functionality by introducing a trusted node to store cor and offloading any code from a mobile device to the trusted node to access cor. This completely eliminates the exposure of cor on the mobile device. The key challenges for TinMan are deciding when and how to offload execution efficiently and transparently; TinMan addresses them with security-oriented offloading: a low-overhead tainting scheme called asymmetric tainting tracks accesses to cor to trigger offloading, while transparent SSL session injection and TCP payload replacement offload the accesses to cor. We have implemented a prototype of TinMan based on Android and demonstrated how TinMan protects the information of a user's bank account and credit card number without modifying the apps. Evaluation results also show that TinMan incurs only a small amount of performance and power overhead.
Keywords: (not provided) (ID#: 15-5947)
URL: http://doi.acm.org/10.1145/2741948.2741977
Marshini Chetty, Hyojoon Kim, Srikanth Sundaresan, Sam Burnett, Nick Feamster, W. Keith Edwards. “uCap: An Internet Data Management Tool for the Home.” CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 3093-3102. doi:10.1145/2702123.2702218
Abstract: Internet Service Providers (ISPs) have introduced "data caps", or quotas on the amount of data that a customer can download during a billing cycle. Under this model, Internet users who reach a data cap can be subject to degraded performance, extra fees, or even temporary interruption of Internet service. For this reason, users need better visibility into and control over their Internet usage to help them understand what uses up data and control how these quotas are reached. In this paper, we present the design and implementation of a tool, called uCap, to help home users manage Internet data. We conducted a field trial of uCap in 21 home networks in three countries and performed an in-depth qualitative study of ten of these homes. We present the results of the evaluation and implications for the design of future Internet data management tools.
Keywords: bandwidth caps, data caps, home networking tools (ID#: 15-5948)
URL: http://doi.acm.org/10.1145/2702123.2702218
Robert Cowles, Craig Jackson, Von Welch. “Facilitating Scientific Collaborations by Delegating Identity Management: Reducing Barriers & Roadmap for Incremental Implementation.” CLHS '15 Proceedings of the 2015 Workshop on Changing Landscapes in HPC Security, April 2015, Pages 15-19. doi:10.1145/2752499.2752501
Abstract: DOE Labs are often presented with conflicting requirements for providing services to scientific collaboratories. An identity management model involving transitive trust is increasingly common. We show how existing policies allow for increased delegation of identity management within an acceptable risk management framework. Specific topics addressed include deemed exports, DOE orders, Inertia and Risk, Traceability, and Technology Limitations. Real life examples of an incremental approach to implementing transitive trust are presented.
Keywords: access control, cyber security, delegation, identity, identity management, risk management, transitive trust (ID#: 15-5949)
URL: http://doi.acm.org/10.1145/2752499.2752501
Qiang Liu, Edith C.-H. Ngai, Xiping Hu, Zhengguo Sheng, Victor C.M. Leung, Jianping Yin. “SH-CRAN: Hierarchical Framework to Support Mobile Big Data Computing in a Secure Manner.” Mobidata '15 Proceedings of the 2015 Workshop on Mobile Big Data, June 2015, Pages 19-24. doi:10.1145/2757384.2757388
Abstract: The heterogeneous cloud radio access network (H-CRAN) has been emerging as a cost-effective solution supporting huge volumes of mobile traffic in the big data era. This paper investigates potential security challenges to H-CRAN and analyzes their likelihoods and difficulty levels. Typically, the security threats in H-CRAN can be categorized into three groups: security threats towards remote radio heads (RRHs), those towards the radio cloud infrastructure, and those towards backhaul networks. To overcome the challenges posed by these threats, we propose a hierarchical security framework called Secure H-CRAN (SH-CRAN) to protect the H-CRAN system against them. Specifically, the architecture of SH-CRAN contains three logically independent secure domains (SDs): the SDs of the radio cloud infrastructure, the RRHs, and the backhauls. The notable merits of SH-CRAN are twofold: (i) the proposed framework is able to provide security assurance for the evolving H-CRAN system, and (ii) the impact of any failure is limited to one specific component of H-CRAN. The proposed SH-CRAN can be regarded as the basis of future security mechanisms for mobile big data computing.
Keywords: heterogeneous cloud radio access network, hierarchical security framework, mobile big data computing (ID#: 15-5950)
URL: http://doi.acm.org/10.1145/2757384.2757388
Jun Wang, Zhiyun Qian, Zhichun Li, Zhenyu Wu, Junghwan Rhee, Xia Ning, Peng Liu, Guofei Jiang. “Discover and Tame Long-running Idling Processes in Enterprise Systems.” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 543-554. doi:10.1145/2714576.2714613
Abstract: Reducing attack surface is an effective preventive measure to strengthen security in large systems. However, it is challenging to apply this idea in an enterprise environment where systems are complex and evolve over time. In this paper, we empirically analyze and measure a real enterprise to identify unused services that expose attack surface. Interestingly, such unused services are known to exist and are summarized by security best practices, yet such solutions require significant manual effort. We propose an automated approach to accurately detect the idling (most likely unused) services that are in either blocked or bookkeeping states. The idea is to identify repeating events with perfect time alignment, which is the indication of being idle. We implement this idea by developing a novel statistical algorithm based on autocorrelation with time information incorporated. From our measurement results, we find that 88.5% of the detected idling services can be constrained with a simple syscall-based policy, which confines the process behaviors within their bookkeeping states. In addition, working with two IT departments (one serving as cross-validation), we received positive feedback showing that about 30.6% of such services can be safely disabled or uninstalled directly. In the future, the IT department plans to incorporate the results to build a "smaller" OS installation image. Finally, we believe our measurement results raise awareness of the potential security risks of idling services.
Keywords: attack surface reduction, autocorrelation, enterprise systems, idling service detection (ID#: 15-5951)
URL: http://doi.acm.org/10.1145/2714576.2714613
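The detection signal described, repeating events with perfect time alignment, shows up directly in the autocorrelation of an event time series. The sketch below uses invented event slots and an arbitrary threshold; it is not the paper's statistical algorithm, only an illustration of why a periodic bookkeeping pattern produces a sharp autocorrelation peak.

```python
# Illustrative autocorrelation check for idling services. Thresholds and
# event series are invented; NOT the paper's algorithm.
import random

def autocorr(series, lag):
    """Sample autocorrelation of a numeric series at a given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) or 1e-12
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

# 1 = the service emitted its bookkeeping event in that time slot.
idling = [1, 0, 0, 0] * 16                        # periodic, period 4
random.seed(0)
busy = [random.randint(0, 1) for _ in range(64)]  # irregular activity

for name, series in (("idling", idling), ("busy", busy)):
    peak = max(autocorr(series, lag) for lag in range(1, 9))
    verdict = "likely idling" if peak > 0.8 else "active"
    print(f"{name}: peak autocorrelation {peak:.2f} -> {verdict}")
```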
Patrick Colp, Jiawen Zhang, James Gleeson, Sahil Suneja, Eyal de Lara, Himanshu Raj, Stefan Saroiu, Alec Wolman. “Protecting Data on Smartphones and Tablets from Memory Attacks.” ASPLOS '15 Proceedings of the Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems, March 2015, Pages 177-189. doi:10.1145/2694344.2694380
Abstract: Smartphones and tablets are easily lost or stolen. This makes them susceptible to an inexpensive class of memory attacks, such as cold-boot attacks, using a bus monitor to observe the memory bus, and DMA attacks. This paper describes Sentry, a system that allows applications and OS components to store their code and data on the System-on-Chip (SoC) rather than in DRAM. We use ARM-specific mechanisms originally designed for embedded systems, but still present in today's mobile devices, to protect applications and OS subsystems from memory attacks.
Keywords: AES, DMA attack, android, arm, bus monitoring, cache, cold boot, encrypted RAM, encrypted memory, iRAM, nexus, tegra (ID#: 15-5952)
URL: http://doi.acm.org/10.1145/2694344.2694380
Anjo Vahldiek-Oberwagner, Eslam Elnikety, Aastha Mehta, Deepak Garg, Peter Druschel, Rodrigo Rodrigues, Johannes Gehrke, Ansley Post. “Guardat: Enforcing Data Policies at the Storage Layer.” EuroSys '15 Proceedings of the Tenth European Conference on Computer Systems, April 2015, Article No. 13. doi:10.1145/2741948.2741958
Abstract: In today's data processing systems, both the policies protecting stored data and the mechanisms for their enforcement are spread over many software components and configuration files, increasing the risk of policy violation due to bugs, vulnerabilities and misconfigurations. Guardat addresses this problem. Users, developers and administrators specify file protection policies declaratively, concisely and separate from code, and Guardat enforces these policies by mediating I/O in the storage layer. Policy enforcement relies only on the integrity of the Guardat controller and any external policy dependencies. The semantic gap between the storage layer enforcement and per-file policies is bridged using cryptographic attestations from Guardat. We present the design and prototype implementation of Guardat, enforce example policies in a Web server, and show experimentally that its overhead is low.
Keywords: (not provided) (ID#: 15-5953)
URL: http://doi.acm.org/10.1145/2741948.2741958
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Compendium of Science of Security Articles of Interest |
The following material details some recent activity associated with the Science of Security, including relevant workshops and articles. Some of the items have been published elsewhere and some may be published in the future. We invite you to read the articles below and direct your comments and questions to the Science of Security Virtual Organization (SoS-VO) at http://cps-vo.org/group/SoS using the Contact tab on the left.
(ID#: 15-5933)
Adoption of Cybersecurity Technology Workshop |
The Special Cyber Operations Research and Engineering (SCORE) Subcommittee sponsored the 2015 Adoption of Cybersecurity Technology (ACT) workshop at the Sandia National Laboratories in Albuquerque, New Mexico from 3-5 March 2015. As a community, researchers and developers have produced many effective cybersecurity technology solutions that are not implemented for a variety of reasons. Many cybersecurity professionals believe that 80% of the current problems in cyberspace have known solutions that have not been implemented. In order to illuminate systemic barriers to adoption of security measures, the workshop focused specifically on countering the phishing threat and its aftermath. The vision for the workshop was to change business practices for adoption of cybersecurity technologies; expose developers, decision makers, and implementers to others’ perspectives; address the technology, process, and usability roadblocks to adoption; and build a Community of Interest (COI) to engage regularly.
This was the first in what is expected to be an annual workshop to address issues associated with barriers to adoption of cybersecurity technologies. The workshop itself, however, was simply the kickoff activity for a sustained effort to implement cybersecurity technology solutions throughout the US Government. Workshop participants were primarily government personnel, with some individuals from Federally Funded Research and Development Centers (FFRDCs), academia, and industry.
Figure 1: Overview of organizations participating in the ACT 2015 Workshop
There were four groups of attendees representing researchers and developers, decision-makers, implementers, and human behavior experts. Participants explored, developed, and implemented action plans for four use cases that addressed each of the four fundamental cybersecurity goals shown below: Device Integrity; Authentication and Credential Protection/Defense of Accounts; Damage Containment; and Secure and Available Transport, and how they are applied in the attack lifecycle. This construct provided the workshop with a framework that allowed participants to apply critical solutions specifically against the spear phishing threat.
Figure 2: Key areas necessary for success
Figure 3: Mitigations Framework
Participants in the workshop identified systemic issues preventing the adoption of such solutions and suggested how to change business practices to enable these cybersecurity technology practices to be adopted. The agenda included briefings on specific threat scenarios, briefings on cohorts’ concerns to promote understanding among groups, facilitated sessions that addressed the four use cases, and the development of action plans to be implemented via 90 day spins.
The First Day established the framework for the remainder of the workshop. There were two introductory briefings that focused on the phishing threat, one classified and one unclassified. The unclassified briefing “Phishing from the Front Lines” was presented by Matt Hastings, a Senior Consultant with Mandiant, a division of FireEye, Inc. Following a description of workshop activities, individuals associated with each of the four cohorts met to identify and then share the specific barriers to the adoption of cybersecurity technologies that they have experienced as developers, decision-makers, implementers, and human behavior specialists. After a working lunch that included a briefing on Secure Coding from Robert Seacord, Secure Coding Technical Manager at the Computer Emergency Response Team Division at the Software Engineering Institute, Carnegie Mellon University (CERT/SEI/CMU), participants were briefed on NSA’s Integrated Mitigations Framework and the Use Case Construct and Descriptions that would form the basis of the remainder of the workshop.
On Day Two, after participants received their use case assignments based on their stated interests and their roles, Dr. Douglas Maughan, Director of the Department of Homeland Security’s Science and Technology Directorate’s Cyber Security Division presented “Bridging the Valley of Death,” a briefing designed to help workshop participants identify how to overcome barriers to cybersecurity technology adoption. The first breakout session, Discovery, allowed participants to consider what technologies and/or best practices could be implemented over the next year. A lunchtime briefing on Technology Readiness Levels and the presentation of Success Stories by workshop participants, provided a good segue into the second breakout session, Formulation of Action Plans.
Dr. Greg Shannon, Chief Scientist of SEI/CMU, presented “Accelerating Innovation in Security and Privacy Technologies—IEEE Cybersecurity Initiative” to start Day Three. The third use case breakout session allowed use case participants to identify the next steps for implementation, after which all of the use cases presented their plans.
Use case participants identified plans for each of the four 90-day Spins that they will brief to the ACT Organizing Committee and threat analysts, who will assess progress against the goals. The Spin reports will address successes, challenges, and the specific steps taken to overcome roadblocks to the adoption of cybersecurity technologies. All of the Spin meetings will include updates from those responsible for implementing the chosen technology as well as use case team breakout sessions after the presentations. The Organizing Committee is now working on the specific logistics for the four Spins. Spin 1 will be a half-day event held in the DC area during the week of 15 June; Spin 2 will be a day-long meeting held within a few hours of the DC area sometime in mid-September; the location of the Spin 3 meeting is still to be determined, but it will be a half-day event held in early December. The final Spin will coincide with the second ACT workshop and will be held at Sandia Labs in mid-March 2016.
The participants were fully engaged in the two-and-a-half-day workshop and demonstrated commitment both to the concept of the workshop and to the follow-up activities. In providing feedback, 33 of the 35 respondents found value in attending, and 32 of the 35 would be interested in participating in the next workshop.
The goal over the next year is to strengthen government networks against spear phishing attacks by applying the selected technologies. Through the identification and subsequent removal of barriers to the adoption of these specific technologies, the use cases will identify ways to reduce obstacles to implementing known solutions to known problems, thus enabling more research to bridge the valley of death. Although the activity over the next year is tactical in nature, it provides an underlying strategy for achieving broader objectives as well as a foundation enabling transition from addressing threats on a transactional basis to collaborative cybersecurity engagements. Ultimately, ACT will strengthen our nation’s ability to address cybersecurity threats and improve our ability to make more of a difference more of the time.
(ID#: 15-5934)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Building Secure and Resilient Software from the Start |
NC State Lablet addresses soft underbelly of software, resilience and measurement.
The hard problems of resilience and predictive metrics are being addressed by the team of knowledgeable and experienced researchers at the NC State University Science of Security Lablet. The two Principal Investigators (PIs), Laurie Williams and Mladen Vouk, have worked extensively in industry as well as academe. Their experience brings a practical dimension to solving software-related hard problems, and their management skills have generated a well-organized and well-implemented research agenda.
Their general approach to software security is to build secure software from the start: to build security in rather than bolt it on. They seek to prevent security vulnerabilities in software systems. A security vulnerability is an instance of a fault in a program that violates an implicit or explicit security policy.
Using empirical analysis, they have determined that security vulnerabilities can be introduced into a system because the code is too complex, changed too often, or not changed appropriately. These potential causes, both technical and behavioral, are then captured in their software metrics. They then examine whether statistical correlations between software metrics and security vulnerabilities exist in a given system. One NCSU study found that source code files changed by nine or more developers were 16 times more likely to have at least one post-release security vulnerability; many hands make not light work, but poor work, from a security perspective. From such analyses, predictive models and useful statistical associations can be developed and disseminated to guide the development of secure software.
Resilience of software to attacks is an open problem. According to NCSU researchers, two questions arise. First, if highly attack-resilient components and appropriate attack sensors are developed, will it become possible to compose a resilient system from these component parts? If so, how does that system scale and age? Finding the answers requires rigorous analysis and testing. Resilience, they say, depends on the science as well as the engineering behind the approach used. For example, a very simple and effective defensive strategy is to force attackers to operate under the "normal" operational profile of an application by building in a dynamic application firewall that does not respond to "odd" or out-of-norm inputs. While not fool-proof, a normal operational profile appears to be less vulnerable, and such a device may be quite resistant to zero-day attacks.
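A minimal sketch of such a profile-based filter shows the idea. The profile entries, parameter bound, and endpoint names below are invented for illustration; this is not the NCSU implementation.

```python
# Sketch of a "normal operational profile" input filter: requests outside
# the learned profile are dropped. Profile contents are invented.
NORMAL_PROFILE = {
    ("GET", "/"), ("GET", "/search"), ("POST", "/login"),
}
MAX_PARAM_LEN = 256   # assumed bound derived from observed normal traffic

def accept(method, path, params):
    if (method, path) not in NORMAL_PROFILE:
        return False                      # out-of-profile endpoint
    if any(len(v) > MAX_PARAM_LEN for v in params.values()):
        return False                      # abnormal input size
    return True

print(accept("GET", "/search", {"q": "cats"}))            # True
print(accept("GET", "/admin", {}))                        # False: unseen path
print(accept("POST", "/login", {"user": "A" * 10_000}))   # False: oversized
```

In practice the profile would be learned from production traffic rather than hard-coded, which is what lets the firewall reject inputs it has never seen, including inputs crafted for zero-day exploits.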
The research has generated tangible results, including three recent papers: "On Coverage-Based Attack Profiles," by Anthony Rivers, Mladen Vouk, and Laurie Williams; "A Survey of Common Security Vulnerabilities and Corresponding Countermeasures for SaaS," by Donghoon Kim and Vouk; and “Diversity-based Detection of Security Anomalies,” by Roopak Venkatakrishnan and Vouk. (The last was presented at the Symposium and Bootcamp on the Science of Security, HotSoS 2014.)
Bibliographical citations and more detailed descriptions of the research follow.
Rivers, A.T.; Vouk, M.A.; Williams, L.A.; "On Coverage-Based Attack Profiles"; Software Security and Reliability-Companion (SERE-C), 2014 IEEE Eighth International Conference on, pp. 5-6, June 30-July 2, 2014. doi:10.1109/SERE-C.2014.15
Abstract: Automated cyber attacks tend to be schedule and resource limited. The primary progress metric is often "coverage" of pre-determined "known" vulnerabilities that may not have been patched, along with possible zero-day exploits (if such exist). We present and discuss a hypergeometric process model that describes such attack patterns. We used web request signatures from the logs of a production web server to assess the applicability of the model.
Keywords: Internet; security of data; Web request signatures; attack patterns; coverage-based attack profiles; cyber attacks; hypergeometric process model; production Web server; zero-day exploits; Computational modeling; Equations; IP networks; Mathematical model; Software; Software reliability; Testing; attack; coverage; models; profile; security
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901633&isnumber=6901618
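The hypergeometric process the abstract mentions has a simple intuition: if a target exposes N entry points of which K are actually vulnerable, and a schedule-limited attacker probes n of them without repetition, the number of vulnerabilities "covered" follows a hypergeometric distribution. The sketch below uses invented numbers; the paper fits such a process to real web-server log data.

```python
# Illustrative hypergeometric coverage calculation. The parameters are
# invented, not drawn from the paper's data.
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(exactly k of the n probed entry points are vulnerable)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

N, K, n = 200, 10, 40   # entry points, vulnerable ones, attacker probes
print(f"P(attacker covers none of the flaws) = {hypergeom_pmf(0, N, K, n):.3f}")
print(f"expected flaws covered = {n * K / N:.1f}")   # n*K/N = 2.0
```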
Kim, Donghoon; Vouk, Mladen A.; "A Survey of Common Security Vulnerabilities and Corresponding Countermeasures for SaaS"; Globecom Workshops, 2014, pp. 59-63, 8-12 Dec. 2014. doi:10.1109/GLOCOMW.2014.7063386
Abstract: Software as a Service (SaaS) is the most prevalent service delivery mode for cloud systems. This paper surveys common security vulnerabilities and corresponding countermeasures for SaaS. It is primarily focused on work published in the last five years. We observe current SaaS security trends and note a lack of sufficiently broad and robust countermeasures in some SaaS security areas, such as identity and access management, due to the growth of SaaS applications.
Keywords: Authentication; Cloud computing; Google; Software as a service; XML
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7063386&isnumber=7063320
Roopak Venkatakrishnan, Mladen A. Vouk; "Diversity-based Detection of Security Anomalies"; HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 29. doi:10.1145/2600176.2600205
Abstract: Detecting and preventing attacks before they compromise a system can be done using acceptance testing, redundancy-based mechanisms, and external consistency checking such as external monitoring and watchdog processes. Diversity-based adjudication is a step towards an oracle that uses knowable behavior of a healthy system. That approach, under the best circumstances, is able to detect even zero-day attacks. In this approach we use functionally equivalent but in some way diverse components, and we compare their output vectors and reactions for a given input vector. This paper discusses the practical relevance of this approach in the context of recent web-service attacks.
Keywords: attack detection, diversity, redundancy in security, web services
URL: http://doi.acm.org/10.1145/2600176.2600205
Dr. Laurie Williams is the Acting Department Head of Computer Science, a Professor in the department, and co-director of the NCSU Science of Security Lablet. Her research focuses on software security in healthcare IT, agile software development, software reliability, and software testing and analysis. She has published extensively on these topics, as well as on electronic commerce, information and knowledge management, cyber security, software engineering, and programming languages.
Email: williams@csc.ncsu.edu
Web page: http://collaboration.csc.ncsu.edu/laurie/
(ID#: 15-5940)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
CMU Fields Cloud-Based Sandbox |
CMU fields cloud-based sandbox as part of a broader study on modeling and measuring sandbox encapsulation.
“Sandboxes” are security layers for encapsulating components of software systems and imposing security policies to regulate interactions between the isolated components and the rest of the system. Sandboxing is a common technique for securing components that cannot be fully verified or trusted. It is used both to protect systems from potentially dangerous components and to protect critical components from other parts of the overall system. Sandboxes have a role in metrics research because they provide a mechanism for working with a full sampling of components and applications, even those known to be malicious. This ability to work with dangerous materials can improve the validity of the resulting metrics. Of course, if the sandbox fails or is bypassed, the production environment may be compromised.
In their research work “In-Nimbo Sandboxing,” researchers Michael Maass, William L. Scherlis, and Jonathan Aldrich from the Institute for Software Research in the School of Computer Science at Carnegie Mellon University propose a method to mitigate the risk of sandbox failure and raise the bar for potential attackers. Using a technique they liken to software-as-a-service, they propose a cloud-based sandbox approach that tailors the sandbox to the specific application. Tailoring provides flexibility in attack surface design: components are encapsulated behind smaller, more defensible attack surfaces than other techniques provide. Their remote encapsulation reduces both sides of the risk product, that is, the likelihood of attack success and the magnitude of the consequences should an attack against the sandboxed component succeed. They call their approach in-nimbo sandboxing, after the Latin for “in a cloud.”
Maass, Scherlis, and Aldrich assessed their approach against three principal criteria: performance, usability, and security. They conducted a field-trial deployment with a major aerospace firm, comparing an encapsulated component deployed in an enterprise-managed cloud with the original version of the component deployed, without encapsulation, in the relatively higher-value user environment. In evaluating performance, they focused on latency and ignored resource consumption; for the applications deployed, the technique only slightly increased user-perceived interaction latency. In evaluating usability, the sandbox mechanism was structured to present an experience identical to the local version, and users judged that this was accomplished. Another dimension of usability is the difficulty for developers of creating and deploying in-nimbo sandboxes for other applications. The field-trial system was built primarily from widely adopted, established components, and the authors indicate that the approach may be feasible for a variety of systems.
Cloud-based sandboxes allow defenders to customize the computing environment in which an encapsulated computation takes place, thereby making it more difficult to attack. And, since cloud environments are by nature “approximately ephemeral,” it also becomes more difficult for attackers to achieve both effects and persistence in their attacks. Their term “ephemeral” refers to an ideal computing environment with a short, isolated, and non-persistent existence. Even if persistence is achieved, the cloud computing environment offers much less benefit in a successful attack as compared, for example, with the relatively higher value of an end-user desktop environment.
According to the authors, “most mainstream sandboxing techniques are in-situ, meaning they impose security policies using only Trusted Computing Bases (TCBs) within the system being defended. Existing in-situ sandboxing approaches decrease the risk that a vulnerability will be successfully exploited, because they force the attacker to chain multiple vulnerabilities together or bypass the sandbox. Unfortunately, in practice these techniques still leave a significant attack surface, leading to a number of attacks that succeed in defeating the sandbox.”
The overall in-nimbo system is structured to separate a component of interest from its operating environment. The component of interest is relocated from the operating environment to the ephemeral cloud environment and is replaced (in the operating environment) by a specialized transduction mechanism which manages interactions with the now-remote component. The cloud environment hosts a container for the component of interest which interacts over the network with the operating environment. This reduces the internal attack surface in the operating environment, effectively affording defenders a mechanism to design and regulate attack surfaces for high-value assets. This approach naturally supports defense-in-depth ideas through layering on both sides.
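A rough sketch of that shape follows. The endpoint, class name, and wire format here are hypothetical, and this is not the authors' implementation; it only illustrates a local transduction stub fronting a remote, sandboxed component.

```python
# A minimal sketch of the in-nimbo structure: the risky component runs
# remotely, and a thin local "transducer" is the only thing the operating
# environment talks to. The endpoint URL below is hypothetical.
import urllib.request

CLOUD_SANDBOX = "https://sandbox.example.com/render"  # hypothetical endpoint

class PdfRenderTransducer:
    """Local stand-in for a PDF renderer; forwards work to the cloud sandbox."""

    def render(self, pdf_bytes):
        # The untrusted document never executes locally; only a narrow
        # request/response crosses the boundary between the two environments.
        req = urllib.request.Request(
            CLOUD_SANDBOX,
            data=pdf_bytes,
            headers={"Content-Type": "application/pdf"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            # Expect an inert raster image back, not active content.
            return resp.read()
```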
The authors compare in-nimbo sandboxes with environments (Polaris and Terra) whose concepts approach an idealized ephemeral computing environment. Cloud-based sandboxing can closely approximate such ephemeral environments. The “ephemerality” is a consequence of isolating the cloud-hosted virtual computing resource from other (perhaps higher-value) resources through a combination of virtualization and separate infrastructure for storage, processing, and communication. Assuming the persistent state is relatively minimal (e.g., configuration information), it can be hosted anywhere, enabling the cloud environment to persist just long enough to perform a computation before its full state is discarded and the environment refreshed.
Using Adobe Reader as an example component of interest, the team built an in-nimbo sandbox and compared results with the usual in-situ deployment of Reader. Within some constraints, they concluded that the in-nimbo sandboxes could usefully perform potentially vulnerable or malicious computations away from the environment being defended.
They conclude with a multi-criteria argument for why their sandbox improves security, building on both quantitative and qualitative scales because many security dimensions cannot yet be feasibly quantified. They suggest that structured criteria-based reasoning that is built on familiar security-focused risk calculus can lead to solid conclusions. They indicate that this work is a precursor to an extensive study, still being completed, that evaluates more than 70 examples of sandbox designs and implementations against a range of identified criteria. This study employs a range of technical, statistical, and content-analytic approaches to map the space of encapsulation techniques and outcomes.
Carnegie Mellon University's Science of Security Lablet (SOSL) is one of four Lablets funded by NSA to address the hard problems of cybersecurity. The five hard problem areas are scalability and composability, policy-governed secure collaboration, predictive security metrics, resilient architectures, and human behavior. The in-nimbo sandbox supports efforts in the development of predictive metrics, as well as scalability and composability. The broad goal of SOSL is "to identify scientific principles that can lead to approaches to the development, evaluation, and evolution of secure systems at scale." More about the CMU Lablet and its research goals can be found on the CPS-VO web page at http://cps-vo.org/group/sos/lablet/cmu.
The original article is available at the ACM Digital Library as: Maass, Michael; Scherlis, William L.; Aldrich, Jonathan. “In-Nimbo Sandboxing.” Proceedings of the 2014 Symposium and Bootcamp on the Science of Security (HotSoS '14), Raleigh, North Carolina, April 2014, pp. 1:1-1:12. ISBN: 978-1-4503-2907-1. doi:10.1145/2600176.2600177
URL: http://doi.acm.org/10.1145/2600176.2600177
(ID#: 15-5936)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Computational Cybersecurity in Compromised Environments (C3E) Workshop |
The Special Cyber Operations Research and Engineering (SCORE) Subcommittee sponsored the 2014 Computational Cybersecurity in Compromised Environments (C3E) Workshop at the Georgia Tech Research Institute (GTRI) Conference Center from 20-22 October 2014. The research workshop brought together a diverse group of top academic, industry, and government experts to examine new ways of approaching the cybersecurity challenges facing the Nation and how to enable smart, real-time decision-making in Cyberspace through both “normal” complexity and persistent adversarial behavior.
Figure 1: C3E Participation by Organizational Category
This was the sixth in a series of annual C3E research workshops, drawing upon efforts between 2009 and 2013 on adversarial behavior, the role of models and data, predictive analytics, and the need to understand how best to employ human and machine-based decisions in the face of emerging cyber threats. Since its inception, C3E has had two overarching objectives: 1) the development of ideas worth additional cyber research; and 2) the development of a Community of Interest (COI) around unique analytic and operational approaches to the persistent cyber security threat.
C3E 2014 continued to focus on the needs of the practitioner and leverage past C3E themes of predictive analytics, decision-making, consequences, and visualization. To enhance the prospects of deriving applicable and adoptable results from C3E track work, the selection of the track topics, Security by Default and Data Integrity, was based on recommendations from senior government officials.
To accomplish its objectives, C3E 2014 drew upon the approaches that have been used in C3E since the beginning: 1) Keynote Speakers who provided tailored and provocative talks on a wide range of topics related to cyber security; 2) a Discovery or Challenge Problem that attracted the analytic attention of participants prior to C3E; and especially 3) Track Work that involved dedicated, small group focus on Security by Default and Data Integrity.
The C3E 2014 Discovery Problem was a Metadata-based Malicious Cyber Discovery Problem. Its first goal was to invent and prototype approaches for identifying high-interest, suspicious, and likely malicious behaviors from metadata that challenge the way we traditionally think about the cyber problem. The second goal was to introduce participants to the DHS Protected Repository for the Defense of Infrastructure against Cyber Threats (PREDICT) datasets for use on this and future research problems. PREDICT is a rich repository containing routing (BGP) data, naming (DNS) data, application (net analyzer) data, infrastructure (census probe) data, and security data (botnet sinkhole data, dark data, etc.). There are hundreds of actual and simulated dataset categories in PREDICT that provide samples of cyber data for researchers. DHS has addressed and resolved the legal and ethical issues concerning PREDICT, and the researchers found the PREDICT datasets to be a valuable resource for the C3E 2014 Discovery Problem. Each of the five groups that worked on the Discovery Problem presented its findings: 1) An Online Behavior Modeling Based Approach to Metadata-Based Malicious Cyber Discovery; 2) Implementation Based Characterization of Network Flows; 3) APT Discovery Beyond Time and Space; 4) Genomics Inspired Cyber Discovery; and 5) C3E Malicious Cyber Discovery: Mapping Access Patterns. Several government organizations have expressed interest in follow-up discussions on a couple of the approaches.
The purpose of the Data Integrity Track was to engage a set of researchers from diverse backgrounds to elicit research themes that could improve both understanding of cyber data integrity issues and potential solutions that could be developed to mitigate them. Track participants addressed data integrity issues associated with finance and health/science, and captured relevant characteristics, as shown below.
Figure 2: Characteristics of Data Integrity
Track participants also identified potential solutions and research themes for addressing data integrity issues: 1) Diverse workflows and sensor paths; 2) Cyber insurance and regulations; and 3) Human-in-the-loop data integrity detection.
The Security by Default (SBD) track focused on whether secure, “out of the box” systems can be created, that is, systems that are secure when they are fielded. There is a perception among stakeholders that systems engineered and/or configured to be secure by default may be less functional, less flexible, and more difficult to use, which explains the market bias toward insecure initial designs and configurations. Track participants identified five areas of focus that might merit additional study: 1) The building code analogy—balancing the equities; 2) Architecture and design—expressing security problems understandably; 3) Architecture and design—delivering and assuring secure components and infrastructure; 4) Biological analogs—supporting adaptiveness and architectural dynamism; and 5) Usability and metrics—rethinking the “trade-off” of security and usability.
In preparation for the next C3E workshop, a core group of participants will meet to refine specific approaches, and, consistent with prior workshops, will identify at least two substantive track areas through discussions with senior government leaders.
C3E remains focused on cutting-edge technology and on understanding how people interact with systems and networks. In evaluations of the 2014 workshop, fully 92% of participants reported both interest in the areas discussed and confidence that the other participants were contributors in the field. While C3E is often oriented around research, the workshops have begun to incorporate practical examples of how government, scientific, and industry organizations are actually using advanced analysis and analytics in their daily business, creating a path to applications for the practitioner and providing real solutions to cyber problems.
(ID#: 15-5935)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Improving Power Grid Cybersecurity |
Illinois Research Efforts Spark Improvements in Power Grid Cybersecurity.
The Information Trust Institute (ITI) at the University of Illinois at Urbana-Champaign has a broad research portfolio. In addition to its work as a Science of Security Lablet, where it contributes broadly to the development of security science in resiliency—a system's ability to maintain security properties during ongoing cyber attacks—ITI has been working on issues related to the electric power infrastructure and the development of a stronger, more resilient power grid.
The Trustworthy Cyber Infrastructure for the Power Grid (TCIPG) project is a partnership among Illinois, Dartmouth, Arizona State, and Washington State Universities, as well as government and industry. TCIPG was formed to address risks to the underlying computing and communication network infrastructure, which faces serious threats both from malicious attacks on grid components and networks and from accidental causes such as natural disasters, misconfiguration, and operator error. The project collaborates continually with the national laboratories and the utility sector to protect the U.S. power grid by significantly improving the way grid infrastructure is designed, making it more secure, resilient, and safe.
TCIPG comprises several dozen researchers, students, and staff who bring interdisciplinary expertise essential to the operation and public adoption of current and future grid systems. That expertise extends to power engineering; computer science and engineering; advanced communications and networking; smart grid markets and economics; and Science, Technology, Engineering, and Math (STEM) education.
Ten years ago, the electricity sector was largely “security-unaware.” More recently, and thanks in part to TCIPG, there has been broad adoption of security best practices. That transition came about through breakthrough research, national expert panels, and the writing of key documents. However, because the threat landscape continuously evolves, resiliency in a dynamic environment is key, and it is an area where continuous improvement is needed.
TCIPG Research in Smart Grid Resiliency
Countering threats to the nation’s cyber systems in critical infrastructure such as the power grid has become a major strategic objective and was identified as such in Homeland Security Presidential Directive 7. Smart grid technologies promise advances in efficiency, reliability, integration of renewable energy sources, customer involvement, and new markets. But to achieve those benefits, the grid must rely on a cyber-measurement and control infrastructure that includes components ranging from smart appliances at customer premises to automated generation control. Control systems and administrative systems no longer have an air gap between them, and securing the boundary between the two has become more complex.
TCIPG research has produced important results and innovative technologies that address that need and that complexity across several focus areas.
Much of this work has been achieved because of the success of the experimental testbed.
Testbed Cross-Cutting Research
Experimental validation is critical for emerging research and technologies. The TCIPG testbed enables researchers to conduct, validate, and evolve cyber-physical research from fundamentals to prototype, and finally, transition to practice. It provides a combination of emulation, simulation, and real hardware to realize a large-scale, virtual environment that is measurable, repeatable, flexible, and adaptable to emerging technology while maintaining integration with legacy equipment. Its capabilities span the entire power grid: transmission, distribution & metering, distributed generation, and home automation and control – providing true end-to-end capabilities for the smart grid.
The cyber-physical testbed facility uses a mixture of commercial power system equipment and software, hardware and software simulation, and emulation to create a realistic representation of the smart grid. This representation can be used to experiment with next-generation technologies that span communications from generation to consumption and everything in between. In addition to offering a realistic environment, the testbed facility is instrumented with cutting-edge research and commercial tools to explore problems from multiple dimensions, tackling in-depth security analysis and testing, visualization and data mining, and federated resources, and developing novel techniques that integrate these systems in a composable way.
A parallel project funded by the State of Illinois, the Illinois Center for a Smarter Electric Grid (ICSEG), is a five-year effort to develop and operate a facility that provides validation services for the information technology and control aspects of Smart Grid systems, including microgrids and distributed energy resources. The key objective is to test and validate, in a laboratory setting, how new and more cost-effective Smart Grid technologies, tools, techniques, and system configurations can be used in trustworthy configurations that significantly improve on common practice today. The laboratory is also a resource for Smart Grid equipment suppliers, integrators, and electric utilities to validate system designs before deployment.
Education and Outreach
In addition to basic research, TCIPG has addressed needs in education and outreach. Nationally, there is a shortage of professionals who can fill positions in the power sector. Skills required for smart grid engineers have changed dramatically. Graduates of the collaborating TCIPG universities are well-prepared to join the cyber-aware grid workforce as architects of the future grid, as practicing professionals, and as educators.
TCIPG has conducted short courses for practicing engineers and for DOE program managers. In addition to a biennial TCIPG Summer School for university students and researchers, utility and industry representatives, and government and regulatory personnel, TCIPG organizes a monthly webinar series featuring thought leaders in cyber security and resiliency in the electricity sector. In alignment with national STEM educational objectives, TCIPG conducts extensive STEM outreach to K-12 students and teachers. TCIPG has developed interactive, open-ended apps (iOS, Android, MinecraftEdu) for middle-school students, along with activity materials and teacher guides to facilitate integration of research, education, and knowledge transfer by linking researchers, educators, and students.
The electricity industry in the U.S. is made up of thousands of utilities, equipment and software vendors, consultants, and regulatory bodies. In both its NSF-funded and DOE/DHS-funded phases, TCIPG has actively developed extensive relationships with such entities and with other researchers in the sector, including joint research with several national laboratories.
The involvement of industry and other partners in TCIPG is vital to its success, and is facilitated by an extensive Industry Interaction Board (IIB) and a smaller External Advisory Board (EAB). The EAB, with which TCIPG interacts closely, includes representatives from the utility sector, system vendors, and regulatory bodies, in addition to DOE-OE and DHS S&T.
Partnerships & Impact
While university-led, TCIPG has always stressed real-world impact and industry partnerships. That emphasis is why TCIPG technologies have been adopted by the private sector.
Leadership
Director: Professor William H. Sanders, who is also Co-PI for the UIUC Science of Security Lablet
Personal web page: http://www.crhc.illinois.edu/Faculty/whs.html
TCIPG URL: http://tcipg.org/about-us
Recent publications arising from TCIPG’s work:
URL: http://tcipg.org/research/
(ID#: 15-5941)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Selection of Android Graphic Pattern Passwords |
Lablet partners study mobile passwords, human behavior
Android mobile phones come with four embedded access methods: a finger swipe, a pattern, a PIN, or a password, in ascending order of security. In the pattern method, the user selects a pattern by traversing a 3x3 grid of points. A pattern must contain at least four points, cannot use a point twice, and all points along a path must be sequential and connected, that is, no skipping of points. The pattern can be visible or cloaked. When users enable such a feature, how do they trade off security with usability? And are there visual cues that lead users to select one pattern over another, whether for usability or for security? Researchers at the U.S. Naval Academy and Swarthmore College, partners of the University of Maryland Science of Security Lablet, conducted a large user study of preferences for the usability and security of the Android pattern password to provide insights into the user perceptions that inform choice.
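The pattern rules above are concrete enough to capture in a few lines of code. The following sketch checks a candidate pattern against the rules as stated in this article (at least four points, no point reused, no skipping over a point that has not yet been contacted); it is an illustration, not the study's software.

```python
# Check a 3x3 Android-style pattern, with points numbered 0-8,
# left to right, top to bottom, per the rules described above.

def is_valid_pattern(points):
    if len(points) < 4 or len(set(points)) != len(points):
        return False  # too short, or a point is reused
    visited = {points[0]}
    for a, b in zip(points, points[1:]):
        (ra, ca), (rb, cb) = divmod(a, 3), divmod(b, 3)
        # A straight swipe crosses the segment's midpoint exactly when both
        # coordinate sums are even (e.g., 0 -> 8 passes over 4).
        if (ra + rb) % 2 == 0 and (ca + cb) % 2 == 0:
            mid = ((ra + rb) // 2) * 3 + (ca + cb) // 2
            if mid not in visited:
                return False  # the swipe would skip an uncontacted point
        visited.add(b)
    return True

print(is_valid_pattern([0, 4, 8, 5]))  # True
print(is_valid_pattern([0, 8, 1, 2]))  # False: 0 -> 8 skips the center point
```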
The study by Adam Aviv and Dane Fichter, “Understanding Visual Perceptions of Usability and Security of Android’s Graphical Password Pattern,” used a survey methodology that asked participants to select between pairs of patterns, indicating either a security or a usability preference. By carefully selecting password pairs to isolate a single visual feature, the researchers measured visual perceptions of the usability and security of different features. The sample comprised 384 users, self-selected on Amazon Mechanical Turk. They found that visual features attributable to complexity indicated a stronger perception of security, while spatial features, such as shifts up/down or left/right, were not strong indicators of either security or usability.
In their study, Aviv and Fichter selected pairs of patterns based on six visual features: Length (the total number of contact points used), Crosses (which occur when the pattern doubles back on itself by tracing over a previously contacted point), Non-Adjacent (the total number of swipes connecting contact points that are not immediately adjacent, which is possible when an intervening point has already been contacted), Knight-Moves (swipes that move two spaces in one direction and then one space over in another direction), Height (the amount the pattern is shifted towards the upper or lower contact points), and Side (the amount the pattern is shifted towards the left or right contact points).
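A few of these features are easy to compute on a pattern encoded as a point sequence. The sketch below follows the descriptions above rather than the paper's exact formulas, so treat the encodings as assumptions.

```python
# Compute three of the visual features described above for patterns encoded
# as point sequences (0-8 on the 3x3 grid).

def features(points):
    coords = [divmod(p, 3) for p in points]  # (row, col) per contact point
    length = len(points)
    # A knight-move swipe spans two cells in one axis and one in the other.
    knight_moves = sum(
        {abs(r2 - r1), abs(c2 - c1)} == {1, 2}
        for (r1, c1), (r2, c2) in zip(coords, coords[1:])
    )
    # Height: mean row, i.e., how far the pattern sits toward the top (0)
    # or bottom (2) of the grid.
    height = sum(r for r, _ in coords) / len(coords)
    return {"length": length, "knight_moves": knight_moves, "height": height}

print(features([0, 4, 8, 5]))  # {'length': 4, 'knight_moves': 0, 'height': 1.0}
```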
They asked users to select between two passwords, indicating a preference for the password in the pair that better met a particular criterion, such as perceived security or usability. By carefully selecting these password pairs, the researchers could isolate individual visual features and measure the impact of each feature on users’ perception of security and usability.
The researchers concluded that spatial features have little impact, but more visually striking features have a stronger impact, with the length of the pattern being the strongest indicator of preference. These results were extended and applied by constructing a predictive model with a broader set of features from related work, and the researchers found that the distance factor, the total length of all the lines in a pattern, is the strongest predictor of preference. These findings provide insight into users’ visual calculus when assessing a password, and the information may be used to develop new password systems or user selection tools, like password meters.
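The distance factor itself is simple to compute: sum the Euclidean lengths of the segments a pattern traces on the grid. A small, purely illustrative sketch:

```python
# Total length of all lines in a pattern on the 3x3 grid (the "distance
# factor" described above); patterns are point sequences 0-8.
from math import hypot

def distance_factor(points):
    coords = [divmod(p, 3) for p in points]  # (row, col) per contact point
    return sum(hypot(r2 - r1, c2 - c1)
               for (r1, c1), (r2, c2) in zip(coords, coords[1:]))

print(distance_factor([0, 1, 2, 5]))  # compact pattern: 3.0
print(distance_factor([0, 5, 6, 4]))  # longer strokes score higher: ~5.89
```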
Moreover, Aviv and Fichter concluded that, with a good predictive model of user preference, their findings could be applied to a broader set of passwords, including those not used in the survey, and that this research could be expanded. For example, ranking data based on learned pairwise preferences is an active research area in machine learning, and the resulting ranking metric over all potential patterns in the space would be of great benefit to the community. It could enable new password-selection procedures in which users are helped to identify a preferred usable password that also meets a security requirement.
The study is available at: http://www.usna.edu/Users/cs/aviv/papers/p286-aviv.pdf
Adam J. Aviv, Assistant Professor, Computer Science, USNA--PI
Email: aviv@usna.edu
Webpage: http://www.usna.edu/Users/cs/aviv/
(ID#: 15-5937)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
SoS and Resilience for Cyber-Physical Systems Project |
Nashville, TN
20 March 2015
On March 17 and 18, 2015, researchers from the four System Science of SecUrity and REsilience for Cyber-Physical Systems (SURE) project universities (Vanderbilt, Hawai’i, California-Berkeley, and MIT) met with members of NSA’s R2 Directorate to review their first six months of work. SURE is the NSA-funded project aimed at improving scientific understanding of resiliency, that is, robustness to reliability failures or faults and survivability against security failures and attacks in cyber-physical systems (CPS). The project addresses the question of how to design systems that are resilient despite significant decentralization of resources and decision-making.
Waseem Abbas and Xenofon Koutsoukos, Vanderbilt, listen to comments about resilient sensor designs from David Corman, National Science Foundation.
The project initially examined water distribution and surface traffic control architectures; air traffic control and satellite systems have since been added as further examples of the types of cyber-physical systems under study. Xenofon Koutsoukos, Professor of Electrical Engineering and Computer Science in the Institute for Software Integrated Systems (ISIS) at Vanderbilt University and the Principal Investigator (PI) for SURE, indicated that these additional cyber-physical systems demonstrate how the SURE methodologies can apply to multiple systems. Main research thrusts include hierarchical coordination and control, science of decentralized security, reliable and practical reasoning about secure computation and communication, evaluation and experimentation, and education and outreach. The centerpiece is their testbed for evaluating CPS security and resilience.
The development of the Resilient Cyber-Physical Systems (RCPS) Testbed supports evaluation and experimentation across the complete SURE research portfolio. The platform captures the physical, computational, and communication infrastructure; describes the deployment and configuration of security measures and algorithms; and provides entry points for injecting various attack or failure events. "Red Team" vs. "Blue Team" simulation scenarios are being developed. After the active design phase—when both teams are working in parallel and in isolation—the simulation is executed with no external user interaction, potentially several times. The winner is decided based on scoring weights and rules that are captured by the infrastructure model.
The Resilient Cyber-Physical System Testbed hardware component.
In addition to the testbed, ten research projects on resiliency were presented. These presentations covered both behavioral and technical subjects, including adversarial risk, active learning for malware detection, privacy modeling, actor networks, flow networks, control systems, software and software architecture, and information flow policies. The CPS-VO web site, including its scope and format, was also briefed. Details of these research presentations appear in a companion newsletter article.
See: http://cps-vo.org/node/18610.
In addition to Professor Koutsoukos, participants included his Vanderbilt colleagues Gabor Karsai, Janos Sztipanovits, Peter Volgyesi, Yevgeniy Vorobeychik and Katie Dey. Other participants were Saurabh Amin, MIT; Dusko Pavlovic, U. of Hawai'i; and Larry Rohrbough, Claire Tomlin, and Roy Dong from California-Berkeley. Government representatives from the National Science Foundation, Nuclear Regulatory Commission, and Air Force Research Labs also attended, as well as the sponsoring agency, NSA.
Vanderbilt graduate students Pranav Srinivas Kumar (L) and William Emfinger demonstrated the Resilient Cyber-Physical Systems Testbed.
(ID#: 15-5939)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Wyvern Programming Language Builds Secure Apps |
A wyvern is a mythical winged creature with a fire-breathing dragon's head, a poisonous bite, a scaly body, two legs, and a barbed tail. Deadly and stealthy by nature, wyverns are silent when flying and keep their shadows unseen. They hide in caves to protect their loot, and they are popular figures in heraldry and in electronic games. As the mythical wyvern protects its trove, so the Wyvern programming language is designed to help create secure programs to protect applications and data.
Led by Dr. Jonathan Aldrich, Institute for Software Research (ISR), researchers at the Carnegie Mellon University (CMU) Science of Security Lablet (SOSL), along with Dr. Alex Potanin and collaborators at the Victoria University of Wellington, have been developing Wyvern to build secure web and mobile applications. Wyvern is designed to help software engineers build those secure applications using several type-based, domain-specific languages (DSLs) within the same program. It is able to exploit knowledge of sublanguages (SQL, HTML, etc.) used in the program based on types and their context, which indicate the format and typing of the data.
Dr. Aldrich and his team recognized that the proliferation of programming languages used in developing web and mobile applications is inefficient and thwarts scalability and composability. Software development has come a long way, but the web and mobile arenas nonetheless struggle to cobble together a "...mishmash of artifacts written in different languages, file formats, and technologies." http://www.cs.cmu.edu/~aldrich/wyvern/spec-rationale.html
Constructing web pages often requires HTML for structure, CSS for design, JavaScript to handle user interaction, and SQL to access the database back-end. The diversity of languages and tools used to create an application increases the associated development time, cost, and security risks. It also creates openings for Cross-Site Scripting and SQL Injection attacks. Wyvern eliminates the need to use character strings as commands, as is the case, for instance, with SQL. When character strings are used as commands, malicious users with a rough knowledge of a system's structure can execute destructive commands such as DROP TABLE or manipulate established access controls. Wyvern is instead a pure object-oriented language that is value-based, statically type-safe, and supports functional programming. It supports HTML, SQL, and other web languages through a concept of “Composable Type-Specific Languages (TSLs).”
Composable Type-Specific Languages are the equivalent of a "...skilled international negotiator who can smoothly switch between languages...," according to Dr. Aldrich. The system can discern which sublanguage is being used through context, much as “...a person would realize that a conversation about gourmet dining might include some French words and phrases.” Wyvern strives to provide flexible syntax, using an internal DSL strategy; static type-checking based on defined rules in Wyvern-internal DSLs; secure language and library constructs providing secure built-in datatypes and database access through an internal DSL; and high-level abstractions, wherein programmers will be able to define an application's architecture, to be enforced by the type system and implemented by the compiler and runtime. Wyvern follows the principle that objects should only be accessible by invoking their methods. With Wyvern's use of TSLs, a type is invoked only when a literal appears in the context of the expected type, ensuring non-interference (Omar 2014, at: http://www.cs.cmu.edu/~aldrich/papers/ecoop14-tsls.pdf).
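Wyvern's own TSL syntax appears in the papers cited below. The sketch here uses Python's standard sqlite3 module only to illustrate the principle Wyvern enforces at the language level: untrusted input travels as bound data, never as command text, so it cannot alter a query's structure.

```python
# Illustration of the structure-vs-data separation that defeats injection.
# This is standard Python sqlite3, not Wyvern; Wyvern builds the same
# guarantee into its type system rather than into an API convention.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(username):
    # The value is bound as a parameter, so "x'; DROP TABLE users; --"
    # is treated as an odd name, never as SQL command text.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()

print(find_user("alice"))                     # [('alice',)]
print(find_user("x'; DROP TABLE users; --"))  # [] -- no injection occurs
```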
Wyvern is an ongoing project hosted at the open-source site GitHub. Interested potential users may explore the language at: https://github.com/wyvernlang/wyvern. Interest in the Wyvern programming language has been growing in the security world. Gizmag describes Wyvern as “something of a meta-language” and agrees that the web would be a much more secure place if not for vulnerabilities due to the common coding practice of “pasted-together strings of database commands” (Moss 2014, accessed at: http://www.gizmag.com/wyvern-multiple-programming-languages/33302/#comments). The CMU Lablet and Wyvern were also featured in a press release by SD Times, which mentions the integration of multiple languages, citing flexibility in terms of additional sublanguages and easy-to-implement compilers. The article may be accessed at: http://sdtimes.com/wyvern-language-works-platforms-interchangeably/. Communications of the ACM (CACM) explains Wyvern as a host language that allows developers to import other languages for use on a project, but warns that Wyvern, as a meta-language, could be vulnerable to attack. The CACM article can be accessed at: http://cacm.acm.org/news/178649-new-nsa-funded-programming-language-could-closelong-standing-security-holes/fulltext.
The Wyvern project is part of the research being done by the Carnegie Mellon University Science of Security Lablet, supported by NSA and other agencies to address hard problems in cybersecurity, including scalability and composability. Other hard problems being addressed include policy-governed secure collaboration, predictive security metrics, resilient architectures, and human behavior.
References and Publications
A description of the Wyvern project is available on the CPS-VO web page at: http://cps-vo.org/node/15054. A succinct PowerPoint presentation about Wyvern, with specific examples, may be accessed at: http://www.cs.cmu.edu/~comar/GlobalDSL13-Wyvern.pdf. As of March 15, 2015, Wyvern is publicly distributed on GitHub under a GPLv2 license: https://github.com/wyvernlang/wyvern.
The latest research work on Wyvern was presented at PLATEAU ’14 in Portland, Oregon. That paper is available on the ACM Digital Library as:
Darya Kurilova, Alex Potanin, Jonathan Aldrich; Wyvern: Impacting Software Security via Programming Language Design; PLATEAU '14 Proceedings of the 5th Workshop on Evaluation and Usability of Programming Languages and Tools; October 2014, Pages 57-58; doi:10.1145/2688204.2688216
Abstract: Breaches of software security affect millions of people, and therefore it is crucial to strive for more secure software systems. However, the effect of programming language design on software security is not easily measured or studied. In the absence of scientific insight, opinions range from those that claim that programming language design has no effect on security of the system, to those that believe that programming language design is the only way to provide "high-assurance software." In this paper, we discuss how programming language design can impact software security by looking at a specific example: the Wyvern programming language. We report on how the design of the Wyvern programming language leverages security principles, together with hypotheses about how usability impacts security, in order to prevent command injection attacks. Furthermore, we discuss what security principles we considered in Wyvern's design.
Keywords: command injection attacks, programming language, programming language design, security, security principles, usability, wyvern
URL: http://doi.acm.org/10.1145/2688204.2688216
An earlier work by the research group is also available at: Darya Kurilova, Cyrus Omar, Ligia Nistor, Benjamin Chung, Alex Potanin, Jonathan Aldrich; Type Specific Languages To Fight Injection Attacks; HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 18; doi:10.1145/2600176.2600194
Abstract: Injection vulnerabilities have topped rankings of the most critical web application vulnerabilities for several years. They can occur anywhere where user input may be erroneously executed as code. The injected input is typically aimed at gaining unauthorized access to the system or to private information within it, corrupting the system's data, or disturbing system availability. Injection vulnerabilities are tedious and difficult to prevent.
Keywords: extensible languages; parsing; bidirectional typechecking; hygiene
URL: http://doi.acm.org/10.1145/2600176.2600194
Some other Publications Related to Wyvern:
Joseph Lee, Jonathan Aldrich, Troy Shaw, and Alex Potanin; A Theory of Tagged Objects; In Proceedings European Conference on Object-Oriented Programming (ECOOP), 2015.
http://ecs.victoria.ac.nz/foswiki/pub/Main/TechnicalReportSeries/ECSTR15-03.pdf
Cyrus Omar, Chenglong Wang, and Jonathan Aldrich; Composable and Hygienic Typed Syntax Macros; In Proceedings of the 30th Annual ACM Symposium on Applied Computing (SAC '15), 2015. doi:10.1145/2695664.2695936
http://doi.acm.org/10.1145/2695664.2695936
Cyrus Omar, Darya Kurilova, Ligia Nistor, Benjamin Chung, Alex Potanin, and Jonathan Aldrich; Safely Composable Type-Specific Languages; In Proceedings, European Conference on Object-Oriented Programming, 2014.
http://www.cs.cmu.edu/~aldrich/papers/ecoop14-tsls.pdf
Jonathan Aldrich, Cyrus Omar, Alex Potanin, and Du Li; Language-Based Architectural Control; International Workshop on Aliasing, Capabilities, and Ownership (IWACO '14), 2014.
http://www.cs.cmu.edu/~aldrich/papers/iwaco2014-arch-control.pdf
Ligia Nistor, Darya Kurilova, Stephanie Balzer, Benjamin Chung, Alex Potanin, and Jonathan Aldrich; Wyvern: A Simple, Typed, and Pure Object-Oriented Language; Mechanisms for Specialization, Generalization, and Inheritance (MASPEGHI), 2013.
http://www.cs.cmu.edu/~aldrich/papers/maspeghi13.pdf
Cyrus Omar, Benjamin Chung, Darya Kurilova, Alex Potanin, and Jonathan Aldrich; Type-Directed, Whitespace-Delimited Parsing for Embedded DSLs; Globalization of Domain Specific Languages (GlobalDSL), 2013.
http://www.cs.cmu.edu/~aldrich/papers/globaldsl13.pdf
(ID#: 15-5938)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Upcoming Events |
Mark your calendars!
This section features a wide variety of upcoming security-related conferences, workshops, symposiums, competitions, and events happening in the United States and the world. This list also includes several past events with links to proceedings or summaries of the actual activities.
Note: The events may also be found on the SoS Calendar, located by clicking the 'Calendar' tab on the left-hand navigation bar.
44CON London
44CON London is the UK's largest combined annual security conference and training event. We will have a fully dedicated conference facility, including secure wi-fi with high-bandwidth Internet access, catering, a private bar, and a daily Gin O'Clock break. 44CON London will comprise two main presentation tracks, two workshop tracks, and a mixed presentation/workshop track over the two full days, covering technical topics. The Hidden track is run under the Chatham House Rule, and we're not going to tell you about that.
Date: September 9 - 11
Location: London, United Kingdom
URL: http://44con.com/
New York Cyber Security Summit 2015
The Cyber Security Summit, an exclusive C-Suite conference series, connects senior level executives responsible for protecting their companies’ critical infrastructures with innovative solution providers and renowned information security experts. This educational and informational forum will focus on educating attendees on how to best protect highly vulnerable business applications and critical infrastructure. Attendees will have the opportunity to meet the nation’s leading solution providers and discover the latest products and services for enterprise cyber defense.
Date: September 18
Location: New York, NY
URL: http://cybersummitusa.com/2015-new-york-city/
Global Identity Summit
The Global Identity Summit focuses on identity management solutions for the corporate, defense and homeland security communities.
Date: September 21 - 24
Location: Tampa, FL
URL: http://events.jspargo.com/id15/Public/Enter.aspx
DerbyCon 5.0
Welcome to DerbyCon 5.0 – “Unity”. This is the place where security professionals, hobbyists, and anyone interested in security come to hang out. DerbyCon 5.0 will be held September 23-27, 2015 at the Hyatt Regency in downtown Louisville, Kentucky. Training is held on Wednesday and Thursday (September 23rd and 24th), and the conference runs Friday through Sunday (September 25th-27th). DerbyCon 4.0 pulled in over 2,000 people with an amazing speaker lineup and a family-like feel. We continue to make the conference better each year and have a ton of new and exciting things planned for this year. Please excuse the website as it is currently under construction while we plan for DerbyCon 5.0!
Date: September 23 - 27
Location: Louisville, KY
URL: http://derbycon.com/
World Congress on Internet Security (WorldCIS-2015)
The World Congress on Internet Security (WorldCIS-2015) is technically co-sponsored by the IEEE UK/RI Computer Chapter. WorldCIS is an international refereed conference dedicated to the advancement of the theory and practical implementation of security on the Internet and computer networks. Properly securing the Internet and computer networks, protecting them against emerging threats and vulnerabilities, and sustaining privacy and trust have been key focuses of research. WorldCIS aims to provide a highly professional and comparative academic research forum that promotes collaborative excellence between academia and industry.
Date: October 19 - 21
Location: Dublin, Ireland
URL: http://www.worldcis.org/
SOURCE Security Conference Seattle
In addition to our excellent line-up of keynotes and speakers, we have our usual selection of the little things that make the SOURCE Conferences special. This year we will have speed networking, lightning talks, a career development panel, and an excellent networking reception - all designed in a way that ties the event together.
Date: October 14 - 15
Location: Seattle, WA
URL: http://www.sourceconference.com/seattle-2015-main
CyberMaryland 2015
Maryland is recognized as a cybersecurity leader, nationally and internationally. The state has developed cybersecurity experts, education and training programs, technology, products, systems, and infrastructure. With over 10 million cyber hacks a day resulting in an annual worldwide cost of over $100 billion, the United States is at risk.
Date: October 28 - 29
Location: Baltimore, MD
URL: https://www.fbcinc.com/e/cybermdconference/default.aspx
(ID#:15-6153)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.