Publications of Interest

The Publications of Interest section contains bibliographical citations, abstracts if available, and links on specific topics and research problems of interest to the Science of Security community.

How recent are these publications?

These bibliographies cite recent scholarly research that has been presented or published within the past year. Some entries update work presented in previous years; others cover new topics.

How are topics selected?

The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness to current researchers.

How can I submit or suggest a publication?

Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.

Submissions and suggestions may be sent to: news@scienceofsecurity.net

(ID#:15-6152)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Attack Graphs and Privacy, 2014


Security analysts use attack graphs for detection, defense, and forensics. An attack graph is a representation of all paths through a system that end in a state where an intruder has successfully breached it. Privacy requirements add a complicating element to such traces. The research cited here looks at various aspects of attack graphs as they relate to privacy. All of these works were presented in 2014.
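
To make the definition concrete, the following minimal Python sketch (illustrative only; it is not drawn from any of the cited papers, and all node names are invented) models an attack graph as a directed graph and enumerates every attack path from an entry state to a breach state:

    # Attack graph as a directed graph: nodes are system states,
    # edges are exploit steps. All names here are hypothetical.
    attack_graph = {
        "internet":   ["web_server"],
        "web_server": ["app_server", "db_server"],
        "app_server": ["db_server"],
        "db_server":  [],  # breach state: attacker reaches sensitive data
    }

    def attack_paths(graph, state, goal, path=None):
        """Yield every cycle-free path from state to the breach state goal."""
        path = (path or []) + [state]
        if state == goal:
            yield path
            return
        for nxt in graph.get(state, []):
            if nxt not in path:  # avoid revisiting states
                yield from attack_paths(graph, nxt, goal, path)

    for p in attack_paths(attack_graph, "internet", "db_server"):
        print(" -> ".join(p))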


Kiremire, A.R.; Brust, M.R.; Phoha, V.V., "Topology-Dependent Performance of Attack Graph Reconstruction in PPM-Based IP Traceback," Consumer Communications and Networking Conference (CCNC), 2014 IEEE 11th, vol., no., pp. 363, 370, 10-13 Jan. 2014. doi:10.1109/CCNC.2014.6866596
Abstract: A variety of schemes based on the technique of Probabilistic Packet Marking (PPM) have been proposed to identify Distributed Denial of Service (DDoS) attack traffic sources by IP traceback. These PPM-based schemes provide a way to reconstruct the attack graph - the network path taken by the attack traffic - hence identifying its sources. Despite the large amount of research in this area, the influence of the underlying topology on the performance of PPM-based schemes remains an open issue. In this paper, we identify three network-dependent factors that affect different PPM-based schemes uniquely giving rise to a variation in and discrepancy between scheme performance from one network to another. Using simulation, we also show the collective effect of these factors on the performance of selected schemes in an extensive set of 60 Internet-like networks. We find that scheme performance is dependent on the network on which it is implemented. We show how each of these factors contributes to a discrepancy in scheme performance in large scale networks. This discrepancy is exhibited independent of similarities or differences in the underlying models of the networks.
Keywords: computer network security; graph theory; telecommunication network routing; DDoS attack traffic sources; Internet-like networks; PPM-based IP traceback; PPM-based schemes; attack graph reconstruction; distributed denial of service attack traffic sources; large scale networks; probabilistic packet marking; topology-dependent performance; Computer crime; Convergence; IP networks; Network topology; Privacy; Topology (ID#: 15-5986)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866596&isnumber=6866537 
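
As background for readers unfamiliar with PPM, the sketch below illustrates the basic node-sampling idea in Python (a deliberate simplification: the schemes the paper evaluates mark edges and distance fields, and the path, marking probability, and router names here are invented):

    import random
    from collections import Counter

    # One fixed attack path from attacker to victim (hypothetical routers).
    path = ["R1", "R2", "R3", "R4"]
    p = 0.04  # per-router marking probability

    def send_packet():
        mark = None
        for router in path:
            if random.random() < p:  # each router overwrites the mark w.p. p
                mark = router
        return mark

    marks = Counter(m for m in (send_packet() for _ in range(100000)) if m)
    # Marks from routers far from the victim are overwritten more often, so
    # the victim sees them less frequently; the frequencies order the path.
    print(marks)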

 

Datta, E.; Goyal, N., "Security Attack Mitigation Framework for the Cloud," Reliability and Maintainability Symposium (RAMS), 2014 Annual, vol., no., pp. 1, 6, 27-30 Jan. 2014. doi:10.1109/RAMS.2014.6798457
Abstract: Cloud computing brings in a lot of advantages for enterprise IT infrastructure; virtualization technology, which is the backbone of cloud, provides easy consolidation of resources, reduction of cost, space and management efforts. However, security of critical and private data is a major concern which still keeps back a lot of customers from switching over from their traditional in-house IT infrastructure to a cloud service. Existence of techniques to physically locate a virtual machine in the cloud, proliferation of software vulnerability exploits and cross-channel attacks in-between virtual machines, all of these together increases the risk of business data leaks and privacy losses. This work proposes a framework to mitigate such risks and engineer customer trust towards enterprise cloud computing. Every day, new vulnerabilities are being discovered even in well-engineered software products and the hacking techniques are getting sophisticated over time. In this scenario, absolute guarantee of security in enterprise wide information processing system seems a remote possibility; software systems in the cloud are vulnerable to security attacks. Practical solution for the security problems lies in well-engineered attack mitigation plan. At the positive side, cloud computing has a collective infrastructure which can be effectively used to mitigate the attacks if an appropriate defense framework is in place. We propose such an attack mitigation framework for the cloud. Software vulnerabilities in the cloud have different severities and different impacts on the security parameters (confidentiality, integrity, and availability). By using a Markov model, we continuously monitor and quantify the risk of compromise in different security parameters (e.g., change in the potential to compromise the data confidentiality). Whenever there is a significant change in risk, our framework would facilitate the tenants to calculate the Mean Time to Security Failure (MTTSF) of the cloud and allow them to adopt a dynamic mitigation plan. This framework is an add-on security layer in the cloud resource manager and it could improve customer trust in enterprise cloud solutions.
Keywords: Markov processes; cloud computing; security of data; virtualisation; MTTSF cloud; Markov model; attack mitigation plan; availability parameter; business data leaks; cloud resource manager; cloud service; confidentiality parameter; cross-channel attacks; customer trust; enterprise IT infrastructure; enterprise cloud computing; enterprise cloud solutions; enterprise wide information processing system; hacking techniques; information technology; integrity parameter; mean time to security failure; privacy losses; private data security; resource consolidation; security attack mitigation framework; security guarantee; software products; software vulnerabilities; software vulnerability exploits; virtual machine; virtualization technology; Cloud computing; Companies; Security; Silicon; Virtual machining; Attack Graphs; Cloud computing; Markov Chain; Security; Security Administration (ID#: 15-5987)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798457&isnumber=6798433
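
The MTTSF calculation the abstract alludes to can be illustrated with a toy absorbing Markov chain (the states and transition probabilities below are invented for illustration; the paper's model is richer):

    import numpy as np

    # States 0-2 are transient (healthy, probed, partially compromised);
    # state 3 (security failure) is absorbing.
    P = np.array([
        [0.90, 0.08, 0.02, 0.00],  # healthy
        [0.30, 0.50, 0.15, 0.05],  # probed
        [0.00, 0.20, 0.60, 0.20],  # partially compromised
        [0.00, 0.00, 0.00, 1.00],  # security failure (absorbing)
    ])

    Q = P[:3, :3]                     # transitions among transient states
    N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix
    mttsf = N.sum(axis=1)             # expected steps until absorption
    print(f"MTTSF from 'healthy': {mttsf[0]:.1f} steps")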

 

Sarkar, A.; Kohler, S.; Riddle, S.; Ludaescher, B.; Bishop, M., "Insider Attack Identification and Prevention Using a Declarative Approach," Security and Privacy Workshops (SPW), 2014 IEEE, vol., no., pp. 265, 276, 17-18 May 2014. doi:10.1109/SPW.2014.41
Abstract: A process is a collection of steps, carried out using data, by either human or automated agents, to achieve a specific goal. The agents in our process are insiders; they have access to different data and annotations on data moving in between the process steps. At various points in a process, they can carry out attacks on privacy and security of the process through their interactions with different data and annotations, via the steps which they control. These attacks are sometimes difficult to identify as the rogue steps are hidden among the majority of the usual non-malicious steps of the process. We define process models and attack models as data flow based directed graphs. An attack A is successful on a process P if there is a mapping relation from A to P that satisfies a number of conditions. These conditions encode the idea that an attack model needs to have a corresponding similarity match in the process model to be successful. We propose a declarative approach to vulnerability analysis. We encode the match conditions using a set of logic rules that define what a valid attack is. Then we implement an approach to generate all possible ways in which agents can carry out a valid attack A on a process P, thus informing the process modeler of vulnerabilities in P. The agents, in addition to acting by themselves, can also collude to carry out an attack. Once A is found to be successful against P, we automatically identify improvement opportunities in P and exploit them, eliminating ways in which A can be carried out against it. The identification uses information about which steps in P are most heavily attacked, and tries to find improvement opportunities in them first, before moving on to the lesser-attacked ones. We then evaluate the improved P to check if our improvement is successful. This cycle of process improvement and evaluation iterates until A is completely thwarted in all possible ways.
Keywords: computer crime; cryptography; data flow graphs; data privacy; directed graphs; logic programming; attack model; data flow based directed graphs; declarative approach; improvement opportunities; insider attack identification; insider attack prevention; logic rules; mapping relation; nonmalicious steps; privacy; process models; security; similarity match; vulnerability analysis; Data models; Diamonds; Impedance matching; Nominations and elections; Process control; Robustness; Security; Declarative Programming; Process Modeling; Vulnerability Analysis (ID#: 15-5988)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957311&isnumber=6957265

 

Peipei Yi; Zhe Fan; Shuxiang Yin, "Privacy-Preserving Reachability Query Services for Sparse Graphs," Data Engineering Workshops (ICDEW), 2014 IEEE 30th International Conference on, vol., no., pp. 32, 35, March 31 2014-April 4 2014. doi:10.1109/ICDEW.2014.6818298
Abstract: This paper studies privacy-preserving query services for reachability queries under the paradigm of data outsourcing. Specifically, graph data have been outsourced to a third-party service provider (SP), query clients submit their queries to the SP, and the SP returns the query answers. However, SP may not always be trustworthy. Therefore, this paper considers protecting the structural information of the graph data and the query answers from the SP. This paper proposes simple yet optimized privacy-preserving 2-hop labeling. In particular, this paper proposes that the encrypted intermediate results of encrypted query evaluation are indistinguishable. The proposed technique is secure under chosen plaintext attack. We perform an experimental study on the effectiveness of the proposed techniques on both real-world and synthetic datasets.
Keywords: cryptography; data privacy; graph theory; query processing; data outsourcing; optimized privacy-preserving 2-hop labeling; plaintext attack; privacy-preserving reachability query services; query clients; sparse graphs; structural information; third-party service provider; Bipartite graph; Communication networks; Cryptography; Educational institutions; Labeling; Privacy; Query processing (ID#: 15-5989)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6818298&isnumber=6816871
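
For readers unfamiliar with 2-hop labeling, the plaintext version of the idea is compact enough to sketch (the cited work additionally encrypts the labels; the tiny graph below, a -> h, b -> h, h -> c, is invented):

    # Each vertex v stores Lout(v), the hops it can reach, and Lin(v),
    # the hops that can reach it; u reaches v iff the sets intersect.
    Lout = {"a": {"a", "h"}, "b": {"b", "h"}, "h": {"h"}, "c": {"c"}}
    Lin  = {"a": {"a"}, "b": {"b"}, "h": {"h"}, "c": {"c", "h"}}

    def reaches(u, v):
        return bool(Lout[u] & Lin[v])  # non-empty intersection => reachable

    print(reaches("a", "c"))  # True  (a -> h -> c)
    print(reaches("c", "a"))  # False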

 

Young, A.L.; Yung, M., "The Drunk Motorcyclist Protocol for Anonymous Communication," Communications and Network Security (CNS), 2014 IEEE Conference on, vol., no., pp. 157, 165, 29-31 Oct. 2014. doi:10.1109/CNS.2014.6997482
Abstract: The buses protocol is designed to provide provably anonymous communication on a connected graph. Figuratively speaking, a bus is a single unit of transport containing multiple seats. Each seat carries a ciphertext from a sender to a receiver. The buses approach aims to conceal traffic patterns by having buses constantly travel along fixed routes and is a step forward in concealing traffic compared to other anonymous communication protocols. Therefore, in this day in which Internet privacy is crucial, it deserves further investigation. Here, we cryptanalyze the reduced-seat Buses protocol and we also present distinguishing attacks against the related Taxis protocol as well as P5. These attacks highlight the need to employ cryptosystems with key-privacy in such protocols. We then show that anonymity is not formally proven in the buses protocols. These findings motivate the need for a new provably secure connectionless anonymous messaging protocol. We present what we call the drunk motorcyclist (DM) protocol for anonymous messaging that overcomes these issues. We define the DM protocol, show a construction for it, and then prove that anonymity and confidentiality hold under Decision Diffie-Hellman (DDH) against global active adversaries. Our protocol demonstrates the new principle of flooding a complete graph or an expander graph with randomly walking ciphertexts that travel until their time-to-live values expire. This principle also exhibits fault-tolerance properties.
Keywords: Internet; computer network security; cryptographic protocols; electronic messaging; motorcycles; telecommunication traffic; DDH; DM protocol; Decision Diffie-Hellman; Internet privacy; Taxis protocol; anonymous communication protocol; ciphertext; complete graph; cryptosystem; drunk motorcyclist protocol; expander graph; fault tolerance properties; key privacy; provably secure connectionless anonymous messaging protocol; reduced-seat bus protocol cryptanalyzation; time-to-live values; traffic concealment pattern; Encryption; Generators; Protocols; Public key; Receivers (ID#:15-5990)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6997482&isnumber=6997445
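
The "randomly walking ciphertext" principle is easy to simulate in isolation. The sketch below strips away all cryptography and shows only the walk-until-TTL-expires transport (the graph, TTL, and node labels are invented; in the protocol itself every node attempts a trial decryption and only the true receiver succeeds):

    import random

    graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}

    def random_walk(src, receiver, ttl):
        node = src
        while ttl > 0:
            node = random.choice(graph[node])  # one random step
            if node == receiver:               # stands in for a successful
                return True                    # trial decryption
            ttl -= 1
        return False                           # TTL expired undelivered

    hits = sum(random_walk(0, 3, ttl=32) for _ in range(10000))
    print(f"delivery rate: {hits/10000:.2%}")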

 

Xin Hu; Ting Wang; Stoecklin, M.P.; Schales, D.L.; Jiyong Jang; Sailer, R., "Asset Risk Scoring in Enterprise Network with Mutually Reinforced Reputation Propagation," Security and Privacy Workshops (SPW), 2014 IEEE, vol., no., pp. 61, 64, 17-18 May 2014. doi:10.1109/SPW.2014.18
Abstract: Cyber security attacks are becoming ever more frequent and sophisticated. Enterprises often deploy several security protection mechanisms, such as anti-virus software, intrusion detection prevention systems, and firewalls, to protect their critical assets against emerging threats. Unfortunately, these protection systems are typically "noisy", e.g., regularly generating thousands of alerts every day. Plagued by false positives and irrelevant events, it is often neither practical nor cost-effective to analyze and respond to every single alert. The main challenge faced by enterprises is to extract important information from the plethora of alerts and to infer potential risks to their critical assets. A better understanding of risks will facilitate effective resource allocation and prioritization of further investigation. In this paper, we present MUSE, a system that analyzes a large number of alerts and derives risk scores by correlating diverse entities in an enterprise network. Instead of considering a risk as an isolated and static property, MUSE models the dynamics of a risk based on the mutual reinforcement principle. We evaluate MUSE with real-world network traces and alerts from a large enterprise network, and demonstrate its efficacy in risk assessment and flexibility in incorporating a wide variety of data sets.
Keywords: business data processing; firewalls; invasive software; risk analysis; MUSE; antivirus software; asset risk scoring; cyber security attacks; enterprise network; firewalls; intrusion detection-prevention systems; mutually reinforced reputation propagation; risk assessment; security protection mechanisms; Belief propagation; Bipartite graph; Data mining; Intrusion detection; Malware; Servers; Risk Scoring; mutually reinforced principles; reputation propagation (ID#: 15-5991)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957286&isnumber=6957265
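
A toy version of the mutual-reinforcement idea can be written in a few lines (the entities, priors, and blending weight below are invented; MUSE's actual propagation model is more elaborate):

    # A host's risk grows with the risk of the external entities it talks
    # to; a symmetric update of the external scores is omitted for brevity.
    hosts    = {"h1": 0.1, "h2": 0.1}
    externs  = {"e1": 0.9, "e2": 0.2}   # e.g., blacklist-derived priors
    talks_to = [("h1", "e1"), ("h1", "e2"), ("h2", "e2")]
    alpha = 0.5                         # blend of own score and neighbors

    for _ in range(10):                 # iterate toward a fixed point
        new_hosts = {}
        for h in hosts:
            nb = [externs[e] for x, e in talks_to if x == h]
            new_hosts[h] = alpha * hosts[h] + (1 - alpha) * sum(nb) / len(nb)
        hosts = new_hosts

    print(hosts)  # h1 inherits more risk than h2 via the riskier e1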

 

Shan Chang; Hongzi Zhu; Mianxiong Dong; Ota, K.; Xiaoqiang Liu; Guangtao Xue; Xuemin Shen, "BusCast: Flexible and Privacy Preserving Message Delivery Using Urban Buses," Parallel and Distributed Systems (ICPADS), 2014 20th IEEE International Conference on, vol., no., pp. 502, 509, 16-19 Dec. 2014. doi:10.1109/PADSW.2014.7097847
Abstract: With the popularity of intelligent mobile devices, enormous urban information has been generated and required by the public. In response, ShanghaiGrid (SG) aims to provide abundant information services to the public. With fixed schedules and urban-wide coverage, an appealing service in SG is to provide a free message delivery service to the public using buses, which allows mobile device users to send messages to locations of interest via buses. The main challenge in realizing this service is to provide an efficient routing scheme with privacy preservation under highly dynamic urban traffic conditions. In this paper, we present an innovative scheme, BusCast, to tackle this problem. In BusCast, buses can pick up and forward personal messages to their destination locations in a store-carry-forward fashion. For each message, BusCast conservatively associates a routing graph rather than a fixed routing path with the message in order to adapt to the dynamics of urban traffic. Meanwhile, the privacy information about the user and the message destination is concealed from both intermediate relay buses and outside adversaries. Both rigorous privacy analysis and extensive trace-driven simulations demonstrate the efficacy of the BusCast scheme.
Keywords: data privacy; traffic engineering computing; transportation; BusCast scheme; ShanghaiGrid; information service; intelligent mobile device; message destination; privacy analysis; privacy information; privacy preserving message delivery; routing graph; routing scheme; trace-driven simulation; urban bus; urban traffic condition; Bismuth; Delays; Relays; Routing; anonymous communication; backward unlinkability; message delivery; traffic analysis attacks; vehicular networks (ID#: 15-5992)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7097847&isnumber=7097773

 

Maag, M.L.; Denoyer, L.; Gallinari, P., "Graph Anonymization Using Machine Learning," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, vol., no., pp. 1111, 1118, 13-16 May 2014. doi:10.1109/AINA.2014.20
Abstract: Data privacy is a major problem that has to be considered before releasing datasets to the public or even to a partner company that would compute statistics or make a deep analysis of these data. This is ensured by performing data anonymization as required by legislation. In this context, many different anonymization techniques have been proposed in the literature. These methods are usually specific to a particular de-anonymization procedure, or attack, that one wants to avoid, and to a particular known set of characteristics that have to be preserved after the anonymization. They are difficult to use in a general context where attacks can be of different types, and where measures are not known to the anonymizer. The paper proposes a novel approach for automatically finding an anonymization procedure given a set of possible attacks and a set of measures to preserve. The approach is generic and based on machine learning techniques. It allows us to learn an anonymization function directly from a set of training data so as to optimize a trade-off between privacy risk and utility loss. The algorithm thus allows one to get a good anonymization procedure for any kind of attack and any characteristic in a given set. Experiments on two datasets show the effectiveness and the genericity of the approach.
Keywords: data privacy; graph theory; learning (artificial intelligence); risk management; data anonymization; de-anonymization procedure; graph anonymization; machine learning; privacy risk; training data; utility loss; Context; Data privacy; Loss measurement; Machine learning algorithms; Noise; Privacy; Social network services; Graph Anonymization; Machine Learning; Privacy (ID#:15-5993)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838788&isnumber=6838626
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
 


Big Data Security Metrics, 2014


Measurement is a hard problem in the Science of Security. When applied to Big Data, the problems of measurement in security systems are compounded. The works cited here address these problems and were presented in 2014. 


Kotenko, I.; Novikova, E., "Visualization of Security Metrics for Cyber Situation Awareness," Availability, Reliability and Security (ARES), 2014 Ninth International Conference on, vol., no., pp. 506 , 513, 8-12 Sept. 2014. doi:10.1109/ARES.2014.75
Abstract: One of the important directions of research in situational awareness is the implementation of visual analytics techniques, which can be efficiently applied when working with big security data in critical operational domains. The paper considers a visual analytics technique for displaying a set of security metrics used to assess overall network security status and evaluate the efficiency of protection mechanisms. The technique can assist in solving security tasks that are important for security information and event management (SIEM) systems. The approach suggested is suitable for displaying security metrics of large networks and supports historical analysis of the data. To demonstrate and evaluate the usefulness of the proposed technique, we implemented a use case corresponding to the Olympic Games scenario.
Keywords: Big Data; computer network security; data analysis; data visualisation; Olympic Games scenario; SIEM systems; big data security; cyber situation awareness; network security status; security information and event management systems; security metric visualization; visual analytics technique; Abstracts; Availability; Layout; Measurement; Security; Visualization; cyber situation awareness; high level metrics visualization; network security level assessment; security information visualization (ID#: 15-5776)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980325&isnumber=6980232 

 

Vaarandi, R.; Pihelgas, M., "Using Security Logs for Collecting and Reporting Technical Security Metrics," Military Communications Conference (MILCOM), 2014 IEEE, vol., no., pp. 294, 299, 6-8 Oct. 2014. doi:10.1109/MILCOM.2014.53
Abstract: During recent years, establishing proper metrics for measuring system security has received increasing attention. Security logs contain vast amounts of information which are essential for creating many security metrics. Unfortunately, security logs are known to be very large, making their analysis a difficult task. Furthermore, recent security metrics research has focused on generic concepts, and the issue of collecting security metrics with log analysis methods has not been well studied. In this paper, we will first focus on using log analysis techniques for collecting technical security metrics from security logs of common types (e.g., network IDS alarm logs, workstation logs, and NetFlow data sets). We will also describe a production framework for collecting and reporting technical security metrics which is based on novel open-source technologies for big data.
Keywords: Big Data; computer network security; big data; log analysis methods; log analysis techniques; open source technology; security logs; technical security metric collection; technical security metric reporting; Correlation; Internet; Measurement; Monitoring; Peer-to-peer computing; Security; Workstations; security log analysis; security metrics (ID#: 15-5777)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956774&isnumber=6956719 
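
The flavor of metric extraction the paper describes can be sketched in a few lines of Python (the log format and field names below are invented; real NIDS and NetFlow records differ):

    import re
    from collections import Counter

    log_lines = [
        "2014-10-06 12:00:01 alert ET SCAN nmap probe src=10.0.0.5",
        "2014-10-06 12:00:09 alert ET SCAN nmap probe src=10.0.0.5",
        "2014-10-06 13:41:30 alert ET POLICY cleartext login src=10.0.0.7",
    ]

    per_source = Counter()
    for line in log_lines:
        m = re.search(r"src=(\S+)", line)
        if m:
            per_source[m.group(1)] += 1

    # Example metric: IDS alarms per source host over the reporting window.
    for src, n in per_source.most_common():
        print(f"{src}: {n} alarms")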

 

Jiang, F.; Luo, D., "A New Coupled Metric Learning for Real-time Anomalies Detection with High-Frequency Field Programmable Gate Arrays," Data Mining Workshop (ICDMW), 2014 IEEE International Conference on, vol., no., pp. 1254, 1261, 14-14 Dec. 2014. doi:10.1109/ICDMW.2014.203
Abstract: Billions of internet end-users and device-to-device connections have contributed to significant data growth in recent years; large-scale, unstructured, heterogeneous data and the corresponding complexity present challenges to conventional real-time online fraud detection system security. With the advent of the big data era, data analytic techniques are expected to be much faster and more efficient than ever before. Moreover, one of the challenges with many modern algorithms is that they run too slowly in software to have any practical value. This paper proposes a Field Programmable Gate Array (FPGA)-based intrusion detection system (IDS), driven by a new coupled metric learning to discover the inter- and intra-coupling relationships against the growth of data volumes and item relationships to provide a new approach for efficient anomaly detection. This work is experimented on our previously published NetFlow-based IDS dataset, which is further processed into categorical data for coupled metric learning purposes. The overall performance of the new hardware system has been further compared with the presence of a conventional Bayesian classifier and a Support Vector Machines classifier. The experimental results show very promising performance by considering the coupled metric learning scheme in the FPGA implementation. The false alarm rate is successfully reduced to 5% while the high detection rate (≥99.9%) is maintained.
Keywords: Internet; data analysis; field programmable gate arrays; security of data; support vector machines; Bayesian classifier; FPGA-based intrusion detection system; Internet end-users; NetFlow-based IDS dataset; data analytic techniques; device to device connections; false alarm rate; high-frequency field programmable gate arrays; metric learning; real-time anomalies detection; real-time online fraud detection system security; support vector machines classifier; Field programmable gate arrays; Intrusion detection; Measurement; Neural networks; Real-time systems; Software; Vectors; Metric Learning; Field Programmable Gate Arrays; Netflow; Intrusion Detection Systems (ID#: 15-5778)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7022747&isnumber=7022545

 

Okuno, S.; Asai, H.; Yamana, H., "A Challenge of Authorship Identification for Ten-Thousand-Scale Microblog Users," Big Data (Big Data), 2014 IEEE International Conference on, vol., no., pp. 52, 54, 27-30 Oct. 2014. doi:10.1109/BigData.2014.7004491
Abstract: Internet security issues require authorship identification for all kinds of internet content; however, authorship identification for microblog users is much harder than for other documents because microblog texts are too short. Moreover, when the number of candidates becomes large, i.e., big data, identification takes a long time. Our proposed method solves these problems. The experimental results show that our method successfully identifies authorship with 53.2% precision out of 10,000 microblog users in almost half the execution time of the previous method.
Keywords: Big Data; security of data; social networking (online); Internet security issues; authorship identification; big data; microblog texts; ten-thousand-scale microblog users; Big data; Blogs; Computers; Distance measurement; Internet; Security; Training; Twitter; authorship attribution; authorship detection; authorship identification; microblog (ID#: 15-5779)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004491&isnumber=7004197

 

Yu Liu; Jianwei Niu; Lianjun Yang; Lei Shu, "eBPlatform: An IoT-based System for NCD Patients Homecare in China," Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 2448, 2453, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7037175
Abstract: The number of Non-communicable disease (NCD) patients in China is growing rapidly, which is far beyond the capacity of the national health and social security system. Community health stations do not have enough doctors to take care of their patients in traditional ways. In order to establish a bridge between doctors and patients, we propose eBPlatform, an information system based on Internet of Things (IoT) technology for homecare of NCD patients. The eBox is a sensor which can be deployed in the patient's home for blood pressure measurement, blood sugar measurement and ECG signal collection. Some services run on a remote server, which can receive the samples, then filter and analyze the ECG signals. The uploaded data are pushed to a web portal, with which doctors provide treatments online. The system requirements, design and implementation of hardware and software are discussed respectively. Finally, we investigate a case study with 50 NCD patients for half a year in Beijing. The results show that eBPlatform can increase the efficiency of doctors and make significant progress toward eliminating the numerical imbalance between community medical practitioners and NCD patients.
Keywords: Internet of Things; blood pressure measurement; diseases; electrocardiography; filtering theory; health care; medical information systems; medical signal processing; portals; signal sampling; Beijing; China; ECG signal analysis; ECG signal collection; ECG signal filtering; IoT-based system; NCD patient homecare; Web portal; blood pressure measurement; blood sugar measurement; community health stations; community medical practitioners; data upload; eBPlatform; eBox; hardware design; hardware implementation; information system; national health; noncommunicable disease patients; numerical imbalance elimination; online treatment; patient care; patient home; remote server; social security system; software design; software implementation; system requirements; Biomedical monitoring; Biosensors; Blood pressure; Electrocardiography; Medical services; Pressure measurement; Servers; IoT application; eHealth; patients homecare; sensor network (ID#: 15-5780)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7037175&isnumber=7036769 

 

Gao Hui; Niu Haibo; Luo Wei, "Internet Information Source Discovery Based on Multi-Seeds Cocitation," Security, Pattern Analysis, and Cybernetics (SPAC), 2014 International Conference on, vol., no., pp. 368, 371, 18-19 Oct. 2014. doi:10.1109/SPAC.2014.6982717
Abstract: The technology of Internet information source discovery on a specific topic is the groundwork of information acquisition in the current big data era. This paper presents a multi-seed cocitation algorithm to find new Internet information sources. The proposed algorithm is based on cocitation, but the difference from traditional algorithms is that we use multiple websites on a specific topic as input seeds. Then we introduce the Combined Cocitation Degree (CCD) to measure the relevancy of newly found websites: websites with a higher combined cocitation degree are more topic-related. Finally, the collection of websites with the biggest CCD is taken as the new Internet information sources on the specific topic. The experiments show that the proposed method outperforms traditional algorithms in the scenarios we tested.
Keywords: Big Data; Internet; Web sites; citation analysis; data mining; CCD; Internet information source discovery; Web sites; combined cocitation degree; information acquisition; multiseeds cocitation; relevancy measurement; Algorithm design and analysis; Big data; Charge coupled devices; Google; Internet; Noise; Web pages; big data; cocitation; information source; related website (ID#: 15-5781)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6982717&isnumber=6982642
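
The abstract does not spell out the CCD formula, but one plausible reading, counting the pages that co-cite a candidate site together with each seed, can be sketched as follows (all site and page names are invented):

    # cited_by maps a site to the set of pages that link to it.
    cited_by = {
        "seed1.example":      {"p1", "p2", "p3"},
        "seed2.example":      {"p2", "p3", "p4"},
        "candidateA.example": {"p2", "p3"},
        "candidateB.example": {"p4", "p5"},
    }
    seeds = ["seed1.example", "seed2.example"]

    def ccd(candidate):
        # Sum over seeds of the pages co-citing the candidate and the seed.
        return sum(len(cited_by[candidate] & cited_by[s]) for s in seeds)

    for c in ("candidateA.example", "candidateB.example"):
        print(c, ccd(c))  # candidateA is co-cited more, so more topic-related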

 

Si-Yuan Jing; Jin Yang; Kun She, "A Parallel Method for Rough Entropy Computation Using MapReduce," Computational Intelligence and Security (CIS), 2014 Tenth International Conference on, vol., no., pp. 707, 710, 15-16 Nov. 2014. doi:10.1109/CIS.2014.41
Abstract: Rough set theory has been proven to be a successful computational intelligence tool. Rough entropy is a basic concept in rough set theory, usually used to measure the roughness of an information set. Existing algorithms can only deal with small data sets. Therefore, this paper proposes a method for parallel computation of entropy using MapReduce, a hot topic in big data mining. Moreover, a corresponding algorithm is put forward to handle big data sets. Experimental results show that the proposed parallel method is effective.
Keywords: Big Data; data mining; entropy; mathematics computing; parallel programming; rough set theory; MapReduce; big data mining; big data set handling; computational intelligence tool; information set roughness measurement; parallel computation method; rough entropy computation; rough set theory; Big data; Clustering algorithms; Computers; Data mining; Entropy; Information entropy; Set theory; Data Mining; Entropy; Hadoop; MapReduce; Rough set theory (ID#: 15-5782)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7016989&isnumber=7016831 
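
Although the paper's rough-entropy definition is more specific, the MapReduce pattern it parallelizes can be illustrated with a plain information-entropy computation over equivalence classes (records and class labels below are invented; a real job would run the mapper and reducer on Hadoop):

    from collections import defaultdict
    from math import log2

    records = [("c1", 1)] * 50 + [("c2", 1)] * 30 + [("c3", 1)] * 20

    def mapper(record):
        cls, _ = record
        yield cls, 1                 # emit (equivalence class, count 1)

    def reducer(cls, counts):
        return cls, sum(counts)      # aggregate to class sizes

    shuffled = defaultdict(list)     # simulate the shuffle phase locally
    for rec in records:
        for k, v in mapper(rec):
            shuffled[k].append(v)

    sizes = dict(reducer(k, v) for k, v in shuffled.items())
    n = sum(sizes.values())
    entropy = -sum((s / n) * log2(s / n) for s in sizes.values())
    print(f"H = {entropy:.3f} bits")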

 

Agrawal, R.; Imran, A.; Seay, C.; Walker, J., "A Layer Based Architecture for Provenance in Big Data," Big Data (Big Data), 2014 IEEE International Conference on, vol., no., pp. 1, 7, 27-30 Oct. 2014. doi:10.1109/BigData.2014.7004468
Abstract: Big data is a new technology wave that makes the world awash in data. Various organizations accumulate data that are difficult to exploit. Government databases, social media, healthcare databases, etc., are examples of big data. Big data covers absorbing and analyzing huge amounts of data that may have originated or been processed outside of the organization. Data provenance can be defined as the origin and processing history of data. It carries significant information about a system. It can be useful for debugging, auditing, measuring performance and trust in data. Data provenance in big data is a relatively unexplored topic. It is necessary to appropriately track the creation and collection process of the data to provide context and reproducibility. In this paper, we propose an intuitive layer-based architecture for data provenance and visualization. In addition, we show a complete workflow of tracking provenance information of big data.
Keywords: Big Data; data visualisation; software architecture; auditing; data analysis; data origin; data processing; data provenance; data trust; data visualization; debugging; government databases; healthcare databases; layer based architecture; performance measurement; social media; system information; Big data; Computer architecture; Data models; Data visualization; Databases; Educational institutions; Security; Big data; Provenance; Query; Visualization (ID#: 15-5783)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004468&isnumber=7004197

 

Kiss, I.; Genge, B.; Haller, P.; Sebestyen, G., "Data Clustering-Based Anomaly Detection in Industrial Control Systems," Intelligent Computer Communication and Processing (ICCP), 2014 IEEE International Conference on, vol., no., pp. 275, 281, 4-6 Sept. 2014. doi:10.1109/ICCP.2014.6937009
Abstract: Modern Networked Critical Infrastructures (NCI), involving cyber and physical systems, are exposed to intelligent cyber attacks targeting the stable operation of these systems. In order to ensure anomaly awareness, the observed data can be used in accordance with data mining techniques to develop Intrusion Detection Systems (IDS) or Anomaly Detection Systems (ADS). There is an increase in the volume of sensor data generated by both cyber and physical sensors, so there is a need to apply Big Data technologies for real-time analysis of large data sets. In this paper, we propose a clustering based approach for detecting cyber attacks that cause anomalies in NCI. Various clustering techniques are explored to choose the most suitable for clustering the time-series data features, thus classifying the states and potential cyber attacks to the physical system. The Hadoop implementation of MapReduce paradigm is used to provide a suitable processing environment for large datasets. A case study on a NCI consisting of multiple gas compressor stations is presented.
Keywords: Big Data; control engineering computing; critical infrastructures; data mining; industrial control; pattern clustering; real-time systems; security of data; ADS; Big Data technology; Hadoop implementation; IDS; MapReduce paradigm; NCI; anomaly awareness; anomaly detection systems; clustering techniques; cyber and physical systems; cyber attack detection; cyber sensor; data clustering-based anomaly detection; data mining techniques; industrial control systems; intelligent cyber attacks; intrusion detection systems; large data sets; modern networked critical infrastructures; multiple gas compressor stations; physical sensor; real-time analysis; sensor data; time-series data feature; Big data; Clustering algorithms; Data mining; Density measurement; Security; Temperature measurement; Vectors; anomaly detection; big data; clustering; cyber-physical security; intrusion detection (ID#: 15-5784)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6937009&isnumber=6936959
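
A minimal version of the clustering-based detection loop looks like this (illustrative only: the data, the distance threshold, and the choice of k-means are ours; the paper compares several clustering techniques and runs them on Hadoop/MapReduce):

    import numpy as np

    # Two-feature sensor readings from normal operation (synthetic).
    rng = np.random.default_rng(0)
    normal = np.vstack([rng.normal(10, 1, (100, 2)),
                        rng.normal(20, 1, (100, 2))])

    # Tiny k-means (k=2), seeded with one point from each regime.
    centroids = np.array([normal[0], normal[150]])
    for _ in range(10):
        d = np.linalg.norm(normal[:, None] - centroids, axis=2)
        labels = d.argmin(axis=1)
        centroids = np.array([normal[labels == k].mean(axis=0)
                              for k in range(2)])

    def is_anomalous(x, threshold=4.0):
        # Flag readings far from every centroid of normal behavior.
        return np.linalg.norm(centroids - x, axis=1).min() > threshold

    print(is_anomalous(np.array([10.5, 9.8])))  # False: near a cluster
    print(is_anomalous(np.array([40.0, 3.0])))  # True: far from both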

 

Singhal, Rekha; Nambiar, Manoj; Sukhwani, Harish; Trivedi, Kishor, "Performability Comparison of Lustre and HDFS for MR Applications," Software Reliability Engineering Workshops (ISSREW), 2014 IEEE International Symposium on, vol., no., pp. 51, 51, 3-6 Nov. 2014. doi:10.1109/ISSREW.2014.115
Abstract: With its simple principles to achieve parallelism and fault tolerance, the Map-reduce framework has captured wide attention, from traditional high performance computing to marketing organizations. The most popular open source implementation of this framework is Hadoop. Today, the Hadoop stack comprises various software components, including the Hadoop Distributed File System (HDFS), the distributed storage layer, amongst others such as GPFS and WASB. Traditional high performance computing has always been at the forefront of developing and deploying cutting edge technology and solutions, such as Lustre, a parallel IO file system, to meet its ever growing needs. To support new and upcoming use cases, there is a focus on tighter integration of Hadoop with existing HPC stacks. In this paper, we share our work on one such integration by analyzing an FSI workload built using the map-reduce framework and evaluating the performance and reliability of the application on an integrated stack with Hadoop and Lustre, through Hadoop extensions such as the Hadoop Adapter for Lustre (HAL) and the HPC Adapter for MapReduce (HAM) developed by Intel, while comparing the performance against the Hadoop Distributed File System (HDFS). We also carried out performability analysis of both systems, where HDFS ensures reliability using a replication factor while Lustre does not replicate any data but ensures reliability by having multiple OSSs connecting to multiple OSTs. The environment used for this evaluation is a 16-node HDDP cluster hosted in the Intel Big Data Lab in Swindon (UK). The cluster was divided into two clusters: one 8-node cluster was set up with CDH 5.0.2 and HDFS, and another 8-node cluster was set up with CDH 5.0.2 connected to Lustre through Intel HAL. We use Intel Enterprise Edition for Lustre 2.0 for the experiment, based on Lustre 2.5. The Lustre setup includes 1 Meta Data Server (MDS) with 1 Meta Data Target (MDT) and 1 Management Target (MGT), and 4 Object Storage Servers (OSSs) with 6 Object Storage Targets (OSTs). Both systems were evaluated on the performance metric 'average query response time' for the FSI workload. The data is generated based on the FSI application schema, while MR jobs are written for a few functionalities/queries of the FSI application which are used for the evaluation exercise. Apart from single query execution, both systems were evaluated for concurrent workloads as well. Tests were run for application data volumes varying from 100 GB to 7 TB. From our experiments, with appropriate tuning of the Lustre file system, we observe that MR applications on the Lustre platform perform at least twice as well as on HDFS. We conducted performability analysis of both systems using a Markov Reward Model. We propose linear extrapolation for estimating average query execution time for states exhibiting failure of some nodes, and calculated the performability with the reward for working states as the average query execution time. We assume that the times to failure, failure detection, and repair of both compute nodes and data nodes are exponentially distributed, and took reasonable parameter values for the same. From our analysis, the expected query execution time for MR applications on the Lustre platform is at least half that of the applications on the HDFS platform.
Keywords: Artificial neural networks; Disk drives; File systems; Measurement; Random access memory; Security; Switches; HDFS; LUSTRE; MR applications; Performability; Performance; Query Execution Time (ID#: 15-5785)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6983800&isnumber=6983760
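
The performability figure described above is, at its core, a steady-state expected reward. A toy calculation (all states, probabilities, and query times below are invented) looks like this:

    # States are cluster configurations (how many data nodes are up),
    # the reward of a state is its average query execution time, and
    # performability is the steady-state expected reward.
    states       = ["8 nodes up", "7 nodes up", "6 nodes up"]
    steady_state = [0.95, 0.04, 0.01]   # long-run probability of each state
    query_time_s = [40.0, 46.0, 55.0]   # reward per state (seconds)

    performability = sum(p * r for p, r in zip(steady_state, query_time_s))
    print(f"expected query time: {performability:.1f} s")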
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Coding Theory and Security, 2014, Part 1


Coding theory is one of the essential pieces of information theory. More importantly, it is a core element in cryptography. The research work cited here looks at signal processing, crowdsourcing, matroid theory, WOM codes, and NP-hard problems. These works were presented or published in 2014.


Matsumoto, R., "Coding Theoretic Study of Secure Network Coding and Quantum Secret Sharing," Information Theory and its Applications (ISITA), 2014 International Symposium on, vol., no., pp. 335, 337, 26-29 Oct. 2014. doi: (not provided)
Abstract: The common goal of (classical) secure network coding and quantum secret sharing is to encode secret so that an adversary has as little information of the secret as possible. Both can be described by a nested pair of classical linear codes, while the strategies available to the adversary are different. The security properties of both schemes are closely related to combinatorial properties of the underlying linear codes. We survey connections among them.
Keywords: linear codes; network coding; quantum cryptography; telecommunication security; coding theoretic study; combinatorial properties; linear codes; quantum secret sharing; secure network coding; security properties; Australia; Cryptography; Hamming weight; Linear codes; Network coding; Quantum mechanics (ID#: 15-4842)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979860&isnumber=6979787 

 

Hibshi, H.; Breaux, T.; Riaz, M.; Williams, L., "Towards a Framework to Measure Security Expertise in Requirements Analysis," Evolving Security and Privacy Requirements Engineering (ESPRE), 2014 IEEE 1st Workshop on, vol., no., pp. 13, 18, 25-25 Aug. 2014. doi:10.1109/ESPRE.2014.6890522
Abstract: Research shows that commonly accepted security requirements are not generally applied in practice. Instead of relying on requirements checklists, security experts rely on their expertise and background knowledge to identify security vulnerabilities. To understand the gap between available checklists and practice, we conducted a series of interviews to encode the decision-making process of security experts and novices during security requirements analysis. Participants were asked to analyze two types of artifacts, source code and network diagrams, for vulnerabilities, and to apply a requirements checklist to mitigate some of those vulnerabilities. We framed our study using Situation Awareness, a cognitive theory from psychology, to elicit responses that we later analyzed using coding theory and grounded analysis. We report our preliminary results of analyzing two interviews that reveal possible decision-making patterns that could characterize how analysts perceive, comprehend and project future threats, which leads them to decide upon requirements and their specifications, in addition to how experts use assumptions to overcome ambiguity in specifications. Our goal is to build a model that researchers can use to evaluate their security requirements methods against how experts transition through different situation awareness levels in their decision-making process.
Keywords: decision making; formal specification; security of data; source code (software); coding theory; cognitive theory; decision-making patterns; decision-making process; grounded analysis; network diagrams; requirements checklist; security expertise; security experts; security requirements analysis; security vulnerabilities; situation awareness; source code; specifications ambiguity; Decision making; Encoding; Firewalls (computing); Interviews; Software; Uncertainty; Security; decision-making; patterns; requirements analysis; situation awareness (ID#: 15-4843)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890522&isnumber=6890516 

 

Shuiyin Liu; Yi Hong; Viterbo, E., "On Measures of Information Theoretic Security," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 309, 310, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970843
Abstract: While information-theoretic security is stronger than computational security, it has long been considered impractical. In this work, we provide new insights into the design of practical information-theoretic cryptosystems. Firstly, from a theoretical point of view, we give a brief introduction to the existing information theoretic security criteria, such as the notions of Shannon's perfect/ideal secrecy in cryptography and the concept of strong secrecy in coding theory. Secondly, from a practical point of view, we propose the concept of ideal secrecy outage and define an outage probability. Finally, we show how this probability can be made arbitrarily small in a practical cryptosystem.
Keywords: cryptography; information theory; Shannon perfect secrecy; computational security; ideal secrecy; information theoretic cryptosystem; information theoretic security; Australia; Cryptography; Entropy; Information theory; Probability; Vectors (ID#: 15-4844)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970843&isnumber=6970773 
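
For reference, the Shannon notions the abstract surveys have standard textbook formulations (stated here in their usual forms, not as the paper's contribution). Perfect secrecy requires the ciphertext C to be independent of the message M:

    I(M; C) = 0  \iff  H(M \mid C) = H(M),

while ideal secrecy asks that the equivocation never vanish no matter how much ciphertext the eavesdropper intercepts:

    \lim_{n \to \infty} H(M \mid C_1, \dots, C_n) > 0.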

 

Tao Fang; Min Li, "Controlled Quantum Secure Direct Communication Protocol Based on Extended Three-Particle GHZ State Decoy," Network-Based Information Systems (NBiS), 2014 17th International Conference on, vol., no., pp. 450, 454, 10-12 Sept. 2014. doi:10.1109/NBiS.2014.44
Abstract: An extended three-particle GHZ state decoy is introduced in controlled quantum secure direct communication to improve the eavesdropping detection probability and prevent the correlation-elicitation (CE) attack. Each particle of the extended three-particle GHZ state decoy is inserted into the sending particles to detect eavesdroppers, which reaches a 63% eavesdropping detection probability. The decoy particles also prevent the receiver from obtaining the correct correlation between particle 1 and particle 2 before the sender encodes on them, so that the receiver cannot get any secret information without the controller's permission. In the security analysis, the maximum amount of information that a qubit contains is obtained by introducing the entropy theory method, and two decoy strategies are compared quantitatively. If the eavesdroppers intend to eavesdrop on secret information, the per-qubit detection rate of using only two particles of the extended three-particle GHZ state as decoy is 58%, while the presented protocol, using three particles of the extended three-particle GHZ state as decoy, reaches 63% per qubit.
Keywords: entropy; probability; protocols; quantum communication; telecommunication security; controlled quantum secure direct communication protocol; correlation-elicitation attack; eavesdropping detection probability; entropy theory method; extended three-particle state decoy; per qubit detection rate; security analysis; Barium; Cryptography; Encoding; Protocols; Quantum mechanics; Receivers; CQSDC; decoy; eavesdropping detection; extend three-particle GHZ state; security (ID#: 15-4845)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023992&isnumber=7023898

 

Jiantao Zhou; Xianming Liu; Au, O.C.; Yuan Yan Tang, "Designing an Efficient Image Encryption-Then-Compression System via Prediction Error Clustering and Random Permutation," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 1, pp. 39, 50, Jan. 2014. doi:10.1109/TIFS.2013.2291625
Abstract: In many practical scenarios, image encryption has to be conducted prior to image compression. This has led to the problem of how to design a pair of image encryption and compression algorithms such that compressing the encrypted images can still be efficiently performed. In this paper, we design a highly efficient image encryption-then-compression (ETC) system, where both lossless and lossy compression are considered. The proposed image encryption scheme operated in the prediction error domain is shown to be able to provide a reasonably high level of security. We also demonstrate that an arithmetic coding-based approach can be exploited to efficiently compress the encrypted images. More notably, the proposed compression approach applied to encrypted images is only slightly worse, in terms of compression efficiency, than the state-of-the-art lossless/lossy image coders, which take original, unencrypted images as inputs. In contrast, most of the existing ETC solutions induce significant penalty on the compression efficiency.
Keywords: arithmetic codes; data compression; image coding; pattern clustering; prediction theory; random codes; ETC; arithmetic coding-based approach; image encryption-then-compression system design; lossless compression; lossless image coder; lossy compression; lossy image coder; prediction error clustering; random permutation; security; Bit rate; Decoding; Encryption; Image coding; Image reconstruction; Compression of encrypted image; encrypted domain signal processing (ID#: 15-4846)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6670767&isnumber=6684617 
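
The reason compression can survive encryption by permutation is that a permutation preserves symbol frequencies, so order-0 entropy coding still works; the quick Python check below makes the point (a deliberate simplification: the cited scheme permutes within prediction-error clusters rather than across a whole image):

    import random
    import zlib

    data = b"A" * 700 + b"B" * 200 + b"C" * 100
    permuted = bytes(random.sample(data, len(data)))  # random permutation

    # Runs compress extremely well; the permuted stream loses that
    # structure but keeps its histogram, so it still compresses.
    print(len(zlib.compress(data)))
    print(len(zlib.compress(permuted)))  # larger, yet well below len(data)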

 

Jianghua Zhong; Dongdai Lin, "Stability of Nonlinear Feedback Shift Registers," Information and Automation (ICIA), 2014 IEEE International Conference on, vol., no., pp. 671, 676, 28-30 July 2014. doi:10.1109/ICInfA.2014.6932738
Abstract: Convolutional codes are widely used in many applications such as digital video, radio, and mobile communication. Nonlinear feedback shift registers (NFSRs) are the main building blocks in many convolutional decoders. A decoding error may result in a succession of further decoding errors. However, a stable NFSR can limit such error propagation. This paper studies the stability of NFSRs using a Boolean network approach. A Boolean network is an autonomous system that evolves as an automaton through Boolean functions. An NFSR can be viewed as a Boolean network. Based on its Boolean network representation, some sufficient and necessary conditions are provided for globally (locally) stable NFSRs. To determine the global stability of an NFSR, the Boolean network approach requires lower time complexity than exhaustive search and Lyapunov's direct method.
Keywords: Boolean functions; automata theory; computational complexity;  shift registers; Boolean functions; Boolean network representation; Lyapunov direct method; NFSR; automaton; convolutional code; convolutional decoders; decoding error; digital video; error-propagation; exhaustive search; global stability; mobile communication; nonlinear feedback shift register stability; radio; time complexity; Boolean functions; Linear systems; Shift registers; Stability criteria; Time complexity; Transient analysis; Boolean function; Boolean network; Nonlinear feedback shift register; stability (ID#: 15-4847)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6932738&isnumber=6932615
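
A four-stage Fibonacci NFSR and its Boolean-network reading fit in a few lines (the feedback function below is invented for illustration):

    def nfsr_step(state):
        """state: bits (x0, x1, x2, x3); returns the next register state."""
        x0, x1, x2, x3 = state
        feedback = x0 ^ (x1 & x2) ^ x3  # nonlinear: contains an AND term
        return (x1, x2, x3, feedback)   # shift, feeding back the new bit

    # Viewed as a Boolean network, the NFSR is the map state -> nfsr_step(state)
    # on {0,1}^4; stability concerns the state-transition graph of this map.
    state = (1, 0, 0, 1)
    for _ in range(5):
        print(state)
        state = nfsr_step(state)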

 

Alodeh, M.; Chatzinotas, S.; Ottersten, B., "A Multicast Approach for Constructive Interference Precoding in MISO Downlink Channel," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 2534, 2538, June 29, 2014-July 4, 2014. doi:10.1109/ISIT.2014.6875291
Abstract: This paper studies the concept of jointly utilizing the data information (DI) and channel state information (CSI) in order to design symbol-level precoders for a multiple input and single output (MISO) downlink channel. In this direction, the interference among the simultaneous data streams is transformed to useful signal that can improve the signal to interference noise ratio (SINR) of the downlink transmissions. We propose a maximum ratio transmissions (MRT) based algorithm that jointly exploits DI and CSI to gain the benefits from these useful signals. In this context, a novel framework to minimize the power consumption is proposed by formalizing the duality between the constructive interference downlink channel and the multicast channels. The numerical results have shown that the proposed schemes outperform other state of the art techniques.
Keywords: channel coding; cochannel interference; multicast communication; precoding; telecommunication channels; MISO downlink channel; SINR; channel state information; constructive interference downlink channel; constructive interference precoding; data information; downlink transmissions; maximum ratio transmissions; multicast approach; multicast channels; multiple input and single output downlink channel; power consumption; signal to interference noise ratio; simultaneous data streams; symbol-level precoders; Correlation; Downlink; Information theory; Interference; Minimization; Signal to noise ratio; Vectors (ID#: 15-4848)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875291&isnumber=6874773

 

Aydin, A.; Alkhalaf, M.; Bultan, T., "Automated Test Generation from Vulnerability Signatures," Software Testing, Verification and Validation (ICST), 2014 IEEE Seventh International Conference on, vol., no., pp. 193, 202, March 31 2014-April 4 2014. doi:10.1109/ICST.2014.32
Abstract: Web applications need to validate and sanitize user inputs in order to avoid attacks such as Cross Site Scripting (XSS) and SQL Injection. Writing string manipulation code for input validation and sanitization is an error-prone process leading to many vulnerabilities in real-world web applications. Automata-based static string analysis techniques can be used to automatically compute vulnerability signatures (represented as automata) that characterize all the inputs that can exploit a vulnerability. However, there are several factors that limit the applicability of static string analysis techniques in general: 1) undecidability of static string analysis requires the use of approximations leading to false positives, 2) static string analysis tools do not handle all string operations, 3) the dynamic nature of scripting languages makes static analysis difficult. In this paper, we show that vulnerability signatures computed for deliberately insecure web applications (developed for demonstrating different types of vulnerabilities) can be used to generate test cases for other applications. Given a vulnerability signature represented as an automaton, we present algorithms for test case generation based on state, transition, and path coverage. These automatically generated test cases can be used to test applications that are not analyzable statically, and to discover attack strings that demonstrate how the vulnerabilities can be exploited.
Keywords: Web services; authoring languages; automata theory; digital signatures; program diagnostics; program testing; attack string discovery; automata-based static string analysis techniques; automated test case generation; automatic vulnerability signature computation; insecure Web applications; path coverage; scripting languages; state; static string analysis undecidability; transition; Algorithm design and analysis; Approximation methods; Automata; Databases; HTML; Security; Testing; automata-based test generation; string analysis; validation and sanitization; vulnerability signatures (ID#: 15-4849)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823881&isnumber=6823846
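
As a toy illustration of coverage-based test generation from an automaton, the Python sketch below runs a breadth-first search over a hypothetical signature DFA accepting any string that contains "<script", recording the shortest input reaching each state; the string reaching the accepting state is an attack-string test case. The pattern, the simplified restart logic (not full KMP), and the alphabet are all assumptions for illustration, not the paper's computed vulnerability signatures.

from collections import deque

PATTERN = "<script"       # hypothetical signature: accept if this substring occurs
ACCEPT = len(PATTERN)

def step(state, ch):
    if state == ACCEPT:
        return ACCEPT                    # absorbing accepting state
    if ch == PATTERN[state]:
        return state + 1                 # advance the partial match
    return 1 if ch == "<" else 0         # simplified restart (not full KMP)

def state_coverage_tests(alphabet="<scritpx"):
    """BFS that records the shortest input driving the DFA into each state."""
    shortest = {0: ""}
    queue = deque([0])
    while queue:
        q = queue.popleft()
        for ch in alphabet:
            nxt = step(q, ch)
            if nxt not in shortest:
                shortest[nxt] = shortest[q] + ch
                queue.append(nxt)
    return shortest

tests = state_coverage_tests()
print(tests[ACCEPT])    # shortest accepted attack string: '<script'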

 

Koyluoglu, O.O.; Rawat, A.S.; Vishwanath, S., "Secure Cooperative Regenerating Codes for Distributed Storage Systems," Information Theory, IEEE Transactions on, vol. 60, no. 9, pp. 5228, 5244, Sept. 2014. doi:10.1109/TIT.2014.2319271
Abstract: Regenerating codes enable trading off repair bandwidth for storage in distributed storage systems (DSS). Due to their distributed nature, these systems are intrinsically susceptible to attacks, and they may also be subject to multiple simultaneous node failures. Cooperative regenerating codes allow bandwidth efficient repair of multiple simultaneous node failures. This paper analyzes storage systems that employ cooperative regenerating codes that are robust to (passive) eavesdroppers. The analysis is divided into two parts, studying both minimum bandwidth and minimum storage cooperative regenerating scenarios. First, the secrecy capacity for minimum bandwidth cooperative regenerating codes is characterized. Second, for minimum storage cooperative regenerating codes, a secure file size upper bound and achievability results are provided. These results establish the secrecy capacity for the minimum storage scenario for certain special cases. In all scenarios, the achievability results correspond to exact repair, and secure file size upper bounds are obtained using min-cut analyses over a suitable secrecy graph representation of DSS. The main achievability argument is based on an appropriate precoding of the data to eliminate the information leakage to the eavesdropper.
Keywords: precoding; security of data; storage management; DSS; DSS secrecy graph representation; data precoding; distributed storage system; eavesdropper; min-cut analysis; minimum bandwidth cooperative regenerating code; minimum storage cooperative regenerating code; Bandwidth; Decision support systems; Encoding; Maintenance engineering; Resilience; Security; Upper bound; Coding for distributed storage systems; cooperative repair; minimum bandwidth cooperative regenerating (MBCR) codes; minimum storage cooperative regenerating (MSCR) codes; security (ID#: 15-4850)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6807720&isnumber=6878505

 

Geil, O.; Martin, S.; Matsumoto, R.; Ruano, D.; Yuan Luo, "Relative Generalized Hamming Weights of One-Point Algebraic Geometric Codes," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 137, 141, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970808
Abstract: Security of linear ramp secret sharing schemes can be characterized by the relative generalized Hamming weights of the involved codes [23], [22]. In this paper we elaborate on the implications of these parameters and devise a method to estimate their value for general one-point algebraic geometric codes. As demonstrated, for Hermitian codes our bound is often tight. Furthermore, for these codes the relative generalized Hamming weights are often much larger than the corresponding generalized Hamming weights.
Keywords: Hamming codes; algebraic geometric codes; security of data; Hermitian codes; general one-point algebraic geometric codes; linear ramp secret sharing schemes security; relative generalized Hamming weights; Cryptography; Galois fields; Geometry; Hamming weight; Linear codes; Vectors (ID#: 15-4851)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970808&isnumber=6970773

 

Poonia, A.S.; Singh, S., "Malware Detection by Token Counting," Contemporary Computing and Informatics (IC3I), 2014 International Conference on, vol., no., pp. 1285, 1288, 27-29 Nov. 2014. doi:10.1109/IC3I.2014.7019691
Abstract: Malicious software (or malware) is defined as software that fulfills the harmful intent of an attacker, and it is one of the most pressing and major security threats facing the Internet today. Antivirus companies typically have to deal with thousands of new malware samples every day. If antivirus software has a large database, the chances of false positives and false negatives increase, and storing the huge database of virus definitions is a very complex task. The new concept in this research paper is that, instead of storing complete signatures of a virus, we can store the various tokens and their frequencies in the program. This process uses only tokens of executable statements, so it poses no problem if dead code is present in the malware. The tokens are of two kinds: operators and operands. We can thus form a new type of malware signature that takes less space in the database and also yields fewer false negatives and false positives. The benefits of using the token concept include the following: less database storage memory is required; the estimated size of the malicious software can be calculated; the complexity of the malicious program is easy to estimate; and if the malicious program contains dead code or repeated statements, we can still find an accurate signature of the program by using executable statements only. By this process we can detect malicious code easily, with less database storage memory and in a more precise way.
Keywords: Internet; database management systems; invasive software; Internet; antivirus software; database storage memory; dead code; executable statements; malicious program; malicious software; malware detection; malware signature; security threats; token concept; token counting; virus definition; Complexity theory; Computers; Databases; Estimation; Malware; Software; Operand; Operator; Tokens; frequency; malicious code complexity (ID#: 15-4852)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019691&isnumber=7019573
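
A minimal sketch of the token-counting idea, assuming a crude regex tokenizer and a simple overlap score (both assumptions of this sketch; the paper restricts tokens to executable statements and classifies them strictly as operators and operands):

import re
from collections import Counter

def token_signature(code: str) -> Counter:
    """Frequency count of operator and operand tokens in a code string."""
    operators = re.findall(r"[+\-*/=<>!&|]+", code)
    operands = re.findall(r"\b\w+\b", code)
    return Counter(operators) + Counter(operands)

def similarity(sig_a: Counter, sig_b: Counter) -> float:
    """Overlap of two token-frequency signatures, in [0, 1]."""
    shared = sum((sig_a & sig_b).values())   # min count per shared token
    total = max(sum(sig_a.values()), sum(sig_b.values()))
    return shared / total if total else 0.0

known = token_signature("x = x + 1; send(x, server); x = x + 1;")
sample = token_signature("y = y + 1; send(y, server);")
print(f"match score: {similarity(known, sample):.2f}")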

 

Zonouz, S.; Rrushi, J.; McLaughlin, S., "Detecting Industrial Control Malware Using Automated PLC Code Analytics," Security & Privacy, IEEE, vol. 12, no. 6, pp. 40, 47, Nov.-Dec. 2014. doi:10.1109/MSP.2014.113
Abstract: The authors discuss their research on programmable logic controller (PLC) code analytics, which leverages safety engineering to detect and characterize PLC infections that target physical destruction of power plants. Their approach also draws on control theory, namely the field of engineering and mathematics that deals with the behavior of dynamical systems, to reverse-engineer safety-critical code to identify complex and highly dynamic safety properties for use in the hybrid code analytics approach.
Keywords: control engineering computing; industrial control; invasive software; production engineering computing; program diagnostics; programmable controllers; safety-critical software; automated PLC code analytics; control theory; hybrid code analytics approach; industrial control malware detection; programmable logic controllers; reverse-engineer safety-critical code; safety engineering; Computer security; Control systems; Energy management; Industrial control; Malware; Model checking; Process control; Reverse engineering; Safety; Safety devices; PLC code analytics; formal models; industrial control malware; model checking; process control systems; reverse engineering; safety-critical code; security (ID#: 15-4853)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7006408&isnumber=7006395

 

Koga, H.; Honjo, S., "A Secret Sharing Scheme Based on a Systematic Reed-Solomon Code and Analysis of Its Security for a General Class of Sources," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 1351, 1355, June 29 2014 - July 4 2014. doi:10.1109/ISIT.2014.6875053
Abstract: In this paper we investigate a secret sharing scheme based on a shortened systematic Reed-Solomon code. In the scheme, L secrets S1, S2, ..., SL and n shares X1, X2, ..., Xn satisfy certain n - k + L linear equations. The security of this ramp secret sharing scheme is analyzed in detail. We prove that the scheme realizes a (k, n)-threshold scheme for the case of L = 1 and a ramp (k, L, n)-threshold scheme for the case of 2 ≤ L ≤ k - 1 under a certain assumption on S1, S2, ..., SL.
Keywords: Reed-Solomon codes; telecommunication security; linear equations; ramp secret sharing scheme; shortened systematic Reed-Solomon code; Cryptography; Equations; Probability distribution; Random variables; Reed-Solomon codes (ID#: 15-4854)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875053&isnumber=6874773 
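
Since Shamir's scheme is exactly a Reed-Solomon-coded (k, n)-threshold scheme (the L = 1 case above), the following self-contained Python sketch illustrates the mechanics; the prime field, the evaluation points, and the L = 1 restriction are assumptions of this sketch and not the authors' shortened systematic construction.

import random

P = 2**127 - 1   # prime field modulus (an assumption for this sketch)

def make_shares(secret: int, k: int, n: int):
    """Degree-(k-1) random polynomial with constant term = secret,
    evaluated at x = 1..n; the shares form a Reed-Solomon codeword."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=42, k=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares suffice -> 42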

 

Mokhtar, M.A.; Gobran, S.N.; El-Badawy, E.-S.A.-M., "Colored Image Encryption Algorithm Using DNA Code and Chaos Theory," Computer and Communication Engineering (ICCCE), 2014 International Conference on, vol., no., pp. 12, 15, 23-25 Sept. 2014. doi:10.1109/ICCCE.2014.17
Abstract: DNA computing and chaos theory introduce promising research areas in the field of cryptography. In this paper, a stream cipher algorithm for image encryption is introduced. The chaotic logistic map is used for confusing and diffusing the image pixels, and then a DNA sequence is used as a one-time pad (OTP) to change pixel values. The introduced algorithm also shows perfect security as a result of using the OTP, and a good ability to resist statistical and differential attacks.
Keywords: biocomputing; cryptography; image colour analysis; DNA code; DNA computing; DNA sequence; OTP; chaos theory; chaotic logistic map; colored image encryption algorithm; cryptography; differential attacks; image pixels; one-time-pad; stream cipher algorithm; Abstracts; Ciphers; Computers; DNA; Encryption; Logistics; PSNR; Chaos theory; DNA cryptography; Image Encryption; Logistic map; one time pad OTP; stream Cipher; symmetrical encryption (ID#: 15-4855)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7031588&isnumber=7031550
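
A minimal sketch of the two ingredients named in the abstract, a logistic-map keystream and an OTP-style masking step, with the cipher bytes finally written out as a DNA string; the map parameters, the byte quantization, and the base coding rule are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

DNA = {0: "A", 1: "C", 2: "G", 3: "T"}   # one common 2-bit-to-base coding rule

def logistic_keystream(n_bytes, x0=0.3141, r=3.99):
    """Chaotic logistic map x <- r*x*(1-x), quantized to key bytes."""
    x, out = x0, np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt(pixels, x0=0.3141):
    """XOR pixel bytes with the chaotic keystream (the OTP-style step)."""
    return pixels.ravel() ^ logistic_keystream(pixels.size, x0)

img = np.array([[12, 200], [34, 99]], dtype=np.uint8)   # toy 2x2 "image"
cipher = encrypt(img)
print(encrypt(cipher))                                   # XOR again decrypts
print("".join(DNA[(b >> s) & 3] for b in cipher for s in (6, 4, 2, 0)))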

 

Liuyihan Song; Lei Xie; Huifang Chen; Kuang Wang, "A Feedback-Based Secrecy Coding Scheme Using Polar Code over Wiretap Channels," Wireless Communications and Signal Processing (WCSP), 2014 Sixth International Conference on, vol., no., pp. 1, 6, 23-25 Oct. 2014. doi:10.1109/WCSP.2014.6992177
Abstract: Polar codes can be used to achieve the secrecy capacity of degraded wiretap channels. In this paper, we propose a feedback-based secrecy coding scheme using polar codes over non-degraded wiretap channels. With the feedback architecture, the proposed secrecy coding scheme can achieve a significantly positive secrecy rate. Moreover, polar codes have low encoding and decoding complexity, which makes them practical to implement. Simulation results show that the proposed feedback-based secrecy coding scheme using polar codes can transmit confidential messages reliably and securely. Moreover, the impact of the conditions of the forward and feedback channels on the performance of the proposed secrecy coding scheme is analyzed.
Keywords: channel capacity; channel coding; decoding; feedback; telecommunication network reliability; telecommunication security; decoding; degraded wiretap channel; encoding; feedback architecture; feedback channel; feedback-based secrecy coding scheme; forward channel; nondegraded wiretap channel; polar code; reliability; secure communication; Channel coding; Decoding; Member and Geographic Activities Board committees; Reliability theory; Security; Polar code; feedback; non-degraded wiretap channels; secrecy code (ID#: 15-4856)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6992177&isnumber=6992003
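
For orientation, the sketch below implements the basic polar transform over GF(2), the encoder that this scheme shares with ordinary polar coding, and then mimics the wiretap bit allocation: a message bit on an index assumed good for Bob only, random bits on indices assumed good for both Bob and Eve, zeros (frozen bits) elsewhere. The index sets here are hand-picked assumptions; a real design derives them from the channel qualities.

import numpy as np

def polar_transform(u):
    """Recursive application of the 2x2 kernel [[1,0],[1,1]] over GF(2)."""
    n = len(u)
    if n == 1:
        return u.copy()
    top = polar_transform(u[:n // 2] ^ u[n // 2:])   # combine step
    bot = polar_transform(u[n // 2:])
    return np.concatenate([top, bot])

n = 8
u = np.zeros(n, dtype=np.uint8)
u[7] = 1                                  # secret message bit (assumed index)
for i in (3, 5, 6):                       # randomized bits (assumed indices)
    u[i] = np.random.randint(0, 2)
print(polar_transform(u))                 # transmitted codeword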

 

Bin Dai; Zheng Ma, "Feedback Enhances the Security of Degraded Broadcast Channels with Confidential Messages and Causal Channel State Information," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 411, 415, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970864
Abstract: In this paper, we investigate the degraded broadcast channels with confidential messages (DBC-CM), causal channel state information (CSI), and with or without noiseless feedback. Inner and outer bounds on the capacity-equivocation region are given for the non-feedback model, and the capacity-equivocation region is determined for the feedback model. We find that by using this noiseless feedback, the achievable rate-equivocation region (inner bound on the capacity-equivocation region) of the DBC-CM with causal CSI is enhanced.
Keywords: broadcast channels; channel capacity; channel coding; feedback; telecommunication security; DBC-CM; capacity-equivocation region; channel state information; confidential messages; degraded broadcast channels; noiseless feedback; rate-equivocation region; Decoding; Joints; Random variables; Receivers; Silicon; Transmitters; Zinc; Broadcast channel; channel state information; confidential message; feedback (ID#: 15-4857)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970864&isnumber=6970773

 

Abuzainab, N.; Ephremides, A., "Secure Distributed Information Exchange," Information Theory, IEEE Transactions on, vol. 60, no. 2, pp. 1126, 1135, Feb. 2014. doi:10.1109/TIT.2013.2290992
Abstract: We consider the problem of streaming a file by exchanging information over wireless channels in the presence of an eavesdropper. We utilize private and public channels and wish to minimize the use of the (more expensive) private channel subject to a required level of security. We consider both single and multiple users and compare simple ARQ and deterministic network coding as methods of transmission.
Keywords: automatic repeat request; network coding; wireless channels; deterministic network coding; exchanging information; private channels; public channels; secure distributed information exchange; simple ARQ; wireless channels; Automatic repeat request; Delays; Equations; Fading; Network coding; Security; Vectors; Privacy; QoS; energy efficiency; network coding (ID#: 15-4858)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6665039&isnumber=6714461

 

Porzio, A., "Quantum Cryptography: Approaching Communication Security from a Quantum Perspective," Photonics Technologies, 2014 Fotonica AEIT Italian Conference on, vol., no., pp. 1, 4, 12-14 May 2014. doi:10.1109/Fotonica.2014.6843831
Abstract: Quantum cryptography aims at solving the everlasting problem of unconditional security in private communication. Every time we send personal information over a telecom channel, a sophisticated algorithm protects our privacy by making our data unintelligible to unauthorized receivers. These protocols resulted from the long history of cryptography. The security of modern cryptographic systems is guaranteed by complexity: the computational power that would be needed to gain information on the code key largely exceeds what is available. The security of actual crypto systems is thus not "by principle" but "practical." On the contrary, quantum technology promises to make it possible to realize provably secure protocols. Quantum cryptology exploits paradigmatic aspects of quantum mechanics, such as the superposition principle and uncertainty relations. In this contribution, after a brief historical introduction, we aim at giving a survey of the physical principles underlying the quantum approach to cryptography. Then, we analyze a possible continuous variable protocol.
Keywords: cryptographic protocols; data privacy; quantum cryptography; quantum theory; telecommunication security; code key; computational power; continuous variable protocol; privacy protection; quantum cryptography; quantum cryptology; quantum mechanics; quantum technology; superposition principle; uncertainty relations; unconditional private communication security; Cryptography; History; Switches; TV; Continuous Variable; Quantum cryptography (ID#: 15-4859)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843831&isnumber=6843815

 

Liang Chen, "Secure Network Coding for Wireless Routing," Communications (ICC), 2014 IEEE International Conference on, vol., no., pp. 1941,1946, 10-14 June 2014. doi:10.1109/ICC.2014.6883607
Abstract: Nowadays networking is considered secure because we encrypt confidential messages under the assumption that adversaries in the network are computationally bounded. In traditional routing or network coding, routers know the contents of the packets they receive, so networking is no longer secure if there are eavesdroppers with unbounded computational power at the routers. Our concern is whether we can achieve stronger security at routers. This paper proposes secure network coding for wireless routing. Combining channel coding and network coding, this scheme can not only provide physical layer security at wireless routers but also forward data error-free at a high rate. In the paper we prove that this scheme can be applied to general networks for secure wireless routing.
Keywords: channel coding; telecommunication network routing; channel coding; forward data error-free; physical layer security; secure network coding; secure wireless routing; Communication system security; Network coding; Protocols; Relays; Routing; Security; Throughput; information-theoretic secrecy; network coding; network information theory; physical-layer security; wireless routing (ID#: 15-4860)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883607&isnumber=6883277
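
The classical way a random key enables secrecy in network coding (due to Cai and Yeung) can be shown in a few lines: over two edge-disjoint paths, send the message XORed with a fresh key on one path and the key itself on the other, so a wiretapper on any single path sees only uniform randomness. This is a sketch of that general idea only; the paper's contribution, combining channel coding with network coding for physical-layer security, is not reproduced here.

import secrets

def secure_split(message: bytes):
    """Source node: mix the message with a one-time random key and route
    the two mixtures over edge-disjoint paths."""
    key = secrets.token_bytes(len(message))
    path_a = bytes(m ^ k for m, k in zip(message, key))   # path 1 carries m XOR k
    path_b = key                                          # path 2 carries k
    return path_a, path_b

def sink_decode(path_a: bytes, path_b: bytes) -> bytes:
    """Sink node: XOR the packets from both paths to recover the message."""
    return bytes(a ^ b for a, b in zip(path_a, path_b))

a, b = secure_split(b"confidential")
print(sink_decode(a, b))    # b'confidential'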

 

Thangaraj, A., "Coding for Wiretap Channels: Channel Resolvability and Semantic Security," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 232, 236, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970827
Abstract: Wiretap channels form the most basic building block of physical-layer and information-theoretic security. Considerable research work has gone into the information-theoretic, cryptographic and coding aspects of wiretap channels in the last few years. The main goal of this tutorial article is to provide a self-contained presentation of two recent results - one is a new and simplified proof for secrecy capacity using channel resolvability, and the other is the connection between semantic security and information-theoretic strong secrecy.
Keywords: channel coding; cryptography; information theory; telecommunication security; channel resolvability; coding aspects; cryptography; information-theoretic security; physical-layer; secrecy capacity; semantic security; wiretap channels coding; Cryptography; Encoding; Semantics; Standards; Zinc (ID#: 15-4861)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970827&isnumber=6970773
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Coding Theory and Security, 2014, Part 2

 

 
SoS Logo

Coding Theory and Security, 2014

Part 2


Coding theory is one of the essential pieces of information theory. More important, coding theory is a core element in cryptography. The research work cited here looks at signal processing, crowdsourcing, matroid theory, WOM codes, and the NP-hard problem. These works were presented or published in 2014.


Okamoto, K.; Homma, N.; Aoki, T.; Morioka, S., "A Hierarchical Formal Approach to Verifying Side-Channel Resistant Cryptographic Processors," Hardware-Oriented Security and Trust (HOST), 2014 IEEE International Symposium on, vol., no., pp. 76, 79, 6-7 May 2014. doi:10.1109/HST.2014.6855572
Abstract: This paper presents a hierarchical formal verification method for cryptographic processors based on a combination of a word-level computer algebra procedure and a bit-level decision procedure using PPRM (Positive Polarity Reed-Muller) expansion. In the proposed method, the entire datapath structure of a cryptographic processor is described in the form of a hierarchical graph. The correctness of the entire circuit function is verified on this graph representation by the algebraic method, and the function of each component is verified by the PPRM method. We have applied the proposed verification method to a complicated AES (Advanced Encryption Standard) circuit with a masking countermeasure against side-channel attack. The results show that the proposed method can verify such a practical circuit automatically within 4 minutes, while conventional methods fail.
Keywords: Reed-Muller codes; cryptography; digital arithmetic; formal verification; graph theory; process algebra; AES circuit; PPRM; advanced encryption standard circuit; algebraic method; bit-level decision procedure; circuit function; datapath structure; graph representation; hierarchical formal approach; hierarchical formal verification method; hierarchical graph; positive polarity Reed-Muller expansion; side-channel attack; side-channel resistant cryptographic processors; word-level computer algebra procedure; Algebra; Computers; Cryptography; Polynomials; Program processors; Resistance; Galois fields; arithmetic circuits; cryptographic processors; design methodology for secure hardware; formal design (ID#: 15-4862)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855572&isnumber=6855557
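
The PPRM (positive polarity Reed-Muller) expansion mentioned in the abstract is the algebraic normal form of a Boolean function, computable from its truth table by the binary Moebius transform; the sketch below shows that computation on a toy function (the paper's full verification flow, checking a masked AES datapath against such expansions, is far beyond this).

def pprm_coefficients(truth_table):
    """PPRM/ANF coefficients via the binary Moebius transform:
    coeff[m] == 1 means the monomial with variable mask m is present."""
    n_vars = (len(truth_table) - 1).bit_length()
    coeff = list(truth_table)
    for i in range(n_vars):
        for m in range(len(coeff)):
            if m & (1 << i):
                coeff[m] ^= coeff[m ^ (1 << i)]
    return coeff

# Truth table of f(x1, x0) = x0 XOR x1, indexed by the bits (x1 x0).
print(pprm_coefficients([0, 1, 1, 0]))   # -> [0, 1, 1, 0]: f = x0 + x1 (mod 2)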

 

Zamani, S.; Javanmard, M.; Jafarzadeh, N.; Zamani, M., "A Novel Image Encryption Scheme Based on Hyper Chaotic Systems and Fuzzy Cellular Automata," Electrical Engineering (ICEE), 2014 22nd Iranian Conference on, vol., no., pp. 1136, 1141, 20-22 May 2014. doi:10.1109/IranianCEE.2014.6999706
Abstract: A new image encryption scheme based on hyper chaotic systems and fuzzy cellular automata is proposed in this paper. Hyper chaotic systems have more complex dynamical characteristics than chaos systems, which makes them a better choice for secure image encryption schemes. Four hyper chaotic systems are used to improve the security and speed of the algorithm in this approach. First, the image is divided into four sub-images, each with its own hyper chaotic system. In the shuffling phase, pixels in two adjacent sub-images are selected for changing their positions based upon the numbers generated by their hyper chaotic systems. Five 1D non-uniform fuzzy cellular automata are used in the encryption phase. The rule used to encrypt a cell is selected based upon the cell's right neighbor. By utilizing two different encryption methods for odd and even cells, the problem of being limited to recursive rules in the rule-selection process of these FCAs is solved. The results of implementing this scheme on images from the USC-SIPI database show that our method has high security and advantages such as confusion, diffusion, and sensitivity to small changes in the key.
Keywords: cellular automata; cryptography; fuzzy set theory; image coding; 1D nonuniform fuzzy cellular automata; FCA; dynamical characteristic; hyperchaotic system; image encryption; rule selecting process; shuffling phase; Automata; Chaos; Correlation; Encryption; Entropy; FCA; Hyper Chaotic System; Image encryption; Lorenz System; Non-uniform Cellular Automata (ID#: 15-4863)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6999706&isnumber=6999486

 

Baheti, A.; Singh, L.; Khan, A.U., "Proposed Method for Multimedia Data Security Using Cyclic Elliptic Curve, Chaotic System, and Authentication Using Neural Network," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, vol., no., pp. 664, 668, 7-9 April 2014. doi:10.1109/CSNT.2014.139
Abstract: As multimedia applications are used increasingly, the security of images becomes an important issue. The combination of chaotic theory and cryptography forms an important field of information security. In the past decade, chaos-based image encryption has been given much attention in information security research, and many image encryption algorithms based on chaotic maps have been proposed. However, most of them degrade system performance and security, and suffer from the small key space problem. This paper introduces an efficient symmetric encryption scheme based on a cyclic elliptic curve and a chaotic system that can overcome these disadvantages. The cipher encrypts 256 bits of plain image to 256 bits of cipher image within eight 32-bit registers. The scheme generates pseudorandom bit sequences for round keys based on a piecewise nonlinear chaotic map. The generated sequences are then mixed with the key sequences derived from the cyclic elliptic curve points. The proposed algorithm has a good encryption effect, a large key space, and high sensitivity to small changes in secret keys, and is fast compared to other competitive algorithms.
Keywords: image coding; multimedia computing; neural nets; public key cryptography; authentication; chaos based image encryption; chaotic maps; chaotic system;  chaotic theory; competitive algorithms; cryptography; cyclic elliptic curve points; encryption effect; image encryption algorithms; information security; multimedia applications; multimedia data security; neural network; piecewise nonlinear chaotic map; pseudorandom bit sequences; small key space problem; system performance; Authentication; Chaotic communication; Elliptic curves; Encryption; Media; Multimedia communication; authentication; chaos; decryption; encryption; neural network (ID#: 15-4864)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821481&isnumber=6821334

 

Lashgari, S.; Avestimehr, A.S., "Blind Wiretap Channel with Delayed CSIT," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 36, 40, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874790
Abstract: We consider the Gaussian wiretap channel with a transmitter, a legitimate receiver, and k eavesdroppers (k ∈ ℕ), where the secure communication is aided via a jammer. We focus on the setting where the transmitter and the jammer are blind with respect to the state of channels to eavesdroppers, and only have access to delayed channel state information (CSI) of the legitimate receiver, which is referred to as “blind cooperative wiretap channel with delayed CSIT.” We show that a strictly positive secure Degrees of Freedom (DoF) of 1/3 is achievable irrespective of the number of eavesdroppers (k) in the network, and further, 1/3 is optimal assuming linear coding strategies at the transmitters. The converse proof is based on two key lemmas. The first lemma, named Rank Ratio Inequality, shows that if two distributed transmitters employ linear strategies, the ratio of the dimensions of received linear sub-spaces at the two receivers cannot exceed 3/2, due to delayed CSI. The second lemma implies that once the transmitters in a network have no CSI with respect to a receiver, the least amount of alignment will occur at that receiver, meaning that transmit signals will occupy the maximal signal dimensions at that receiver. Finally, we show that once the transmitter and the jammer form a single transmitter with two antennas, which we refer to as MISO wiretap channel, 1/2 is the optimal secure DoF when using linear schemes.
Keywords: Gaussian channels; jamming; linear codes; radio transceivers; telecommunication security; transmitting antennas; CSI; DoF; Gaussian wiretap channel; MISO wiretap channel; antennas; blind cooperative wiretap channel; communication security; degrees of freedom; delayed CSIT; delayed channel state information; distributed transmitter; eavesdroppers; jammer; key lemmas; legitimate receiver; linear coding strategy; linear subspaces; rank ratio inequality; signal dimensions; transmit signals; Encoding; Jamming; Noise; Receivers; Transmitters; Vectors (ID#: 15-4865)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874790&isnumber=6874773

 

Kochman, Y.; Ligong Wang; Wornell, G.W., "Toward Photon-Efficient Key Distribution over Optical Channels," Information Theory, IEEE Transactions on, vol. 60, no. 8, pp. 4958, 4972, Aug. 2014. doi:10.1109/TIT.2014.2331060
Abstract: This paper considers the distribution of a secret key over an optical (bosonic) channel in the regime of high photon efficiency, i.e., when the number of secret key bits generated per detected photon is high. While, in principle, the photon efficiency is unbounded, there is an inherent tradeoff between this efficiency and the key generation rate (with respect to the channel bandwidth). We derive asymptotic expressions for the optimal generation rates in the photon-efficient limit, and propose schemes that approach these limits up to certain approximations. The schemes are practical, in the sense that they use coherent or temporally entangled optical states and direct photodetection, all of which are reasonably easy to realize in practice, in conjunction with off-the-shelf classical codes.
Keywords: approximation theory; private key cryptography; quantum cryptography; quantum entanglement; approximations; asymptotic expressions; bosonic channel; channel bandwidth; coherent entangled optical states; direct photodetection; key generation rate; off-the-shelf classical codes; optical channels; optimal generation rates; photon-efficient key distribution; secret key distribution; temporally entangled optical states; Hilbert space; Optical receivers; Optical sensors; Photonics; Protocols; Quantum entanglement; Information-theoretic security; key distribution; optical communication; wiretap channel (ID#: 15-4866)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6835214&isnumber=6851961

 

Fuchun Lin; Cong Ling; Belfiore, J.-C., "Secrecy Gain, Flatness Factor, and Secrecy-Goodness of Even Unimodular Lattices," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 971, 975, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874977
Abstract: Nested lattices Λe ⊂ Λb have previously been studied for coding in the Gaussian wiretap channel and two design criteria, namely, the secrecy gain and flatness factor, have been proposed to study how the coarse lattice Λe should be chosen so as to maximally conceal the message against the eavesdropper. In this paper, we study the connection between these two criteria and show the secrecy-goodness of even unimodular lattices, which means exponentially vanishing flatness factor as the dimension grows.
Keywords: Gaussian channels; encoding; Gaussian wiretap channel; coding; flatness factor; secrecy gain; secrecy-goodness; unimodular lattices; Educational institutions; Encoding; Lattices; Security; Vectors; Zinc (ID#: 15-4867)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874977&isnumber=6874773

 

Renna, F.; Laurenti, N.; Tomasin, S., "Achievable Secrecy Rates over MIMOME Gaussian Channels with GMM Signals in Low-Noise Regime," Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE), 2014 4th International Conference on, vol., no., pp. 1, 5, 11-14 May 2014. doi:10.1109/VITAE.2014.6934464
Abstract: We consider a wiretap multiple-input multiple-output multiple-eavesdropper (MIMOME) channel, where agent Alice aims at transmitting a secret message to agent Bob, while leaking no information on it to an eavesdropper agent, Eve. We assume that Alice has more antennas than both Bob and Eve, and that she has only statistical knowledge of the channel towards Eve. We focus on the low-noise regime, and assess the secrecy rates that are achievable when the secret message determines the distribution of a multivariate Gaussian mixture model (GMM) from which a realization is generated and transmitted over the channel. In particular, we show that if Eve has fewer antennas than Bob, secret transmission is always possible at low noise. Moreover, we show that in the low-noise limit the secrecy capacity of our scheme coincides with its unconstrained capacity, by providing a class of covariance matrices that allow such a limit to be attained without the need of wiretap coding.
Keywords: Gaussian channels; Gaussian processes; MIMO communication; covariance matrices; GMM signals; MIMOME Gaussian channels; achievable secrecy rates; covariance matrices; low-noise regime; multivariate Gaussian mixture model; secrecy capacity; secret transmission; statistical knowledge; wiretap multiple-input multiple-output multiple-eavesdropper channel; Antennas; Covariance matrices; Encoding; Entropy; Gaussian distribution; Signal to noise ratio; Vectors; Physical Layer Security; Secrecy Capacity; multiple-input multiple-output multiple-eavesdropper (MIMOME) Channels (ID#: 15-4868)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934464&isnumber=6934393

 

James, S.P.; George, S.N.; Deepthi, P.P., "An Audio Encryption Technique Based on LFSR Based Alternating Step Generator," Electronics, Computing and Communication Technologies (IEEE CONECCT), 2014 IEEE International Conference on, vol., no., pp. 1, 6, 6-7 Jan. 2014. doi:10.1109/CONECCT.2014.6740185
Abstract: In this paper, a novel method of encrypting encoded audio data based on LFSR-based keystream generators is presented. The alternating step generator (ASG) is selected as the keystream generator for this application. Since the ASG is vulnerable to the improved linear consistency attack, it is proposed to incorporate some nonlinearity into the stop/go LFSRs of the ASG so that the modified ASG can withstand it. In the proposed approach, selected bits of each frame of the encoded audio data are encrypted with the keystream generated by the modified ASG. In order to overcome known-plaintext attacks, it is proposed to use different keystreams for different frames of the audio data. The long keystream generated by the modified ASG is divided into smaller keystreams that serve as the different keystreams for the different frames. The proposed encryption approach can be applied to any audio coding system while maintaining standard compatibility. The number of encrypted bits controls the degree of degradation of the audio quality. The performance of the proposed encryption method is verified with MP3-coded audio data, and it is shown to provide better security than existing methods with much less system complexity.
Keywords: audio coding; cryptography; ASG; LFSR-based alternating step generator; LFSR-based key stream generators; MP3 coded audio data; alternating step generator; audio coding system; audio data frames; audio quality; encoded audio data encryption technique; improved linear consistency attack; known plaintext attack; modified ASG; standard compatibility; stop-go LFSR; system complexity; Clocks; Complexity theory; Cryptography; Filtering; Linearity; Zinc (ID#: 15-4869)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6740185&isnumber=6740167
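
For reference, this is the plain alternating step generator the paper starts from, in Python: a control LFSR decides which of two stop/go LFSRs is clocked, and the keystream bit is the XOR of their latest outputs. Register sizes, taps, and seeds are toy assumptions (real deployments need primitive feedback polynomials and much longer registers), and the paper's added nonlinearity in the stop/go LFSRs is not included.

def make_lfsr(seed: int, taps, nbits: int):
    """Fibonacci LFSR; the returned clock() yields one output bit per call."""
    state = seed
    def clock() -> int:
        nonlocal state
        out = state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
        return out
    return clock

def asg_keystream(n: int):
    """Alternating step generator keystream of n bits."""
    ctrl = make_lfsr(0b10110, (0, 2), 5)
    a = make_lfsr(0b1011001, (0, 1), 7)
    b = make_lfsr(0b110100011, (0, 4), 9)
    last_a = last_b = 0
    bits = []
    for _ in range(n):
        if ctrl():
            last_a = a()     # control bit 1: clock LFSR A
        else:
            last_b = b()     # control bit 0: clock LFSR B
        bits.append(last_a ^ last_b)
    return bits

print("".join(map(str, asg_keystream(32))))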

 

Pujari, V.G.; Khot, S.R.; Mane, K.T., "Enhanced Visual Cryptography Scheme for Secret Image Retrieval Using Average Filter," Wireless Computing and Networking (GCWCN), 2014 IEEE Global Conference on, vol., no., pp. 88, 91, 22-24 Dec. 2014. doi:10.1109/GCWCN.2014.7030854
Abstract: Visual cryptography is an emerging technology for sending secret images in a highly secure manner without performing complex operations during encoding. This technology can be used in many fields, such as transferring military data, scanned financial documents, sensitive image data, and so on. In the literature, different methods are used for black-and-white images and produce good results, but for color images the quality of the decoded secret image is not good. In this paper, a system is proposed that increases the quality of the decoded color image. In this system, the sender takes one secret image, which is encoded into n share images using Jarvis halftoning and an encoding table. For decoding, the share images are used with a decoding table to recover the original secret image. An average filter is applied to decrease the noise introduced during the encoding operation, so that the quality of the decoded secret image is increased. The result analysis considers various image quality parameters such as MSE, PSNR, SC, NAE, and so on. The results are better than those of previous systems in the literature.
Keywords: cryptography; filtering theory; image coding; image colour analysis; image denoising; image retrieval; Jarvis halftoning; average filter; black image; color decoded image quality; decoded secret image quality; encoding table; enhanced visual cryptography scheme; image quality analysis parameters; secret image retrieval; white image; Cryptography; Decoding; Image coding; Image color analysis; Image quality; Noise; Visualization; Average filter; Color halftoning; Decoding; Encoding; Jarvis error diffusion; Security; Visual cryptography (ID#: 15-4870)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7030854&isnumber=7030833
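
A compact illustration of the two building blocks named in the abstract, share generation and the average filter; this sketch uses a simple XOR-based (2, 2) sharing of a binary image rather than the paper's Jarvis-halftoned color scheme, and the 3x3 zero-padded mean filter is one standard choice among several.

import numpy as np

rng = np.random.default_rng(1)

def make_shares(secret_img):
    """(2,2) XOR sharing: each share alone is uniformly random noise;
    stacking (XOR) both shares recovers the secret image."""
    share1 = rng.integers(0, 2, secret_img.shape, dtype=np.uint8)
    return share1, share1 ^ secret_img

def average_filter(img):
    """3x3 mean filter (zero-padded) to smooth decoding noise."""
    padded = np.pad(img.astype(float), 1)
    h, w = img.shape
    return sum(padded[r:r + h, c:c + w] for r in range(3) for c in range(3)) / 9.0

secret = rng.integers(0, 2, (4, 4), dtype=np.uint8)
s1, s2 = make_shares(secret)
print(np.array_equal(s1 ^ s2, secret))   # True: stacking the shares decodes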

 

Wentao Huang; Ho, T.; Langberg, M.; Kliewer, J., "Reverse Edge Cut-Set Bounds for Secure Network Coding," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 106, 110, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874804
Abstract: We consider the problem of secure communication over a network in the presence of wiretappers. We give a new cut-set bound on secrecy capacity which takes into account the contribution of both forward and backward edges crossing the cut, and the connectivity between their endpoints in the rest of the network. We show the bound is tight on a class of networks, which demonstrates that it is not possible to find a tighter bound by considering only cut-set edges and their connectivity.
Keywords: network coding; telecommunication security; cut-set edges; reverse edge cut-set bounds; secrecy capacity; secure communication; secure network coding; wiretappers; Delays; Entropy; Mutual information; Network coding; Unicast; Upper bound (ID#: 15-4871)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874804&isnumber=6874773

 

Kosut, O.; Lang Tong; Tse, D.N.C., "Polytope Codes Against Adversaries in Networks," Information Theory, IEEE Transactions on, vol. 60, no. 6, pp. 3308, 3344, June 2014. doi:10.1109/TIT.2014.2314642
Abstract: This paper investigates a network coding problem wherein an adversary controls a subset of nodes in the network of limited quantity but unknown location. This problem is shown to be more difficult than that of an adversary controlling a given number of edges in the network, in that linear codes are insufficient. To solve the node problem, the class of polytope codes is introduced. Polytope codes are constant composition codes operating over bounded polytopes in integer vector fields. The polytope structure creates additional complexity, but it induces properties on marginal distributions of code vectors so that validities of codewords can be checked by internal nodes of the network. It is shown that polytope codes achieve a cut-set bound for a class of planar networks. It is also shown that this cut-set bound is not always tight, and a tighter bound is given for an example network.
Keywords: linear codes; network coding; adversary controlling; adversary controls; code vectors; codewords; constant composition codes; integer vector fields; internal nodes; linear codes; network adversaries; network coding problem; polytope codes; Educational institutions; Linear codes; Network coding; Upper bound; Vectors; Xenon; Active adversaries; Byzantine attack; network coding; network error correction; nonlinear codes; polytope codes; security (ID#: 15-4872)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781646&isnumber=6816018

 

Xiang He; Yener, A., "Providing Secrecy with Structured Codes: Two-User Gaussian Channels," Information Theory, IEEE Transactions on, vol. 60, no. 4, pp. 2121, 2138, April 2014. doi:10.1109/TIT.2014.2298132
Abstract: Recent results have shown that structured codes can be used to construct good channel codes, source codes, and physical layer network codes for Gaussian channels. For Gaussian channels with secrecy constraints, however, efforts to date rely on Gaussian random codes. In this paper, we advocate that structure in random code generation is useful for providing secrecy as well. In particular, a Gaussian wiretap channel in the presence of a cooperative jammer is studied. Previously, the achievable secrecy rate for this channel was derived using Gaussian signaling, which saturated at high signal-to-noise ratio (SNR), owing to the fact that the cooperative jammer simultaneously helps by interfering with the eavesdropper, and hurts by interfering with the intended receiver. In this paper, a new achievable rate is derived through imposing a lattice structure on the signals transmitted by both the source and the cooperative jammer, which are aligned at the eavesdropper but remain separable at the intended receiver. We prove that the achieved secrecy rate does not saturate at high SNR for all values of channel gains except when the channel is degraded.
Keywords: Gaussian channels; cooperative communication; jamming; random codes; telecommunication security; Gaussian channels; Gaussian random codes; Gaussian signaling; Gaussian wiretap channel; channel codes; cooperative jammer; eavesdropper; lattice structure; physical layer network codes; random code generation; secrecy constraints; source codes; structured codes; Channel models; Encoding; Jamming; Lattices; Receivers; Transmitters; Vectors; Gaussian wiretap channels; Information theoretic secrecy; cooperative jamming; lattice codes (ID#: 15-4873)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6702446&isnumber=6766686

 

Boche, H.; Schaefer, R.F.; Poor, H.V., "On Arbitrarily Varying Wiretap Channels for Different Classes of Secrecy Measures," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 2376, 2380, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6875259
Abstract: The wiretap channel models secure communication in the presence of an eavesdropper who must be kept ignorant of transmitted messages. In this paper, the arbitrarily varying wiretap channel (AVWC), in which the channel may vary in an unknown and arbitrary manner from channel use to channel use, is considered. For arbitrarily varying channels (AVCs) the capacity might differ depending on whether deterministic or common randomness (CR) assisted codes are used. The AVWC has been studied for both coding strategies and the relation between the corresponding secrecy capacities has been established. However, a characterization of the CR-assisted secrecy capacity itself or even a general CR-assisted achievable secrecy rate remain open in general for weak and strong secrecy. Here, the secrecy measure of high decoding error at the eavesdropper is considered, where the eavesdropper is further assumed to know channel states and to adapt its decoding strategy accordingly. For this secrecy measure a general CR-assisted achievable secrecy rate is established. The relation between secrecy capacities for different secrecy measures is discussed: The weak and strong secrecy capacities are smaller than or equal to the one for high decoding error. It is conjectured that this relation can be strict for certain channels.
Keywords: channel coding; decoding; telecommunication security; AVWC; CR-assisted achievable secrecy rate; CR-assisted secrecy capacity; arbitrarily varying wiretap channels; common randomness assisted codes; decoding error; secrecy measures; secure communication; Compounds; Decoding; Measurement uncertainty; Robustness; Security; Tin (ID#: 15-4874)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875259&isnumber=6874773

 

Fan Cheng; Yeung, R.W., "Performance Bounds on a Wiretap Network with Arbitrary Wiretap Sets," Information Theory, IEEE Transactions on, vol. 60, no. 6, pp. 3345, 3358, June 2014. doi:10.1109/TIT.2014.2315821
Abstract: Consider a communication network represented by a directed graph G = (V, ε), where V is the set of nodes and ε is the set of point-to-point channels in the network. On the network, a secure message M is transmitted, and there may exist wiretappers who want to obtain information about the message. In secure network coding, we aim to find a network code which can protect the message against a wiretapper whose power is constrained. Cai and Yeung studied the model in which the wiretapper can access any one but not more than one set of channels, called a wiretap set, out of a collection A of all possible wiretap sets. In order to protect the message, the message needs to be mixed with a random key K. They proved tight fundamental performance bounds when A consists of all subsets of ε of a fixed size r. However, beyond this special case, obtaining such bounds is much more difficult. In this paper, we investigate the problem when A consists of arbitrary subsets of ε and obtain the following results: 1) an upper bound on H(M) and 2) a lower bound on H(K) in terms of H(M). The upper bound on H(M) is explicit, while the lower bound on H(K) can be computed in polynomial time when |A| is fixed. The tightness of the lower bound for the point-to-point communication system is also proved.
Keywords: network coding; polynomials; radio networks; telecommunication security; Cai; Yeung; arbitrary wiretap sets; communication network; network code; performance bounds; point-to-point channels; polynomial time; random key; secure message; secure network coding; wiretap network; wiretapper; Cryptography; Encoding; Entropy; Network coding; Receivers; Upper bound; Information inequality; perfect secrecy; performance bounds; secure network coding (ID#: 15-4875)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6783737&isnumber=6816018

 

Bin Duo; Peng Wang; Yonghui Li; Vucetic, B., "Secure Transmission for Relay-Eavesdropper Channels Using Polar Coding," Communications (ICC), 2014 IEEE International Conference on, vol., no., pp. 2197, 2202, 10-14 June 2014. doi:10.1109/ICC.2014.6883649
Abstract: In this paper, we propose a practical transmission scheme using polar coding for the half-duplex degraded relay-eavesdropper channel. We prove that the proposed scheme can achieve the maximum perfect secrecy rate under the decode-and-forward (DF) strategy. Our proposed scheme provides an approach for ensuring both reliable and secure transmission over the relay-eavesdropper channel while enjoying practically feasible encoding/decoding complexity.
Keywords: channel coding; decode and forward communication; decoding; reliability; telecommunication security; wireless channels; DF strategy; decode and forward strategy; decoding complexity; half-duplex degraded relay eavesdropper channel; polar coding; reliable transmission; secure transmission; Complexity theory; Decoding; Encoding; Relays ;Reliability; Variable speed drives; Vectors (ID#: 15-4876)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883649&isnumber=6883277

 

Loyka, S.; Charalambous, C.D., "Rank-Deficient Solutions for Optimal Signaling over Secure MIMO Channels," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 201, 205, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874823
Abstract: Capacity-achieving signaling strategies for the Gaussian wiretap MIMO channel are investigated without the degradedness assumption. In addition to known solutions, a number of new rank-deficient solutions for the optimal transmit covariance matrix are obtained. The case of a weak eavesdropper is considered in detail and the optimal covariance is established in an explicit closed form with no extra assumptions. The conditions for optimality of zero-forcing signaling are established, and the standard water-filling is shown to be optimal under those conditions. No wiretap codes are needed in this case. The case of identical right singular vectors for the required and eavesdropper channels is studied and the optimal covariance is established in an explicit closed form. As a by-product of this analysis, we establish a generalization of the celebrated Hadamard determinantal inequality using information-theoretic tools.
Keywords: Gaussian channels; MIMO communication; covariance matrices; telecommunication security; telecommunication signalling; Gaussian wiretap MIMO channel; capacity-achieving signaling strategies; celebrated Hadamard determinantal inequality; eavesdropper channels; identical right singular vectors; information-theoretic tools; optimal covariance; optimal signaling; optimal transmit covariance matrix; rank-deficient solutions; secure MIMO channels; standard water-filling; weak eavesdropper; wiretap codes; zero-forcing signaling; Approximation methods; Covariance matrices; Information theory; MIMO; Signal to noise ratio; Standards; Vectors (ID#: 15-4877)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874823&isnumber=6874773

 

Cong Ling; Luzzi, L.; Belfiore, J.-C.; Stehle, D., "Semantically Secure Lattice Codes for the Gaussian Wiretap Channel," Information Theory, IEEE Transactions on, vol. 60, no. 10, pp. 6399, 6416, Oct. 2014. doi:10.1109/TIT.2014.2343226
Abstract: We propose a new scheme of wiretap lattice coding that achieves semantic security and strong secrecy over the Gaussian wiretap channel. The key tool in our security proof is the flatness factor, which characterizes the convergence of the conditional output distributions corresponding to different messages and leads to an upper bound on the information leakage. We not only introduce the notion of secrecy-good lattices, but also propose the flatness factor as a design criterion of such lattices. Both the modulo-lattice Gaussian channel and genuine Gaussian channel are considered. In the latter case, we propose a novel secrecy coding scheme based on the discrete Gaussian distribution over a lattice, which achieves the secrecy capacity to within a half nat under mild conditions. No a priori distribution of the message is assumed, and no dither is used in our proposed schemes.
Keywords: Gaussian channels; codes; telecommunication security; Gaussian wiretap channel; conditional output distribution; discrete Gaussian distribution; flatness factor; genuine Gaussian channel; information leakage; modulo lattice Gaussian channel; secrecy coding; secrecy good lattice; semantically secure lattice codes; wiretap lattice coding; Encoding; Gaussian distribution; Lattices; Mutual information; Security; Semantics; Zinc; Lattice coding; information theoretic security; semantic security; strong secrecy; wiretap channel (ID#: 15-4878)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866169&isnumber=6895347

 

Mirghasemi, H.; Belfiore, J.-C., "The Semantic Secrecy Rate of the Lattice Gaussian Coding for the Gaussian Wiretap Channel," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp.112,116, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970803
Abstract: In this paper, we investigate the achievable semantic secrecy rate of existing lattice coding schemes, proposed in [6], for both the mod-Λ Gaussian wiretap and the Gaussian wiretap channels. For both channels, we propose new upper bounds on the amount of leaked information which provide milder sufficient conditions to achieve semantic secrecy. These upper bounds show that the lattice coding schemes in [6] can achieve the secrecy capacity to within ½ln e/2 nat for the mod-Λ Gaussian and to within ½(1 - ln (1 + SNRe / SNRe+1)) nat for the Gaussian wiretap channels where SNRe is the signal-to-noise ratio of Eve.
Keywords: Gaussian channels; channel capacity; data privacy; wireless channels; Gaussian wiretap channels; SNRe; lattice coding schemes; mod-Λ Gaussian wiretap; secrecy capacity; semantic secrecy rate; signal-to-noise ratio of Eve; Encoding; Gaussian distribution; Lattices; Security; Semantics; Upper bound; Zinc (ID#: 15-4879)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970803&isnumber=6970773

 

Mirghasemi, H.; Belfiore, J.-C., "The Un-Polarized Bit-Channels in the Wiretap Polar Coding Scheme," Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE), 2014 4th International Conference on, vol., no., pp.1 ,5, 11-14 May 2014. doi:10.1109/VITAE.2014.6934465
Abstract: Polar coding theorems state that as the number of channel use, n, tends to infinity, the fraction of un-polarized bit-channels (the bit-channels whose Z parameters are in the interval (δ(n), 1- δ (n)), tends to zero. Consider two BEC channels W(z1) and W(z2). Motivated by polar coding scheme proposed for the wiretap channel, we investigate the number of bit-channels which are simultaneously un-polarized for both of W(z1) and W(z2). We show that for finite values of n, there is a considerable regime of (z1, Z2) where the set of (joint)un-polarized bit-channels is empty. We also show that for γ ≤ 1/2 and δ (n) = 2-nγ, the number of un-polarized bit-channels is lower bounded by 2γ log (n).
Keywords: encoding; security of data; Z-parameter; channel use number; unpolarized bit channel; wiretap channel; wiretap polar coding scheme; Decoding; Encoding; Mutual information; Noise measurement; Reliability; Security; Vectors (ID#: 15-4880)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934465&isnumber=6934393

 

Tomamichel, M.; Martinez-Mateo, J.; Pacher, C.; Elkouss, D., "Fundamental Finite Key Limits for Information Reconciliation in Quantum Key Distribution," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 1469, 1473, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6875077
Abstract: The security of quantum key distribution protocols is guaranteed by the laws of quantum mechanics. However, a precise analysis of the security properties requires tools from both classical cryptography and information theory. Here, we employ recent results in non-asymptotic classical information theory to show that information reconciliation imposes fundamental limitations on the amount of secret key that can be extracted in the finite key regime. In particular, we find that an often used approximation for the information leakage during one-way information reconciliation is flawed and we propose an improved estimate.
Keywords: cryptographic protocols; information theory; private key cryptography; quantum cryptography; quantum theory; QKD protocols; classical cryptography; fundamental finite key limits; information reconciliation; nonasymptotic classical information theory; quantum key distribution protocols; quantum mechanics security; secret key; Approximation methods; Error analysis; Parity check codes; Protocols; Quantum mechanics; Security (ID#: 15-4881)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875077&isnumber=6874773
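
The "often used approximation" the abstract refers to is the asymptotic leakage estimate for one-way reconciliation, leak_EC ≈ f·n·h(Q) bits, with reconciliation efficiency f ≥ 1 and binary entropy h. The sketch below just evaluates that estimate; the n, Q, and f values are illustrative assumptions, and the paper's point is precisely that this formula needs a finite-size correction.

from math import log2

def h2(q: float) -> float:
    """Binary entropy in bits."""
    return 0.0 if q in (0.0, 1.0) else -q * log2(q) - (1 - q) * log2(1 - q)

n, Q, f = 10**5, 0.02, 1.1        # block length, QBER, efficiency (assumptions)
leak = f * n * h2(Q)              # asymptotic one-way reconciliation leakage
print(f"estimated leakage: {leak:.0f} bits out of {n}")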

 

Merhav, N., "Exact Correct-Decoding Exponent of the Wiretap Channel Decoder," Information Theory, IEEE Transactions on, vol. 60, no. 12, pp. 7606, 7615, Dec. 2014. doi:10.1109/TIT.2014.2361765
Abstract: The performance of the achievability scheme for Wyner's wiretap channel model is examined from the perspective of the probability of correct decoding, Pc, at the wiretap channel decoder. In particular, for finite-alphabet memoryless channels, the exact random coding exponent of Pc is derived as a function of the total coding rate R1 and the rate of each subcode R2. Two different representations are given for this function and its basic properties are provided. We also characterize the region of pairs of rates (R1, R2) of full security in the sense of the random coding exponent of Pc, in other words, the region where the exponent of this achievability scheme is the same as that of blind guessing at the eavesdropper side. Finally, an analogous derivation of the correct-decoding exponent is outlined for the case of the Gaussian channel.
Keywords: Gaussian channels; channel coding; decoding; probability; random codes; Gaussian channel; Wyner wiretap channel model; blind guessing; correct decoding probability; finite-alphabet memoryless channels; random coding exponent; Decoding; Encoding; Random variables; Receivers; Reliability; Security; Vectors; Wiretap channel; information-theoretic security; random coding exponent; secrecy (ID#: 15-4882)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6918525&isnumber=6960944

 

Yingbin Liang; Lifeng Lai; Poor, H.V.; Shamai, S., "A Broadcast Approach for Fading Wiretap Channels," Information Theory, IEEE Transactions on, vol. 60, no. 2, pp. 842, 858, Feb. 2014. doi:10.1109/TIT.2013.2293756
Abstract: A (layered) broadcast approach is studied for the fading wiretap channel without the channel state information (CSI) at the transmitter. Two broadcast schemes, based on superposition coding and embedded coding, respectively, are developed to encode information into a number of layers and use stochastic encoding to keep the corresponding information secret from an eavesdropper. The layers that can be successfully and securely transmitted are determined by the channel states to the legitimate receiver and the eavesdropper. The advantage of these broadcast approaches is that the transmitter does not need to know the CSI to the legitimate receiver and the eavesdropper, but the scheme still adapts to the channel states of the legitimate receiver and the eavesdropper. Three scenarios of block fading wiretap channels with stringent delay constraints are studied, in which either the legitimate receiver's channel, the eavesdropper's channel, or both channels are fading. For each scenario, the secrecy rate that can be achieved via the broadcast approach developed in this paper is derived, and the optimal power allocation over the layers (or the conditions on the optimal power allocation) is also characterized. A notion of probabilistic secrecy, which characterizes the probability that a certain secrecy rate of decoded messages is achieved during one block, is also introduced and studied for scenarios when the eavesdropper's channel is fading. Numerical examples are provided to demonstrate the impact of the CSI at the transmitter and the channel fluctuations of the eavesdropper on the average secrecy rate. These examples also demonstrate the advantage of the proposed broadcast approach over the compound channel approach.
Keywords: broadcast channels; decoding; embedded systems; encoding; fading channels; radio receivers; radio transmitters; resource allocation; channel fluctuations; channel states; decoded messages; eavesdropper channel; embedded coding; fading wiretap channels; layered broadcast approach; legitimate receiver; optimal power allocation; receiver channel; stochastic encoding; superposition coding; transmitter; Encoding; Fading; Indexes; Receivers; Resource management; Security; Transmitters; Channel state information; fading channel; layered broadcast approach; secrecy rate; wiretap channel (ID#: 15-4883)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6687232&isnumber=6714461
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Coding Theory and Security, 2014, Part 3

 

 
SoS Logo

Coding Theory and Security, 2014

Part 3


Coding theory is one of the essential pieces of information theory. More importantly, coding theory is a core element in cryptography. The research work cited here looks at signal processing, crowdsourcing, matroid theory, WOM codes, and NP-hard problems. These works were presented or published in 2014. 


Xuan Guang; Jiyong Lu; Fang-Wei Fu, "Locality-Preserving Secure Network Coding," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 396, 400, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970861
Abstract: In the paradigm of network coding, when wiretapping attacks occur, secure network coding is introduced to prevent information from leaking to adversaries. In practical network communications, the source often multicasts messages at several different rates within a session. How to deal with information transmission and information security simultaneously under variable rates and a fixed security level is introduced in this paper as a variable-rate, fixed-security-level secure network coding problem. In order to solve this problem effectively, we propose the concept of locality-preserving secure linear network codes of different rates and fixed security level, which have the same local encoding kernel at each internal node. We further present an approach to construct such a family of secure linear network codes and give an algorithm for efficient implementation. This approach saves storage space at both the source node and the internal nodes, and saves network resources and time. Finally, the performance of the proposed algorithm is analyzed, including the field size and the computational and storage complexities.
Keywords: linear codes; network coding; telecommunication security; variable rate codes; fixed-security-level secure network coding problem; information security; information transmission; internal nodes; local encoding kernel; locality-preserving secure linear network codes; source node; variable-rate secure network coding problem; wiretapping attacks; Complexity theory; Decoding; Encoding; Information rates; Kernel; Network coding; Vectors (ID#: 15-4884)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970861&isnumber=6970773

 

Watanabe, S.; Oohama, Y., "Cognitive Interference Channels with Confidential Messages Under Randomness Constraint," Information Theory, IEEE Transactions on, vol. 60, no. 12, pp. 7698, 7707, Dec. 2014. doi:10.1109/TIT.2014.2360683
Abstract: The cognitive interference channel with confidential messages (CICC) proposed by Liang et al. is investigated. When security is considered in coding systems, it is well known that the sender needs to use stochastic encoding to prevent information about the transmitted confidential message from being leaked to an eavesdropper. For the CICC, the tradeoff between the rate of the random number used to realize the stochastic encoding and the communication rates is investigated, and the optimal tradeoff is completely characterized.
Keywords: Decoding; Encoding; Interference channels; Random variables; Receivers; Security; Tin; cognitive interference channel; confidential messages; randomness constraint; stochastic encoder; superposition coding (ID#: 15-4885)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6928480&isnumber=6960944

 

Muramatsu, J., "General Formula for Secrecy Capacity of Wiretap Channel with Noncausal State," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 21, 25, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874787
Abstract: The coding problem for a wiretap channel with a noncausal state is investigated, where the problem includes the coding problem for a channel with a noncausal state, which is known as the Gel'fand-Pinsker problem, and the coding problem for a wiretap channel introduced by Wyner. The secrecy capacity for this channel is derived, where an optimal code is constructed based on the hash property and a constrained-random-number generator. Since an ensemble of sparse matrices has a hash property, the rate of the proposed code using a sparse matrix can achieve the secrecy capacity.
Keywords: encoding; random number generation; security of data; Gel'fand-Pinsker problem; coding problem; constrained random number generator; hash property; noncausal state; optimal code; secrecy capacity; wiretap channel; Decoding; Encoding; Manganese; Random variables; Tin; Zinc (ID#: 15-4886)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874787&isnumber=6874773

 

Yardi, A.D.; Kumar, A.; Vijayakumaran, S., "Channel-Code Detection by a Third-Party Receiver via the Likelihood Ratio Test," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 1051, 1055, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874993
Abstract: Channel codebook detection is of interest in the cognitive paradigm and in security applications. A binary hypothesis testing problem is considered, where a receiver has to detect the channel-code from two possible choices upon observing noise-affected codewords through a communication channel. For analytical tractability, it is assumed that the two channel-codes are linear block codes with identical block-length. This work presents the first study of the likelihood ratio test for minimizing the error probability in this detection problem. In an asymptotic setting, where a large number of noise-affected codewords are available for detection, the Chernoff information characterizes the error probability. A lower bound on the Chernoff information, based on the parameters of the two hypotheses, is established. Further, it is shown that if likelihood-based efficient (generalized distributive law or BCJR) bit-decoding algorithms are available for the two codes, then the likelihood ratio test for the code-detection problem can be performed in a computationally feasible manner.
Keywords: block codes; channel coding; cognitive radio; error statistics; linear codes; statistical analysis; telecommunication security; BCJR; Chernoff information; analytical tractability; binary hypothesis testing problem; bit-decoding algorithms; channel codebook detection; cognitive paradigm; communication channel; error probability minimization; generalized distributive law; identical block-length; likelihood ratio test; linear block codes; noise-affected codewords; security application; third-party receiver; Block codes; Error probability; Noise; Receivers; Testing; Vectors (ID#: 15-4887)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874993&isnumber=6874773
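
As background for this abstract, the likelihood ratio test it studies takes the standard detection-theoretic form below (our notation; equiprobable codebooks with uniformly drawn codewords are an assumption made here for illustration):

    \Lambda(y_1, \ldots, y_N) = \sum_{i=1}^{N} \log \frac{P(y_i \mid \mathcal{C}_1)}{P(y_i \mid \mathcal{C}_2)} \; \underset{\mathcal{C}_2}{\overset{\mathcal{C}_1}{\gtrless}} \; 0, \qquad P(y \mid \mathcal{C}) = \frac{1}{|\mathcal{C}|} \sum_{c \in \mathcal{C}} P(y \mid c),

where y_1, ..., y_N are the observed noise-affected codewords and C_1, C_2 are the two candidate codebooks. The BCJR connection arises because P(y | C) can be computed efficiently over the trellis of a linear code.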

 

Villard, J.; Piantanida, P.; Shamai, S., "Secure Transmission of Sources over Noisy Channels with Side Information at the Receivers," Information Theory, IEEE Transactions on, vol. 60, no. 1, pp. 713, 739, Jan. 2014. doi:10.1109/TIT.2013.2288256
Abstract: This paper investigates the problem of source-channel coding for secure transmission with arbitrarily correlated side informations at both receivers. This scenario consists of an encoder (referred to as Alice) that wishes to compress a source and send it through a noisy channel to a legitimate receiver (referred to as Bob). In this context, Alice must simultaneously satisfy the desired requirements on the distortion level at Bob and the equivocation rate at the eavesdropper (referred to as Eve). This setting can be seen as a generalization of the problems of secure source coding with (uncoded) side information at the decoders and the wiretap channel. A general outer bound on the rate-distortion-equivocation region, as well as an inner bound based on a pure digital scheme, is derived for arbitrary channels and side informations. In some special cases of interest, it is proved that this digital scheme is optimal and that separation holds. However, it is also shown through a simple counterexample with a binary source that a pure analog scheme can outperform the digital one while being optimal. According to these observations and assuming matched bandwidth, a novel hybrid digital/analog scheme that aims to gather the advantages of both digital and analog ones is then presented. In the quadratic Gaussian setup when side information is only present at the eavesdropper, this strategy is proved to be optimal. Furthermore, it outperforms both digital and analog schemes and cannot be achieved via time-sharing. Through an appropriate coding, the presence of any statistical difference among the side informations, the channel noises, and the distortion at Bob can be fully exploited in terms of secrecy.
Keywords: Gaussian channels; combined source-channel coding; receivers; telecommunication security; binary source; channel noise; eavesdropper; encoder; hybrid digital-analog scheme; quadratic Gaussian setup; rate-distortion-equivocation region; receiver; source coding security; source-channel coding; statistical difference; time-sharing; transmission security; wiretap channel; Channel coding; Decoding; Noise measurement; Radio frequency; Random variables; Source coding; Combined source-channel coding; Gaussian channels; information security; rate-distortion; side information (ID#: 15-4888)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6651774&isnumber=6690264

 

Muxi Yan; Sprintson, A.; Zelenko, I., "Weakly Secure Data Exchange with Generalized Reed Solomon Codes," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 1366, 1370, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6875056
Abstract: We focus on secure data exchange among a group of wireless clients. The clients exchange data by broadcasting linear combinations of packets over a lossless channel. The data exchange is performed in the presence of an eavesdropper who has access to the channel and can obtain all transmitted data. Our goal is to develop a weakly secure coding scheme that prevents the eavesdropper from being able to decode any of the original packets held by the clients. We present a randomized algorithm based on Generalized Reed-Solomon (GRS) codes. The algorithm has two key advantages over the previous solutions: it operates over a small (polynomial-size) finite field and provides a way to verify that constructed code is feasible. In contrast, the previous approaches require exponential field size and do not provide an efficient (polynomial-time) algorithm to verify the secrecy properties of the constructed code. We formulate an algebraic-geometric conjecture that implies the correctness of our algorithm and prove its validity for special cases. Our simulation results indicate that the algorithm is efficient in practical settings.
Keywords: Reed-Solomon codes; algebra; broadcast channels; electronic data interchange; geometry; security of data; telecommunication security; wireless channels; GRS codes; algebraic-geometric conjecture; eavesdropper prevention; exponential field size; finite field; generalized Reed Solomon codes; lossless broadcast channel; weakly secure coding scheme; weakly secure data exchange problem; wireless clients; Encoding; Network coding; Polynomials; Reed-Solomon codes; Silicon; Vectors (ID#: 15-4889)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875056&isnumber=6874773
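
For reference, the generalized Reed-Solomon codes on which the algorithm is built have the following textbook definition (notation ours, not the paper's): given distinct evaluation points a_1, ..., a_n in F_q and nonzero multipliers v_1, ..., v_n,

    \mathrm{GRS}_{n,k}(\mathbf{a}, \mathbf{v}) = \{ (v_1 f(a_1), \ldots, v_n f(a_n)) : f \in \mathbb{F}_q[x], \deg f < k \},

an [n, k, n-k+1] MDS code. A GRS code of length n exists whenever q ≥ n, which is consistent with the polynomial-size field advantage the abstract highlights.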

 

Geil, O.; Martin, S.; Matsumoto, R.; Ruano, D.; Yuan Luo, "Relative Generalized Hamming Weights of One-Point Algebraic Geometric Codes," Information Theory, IEEE Transactions on, vol. 60, no. 10, pp. 5938, 5949, Oct. 2014. doi:10.1109/TIT.2014.2345375
Abstract: Security of linear ramp secret sharing schemes can be characterized by the relative generalized Hamming weights of the involved codes. In this paper, we elaborate on the implication of these parameters and devise a method to estimate their value for general one-point algebraic geometric codes. As it is demonstrated, for Hermitian codes, our bound is often tight. Furthermore, for these codes, the relative generalized Hamming weights are often much larger than the corresponding generalized Hamming weights.
Keywords: Hamming codes; algebraic geometric codes; cryptography; Hermitian codes; cryptographic method; general one-point algebraic geometric codes; linear ramp secret sharing schemes; relative generalized Hamming weights; Cryptography; Electronic mail; Hamming weight; Linear codes; Materials; Random variables; Vectors; Feng-Rao bound; Hermitian code; Linear code; one-point algebraic geometric code; relative dimension/length profile; relative generalized Hamming weight; secret sharing; wiretap channel of type II (ID#: 15-4890)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6871379&isnumber=6895347
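
The relative generalized Hamming weights referred to here have the following standard definition (notation ours): for nested linear codes C_2 ⊊ C_1 of length n, the j-th RGHW of C_1 relative to C_2 is

    M_j(C_1, C_2) = \min \{ |\mathrm{supp}(D)| : D \text{ a subspace of } C_1, \; \dim D = j, \; D \cap C_2 = \{0\} \},

where supp(D) is the set of coordinate positions in which some codeword of D is nonzero. In the ramp secret sharing interpretation, M_j roughly governs the minimum number of shares from which j symbols of information about the secret can leak.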

 

Tyagi, H.; Vardy, A., "Explicit Capacity-Achieving Coding Scheme for the Gaussian Wiretap Channel," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 956, 960, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874974
Abstract: We extend the Bellare-Tessaro coding scheme for a discrete, degraded, symmetric wiretap channel to a Gaussian wiretap channel. Denoting by SNR the signal-to-noise ratio of the eavesdropper's channel, the proposed scheme converts a transmission code of rate R for the channel of the legitimate receiver into a code of rate R-0.5 log(1+SNR) for the Gaussian wiretap channel. The conversion has a polynomial complexity in the codeword length and the proposed scheme achieves strong security. In particular, when the underlying transmission code is capacity achieving, this scheme achieves the secrecy capacity of the Gaussian wiretap channel.
Keywords: Gaussian channels; channel capacity; channel coding; telecommunication security; Bellare-Tessaro coding; Gaussian wiretap channel; degraded wiretap channel; discrete wiretap channel; explicit capacity-achieving coding; secrecy capacity; symmetric wiretap channel; transmission code; Cryptography; Encoding; Receivers; Reliability; Zinc (ID#: 15-4891)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874974&isnumber=6874773
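
A minimal numeric sketch of the rate conversion stated in the abstract, using the classical Gaussian wiretap secrecy-capacity formula (the function names and SNR values below are ours, purely illustrative):

    import math

    def awgn_capacity(snr):
        # Shannon capacity of an AWGN channel, in bits per channel use.
        return 0.5 * math.log2(1.0 + snr)

    def converted_rate(r, snr_eve):
        # The conversion described in the abstract:
        # R -> R - 0.5 * log2(1 + SNR_E).
        return r - awgn_capacity(snr_eve)

    snr_bob, snr_eve = 15.0, 3.0        # hypothetical linear-scale SNRs
    r = awgn_capacity(snr_bob)          # capacity-achieving code for Bob
    print(converted_rate(r, snr_eve))   # achieved secrecy rate
    print(awgn_capacity(snr_bob) - awgn_capacity(snr_eve))  # secrecy capacity

The two printed values coincide, illustrating the claim that a capacity-achieving transmission code yields the secrecy capacity after conversion.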

 

Matsumoto, R., "New Asymptotic Metrics for Relative Generalized Hamming Weight," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 3142, 3144, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6875413
Abstract: It was recently shown that RGHW (relative generalized Hamming weight) exactly expresses the security of linear ramp secret sharing scheme. In this paper we determine the true value of the asymptotic metric for RGHW previously proposed by Zhuang et al. in 2013. Then we propose new asymptotic metrics useful for investigating the optimal performance of linear ramp secret sharing scheme constructed from a pair of linear codes. We also determine the true values of the proposed metrics in many cases.
Keywords: Hamming codes; cryptography; linear codes; RGHW; asymptotic metrics; linear codes; linear ramp secret sharing scheme; relative generalized Hamming weight; Cryptography; Equations; Hamming weight; Information rates; Linear codes; Measurement (ID#: 15-4892)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875413&isnumber=6874773

 

Khisti, A.; Tie Liu, "Private Broadcasting over Independent Parallel Channels," Information Theory, IEEE Transactions on, vol. 60, no. 9, pp. 5173, 5187, Sept. 2014. doi:10.1109/TIT.2014.2332336
Abstract: We study broadcasting of two confidential messages to two groups of receivers over independent parallel subchannels. One group consists of an arbitrary number of receivers, interested in a common message, whereas the other group has only one receiver. Each message must be confidential from the receiver(s) in the other group. Each of the subchannels is assumed to be degraded in a certain fashion. While corner points of the capacity region of this setup were characterized in earlier works, we establish the complete capacity region, and show the optimality of a superposition coding technique. For Gaussian channels, we establish the optimality of a Gaussian input distribution by applying an extremal information inequality. By extending our coding scheme to block-fading channels, we demonstrate significant performance gains over a baseline time-sharing scheme.
Keywords: Gaussian channels; block codes; data privacy; fading channels; radio receivers; telecommunication security; wireless channels; Gaussian channels; Gaussian input distribution; baseline time sharing scheme; block fading channels; coding scheme; extremal information inequality; independent parallel channels; independent parallel subchannels; private broadcasting; receivers; superposition coding technique; Broadcasting; Channel models; Coherence; Encoding; Fading; Indexes; Receivers; Wiretap channel; parallel channels; private broadcasting; secrecy capacity; superposition coding (ID#: 15-4893)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841612&isnumber=6878505

 

Mirzaee, M.; Akhlaghi, S., "Maximizing the Minimum Achievable Secrecy Rate in a Two-User Gaussian Interference Channel," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, vol., no., pp. 1, 5, 7-8 May 2014. doi:10.1109/IWCIT.2014.6842501
Abstract: This paper studies a two-user Gaussian interference channel in which two single-antenna sources aim at sending their confidential messages to the legitimate destinations such that each message is kept confidential from the non-intended receiver. Also, it is assumed that the direct channel gains are stronger than the interference channel gains and that the noise variances at the two destinations are equal. In this regard, under the Gaussian codebook assumption, the problem of secrecy rate balancing, which aims at exploring the optimal power allocation policy at the sources in an attempt to maximize the minimum achievable secrecy rate, is investigated, assuming each source is subject to a transmit power constraint. To this end, it is shown that at the optimal point the two secrecy rates are equal; hence, the problem is abstracted to maximizing the secrecy rate associated with one of the destinations while the other destination is restricted to have the same secrecy rate. Accordingly, the optimum secrecy rate associated with the investigated max-min problem is analytically derived, leading to the solution of the secrecy rate balancing problem.
Keywords: Gaussian channels; antennas; interference (signal); telecommunication security; Gaussian code book assumption; achievable secrecy rate; direct channel gains; interference channel gains; max-min problem; noise variances; nonintended receiver; optimal power allocation policy; secrecy rate balancing; single-antenna sources; transmit power constraint; two-user Gaussian interference channel; Array signal processing; Gain; Interference channels; Linear programming; Noise; Optimization; Transmitters; Achievable secrecy rate; Gaussian interference channel; Max-Min problem (ID#: 15-4894)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842501&isnumber=6842477

 

Pengwei Wang; Safavi-Naini, R., "An Efficient Code for Adversarial Wiretap Channel," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 40, 44, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970788
Abstract: In the (ρr, ρw)-adversarial wiretap (AWTP) channel model of [13], a codeword sent over the communication channel is corrupted by an adversary who observes a fraction ρr of the codeword, and adds noise to a fraction ρw of the codeword. The adversary is adaptive and chooses the subsets of observed and corrupted components, arbitrarily. In this paper we give the first efficient construction of a code family that provides perfect secrecy in this model, and achieves the secrecy capacity.
Keywords: channel coding; telecommunication security; wireless channels; AWTP channel model; adversarial wiretap channel model; code family; codeword; communication channel; secrecy capacity; Computational modeling; Decoding; Encoding; Reed-Solomon codes; Reliability; Security; Vectors (ID#: 15-4895)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970788&isnumber=6970773

 

Son Hoang Dau; Wentu Song; Chau Yuen, "On Block Security of Regenerating Codes at the MBR Point for Distributed Storage Systems," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 1967, 1971, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6875177
Abstract: A passive adversary can eavesdrop stored content or downloaded content of some storage nodes, in order to learn illegally about the file stored across a distributed storage system (DSS). Previous work in the literature focuses on code constructions that trade storage capacity for perfect security. In other words, by decreasing the amount of original data that it can store, the system can guarantee that the adversary, which eavesdrops up to a certain number of storage nodes, obtains no information (in Shannon's sense) about the original data. In this work we introduce the concept of block security for DSS and investigate minimum bandwidth regenerating (MBR) codes that are block secure against adversaries of varied eavesdropping strengths. Such MBR codes guarantee that no information about any group of original data units up to a certain size is revealed, without sacrificing the storage capacity of the system. The size of such secure groups varies according to the number of nodes that the adversary can eavesdrop. We show that code constructions based on Cauchy matrices provide block security. The opposite conclusion is drawn for codes based on Vandermonde matrices.
Keywords: codes; distributed processing; matrix algebra; security of data; storage management; Cauchy matrices; DSS; MBR codes; MBR point; Vandermonde matrices; block security; code constructions; distributed storage systems; minimum bandwidth regenerating codes; passive adversary; storage capacity; Decision support systems; Degradation; Encoding; Maintenance engineering; Network coding; Security (ID#: 15-4896)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875177&isnumber=6874773

 

Jinlong Lu; Harshan, J.; Oggier, F., "A USRP Implementation of Wiretap Lattice Codes," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 316, 320, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970845
Abstract: A wiretap channel models a communication channel between a legitimate sender Alice and a legitimate receiver Bob in the presence of an eavesdropper Eve. Confidentiality between Alice and Bob is obtained using wiretap codes, which exploit the difference between the channels to Bob and to Eve. This paper discusses a first implementation of wiretap lattice codes using USRP (Universal Software Radio Peripheral), which focuses on the channel between Alice and Eve. Benefits of coset encoding for Eve's confusion are observed, using different lattice codes in small dimensions, and varying the position of the eavesdropper.
Keywords: channel coding; software radio; telecommunication security; USRP implementation; communication channel; coset encoding; eavesdropper; universal software radio peripheral; wiretap channel models; wiretap lattice codes; Baseband; Decoding; Encoding; Lattices; Receivers; Security; Signal to noise ratio (ID#: 15-4897)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970845&isnumber=6970773

 

Jinjing Jiang; Marukala, N.; Tie Liu, "Symmetrical Multilevel Diversity Coding and Subset Entropy Inequalities," Information Theory, IEEE Transactions on, vol. 60, no. 1, pp. 84, 103, Jan. 2014. doi:10.1109/TIT.2013.2288263
Abstract: Symmetrical multilevel diversity coding (SMDC) is a classical model for coding over distributed storage. In this setting, a simple separate encoding strategy known as superposition coding was shown to be optimal in terms of achieving the minimum sum rate and the entire admissible rate region of the problem. The proofs utilized carefully constructed induction arguments, for which the classical subset entropy inequality played a key role. This paper consists of two parts. In the first part, the existing optimality proofs for classical SMDC are revisited, with a focus on their connections to subset entropy inequalities. Initially, a new sliding-window subset entropy inequality is introduced and then used to establish the optimality of superposition coding for achieving the minimum sum rate under a weaker source-reconstruction requirement. Finally, a subset entropy inequality recently proved by Madiman and Tetali is used to develop a new structural understanding of the work of Yeung and Zhang on the optimality of superposition coding for achieving the entire admissible rate region. Building on the connections between classical SMDC and the subset entropy inequalities developed in the first part, in the second part the optimality of superposition coding is extended to the cases where there is either an additional all-access encoder or an additional secrecy constraint.
Keywords: codecs; encoding; entropy codes; SMDC; admissible rate region; all-access encoder; distributed storage; encoding strategy; secrecy constraint; sliding-window subset entropy inequality; source-reconstruction requirement; subset entropy inequalities; sum rate; superposition coding; symmetrical multilevel diversity coding; Clocks; Decoding; Electronic mail; Encoding; Entropy; Indexes; Tin; Distributed storage; information-theoretic security; multilevel diversity coding; subset entropy inequality (ID#: 15-4898)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6651781&isnumber=6690264
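
The classical subset entropy inequality invoked here is commonly identified with Han's inequality; in a standard form (our notation), for jointly distributed X_1, ..., X_n define the average entropy per element over all k-subsets,

    h_k = \frac{1}{\binom{n}{k}} \sum_{S \subseteq [n],\, |S| = k} \frac{H(X_S)}{k},

and then h_1 ≥ h_2 ≥ ... ≥ h_n, i.e., the normalized subset entropies are nonincreasing in the subset size.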

 

Chen, Yanling; Koyluoglu, O.Ozan; Sezgin, Aydin, "On the Achievable Individual-Secrecy Rate Region for Broadcast Channels with Receiver Side Information," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 26, 30, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874788
Abstract: In this paper, we study the problem of secure communication over the broadcast channel with receiver side information, under the lens of individual secrecy constraints (i.e., information leakage from each message to an eavesdropper is made vanishing). Several coding schemes are proposed by extending known results in broadcast channels to this secrecy setting. In particular, individual secrecy provided via one-time pad signal is utilized in the coding schemes. As a result, we obtain an achievable rate region together with a characterization of the capacity region for special cases of either a weak or strong eavesdropper (compared to both legitimate receivers). Interestingly, the capacity region for the former corresponds to a line and the latter corresponds to a square with missing corners; a phenomenon occurring due to the coupling between user's rates. At the expense of having a weaker notion of security, positive secure transmission rates are always guaranteed, unlike the case of the joint secrecy constraint.
Keywords:  (not provided) (ID#: 15-4899)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874788&isnumber=6874773

 

Tao Ye; Veitch, D.; Johnson, S., "RA-Inspired Codes for Efficient Information Theoretic Multi-Path Network Security," Information Theory and its Applications (ISITA), 2014 International Symposium on, vol., no., pp. 408, 412, 26-29 Oct. 2014. doi: (not provided)
Abstract: Mobile devices have multiple network interfaces, some of which have security weaknesses, yet are used for sensitive data despite the risk of eavesdropping. We describe a data-splitting approach which, by design, maps exactly to a wiretap channel, thereby offering information theoretic security. Being based on the deletion channel, it perfectly hides block boundaries from the eavesdropper, which enhances security further. We provide an efficient Repeat Accumulate inspired code design, which satisfies the security criterion, and explore its security rate as a function of block size and other parameters.
Keywords: codes; information theory; security of data; telecommunication security; RA-inspired codes; data-splitting approach; deletion channel; eavesdropper; eavesdropping; function block size; information theoretic multipath network security; mobile devices; multiple network interfaces; repeat accumulate inspired code design; security criterion; security rate; security weaknesses; sensitive data; wiretap channel; Australia; Decoding; Encoding; Generators; Parity check codes; Security; Vectors (ID#: 15-4900)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979875&isnumber=6979787

 

Li-Chia Choo; Cong Ling, "Superposition Lattice Coding for Gaussian Broadcast Channel with Confidential Message," Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 311, 315, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970844
Abstract: In this paper, we propose superposition coding based on the lattice Gaussian distribution to achieve strong secrecy over the Gaussian broadcast channel with one confidential message, with a constant gap to the secrecy capacity (only for the confidential message). The proposed superposition lattice code consists of a lattice Gaussian code for the Gaussian noise and a wiretap lattice code with strong secrecy. The flatness factor is used to analyze the error probability, information leakage and achievable rates. By removing the secrecy coding, we can modify our scheme to achieve the capacity of the Gaussian broadcast channel with one common and one private message without the secrecy constraint.
Keywords: Gaussian channels; broadcast channels; channel coding; error statistics; lattice theory; telecommunication security; Gaussian broadcast channel; Gaussian noise; achievable rates; confidential message; constant gap; error probability analysis; flatness factor; information leakage; lattice Gaussian code; lattice Gaussian distribution; secrecy capacity; superposition lattice coding; wiretap lattice code; Decoding; Encoding; Error probability; Gaussian distribution; Lattices; Noise; Vectors (ID#: 15-4901)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970844&isnumber=6970773

 

Fan Cheng, "Optimality of Routing on the Wiretap Network with Simple Network Topology," Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 786, 790, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6874940
Abstract: In this paper, we study the performance of routing in the Level-I/II (n1, n2) wiretap networks, consisting of a source node, a destination node, and an intermediate node. The intermediate node connects the source and the destination nodes via a set of noiseless parallel channels, with sizes n1 and n2, respectively. The information in the network may be eavesdropped by a wiretapper, who can access at most one set of channels, called a wiretap set. All the possible wiretap sets which may be accessed by the wiretapper form a wiretap pattern. A random key K is used to protect the message M. We define two decoding levels: in Level-I, only M is decoded and in Level-II, both M and K are decoded. The objective is to minimize H(K)/H(M) under the perfect secrecy constraint. Our concern is whether routing is optimal in this simple network. By harnessing the power of Shannon-type inequalities, we enumerate all the wiretap patterns in the Level-I/II (3, 3) networks, and find that gaps exist between the bounds by routing and the bounds by Shannon-type inequalities for a small fraction of all the wiretap patterns. Furthermore, we show that for some wiretap patterns, the Shannon bounds can be achieved by a linear code; i.e., routing is not optimal even in the (3, 3) case. Some subtle issues on the network models are discussed and interesting open problems are introduced.
Keywords: linear codes; network coding; telecommunication network routing; telecommunication network topology; telecommunication security; Shannon-type inequalities; destination node; eavesdropped; intermediate node; linear code; network topology; noiseless parallel channels; source node; wiretap network; wiretap pattern; wiretap set; wiretapper; Channel coding; Decoding; Network coding; Random variables; Routing (ID#: 15-4902)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874940&isnumber=6874773

 

Carlet, C.; Freibert, F.; Guilley, S.; Kiermaier, M.; Jon-Lark Kim; Solé, P., "Higher-Order CIS Codes," Information Theory, IEEE Transactions on, vol. 60, no. 9, pp. 5283, 5295, Sept. 2014. doi:10.1109/TIT.2014.2332468
Abstract: We introduce complementary information set codes of higher order. A binary linear code of length tk and dimension k is called a complementary information set code of order t (t-CIS code for short) if it has t pairwise disjoint information sets. The duals of such codes make it possible to reduce the cost of masking cryptographic algorithms against side-channel attacks. As in the case of codes for error correction, given the length and the dimension of a t-CIS code, we look for the highest possible minimum distance. In this paper, this new class of codes is investigated. The existence of good long CIS codes of order 3 is derived by a counting argument. General constructions based on cyclic and quasi-cyclic codes and on the building up construction are given. A formula similar to a mass formula is given. A classification of 3-CIS codes of length ≤ 12 is given. Nonlinear codes better than linear codes are derived by taking binary images of Z4-codes. A general algorithm based on Edmonds' basis packing algorithm from matroid theory is developed with the following property: given a binary linear code of rate 1/t, it either provides t disjoint information sets or proves that the code is not t-CIS. Using this algorithm, all optimal or best known [tk, k] codes, where t = 3, 4, ..., 256 and 1 ≤ k ≤ ⌊256/t⌋, are shown to be t-CIS for all such k and t, except for t = 3 with k = 44 and t = 4 with k = 37.
Keywords: binary codes; cryptography; cyclic codes; error correction codes; higher order statistics; linear codes; matrix algebra; set theory; 3-CIS code classification; Edmonds basis packing algorithm; Z4-linear code; binary linear code; complementary information set; cost reduction; cryptographic algorithm; error correction codes; higher order CIS codes; masking scheme; matroid theory; pairwise disjoint information sets; quasi-cyclic codes; side channel attacks; Boolean functions; Educational institutions; Linear codes; Partitioning algorithms; Registers; Security; Silicon; Z4-linear codes; Boolean functions; Dual distance; quasi-cyclic codes (ID#: 15-4903)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842653&isnumber=6878505
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Computing Theory and Composability, 2014

 

 
SoS Logo

Computing Theory and Composability

2014


The sole research article that combined computing theory with composability was presented in April 2014 at the Symposium on Agent Directed Simulation. 


Mingxin Zhang, Alexander Verbraeck. “A Composable PRS-Based Agent Meta-Model for Multi-Agent Simulation Using the DEVS Framework.” ADS '14 Proceedings of the 2014 Symposium on Agent Directed Simulation, April 2014, Article No. 1, 8 pages. doi: (not provided)
Abstract: This paper presents a composable cognitive agent meta-model for multi-agent simulation based on the DEVS (Discrete Event System Specification) framework. We describe an attempt to compose a PRS-based cognitive agent by merely combining "plug and play" DEVS components, show how this DEVS-based cognitive agent meta-model can be extended to serve as a higher-level component for M&S of multi-agent systems, and show how the agent meta-model components can be reused to ease the development of cognitive agent models. In addition to an overview of our agent meta-model, we also describe the components of the model specification and services in detail. To test the feasibility of our design, we constructed a simulation based on a Rock-Paper-Scissors game scenario. We also provide comparisons between this agent meta-model and other cognitive agent models. Our agent meta-model is novel in that both the agent and the agent components are abstracted using the DEVS formalism. Because different implementations of agent model components are based on the same meta-model components, all the developed agent model components can be reused in the development of other agent models, which increases the composability of the agent model; moreover, the whole cognitive agent model can be considered a coupled model in the DEVS model hierarchy, which supports multi-hierarchy modelling.
Keywords: DEVS, PRS, agent model, cognitive architecture, composability (ID#: 15-5831)
URL: http://dl.acm.org/citation.cfm?id=2665049.2665050


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Computing Theory and Security Metrics, 2014

 

 
SoS Logo

Computing Theory and Security Metrics, 2014


The works cited here combine research into computing theory with research into security metrics.  All were presented in 2014.


George Cybenko, Jeff Hughes. “No Free Lunch in Cyber Security.” MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, vol., no., pp. 1, 12. doi:10.1145/2663474.2663475
Abstract: Confidentiality, integrity and availability (CIA) are traditionally considered to be the three core goals of cyber security. By developing probabilistic models of these security goals we show that:
•    the CIA goals are actually specific operating points in a continuum of possible mission security requirements;
•    component diversity, including certain types of Moving Target Defenses, versus component hardening as security strategies can be quantitatively evaluated;
•    approaches for diversity can be formalized into a rigorous taxonomy.
Such considerations are particularly relevant for so-called Moving Target Defense (MTD) approaches, which seek to adapt or randomize computer resources in a way that delays or defeats attackers. In particular, we explore tradeoffs between confidentiality and availability in such systems that suggest improvements.
Keywords: availability; confidentiality; diversity; formal models; integrity; moving targets; security metrics (ID#: 15-5796)
URL:  http://doi.acm.org/10.1145/2663474.2663475

 

Benjamin D. Rodes, John C. Knight, Kimberly S. Wasson. “A Security Metric Based on Security Arguments.” WETSoM 2014 Proceedings of the 5th International Workshop on Emerging Trends in Software Metrics, June 2014, vol., no., pp. 66, 72. doi:10.1145/2593868.2593880
Abstract: Software security metrics that facilitate decision making at the enterprise design and operations levels are a topic of active research and debate. These metrics are desirable to support deployment decisions, upgrade decisions, and so on; however, no single metric or set of metrics is known to provide universally effective and appropriate measurements. Instead, engineers must choose, for each software system, what to measure, how and how much to measure, and must be able to justify the rationale for how these measurements are mapped to stakeholder security goals. An assurance argument for security (i.e., a security argument) provides comprehensive documentation of all evidence and rationales for justifying belief in a security claim about a software system. In this work, we motivate the need for security arguments to facilitate meaningful and comprehensive security metrics, and present a novel framework for assessing security arguments to generate and interpret security metrics.
Keywords: assurance case; confidence; security metrics (ID#: 15-5797)
URL: http://doi.acm.org/10.1145/2593868.2593880

 

Gaofeng Da, Maochao Xu, Shouhuai Xu. “A New Approach to Modeling and Analyzing Security of Networked Systems.” HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 6. doi:10.1145/2600176.2600184
Abstract: Modeling and analyzing security of networked systems is an important problem in the emerging Science of Security and has been under active investigation. In this paper, we propose a new approach towards tackling the problem. Our approach is inspired by the shock model and random environment techniques in the Theory of Reliability, while accommodating security ingredients. To the best of our knowledge, our model is the first that can accommodate a certain degree of adaptiveness of attacks, which substantially weakens the often-made independence and exponential attack inter-arrival time assumptions. The approach leads to a stochastic process model with two security metrics, and we attain some analytic results in terms of the security metrics.
Keywords: security analysis; security metrics; security modeling (ID#: 15-5798)
URL:  http://doi.acm.org/10.1145/2600176.2600184

 

Steven Noel, Sushil Jajodia. “Metrics Suite for Network Attack Graph Analytics.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, vol., no., pp. 5, 8. doi:10.1145/2602087.2602117
Abstract: We describe a suite of metrics for measuring network-wide cyber security risk based on a model of multi-step attack vulnerability (attack graphs). Our metrics are grouped into families, with family-level metrics combined into an overall metric for network vulnerability risk. The Victimization family measures risk in terms of key attributes of risk across all known network vulnerabilities. The Size family is an indication of the relative size of the attack graph. The Containment family measures risk in terms of minimizing vulnerability exposure across protection boundaries. The Topology family measures risk through graph theoretic properties (connectivity, cycles, and depth) of the attack graph. We display these metrics (at the individual, family, and overall levels) in interactive visualizations, showing multiple metrics trends over time.
Keywords: attack graphs; security metrics; topological vulnerability analysis (ID#: 15-5799)
URL:   http://doi.acm.org/10.1145/2602087.2602117
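
A toy sketch of how metrics in the Size and Topology families could be computed over an attack graph using networkx (the specific graph and the metric formulas below are our illustrative assumptions, not the authors' exact definitions):

    import networkx as nx

    # Hypothetical attack graph: nodes are attacker states, edges are exploits.
    g = nx.DiGraph()
    g.add_edges_from([("entry", "web"), ("web", "db"), ("web", "admin"),
                      ("admin", "db"), ("db", "fileserver")])

    size = g.number_of_nodes() + g.number_of_edges()        # Size family
    cycles = len(list(nx.simple_cycles(g)))                 # Topology: cycles
    components = nx.number_weakly_connected_components(g)   # Topology: connectivity
    depth = (nx.dag_longest_path_length(g)
             if nx.is_directed_acyclic_graph(g) else None)  # Topology: depth

    print(size, cycles, components, depth)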

 

Shittu, R.; Healing, A.; Ghanea-Hercock, R.; Bloomfield, R.; Muttukrishnan, R., “OutMet: A New Metric for Prioritising Intrusion Alerts Using Correlation and Outlier Analysis,” Local Computer Networks (LCN), 2014 IEEE 39th Conference on, vol., no., pp. 322, 330, 8-11 Sept. 2014. doi:10.1109/LCN.2014.6925787
Abstract: In a medium sized network, an Intrusion Detection System (IDS) could produce thousands of alerts a day, many of which may be false positives. In the vast number of triggered intrusion alerts, identifying those to prioritise is highly challenging. Alert correlation and prioritisation are both viable analytical methods which are commonly used to understand and prioritise alerts. However, to the authors' knowledge, very few dynamic prioritisation metrics exist. In this paper, a new prioritisation metric - OutMet - which is based on measuring the degree to which an alert belongs to anomalous behaviour, is proposed. OutMet combines alert correlation and prioritisation analysis. We illustrate the effectiveness of OutMet by testing its ability to prioritise alerts generated from a 2012 red-team cyber-range experiment that was carried out as part of the BT Saturn programme. In one of the scenarios, OutMet significantly reduced the false positives by 99.3%.
Keywords: computer network security; correlation methods; graph theory; BT Saturn programme; IDS; OutMet; alert correlation and prioritisation analysis; correlation analysis; dynamic prioritisation metrics; intrusion alerts; intrusion detection system; medium sized network; outlier analysis; red-team cyber-range experiment; Cities and towns; Complexity theory; Context; Correlation; Educational institutions; IP networks; Measurement; Alert Correlation; Attack Scenario; Graph Mining; IDS Logs; Intrusion Alert Analysis; Intrusion Detection; Pattern Detection (ID#: 15-5800)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6925787&isnumber=6925725

 

Desouky, A.F.; Beard, M.D.; Etzkorn, L.H., “A Qualitative Analysis of Code Clones and Object Oriented Runtime Complexity Based on Method Access Points,” Convergence of Technology (I2CT), 2014 International Conference for, vol., no., pp. 1, 5, 6-8 April 2014. doi:10.1109/I2CT.2014.7092292
Abstract: In this paper, we present a new object oriented complexity metric based on runtime method access points. Software engineering metrics have traditionally indicated the level of quality present in a software system. However, the analysis and measurement of quality has long been captured at compile time, rendering useful results, although potentially incomplete, since all source code is considered in metric computation, versus the subset of code that actually executes. In this study, we examine the runtime behavior of our proposed metric on an open source software package, Rhino 1.7R4. We compute and validate our metric by correlating it with code clones and bug data. Code clones are considered to make software more complex and harder to maintain. When cloned, a code fragment with an error quickly transforms into two (or more) errors, both of which can affect the software system in unique ways. Thus a larger number of code clones is generally considered to indicate poorer software quality. For this reason, we consider that clones function as an external quality factor, in addition to bugs, for metric validation.
Keywords: object-oriented programming; program verification; public domain software; security of data; software metrics; software quality; source code (software); Rhino 1.7R4; bug data; code clones; metric computation; metric validation; object oriented runtime complexity; open source software package; qualitative analysis; runtime method access points; software engineering metrics; software quality; source code; Cloning; Complexity theory; Computer bugs; Correlation; Measurement; Runtime; Software; Code Clones; Complexity; Object Behavior; Object Oriented Runtime Metrics; Software Engineering (ID#: 15-5801)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092292&isnumber=7092013

 

Bhuyan, M.H.; Bhattacharyya, D.K.; Kalita, J.K., “Information Metrics for Low-Rate DDoS Attack Detection: A Comparative Evaluation,” Contemporary Computing (IC3), 2014 Seventh International Conference on, vol., no., pp. 80, 84, 7-9 Aug. 2014. doi:10.1109/IC3.2014.6897151
Abstract: Invasion by Distributed Denial of Service (DDoS) is a serious threat to services offered on the Internet. A low-rate DDoS attack allows legitimate network traffic to pass and consumes low bandwidth. So, detection of this type of attacks is very difficult in high speed networks. Information theory is popular because it allows quantifications of the difference between malicious traffic and legitimate traffic based on probability distributions. In this paper, we empirically evaluate several information metrics, namely, Hartley entropy, Shannon entropy, Renyi's entropy and Generalized entropy in their ability to detect low-rate DDoS attacks. These metrics can be used to describe characteristics of network traffic and an appropriate metric facilitates building an effective model to detect low-rate DDoS attacks. We use MIT Lincoln Laboratory and CAIDA DDoS datasets to illustrate the efficiency and effectiveness of each metric for detecting mainly low-rate DDoS attacks.
Keywords: Internet; computer network security; entropy; statistical distributions; CAIDA DDoS dataset; Hartley entropy; Internet; MIT Lincoln Laboratory dataset; Renyi entropy; Shannon entropy; distributed denial-of-service; generalized entropy; information metrics; information theory; low-rate DDoS attack detection; network traffic; probability distributions; Computer crime; Entropy; Floods; Information entropy; Measurement; Probability distribution; Telecommunication traffic; DDoS attack; entropy; information metric; low-rate; network traffic (ID#: 15-5802)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6897151&isnumber=6897132
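
A compact sketch of two of the information metrics the paper compares, computed over an empirical distribution of (for example) source-IP frequencies in a traffic window (variable names and the sample data are ours):

    import math
    from collections import Counter

    def shannon_entropy(counts):
        total = sum(counts)
        return -sum((c / total) * math.log2(c / total) for c in counts if c)

    def renyi_entropy(counts, alpha):
        # Renyi (generalized) entropy of order alpha; alpha -> 1 recovers Shannon.
        assert alpha > 0 and alpha != 1
        total = sum(counts)
        return math.log2(sum((c / total) ** alpha for c in counts)) / (1.0 - alpha)

    window = ["10.0.0.1", "10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.1"]
    counts = list(Counter(window).values())
    print(shannon_entropy(counts), renyi_entropy(counts, alpha=2.0))

Detectors of this kind flag windows whose entropy deviates sharply from a learned baseline of legitimate traffic.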

 

Bidi Ying; Makrakis, D., “Protecting Location Privacy with Clustering Anonymization in Vehicular Networks,” Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, vol., no., pp. 305, 310, April 27 2014 - May 2 2014. doi:10.1109/INFCOMW.2014.6849249
Abstract: Location privacy is an important issue in location-based services. A large number of location cloaking algorithms have been proposed for protecting the location privacy of users. However, these algorithms cannot be used in vehicular networks due to constrained vehicular mobility. In this paper, we propose a new method named Protecting Location Privacy with Clustering Anonymization (PLPCA) for location-based services in vehicular networks. The PLPCA algorithm starts by transforming a road network into an edge-cluster graph in order to conceal road information and traffic information, and then provides a cloaking algorithm based on k-anonymity and l-diversity as privacy metrics to further conceal a target vehicle's location. Simulation analysis shows that PLPCA performs well, effectively hiding both road and traffic information.
Keywords: data privacy; graph theory; mobility management (mobile radio); pattern clustering; telecommunication security; vehicular ad hoc networks; PLPCA algorithm; edge-cluster graph; k-anonymity; l-diversity; location based service; location cloaking algorithm; protecting location privacy with clustering anonymization; road information hiding; road network transforming; traffic information hiding; vehicular ad hoc network; vehicular mobility; Clustering algorithms; Conferences; Privacy; Roads; Social network services; Vehicle dynamics; Vehicles; cluster; location privacy; location-based services; vehicular networks (ID#: 15-5803)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849249&isnumber=6849127

 

Ateser, M.; Tanriover, O., “Investigation of the COBIT Framework's Input-Output Relationships by Using Graph Metrics,” Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, vol., no., pp. 1269, 1275, 7-10 Sept. 2014. doi:10.15439/2014F178
Abstract: Information technology (IT) governance initiatives are complex, time consuming and resource intensive. COBIT (Control Objectives for Information and Related Technology) provides an IT governance framework and supporting toolset to help an organization ensure alignment between its use of information technology and its business goals. This paper presents an investigation of the relationships between COBIT processes and their inputs/outputs using graph analysis. Examining these relationships provides a deeper understanding of the COBIT structure and may guide IT governance implementation, audit plans, and initiatives. Graph metrics are used to identify the most influential/sensitive processes and their relative importance for a given context. Hence, the analysis presented provides guidance to decision makers developing improvement programs, audits, and possibly maturity assessments based on the COBIT framework.
Keywords: DP management; business data processing; graph theory; COBIT framework inputs-outputs relationships; Control Objectives for Information Related Technology; IT governance framework; business goals; graph analysis; graph metrics; Guidelines; Information technology; Measurement; Monitoring; Organizations; Portfolios (ID#: 15-5804)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933164&isnumber=6932982

 

Bou-Harb, E.; Debbabi, M.; Assi, C., “Behavioral Analytics for Inferring Large-Scale Orchestrated Probing Events,” Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, vol., no., pp. 506, 511, April 27 2014 – May 2 2014. doi:10.1109/INFCOMW.2014.6849283
Abstract: The significant dependence on cyberspace has indeed brought new risks that often compromise, exploit and damage invaluable data and systems. Thus, the capability to proactively infer malicious activities is of paramount importance. In this context, inferring probing events, which are commonly the first stage of any cyber attack, renders a promising tactic to achieve that task. For the past three years, we have been receiving 12 GB of daily malicious real darknet data (i.e., Internet traffic destined to half a million routable yet unallocated IP addresses) from more than 12 countries. This paper exploits such data to propose a novel approach that aims at capturing the behavior of the probing sources in an attempt to infer their orchestration (i.e., coordination) pattern. The latter defines a recently discovered characteristic of a new phenomenon of probing events that could be ominously leveraged to cause drastic Internet-wide and enterprise impacts as precursors of various cyber attacks. To accomplish its goals, the proposed approach leverages various signal and statistical techniques, information theoretical metrics, fuzzy approaches with real malware traffic and data mining methods. The approach is validated through one use case that arguably proves that a previously analyzed orchestrated probing event from last year is indeed still active, yet operating in a stealthy, very low rate mode. We envision that the proposed approach, which is tailored towards darknet data that is frequently, abundantly and effectively used to generate cyber threat intelligence, could be used by network security analysts, emergency response teams and/or observers of cyber events to infer large-scale orchestrated probing events for early cyber attack warning and notification.
Keywords: IP networks; Internet; computer network security; data mining; fuzzy set theory; information theory; invasive software; statistical analysis; telecommunication traffic; Internet traffic; coordination pattern; cyber attack; cyber threat intelligence; cyberspace; data mining methods; early cyber attack notification; early cyber attack warning; emergency response teams; fuzzy approaches; information theoretical metrics; large-scale orchestrated probing events; malicious activities; malicious real darknet data; malware traffic; network security analysts; orchestration pattern; routable unallocated IP addresses; signal techniques; statistical techniques; Conferences; IP networks; Internet; Malware; Probes (ID#: 15-5805)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849283&isnumber=6849127

 

Keramati, M.; Keramati, M., “Novel Security Metrics for Ranking Vulnerabilities in Computer Networks,” Telecommunications (IST), 2014 7th International Symposium on, vol., no., pp. 883, 888, 9-11 Sept. 2014. doi:10.1109/ISTEL.2014.7000828
Abstract: With the daily increase in the appearance of vulnerabilities and the various ways of intruding into networks, network hardening has become one of the most important fields in network security, and it is achieved by patching vulnerabilities. But patching all vulnerabilities may impose high costs on the network, so we should try to eliminate only the most perilous vulnerabilities. CVSS can score vulnerabilities based on the amount of damage they incur in the network, but the main problem with CVSS is that it can only score individual vulnerabilities, without considering their relationships with the other vulnerabilities of the network. To help fill this gap, in this paper we define some attack-graph- and CVSS-based security metrics that can help us prioritize the vulnerabilities in the network by measuring the probability of exploiting them and the amount of damage they will impose on the network. The proposed security metrics are defined by considering the interaction between all vulnerabilities of the network, so our method can rank vulnerabilities based on the network in which they exist. Results of applying these security metrics to one well-known example network are also shown, demonstrating the effectiveness of our approach.
Keywords: computer network security; matrix algebra; probability; CVSS-based security metrics; common vulnerability scoring system; computer network; intruding network security; probability; ranking vulnerability; Availability; Communication networks; Complexity theory; Computer networks; Educational institutions; Measurement; Security; Attack Graph; CVSS; Exploit; Network hardening; Security Metric; Vulnerability (ID#: 15-5806)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7000828&isnumber=7000650
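
The path-based scoring idea above can be made concrete with a small illustration. The following is a minimal Python sketch, not the paper's actual metrics: it assumes a hypothetical attack graph given as a list of paths, approximates each vulnerability's exploit probability as its (invented) CVSS base score divided by 10, and ranks each vulnerability by the most probable attack path through it.

    # Toy attack-graph-aware vulnerability ranking; scores and paths are invented.
    cvss = {"v1": 9.8, "v2": 6.5, "v3": 7.2}
    attack_paths = [["v1", "v3"], ["v2"], ["v1", "v2", "v3"]]

    def path_probability(path):
        # Exploiting a path requires exploiting every vulnerability on it.
        p = 1.0
        for v in path:
            p *= cvss[v] / 10.0
        return p

    def rank_vulnerabilities():
        score = {}
        for path in attack_paths:
            p = path_probability(path)
            for v in path:
                score[v] = max(score.get(v, 0.0), p)
        return sorted(score.items(), key=lambda kv: -kv[1])

    print(rank_vulnerabilities())   # most perilous vulnerabilities first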


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Computing Theory and Security Resilience, 2014

 

 
SoS Logo

Computing Theory and Security Resilience

2014


The works cited here combine research into computing theory with research into security resilience. All were presented in 2014.


Praks, P.; Kopustinskas, V., “Monte-Carlo Based Reliability Modelling of a Gas Network Using Graph Theory Approach,” Availability, Reliability and Security (ARES), 2014 Ninth International Conference on, vol., no., pp. 380, 386, 8-12 Sept. 2014. doi:10.1109/ARES.2014.57
Abstract: The aim of the study is to develop a European gas transmission system probabilistic model to analyse, in a single computer model, the reliability and capacity constraints of a gas transmission network. We describe our approach to modelling the reliability and capacity constraints of network elements, for example gas storages and compressor stations, by a multi-state system. The paper presents our experience with the computer implementation of a gas transmission network probabilistic prototype model based on a generalization of the maximum flow problem for a stochastic-flow network in which elements can randomly fail with known failure probabilities. The paper includes a test-case benchmark study, which is based on a real gas transmission network. Monte-Carlo simulations are used for estimating the probability that less than the demanded volume of the commodity (for example, gas) is available in the selected network nodes. Simulated results are presented and analysed in depth by statistical methods.
Keywords: Monte Carlo methods; compressors; gas industry; graph theory; probability; reliability; stochastic processes; European gas transmission system probabilistic model ;Monte-Carlo based reliability modelling; Monte-Carlo simulations; capacity constraints; compressor stations; computer model; gas network; gas storages; gas transmission network probabilistic prototype model; graph theory approach; known failure probabilities; maximum flow problem; multistate system; network elements; network nodes; probability estimation; reliability constraints; statistical methods; stochastic-flow network; test-case benchmark study; Computational modeling; Computer network reliability; Liquefied natural gas; Monte Carlo methods; Pipelines; Probabilistic logic; Reliability; Monte-Carlo methods; gas transmission network modelling; network reliability; network resilience (ID#: 15-5807)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980306&isnumber=6980232
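
The core loop the abstract describes (sample random element failures, solve a maximum-flow problem on the surviving network, and count shortfalls) is easy to illustrate. Below is a minimal Python sketch assuming the networkx library and an invented four-node pipeline with made-up capacities and failure probabilities; the paper's multi-state model of storages and compressors is not reproduced.

    import random
    import networkx as nx

    # Each element: (from, to, capacity, failure probability) -- all invented.
    edges = [("src", "a", 10, 0.02), ("a", "b", 8, 0.05),
             ("src", "b", 5, 0.01), ("b", "sink", 12, 0.03)]
    demand, trials, shortfalls = 10, 10000, 0

    for _ in range(trials):
        G = nx.DiGraph()
        for u, v, cap, p_fail in edges:
            if random.random() >= p_fail:      # element survives this trial
                G.add_edge(u, v, capacity=cap)
        flow = (nx.maximum_flow_value(G, "src", "sink")
                if G.has_node("src") and G.has_node("sink") else 0)
        if flow < demand:
            shortfalls += 1

    print("P(supply < demand) ≈", shortfalls / trials)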

 

T. Stepanova, D. Zegzhda. “Applying Large-scale Adaptive Graphs to Modeling Internet of Things Security.” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 479. doi:10.1145/2659651.2659696
Abstract: Many upcoming IT trends are based on the concept of heterogeneous networks; the Internet of Things is among them. Modern heterogeneous networks are characterized by hardly predictable behavior, hundreds of parameters of network nodes and connections, and the lack of a single basis for the development of control methods and algorithms. To overcome the listed problems, one needs to implement topological modeling of dynamically changing structures. In this paper the authors propose a basic theoretical framework that will allow estimation of controllability, resiliency, scalability and other determinant parameters of complex heterogeneous networks.
Keywords: internet of things, large-scale adaptive graph, security modeling, sustainability (ID#: 15-5808)
URL:  http://doi.acm.org/10.1145/2659651.2659696

 

Xing Chen, Wei Yu, David Griffith, Nada Golmie, Guobin Xu. “On Cascading Failures and Countermeasures Based on Energy Storage in the Smart Grid.” RACS '14 Proceedings of the 2014 Conference on Research in Adaptive and Convergent Systems, October 2014, Pages 291-296. doi:10.1145/2663761.2663770
Abstract: Recently, there have been growing concerns about electric power grid security and resilience. The performance of the power grid may suffer from component failures or targeted attacks. A sophisticated adversary may target critical components in the grid, leading to cascading failures and large blackouts. To this end, this paper begins with identifying the most critical components that lead to cascading failures in the grid and then presents a defensive mechanism using energy storage to defend against cascading failures. Based on the optimal power flow control on the standard IEEE power system test cases, we systematically assess component significance, simulate attacks against power grid components, and evaluate the consequences. We also conduct extensive simulations to investigate the effectiveness of deploying Energy Storage Systems (ESSs), in terms of storage capacity and deployment locations, to mitigate cascading failures. Through extensive simulations, our data shows that integrating energy storage systems into the smart grid can efficiently mitigate cascading failures.
Keywords: cascading failure, cascading mitigation, energy storage, smart grid (ID#: 15-5809)
URL: http://doi.acm.org/10.1145/2663761.2663770
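
For intuition about how one line failure can propagate, the toy model below substitutes a well-known overload-cascade abstraction (Motter-Lai style, with edge betweenness standing in for electrical flow) for the paper's optimal-power-flow simulation on IEEE test cases; the graph, tolerance parameter and networkx usage are all illustrative assumptions.

    import networkx as nx

    def cascade_size(G, first_failure, alpha=1.2):
        # Capacity = alpha * initial load; betweenness is a crude flow proxy.
        cap = {frozenset(e): alpha * load
               for e, load in nx.edge_betweenness_centrality(G).items()}
        H = G.copy()
        H.remove_edge(*first_failure)
        failed = 1
        while True:
            load = nx.edge_betweenness_centrality(H)
            over = [e for e, l in load.items() if l > cap[frozenset(e)]]
            if not over:
                return failed
            H.remove_edges_from(over)          # overloaded lines trip too
            failed += len(over)

    G = nx.barabasi_albert_graph(30, 2, seed=1)
    worst = max(G.edges(), key=lambda e: cascade_size(G, e))
    print("most critical line in this toy grid:", worst)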

 

Gokce Gorbil, Omer H. Abdelrahman, Erol Gelenbe. “Storms in Mobile Networks.” Q2SWinet '14 Proceedings of the 10th ACM Symposium on QoS and Security for Wireless and Mobile Networks, September 2014, Pages 119-126. doi:10.1145/2642687.2642688
Abstract: Mobile networks are vulnerable to signalling attacks and storms caused by traffic that overloads the control plane through excessive signalling, which can be introduced via malware and mobile botnets. With the advent of machine-to-machine (M2M) communications over mobile networks, the potential for signalling storms increases due to the normally periodic nature of M2M traffic and the sheer number of communicating nodes. Several mobile network operators have also experienced signalling storms due to poorly designed applications that result in service outage. The radio resource control (RRC) protocol is particularly susceptible to such attacks, motivating this work within the EU FP7 NEMESYS project, which presents simulations that clarify the temporal dynamics of user behavior and signalling, allowing us to suggest how such attacks can be detected and mitigated.
Keywords: 3G to 5G, malware, network attacks, network simulation, performance analysis, signalling storms, umts networks (ID#: 15-5810)
URL: http://doi.acm.org/10.1145/2642687.2642688

 

Lina Perelman, Saurabh Amin. “A Network Interdiction Model for Analyzing the Vulnerability of Water Distribution Systems.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 135-144. doi:10.1145/2566468.2566480
Abstract: This article presents a network interdiction model to assess the vulnerabilities of a class of physical flow networks. A flow network is modeled by a potential function defined over the nodes and a flow function defined over arcs (links). In particular, the difference in potential function between two nodes is characterized by a nonlinear flux function of the flow on link between the two nodes. To assess the vulnerability of the network to adversarial attack, the problem is formulated as an attacker-defender network interdiction model. The attacker's objective is to interdict the most valuable links of the network given his resource constraints. The defender's objective is to minimize power loss and the unmet demand in the network. A bi-level approach is explored to identify most critical links for network interdiction. The applicability of the proposed approach is demonstrated on a reference water distribution network, and its utility toward developing mitigation plans is discussed.
Keywords: cyber-physical systems, network flow analysis, network interdiction, vulnerability assessment, water distribution systems (ID#: 15-5811)
URL: http://doi.acm.org/10.1145/2566468.2566480
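
The inner interdiction step (finding the links whose loss hurts deliverable flow the most) can be sketched for the simplest linear case. The Python fragment below assumes networkx and an invented five-link network; the paper's actual model is a bi-level formulation with nonlinear flux functions, which this greedy single-link ranking does not capture.

    import networkx as nx

    # Hypothetical network: source "s", demand node "t", capacities invented.
    G = nx.DiGraph()
    G.add_edge("s", "a", capacity=15); G.add_edge("s", "b", capacity=10)
    G.add_edge("a", "t", capacity=10); G.add_edge("b", "t", capacity=10)
    G.add_edge("a", "b", capacity=5)

    base = nx.maximum_flow_value(G, "s", "t")
    impact = {}
    for u, v in list(G.edges()):
        cap = G[u][v]["capacity"]
        G.remove_edge(u, v)                    # interdict one link at a time
        impact[(u, v)] = base - nx.maximum_flow_value(G, "s", "t")
        G.add_edge(u, v, capacity=cap)         # restore it

    print(sorted(impact.items(), key=lambda kv: -kv[1]))  # most critical first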

 

Radoslav Ivanov, Miroslav Pajic, Insup Lee. “Resilient Multidimensional Sensor Fusion Using Measurement History.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 1-10. doi:10.1145/2566468.2566475
Abstract: This work considers the problem of performing resilient sensor fusion using past sensor measurements. In particular, we consider a system with n sensors measuring the same physical variable where some sensors might be attacked or faulty. We consider a setup in which each sensor provides the controller with a set of possible values for the true value. Here, more precise sensors provide smaller sets. Since a lot of modern sensors provide multidimensional measurements (e.g. position in three dimensions), the sets considered in this work are multidimensional polyhedra. Given the assumption that some sensors can be attacked or faulty, the paper provides a sensor fusion algorithm that obtains a fusion polyhedron which is guaranteed to contain the true value and is minimal in size. A bound on the volume of the fusion polyhedron is also proved based on the number of faulty or attacked sensors. In addition, we incorporate system dynamics in order to utilize past measurements and further reduce the size of the fusion polyhedron. We describe several ways of mapping previous measurements to current time and compare them, under different assumptions, using the volume of the fusion polyhedron. Finally, we illustrate the implementation of the best of these methods and show its effectiveness using a case study with sensor values from a real robot.
Keywords: cps security, fault-tolerance, fault-tolerant algorithms, sensor fusion (ID#: 15-5812)
URL: http://doi.acm.org/10.1145/2566468.2566475
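
In one dimension the paper's polyhedra reduce to intervals, and the fusion idea can be illustrated with Marzullo-style interval intersection: the fused interval is the smallest one containing every point that at least n - f sensors agree on, so it is guaranteed to contain the true value. A minimal Python sketch, with invented sensor readings:

    def fuse_intervals(intervals, f):
        # intervals: list of (lo, hi) per sensor; at most f may be faulty.
        thresh = len(intervals) - f
        events = []
        for lo, hi in intervals:
            events.append((lo, 1))
            events.append((hi, -1))
        events.sort(key=lambda e: (e[0], -e[1]))   # starts before ends at ties
        count, fused_lo, fused_hi = 0, None, None
        for x, d in events:
            prev, count = count, count + d
            if d == 1 and prev < thresh <= count and fused_lo is None:
                fused_lo = x                        # first point with n-f support
            if d == -1 and count < thresh <= prev:
                fused_hi = x                        # last point with n-f support
        return fused_lo, fused_hi

    # Three sensors, at most one faulty: the outlier cannot drag the result away.
    print(fuse_intervals([(1.0, 3.0), (1.5, 3.5), (8.0, 9.0)], f=1))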

 

Marina Krotofil, Alvaro A. Cárdenas, Bradley Manning, Jason Larsen. “CPS: Driving Cyber-Physical Systems to Unsafe Operating Conditions by Timing DoS Attacks on Sensor Signals.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 146-155. doi:10.1145/2664243.2664290
Abstract: DoS attacks on sensor measurements used for industrial control can cause the controller of the process to use stale data. If the DoS attack is not timed properly, the use of stale data by the controller will have limited impact on the process; however, if the attacker is able to launch the DoS attack at the correct time, the use of stale data can cause the controller to drive the system to an unsafe state. Understanding the timing parameters of the physical processes not only allows an attacker to construct a successful attack but also to maximize its impact (damage to the system). In this paper we use the Tennessee Eastman challenge process to study an attacker that has to identify (in realtime) the optimal timing to launch a DoS attack. The choice of time to begin an attack is forward-looking, requiring the attacker to consider each opportunity against the possibility of a better opportunity in the future, and this lends itself to the theory of optimal stopping problems. In particular we study the applicability of the Best Choice Problem (also known as the Secretary Problem), quickest change detection, and statistical process outliers. Our analysis can be used to identify specific sensor measurements that need to be protected, and the time that security or safety teams require to respond to attacks before they cause major damage.
Keywords: CUSUM, DoS attacks, Tennessee eastman process, cyber-physical systems, optimal stopping problems (ID#: 15-5813)
URL: http://doi.acm.org/10.1145/2664243.2664290
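
The Best Choice (Secretary) Problem the authors apply has a classic closed-form strategy: observe the first n/e opportunities, then stop at the first one that beats everything seen so far. The sketch below illustrates that textbook rule with invented impact values; it is not the paper's full real-time analysis.

    import math, random

    def best_choice_attack(impacts):
        # Calibration phase: watch the first n/e opportunities without acting.
        k = max(1, round(len(impacts) / math.e))
        benchmark = max(impacts[:k])
        for t, value in enumerate(impacts[k:], start=k):
            if value > benchmark:
                return t, value                # launch the DoS at this instant
        return len(impacts) - 1, impacts[-1]   # forced to take the last chance

    impacts = [random.random() for _ in range(100)]   # hypothetical damage values
    print(best_choice_attack(impacts))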

 

Ran Gelles, Amit Sahai, Akshay Wadia. “Private Interactive Communication Across an Adversarial Channel.” ITCS '14 Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, January 2014, Pages 135-144. doi:10.1145/2554797.2554812
Abstract: Consider two parties Alice and Bob, who hold private inputs x and y, and wish to compute a function f(x, y) privately in the information theoretic sense; that is, each party should learn nothing beyond f(x, y). However, the communication channel available to them is noisy. This means that the channel can introduce errors in the transmission between the two parties. Moreover, the channel is adversarial in the sense that it knows the protocol that Alice and Bob are running, and maliciously introduces errors to disrupt the communication, subject to some bound on the total number of errors. A fundamental question in this setting is to design a protocol that remains private in the presence of a large number of errors. If Alice and Bob are only interested in computing f(x, y) correctly, and not privately, then quite robust protocols are known that can tolerate a constant fraction of errors. However, none of these solutions is applicable in the setting of privacy, as they inherently leak information about the parties' inputs. This leads to the question of whether we can simultaneously achieve privacy and error-resilience against a constant fraction of errors. We show that privacy and error-resilience are contradictory goals. In particular, we show that for every constant c > 0, there exists a function f which is privately computable in the error-less setting, but for which no private and correct protocol is resilient against a c-fraction of errors. The same impossibility holds also for sub-constant noise rates, e.g., when c is exponentially small (as a function of the input size).
Keywords: adversarial noise, coding, information-theoretic security, interactive communication, private function evaluation (ID#: 15-5814)
URL:  http://doi.acm.org/10.1145/2554797.2554812

 

Saleh Soltan, Dorian Mazauric, Gil Zussman. “Cascading Failures in Power Grids: Analysis and Algorithms.” e-Energy '14 Proceedings of the 5th International Conference on Future Energy Systems, June 2014, Pages 195-206.  doi:10.1145/2602044.2602066
Abstract: This paper focuses on cascading line failures in the transmission system of the power grid. Recent large-scale power outages demonstrated the limitations of percolation- and epidemic-based tools in modeling cascades. Hence, we study cascades by using computational tools and a linearized power flow model. We first obtain results regarding the Moore-Penrose pseudo-inverse of the power grid admittance matrix. Based on these results, we study the impact of a single line failure on the flows on other lines. We also illustrate via simulation the impact of the distance and resistance distance on the flow increase following a failure, and discuss the difference from the epidemic models. We use the pseudo-inverse of admittance matrix to develop an efficient algorithm to identify the cascading failure evolution, which can be a building block for cascade mitigation. Finally, we show that finding the set of lines whose removal results in the minimum yield (the fraction of demand satisfied after the cascade) is NP-Hard and introduce a simple heuristic for finding such a set. Overall, the results demonstrate that using the resistance distance and the pseudo-inverse of admittance matrix provides important insights and can support the development of efficient algorithms.
Keywords: algorithms, cascading failures, power grid, pseudo-inverse (ID#: 15-5815)
URL: http://doi.acm.org/10.1145/2602044.2602066
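
The pseudo-inverse building block is compact enough to show directly. The numpy sketch below computes DC-power-flow line flows from the Moore-Penrose pseudo-inverse of a unit-reactance admittance (Laplacian) matrix on an invented four-bus system, then recomputes them after one line fails to show the redistribution; the quantity is the one the paper builds on, but the example system is made up.

    import numpy as np

    def line_flows(lines, p, n):
        L = np.zeros((n, n))                   # Laplacian / admittance matrix
        for i, j in lines:
            L[i, i] += 1; L[j, j] += 1
            L[i, j] -= 1; L[j, i] -= 1
        theta = np.linalg.pinv(L) @ p          # phase angles via pseudo-inverse
        return {(i, j): theta[i] - theta[j] for i, j in lines}

    lines = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]   # unit-reactance lines
    p = np.array([1.0, 0.5, -0.5, -1.0])               # injections sum to zero

    print("before failure:", line_flows(lines, p, 4))
    print("after (1,3) fails:",
          line_flows([e for e in lines if e != (1, 3)], p, 4))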

 

Mahdi Zamani, Mahnush Movahedi. “Secure Location Sharing.”  FOMC '14, Proceedings of the 10th ACM International Workshop on Foundations of Mobile Computing, August 2014, Pages 1-10. doi:10.1145/2634274.2634281
Abstract: In the last decade, the number of location-aware mobile devices has mushroomed. As location-based services grow fast, they raise many questions and challenges when it comes to privacy. For example, who owns the location data and for what purpose is the data used? To answer these questions, we need new tools for location privacy. In this paper, we focus on the problem of secure location sharing, where a group of n clients want to collaborate with each other to anonymously share their location data with a location database server and execute queries based on them. To be more realistic, we assume up to a certain fraction of the clients are controlled arbitrarily by an active and computationally unbounded adversary. A relaxed version of this problem has already been studied in the literature assuming either a trusted third party or a weaker adversarial model. We alternatively propose a scalable fully-decentralized protocol for secure location sharing that tolerates up to n/6 statically-chosen malicious clients and does not require any trusted third party. We show that, unlike most other location-based services, our protocol is secure against traffic-analysis attacks. We also show that our protocol requires each client to send a polylogarithmic number of bits and compute a polylogarithmic number of operations (with respect to n) to query a point of interest based on its location.
Keywords: distributed algorithms, fault-tolerance, location-based services (ID#: 15-5816)
URL:   http://doi.acm.org/10.1145/2634274.2634281

 

Zain Shamsi, Ankur Nandwani, Derek Leonard, Dmitri Loguinov. “Hershel: Single-Packet OS Fingerprinting.” ACM SIGMETRICS Performance Evaluation Review, Volume 42, Issue 1, June 2014, Pages 195-206. doi:10.1145/2637364.2591972
Abstract: Traditional TCP/IP fingerprinting tools (e.g., nmap) are poorly suited for Internet-wide use due to the large amount of traffic and intrusive nature of the probes. This can be overcome by approaches that rely on a single SYN packet to elicit a vector of features from the remote server; however, these methods face difficult classification problems due to the high volatility of the features and severely limited amounts of information contained therein. Since these techniques have not been studied before, we first pioneer stochastic theory of single-packet OS fingerprinting, build a database of 116 OSes, design a classifier based on our models, evaluate its accuracy in simulations, and then perform OS classification of 37.8M hosts from an Internet-wide scan.
Keywords: internet measurement, os classification, os fingerprinting (ID#: 15-5817)
URL: http://doi.acm.org/10.1145/2637364.2591972

 

Heath J. LeBlanc, Firas Hassan. “Resilient Distributed Parameter Estimation in Heterogeneous Time-Varying Networks.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 19-28. doi:10.1145/2566468.2566476
Abstract: In this paper, we study a lightweight algorithm for distributed parameter estimation in a heterogeneous network in the presence of adversary nodes. All nodes interact under a local broadcast model of communication in a time-varying network comprised of many inexpensive normal nodes, along with several more expensive, reliable nodes. Either the normal or reliable nodes may be tampered with and overtaken by an adversary, thus becoming an adversary node. The reliable nodes have an accurate estimate of their true parameters, whereas the inexpensive normal nodes communicate and take difference measurements with neighbors in the network in order to better estimate their parameters. The normal nodes are unsure, a priori, about which of their neighbors are normal, reliable, or adversary nodes. However, by sharing information on their local estimates with neighbors, we prove that the resilient iterative distributed estimation (RIDE) algorithm, which utilizes redundancy by removing extreme information, is able to drive the local estimates to their true parameters as long as each normal node is able to interact with a sufficient number of reliable nodes often enough and is not directly influenced by too many adversary nodes.
Keywords: adversary, clock synchronization, distributed algorithm, distributed parameter estimation, localization, resilient systems (ID#: 15-5818)
URL: http://doi.acm.org/10.1145/2566468.2566476
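
The redundancy-by-trimming idea can be illustrated with a scalar update in the spirit of W-MSR-style resilient consensus; this is not the paper's exact RIDE rule, and the step size and sensor values below are invented.

    def trimmed_update(own, neighbor_estimates, F, step=0.5):
        # Discard the F largest and F smallest neighbor estimates, which is
        # where up to F adversaries must hide, then average the survivors.
        vals = sorted(neighbor_estimates)
        kept = vals[F:len(vals) - F]
        if not kept:                   # too few neighbors to trim safely
            return own
        return (1 - step) * own + step * sum(kept) / len(kept)

    # One adversary reporting an extreme value is simply trimmed away.
    print(trimmed_update(10.0, [9.5, 10.5, 11.0, 1000.0], F=1))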

 

Benoît Libert, Marc Joye, Moti Yung. “Born and Raised Distributively: Fully Distributed Non-Interactive Adaptively-Secure Threshold Signatures with Short Shares.” PODC '14 Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing, July 2014, Pages 303-312. doi:10.1145/2611462.2611498
Abstract: Threshold cryptography is a fundamental distributed computational paradigm for enhancing the availability and the security of cryptographic public-key schemes. It does so by dividing private keys into n shares handed out to distinct servers. In threshold signature schemes, a set of at least t+1 ≤ n servers is needed to produce a valid digital signature. Availability is assured by the fact that any subset of t+1 servers can produce a signature when authorized. At the same time, the scheme should remain robust (in the fault tolerance sense) and unforgeable (cryptographically) against up to t corrupted servers; i.e., it adds quorum control to traditional cryptographic services and introduces redundancy. Originally, most practical threshold signatures have a number of demerits: They have been analyzed in a static corruption model (where the set of corrupted servers is fixed at the very beginning of the attack), they require interaction, they assume a trusted dealer in the key generation phase (so that the system is not fully distributed), or they suffer from certain overheads in terms of storage (large share sizes). In this paper, we construct practical fully distributed (the private key is born distributed), non-interactive schemes — where the servers can compute their partial signatures without communication with other servers — with adaptive security (i.e., the adversary corrupts servers dynamically based on its full view of the history of the system). Our schemes are very efficient in terms of computation, communication, and scalable storage (with private key shares of size O(1), where certain solutions incur O(n) storage costs at each server). Unlike other adaptively secure schemes, our schemes are erasure-free (reliable erasure is a hard-to-assure and hard-to-administer property in actual systems). To the best of our knowledge, such a fully distributed highly constrained scheme has been an open problem in the area. In particular, and of special interest, is the fact that Pedersen's traditional distributed key generation (DKG) protocol can be safely employed in the initial key generation phase when the system is born — although it is well-known not to ensure uniformly distributed public keys. An advantage of this is that this protocol only takes one round optimistically (in the absence of faulty players).
Keywords: adaptive security, availability, distributed key generation, efficiency, erasure-free schemes, fault tolerance, fully distributed systems, non-interactivity, threshold signature schemes (ID#: 15-5819)
URL: http://doi.acm.org/10.1145/2611462.2611498
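
The share-based foundation underneath any threshold scheme is Shamir secret sharing, sketched below in Python: any t+1 of n servers can reconstruct the key, while t colluders learn nothing. The field modulus and parameters are illustrative, and none of the paper's actual contributions (adaptive security, non-interactivity, distributed key generation) appear in this toy.

    import random

    P = 2**127 - 1                                  # prime field modulus

    def deal(secret, t, n):
        # Degree-t polynomial with the secret as constant term.
        coeffs = [secret] + [random.randrange(P) for _ in range(t)]
        poly = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
        return [(x, poly(x)) for x in range(1, n + 1)]

    def reconstruct(shares):                        # Lagrange interpolation at 0
        secret = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = deal(123456789, t=2, n=5)
    print(reconstruct(shares[:3]))                  # any t+1 = 3 shares suffice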

 

Nathaniel Husted, Steven Myers. “Emergent Properties & Security: The Complexity of Security as a Science.” NSPW '14 Proceedings of the 2014 New Security Paradigms Workshop, September 2014, Pages 1-14. doi:10.1145/2683467.2683468
Abstract: The notion of emergent properties is becoming common place in the physical and social sciences, with applications in physics, chemistry, biology, medicine, economics, and sociology. Unfortunately, little attention has been given to the discussion of emergence in the realm of computer security, from either the attack or defense perspectives, despite there being examples of such attacks and defenses. We review the concept of emergence, discuss it in the context of computer security, argue that understanding such concepts is essential for securing our current and future systems, give examples of current attacks and defenses that make use of such concepts, and discuss the tools currently available to understand this field. We conclude by arguing that more focus needs to be given to the emergent perspective in information security, especially as we move forward to the Internet of Things and a world full of cyber-physical systems, as we believe many future attacks will make use of such ideas and defenses will require such insights.
Keywords: complex systems, information security, ubiquitous computing (ID#: 15-5820)
URL: http://doi.acm.org/10.1145/2683467.2683468

 

Minzhe Guo, Prabir Bhattacharya. “Diverse Virtual Replicas for Improving Intrusion Tolerance in Cloud.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 41-44. doi:10.1145/2602087.2602116
Abstract: Intrusion tolerance is important for services in the cloud to continue functioning while under attack. Byzantine fault-tolerant replication is considered a fundamental component of intrusion tolerant systems. However, a monoculture of replicas can render the theoretical properties of a Byzantine fault-tolerant system ineffective, even when proactive recovery techniques are employed. This paper exploits the design diversity available from off-the-shelf operating system products and studies how to diversify the configurations of virtual replicas for improving the resilience of the service in the presence of attacks. A game-theoretic model is proposed for studying the optimal diversification strategy for the system defender and an efficient algorithm is designed to approximate the optimal defense strategies in large games.
Keywords: diversity, intrusion tolerance, virtual replica (ID#: 15-5821)
URL: http://doi.acm.org/10.1145/2602087.2602116

 

Stjepan Picek, Bariş Ege, Lejla Batina, Domagoj Jakobovic, Łukasz Chmielewski, Marin Golub. “On Using Genetic Algorithms for Intrinsic Side-Channel Resistance: The Case of AES S-Box.” CS2 '14 Proceedings of the First Workshop on Cryptography and Security in Computing Systems, January 2014, Pages 13-18. doi:10.1145/2556315.2556319
Abstract: Finding balanced S-boxes with high nonlinearity and low transparency order is a difficult problem. The property of transparency order is important since it specifies the resilience of an S-box against differential power analysis. Better values for transparency order and hence improved side-channel security often imply less in terms of nonlinearity. Therefore, it is impossible to find an S-box with all optimal values. Currently, there are no algebraic procedures that can give the preferred and complete set of properties for an S-box. In this paper, we employ evolutionary algorithms to find S-boxes with desired cryptographic properties. Specifically, we conduct experiments for the 8×8 S-box case as used in the AES standard. The results of our experiments proved the feasibility of finding S-boxes with the desired properties in the case of AES. In addition, we show preliminary results of side-channel experiments on different versions of "improved" S-boxes.
Keywords: S-box, block ciphers, genetic algorithms, side-channel analysis, transparency order (ID#: 15-5822)
URL: http://doi.acm.org/10.1145/2556315.2556319
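
A skeleton of the evolutionary search is easy to sketch, though the hard part the paper addresses, a fitness function combining nonlinearity and transparency order, is only stubbed out here: the demo fitness merely penalizes fixed points and is an invented placeholder. Swap mutation is used because it preserves the bijection, keeping every candidate S-box balanced.

    import random

    def mutate(sbox):
        # Swapping two entries keeps the S-box a permutation of 0..255.
        s = sbox[:]
        a, b = random.sample(range(256), 2)
        s[a], s[b] = s[b], s[a]
        return s

    def evolve(fitness, pop_size=40, generations=500):
        pop = []
        for _ in range(pop_size):
            s = list(range(256))
            random.shuffle(s)
            pop.append(s)
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)    # elitist selection
            elite = pop[:pop_size // 2]
            pop = elite + [mutate(random.choice(elite))
                           for _ in range(pop_size - len(elite))]
        return max(pop, key=fitness)

    # Placeholder fitness; a real one would score nonlinearity and
    # (negated) transparency order, as in the paper.
    demo_fitness = lambda s: -sum(1 for i in range(256) if s[i] == i)
    best = evolve(demo_fitness)
    print("fixed points in best S-box:", -demo_fitness(best))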

 

Tua A. Tamba, M. D. Lemmon. “Forecasting the Resilience of Networked Dynamical Systems under Environmental Perturbation.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 61-62. doi:10.1145/2566468.2576848
Abstract: (not provided)
Keywords: distance-to-bifurcation, resilience, sum-of-square relaxation (ID#: 15-5823)
URL: http://doi.acm.org/10.1145/2566468.2576848

 

Marica Amadeo, Claudia Campolo, Antonella Molinaro. “Multi-source Data Retrieval in IoT via Named Data Networking.” ICN '14 Proceedings of the 1st International Conference on Information-Centric Networking, September 2014, Pages 67-76. doi:10.1145/2660129.2660148
Abstract: The new era of the Internet of Things (IoT) is driving the revolution in computing and communication technologies spanning every aspect of our lives. Thanks to its innovative concepts, such as named content, name-based routing and in-network caching, Named Data Networking (NDN) appears as a key enabling paradigm for IoT. Despite its potential, the support of IoT applications often requires some modifications in the NDN engine for a more efficient and effective exchange of packets. In this paper, we propose a baseline NDN framework for the support of reliable retrieval of data from different wireless producers which can answer the same Interest packet (e.g., a monitoring application collecting environmental data from sensors in a target area). The solution is evaluated through simulations in ndnSIM and the achieved results show that, by leveraging the concept of the exclude field and ad hoc defined schemes for Data suppression and collision avoidance, it leads to improved performance in terms of data collection time and network overhead.
Keywords: data retrieval, internet of things, named data networking, naming, transport (ID#: 15-5824)
URL: http://doi.acm.org/10.1145/2660129.2660148

 

Yibo Zhu, Xia Zhou, Zengbin Zhang, Lin Zhou, Amin Vahdat, Ben Y. Zhao, Haitao Zheng. “Cutting the Cord: A Robust Wireless Facilities Network for Data Centers.” MobiCom '14 Proceedings of the 20th Annual International Conference on Mobile Computing and Networking, September 2014, Pages 581-592. doi:10.1145/2639108.2639140
Abstract: Today's network control and management traffic are limited by their reliance on existing data networks. Fate sharing in this context is highly undesirable, since control traffic has very different availability and traffic delivery requirements. In this paper, we explore the feasibility of building a dedicated wireless facilities network for data centers. We propose Angora, a low-latency facilities network using low-cost, 60GHz beamforming radios that provides robust paths decoupled from the wired network, and flexibility to adapt to workloads and network dynamics. We describe our solutions to address challenges in link coordination, link interference and network failures. Our testbed measurements and simulation results show that Angora enables large number of low-latency control paths to run concurrently, while providing low latency end-to-end message delivery with high tolerance for radio and rack failures.
Keywords: 60ghz wireless, data centers, wireless beamforming (ID#: 15-5825)
URL: http://doi.acm.org/10.1145/2639108.2639140

 

Anupam Das, Nikita Borisov, Prateek Mittal, Matthew Caesar. “Re3: Relay Reliability Reputation for Anonymity Systems.” ASIA CCS '14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 63-74. doi:10.1145/2590296.2590338
Abstract: To conceal user identities, Tor, a popular anonymity system, forwards traffic through multiple relays. These relays, however, are often unreliable, leading to a degraded user experience. Worse yet, malicious relays may strategically introduce deliberate failures to increase their chance of compromising anonymity. In this paper we propose a reputation system that profiles the reliability of relays in an anonymity system based on users' past experience. A particular challenge is that an observed failure in an anonymous communication cannot be uniquely attributed to a single relay. This enables an attack where malicious relays can target a set of honest relays in order to drive down their reputation. Our system defends against this attack in two ways. Firstly, we use an adaptive exponentially-weighted moving average (EWMA) that ensures malicious relays adopting time-varying strategic behavior obtain low reputation scores over time. Secondly, we propose a filtering scheme based on the evaluated reputation score that can effectively discard relays involved in such attacks. We use probabilistic analysis, simulations, and real-world experiments to validate our reputation system. We show that the dominant strategy for an attacker is to not perform deliberate failures, but rather maintain a high quality of service. Our reputation system also significantly improves the reliability of path construction even in the absence of attacks. Finally, we show that the benefits of our reputation system can be realized with a moderate number of observations, making it feasible for individual clients to perform their own profiling, rather than relying on an external entity.
Keywords: DOS attack, anonymity, reputation systems, tor network (ID#: 15-5826)
URL:  http://doi.acm.org/10.1145/2590296.2590338
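
The EWMA core of the scheme is a one-line update; the sketch below shows how a relay that turns malicious sees its score decay quickly. The smoothing factor is an invented constant, and the paper's adaptive weighting and filtering stages are omitted.

    def update_reputation(score, success, alpha=0.3):
        # Exponentially-weighted moving average of observed outcomes: recent
        # behavior dominates, so a relay cannot coast on an old good record.
        observation = 1.0 if success else 0.0
        return (1 - alpha) * score + alpha * observation

    score = 0.9
    for outcome in [True, False, False, False]:    # relay starts misbehaving
        score = update_reputation(score, outcome)
    print(round(score, 3))                         # reputation decays quickly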

 

Paulo Casanova, David Garlan, Bradley Schmerl, Rui Abreu. “Diagnosing Unobserved Components in Self-Adaptive Systems.” SEAMS 2014 Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, June 2014, Pages 75-84. doi:10.1145/2593929.2593946
Abstract: Availability is an increasingly important quality for today's software-based systems and it has been successfully addressed by the use of closed-loop control systems in self-adaptive systems. Probes are inserted into a running system to obtain information and the information is fed to a controller that, through provided interfaces, acts on the system to alter its behavior. When a failure is detected, pinpointing the source of the failure is a critical step for a repair action. However, information obtained from a running system is commonly incomplete due to probing costs or unavailability of probes. In this paper we address the problem of fault localization in the presence of incomplete system monitoring. We may not be able to directly observe a component but we may be able to infer its health state. We provide formal criteria to determine when health states of unobservable components can be inferred and establish formal theoretical bounds for accuracy when using any spectrum-based fault localization algorithm.
Keywords: Diagnostics, Monitoring, Self-adaptive systems (ID#: 15-5827)
URL:  http://doi.acm.org/10.1145/2593929.2593946
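
For context, the spectrum-based family the paper generalizes can be illustrated with the well-known Ochiai coefficient: components exercised by many failing runs and few passing runs rank highest. The sketch below uses invented coverage data; the paper's contribution, inferring the health of components that no probe observes, sits on top of a ranking like this one.

    import math

    def ochiai_ranking(spectra, failing_runs):
        # spectra[i] is the set of components exercised by run i.
        components = set().union(*spectra)
        total_fail = len(failing_runs)
        scores = {}
        for c in components:
            ef = sum(1 for i in failing_runs if c in spectra[i])
            ep = sum(1 for i in range(len(spectra))
                     if i not in failing_runs and c in spectra[i])
            denom = math.sqrt(total_fail * (ef + ep))
            scores[c] = ef / denom if denom else 0.0
        return sorted(scores.items(), key=lambda kv: -kv[1])

    spectra = [{"A", "B"}, {"B", "C"}, {"A", "C"}]
    print(ochiai_ranking(spectra, failing_runs={0, 1}))   # "B" ranks first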

 

Michael Backes, Fabian Bendun, Ashish Choudhury, Aniket Kate. “Asynchronous MPC with a Strict Honest Majority Using Non-Equivocation.” PODC '14 Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing, July 2014, Pages 10-19.  doi:10.1145/2611462.2611490
Abstract: Multiparty computation (MPC) among n parties can tolerate up to t < n/2 active corruptions in a synchronous communication setting; however, in an asynchronous communication setting, the resiliency bound decreases to only t < n/3 active corruptions. We improve the resiliency bound for asynchronous MPC (AMPC) to match synchronous MPC using non-equivocation. Non-equivocation is a message authentication mechanism to restrict a corrupted sender from making conflicting statements to different (honest) parties. It can be implemented using an increment-only counter and a digital signature oracle, realizable with trusted hardware modules readily available in commodity computers and smartphone devices. A non-equivocation mechanism can also be transferable and allow a receiver to verifiably transfer the authenticated statement to other parties. In this work, using transferable non-equivocation, we present an AMPC protocol tolerating t < n/2 faults. From a practical point of view, our AMPC protocol requires fewer setup assumptions than the previous AMPC protocol with t < n/2 by Beerliová-Trubíniová, Hirt and Nielsen [PODC 2010]: unlike their AMPC protocol, it does not require any synchronous broadcast round at the beginning of the protocol and avoids the threshold homomorphic encryption setup assumption. Moreover, our AMPC protocol is also efficient and provides a gain of Θ(n) in the communication complexity per multiplication gate, over the AMPC protocol of Beerliová-Trubíniová et al. In the process, using non-equivocation, we also define the first asynchronous verifiable secret sharing (AVSS) scheme with t < n/2, which is of independent interest to threshold cryptography.
Keywords: asynchrony, multiparty computation (MPC), non-equivocation, reduced assumptions, resiliency, verifiable secret sharing (VSS) (ID#: 15-5828)
URL: http://doi.acm.org/10.1145/2611462.2611490
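
A toy version of the non-equivocation primitive helps make the idea concrete. In the sketch below an HMAC over an increment-only counter stands in for the trusted-hardware signature oracle the paper assumes; the key handling and transferability are simplified assumptions, not the paper's construction.

    import hmac, hashlib, itertools

    class NonEquivocationCounter:
        # Binds each attested message to a fresh counter value, so a sender
        # caught presenting two different messages under one value is exposed.
        def __init__(self, key):
            self._key = key
            self._next = itertools.count(1)        # increment-only

        def attest(self, message):
            c = next(self._next)
            tag = hmac.new(self._key, f"{c}|{message}".encode(),
                           hashlib.sha256).digest()
            return c, message, tag

    def verify(key, c, message, tag):
        expect = hmac.new(key, f"{c}|{message}".encode(), hashlib.sha256).digest()
        return hmac.compare_digest(expect, tag)

    key = b"shared-verification-key"               # hypothetical setup
    ctr = NonEquivocationCounter(key)
    c, msg, tag = ctr.attest("vote: yes")
    print(verify(key, c, msg, tag))                # True, and forwardable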

 

Lannan Luo, Jiang Ming, Dinghao Wu, Peng Liu, Sencun Zhu. “Semantics-Based Obfuscation-Resilient Binary Code Similarity Comparison with Applications to Software Plagiarism Detection.” FSE 2014 Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, November 2014, Pages 389-400. doi:10.1145/2635868.2635900
Abstract: Existing code similarity comparison methods, whether source or binary code based, are mostly not resilient to obfuscations. In the case of software plagiarism, emerging obfuscation techniques have made automated detection increasingly difficult. In this paper, we propose a binary-oriented, obfuscation-resilient method based on a new concept, longest common subsequence of semantically equivalent basic blocks, which combines rigorous program semantics with longest common subsequence based fuzzy matching. We model the semantics of a basic block by a set of symbolic formulas representing the input-output relations of the block. This way, the semantics equivalence (and similarity) of two blocks can be checked by a theorem prover. We then model the semantics similarity of two paths using the longest common subsequence with basic blocks as elements. This novel combination has resulted in strong resiliency to code obfuscation. We have developed a prototype and our experimental results show that our method is effective and practical when applied to real-world software.
Keywords: Software plagiarism detection, binary code similarity comparison, obfuscation, symbolic execution, theorem proving (ID#: 15-5829)
URL: http://doi.acm.org/10.1145/2635868.2635900
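
The comparison at the heart of the method is an ordinary longest-common-subsequence computation in which "equality" is a pluggable semantic-equivalence check (in the paper, a theorem-prover query over symbolic input-output formulas). The sketch below substitutes an invented placeholder predicate over canonicalized formula strings.

    def lcs_block_similarity(path_a, path_b, equivalent):
        # LCS over basic blocks with a caller-supplied equivalence check.
        m, n = len(path_a), len(path_b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m):
            for j in range(n):
                if equivalent(path_a[i], path_b[j]):
                    dp[i + 1][j + 1] = dp[i][j] + 1
                else:
                    dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
        return dp[m][n] / max(m, n, 1)             # normalized to [0, 1]

    # Placeholder: blocks modeled as canonicalized formula strings; the paper
    # would invoke a theorem prover here instead of string equality.
    same_semantics = lambda a, b: a == b
    print(lcs_block_similarity(["x+y", "x*2", "y-1"],
                               ["x+y", "junk", "x*2"], same_semantics))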

 

Divesh Aggarwal, Yevgeniy Dodis, Shachar Lovett. “Non-malleable Codes from Additive Combinatorics.” STOC '14, Proceedings of the 46th Annual ACM Symposium on Theory of Computing, May 2014, Pages 774-783. doi:10.1145/2591796.2591804
Abstract: Non-malleable codes provide a useful and meaningful security guarantee in situations where traditional error correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value. Although such codes do not exist if the family of "tampering functions" F is completely unrestricted, they are known to exist for many broad tampering families F. One such natural family is the family of tampering functions in the so called split-state model. Here the message m is encoded into two shares L and R, and the attacker is allowed to arbitrarily tamper with L and R individually. The split-state tampering arises in many realistic applications, such as the design of non-malleable secret sharing schemes, motivating the question of designing efficient non-malleable codes in this model. Prior to this work, non-malleable codes in the split-state model received considerable attention in the literature, but were constructed either (1) in the random oracle model [16], or (2) relied on advanced cryptographic assumptions (such as non-interactive zero-knowledge proofs and leakage-resilient encryption) [26], or (3) could only encode 1-bit messages [14]. As our main result, we build the first efficient, multi-bit, information-theoretically-secure non-malleable code in the split-state model. The heart of our construction uses the following new property of the inner-product function ⟨L, R⟩ over the vector space F_p^n (for a prime p and large enough dimension n): if L and R are uniformly random over F_p^n, and f, g : F_p^n → F_p^n are two arbitrary functions on L and R, then the joint distribution (⟨L, R⟩, ⟨f(L), g(R)⟩) is "close" to the convex combination of "affine distributions" {(U, aU + b) | a, b ∈ F_p}, where U is uniformly random in F_p. In turn, the proof of this surprising property of the inner product function critically relies on some results from additive combinatorics, including the so called Quasi-polynomial Freiman-Ruzsa Theorem which was recently established by Sanders [29] as a step towards resolving the Polynomial Freiman-Ruzsa conjecture.
Keywords: (not provided) (ID#: 15-5830)
URL: http://doi.acm.org/10.1145/2591796.2591804


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Control Theory and Privacy, 2014, Part 1

 

 
SoS Logo

Control Theory and Privacy, 2014

Part 1


In the Science of Security, control theory offers methods and approaches to potentially solve hard problems. The research work presented here specifically addresses issues in privacy. The work was presented in 2014.


Cox, A.; Roy, S.; Warnick, S., “A Science of System Security,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 487, 492, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039428
Abstract: As the internet becomes the information-technology backbone for more and more operations, including critical infrastructures such as water and power systems, the security problems introduced by linking such operations to the internet become more of a concern. Various communities have considered these problems and approached solutions from a variety of perspectives. In this paper, we consider the contributions we believe control theory can make towards developing tools for analyzing whole system security, that is, security of a system that may include its physical and human elements as well as its cyber components. In particular, we contrast notions of security focused on protecting information, and thus concerned primarily with delivering the right information to the right people (and no one else), with a different perspective on system security focused on protecting system functionality, which is concerned primarily with system robustness to particular attacks (and may not be concerned with privacy of communications).
Keywords: security of data; Internet; control theory; information protection; information technology backbone; security notion; system functionality protection; system security; Communities; Computational modeling; Computer security; Computers; Robustness; US Government (ID#: 15-5739)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039428&isnumber=7039338

 

Srivastava, M., “In Sensors We Trust — A Realistic Possibility?” Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, vol., no., pp. 1, 1, 26-28 May 2014. doi:10.1109/DCOSS.2014.65
Abstract: Sensors of diverse capabilities and modalities, carried by us or deeply embedded in the physical world, have invaded our personal, social, work, and urban spaces. Our relationship with these sensors is a complicated one. On the one hand, these sensors collect rich data that are shared and disseminated, often initiated by us, with a broad array of service providers, interest groups, friends, and family. Embedded in this data is information that can be used to algorithmically construct a virtual biography of our activities, revealing intimate behaviors and lifestyle patterns. On the other hand, we and the services we use increasingly depend directly and indirectly on information originating from these sensors for making a variety of decisions, both routine and critical, in our lives. The quality of these decisions and our confidence in them depend directly on the quality of the sensory information and our trust in the sources. Sophisticated adversaries, benefiting from the same technology advances as the sensing systems, can manipulate sensory sources and analyze data in subtle ways to extract sensitive knowledge, cause erroneous inferences, and subvert decisions. The consequences of these compromises will only amplify as our society builds increasingly complex human-cyber-physical systems with increased reliance on sensory information and real-time decision cycles. Drawing upon examples of this two-faceted relationship with sensors in applications such as mobile health and sustainable buildings, this talk will discuss the challenges inherent in designing a sensor information flow and processing architecture that is sensitive to the concerns of both producers and consumers. For the pervasive sensing infrastructure to be trusted by both, it must be robust to active adversaries who are deceptively extracting private information, manipulating beliefs and subverting decisions. While completely solving these challenges would require a new science of resilient, secure and trustworthy networked sensing and decision systems that would combine the hitherto separate disciplines of distributed embedded systems, network science, control theory, security, behavioral science, and game theory, this talk will provide some initial ideas. These include an approach to enabling privacy-utility trade-offs that balance the tension between the risk of information sharing to the producer and the value of information sharing to the consumer, and a method to secure systems against physical manipulation of sensed information.
Keywords: information dissemination; sensors; information sharing; processing architecture; secure systems; sensing infrastructure; sensor information flow; Architecture; Buildings; Computer architecture; Data mining; Information management; Security; Sensors (ID#: 15-5740)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846138&isnumber=6846129

 

Nai-Wei Lo; Yohan, A., “Danger Theory-Based Privacy Protection Model for Social Networks,” Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, vol., no., pp. 1397, 1406, 7-10 Sept. 2014. doi:10.15439/2014F129
Abstract: Privacy protection issues in Social Networking Sites (SNS) usually arise from insufficient user privacy control mechanisms offered by service providers, unauthorized usage of users' data by SNS, and the lack of appropriate privacy protection schemes for users' data at the SNS servers. In this paper, we propose a privacy protection model based on the danger theory concept to provide automatic detection and blocking of sensitive user information revealed in social communications. By utilizing the dynamic adaptability feature of danger theory, we show how a privacy protection model for SNS users can be built with system effectiveness and reasonable computing cost. A prototype based on the proposed model is constructed and evaluated. Our experiment results show that the proposed model achieves an 88.9% detection and blocking rate on average for user-sensitive data revealed by the services of SNS.
Keywords: data privacy; social networking (online); SNS; danger theory; dynamic adaptability feature; privacy protection; social communication; social networking sites; user privacy control mechanism; Adaptation models; Cryptography; Data privacy; Databases; Immune system; Privacy; Social network services (ID#: 15-5741)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933181&isnumber=6932982

 

Ward, J.R.; Younis, M., “Examining the Effect of Wireless Sensor Network Synchronization on Base Station Anonymity,” Military Communications Conference (MILCOM), 2014 IEEE, vol., no., pp. 204, 209, 6-8 Oct. 2014. doi:10.1109/MILCOM.2014.39
Abstract: In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities with applications ranging from industrial control on a factory floor to reconnaissance of a hostile border. A typical WSN topology that applies to most applications allows sensors to act as data sources that forward their measurements to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary that desires to achieve the most impactful attack possible against a WSN. An adversary may employ traffic analysis techniques such as evidence theory to identify the BS based on network traffic flow even when the WSN implements conventional security mechanisms. This motivates a need for WSN operators to achieve improved BS anonymity to protect the identity, role, and location of the BS. Many traffic analysis countermeasures have been proposed in literature, but are typically evaluated based on data traffic only, without considering the effects of network synchronization on anonymity performance. In this paper we use evidence theory analysis to examine the effects of WSN synchronization on BS anonymity by studying two commonly used protocols, Reference Broadcast Synchronization (RBS) and Timing-synch Protocol for Sensor Networks (TPSN).
Keywords: protocols; synchronisation; telecommunication network topology; telecommunication security; telecommunication traffic; wireless sensor networks; BS anonymity improvement; RBS; TPSN; WSN topology; base station anonymity; data sources; evidence theory analysis; network traffic flow; reference broadcast synchronization; security mechanisms; timing-synch protocol for sensor networks; traffic analysis techniques; wireless sensor network synchronization; Protocols; Receivers; Sensors; Synchronization; Wireless communication; Wireless sensor networks; RBS; TPSN; anonymity; location privacy; synchronization; wireless sensor network (ID#: 15-5742)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956760&isnumber=6956719

 

Tsegaye, T.; Flowerday, S., “Controls for Protecting Critical Information Infrastructure from Cyberattacks,” Internet Security (WorldCIS), 2014 World Congress on, vol., no., pp. 24, 29, 8-10 Dec. 2014. doi:10.1109/WorldCIS.2014.7028160
Abstract: Critical information infrastructure has enabled organisations to store large amounts of information on their systems and deliver it via networks such as the internet. Users who are connected to the internet are able to access various internet services provided by critical information infrastructure. However, some organisations have not effectively secured their critical information infrastructure, and hackers, disgruntled employees and other entities have taken advantage of this by launching cyberattacks on their critical information infrastructure. They do this by using cyberthreats to exploit vulnerabilities in critical information infrastructure which organisations fail to secure. As a result, cyberthreats are able to steal or damage confidential information stored on systems or take down websites, preventing access to information. Despite this, risk strategies can be used to implement a number of security controls: preventive, detective and corrective controls, which together form a system of controls. This will ensure that the confidentiality, integrity and availability of information are preserved, thus reducing risks to information. This system of controls is based on the General Systems Theory, which states that the elements of a system are interdependent and contribute to the operation of the whole system. Finally, a model is proposed to address insecure critical information infrastructure.
Keywords: Internet; business data processing; computer crime; data integrity; data privacy; risk management; Internet service access; confidential information stealing; corrective control; critical information infrastructure protection; cyberattacks; cyberthreats; detective control; disgruntled employees; general systems theory; hackers; information access; information availability; information confidentiality; information integrity; organisational information; preventive control; risk reduction; security controls; vulnerability exploitation; Availability; Computer crime; Malware; Personnel; Planning; Critical Information Infrastructure; Cyberattacks; Cyberthreats; Security Controls; Vulnerabilities (ID#: 15-5743)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028160&isnumber=7027983

 

Hsu, J.; Gaboardi, M.; Haeberlen, A.; Khanna, S.; Narayan, A.; Pierce, B.C.; Roth, A., “Differential Privacy: An Economic Method for Choosing Epsilon,” Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, vol., no., pp. 398, 410, 19-22 July 2014. doi:10.1109/CSF.2014.35
Abstract: Differential privacy is becoming a gold standard notion of privacy; it offers a guaranteed bound on loss of privacy due to release of query results, even under worst-case assumptions. The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. However, the question of when differential privacy works in practice has received relatively little attention. In particular, there is still no rigorous method for choosing the key parameter ε, which controls the crucial tradeoff between the strength of the privacy guarantee and the accuracy of the published results. In this paper, we examine the role of these parameters in concrete applications, identifying the key considerations that must be addressed when choosing specific values. This choice requires balancing the interests of two parties with conflicting objectives: the data analyst, who wishes to learn something about the data, and the prospective participant, who must decide whether to allow their data to be included in the analysis. We propose a simple model that expresses this balance as formulas over a handful of parameters, and we use our model to choose ε for a series of simple statistical studies. We also explore a surprising insight: in some circumstances, a differentially private study can be more accurate than a non-private study for the same cost, under our model. Finally, we discuss the simplifying assumptions in our model and outline a research agenda for possible refinements.
Keywords: data analysis; data privacy; Epsilon; data analyst; differential privacy; differentially private algorithms; economic method; privacy guarantee; Accuracy; Analytical models; Cost function; Data models; Data privacy; Databases; Privacy; Differential Privacy (ID#: 15-5744)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957125&isnumber=6957090
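
As a quick illustration of the tradeoff the authors model, the following Python sketch (illustrative only, not the paper's economic model; the dataset and parameter values are hypothetical) releases a bounded mean through the standard Laplace mechanism and shows the accuracy cost of shrinking ε:

    import numpy as np

    def laplace_mean(data, epsilon, lo=0.0, hi=1.0):
        """Release a differentially private mean via the Laplace mechanism.
        The sensitivity of the mean of n records bounded in [lo, hi] is (hi - lo) / n."""
        n = len(data)
        sensitivity = (hi - lo) / n
        noise = np.random.laplace(scale=sensitivity / epsilon)
        return np.mean(data) + noise

    data = np.random.default_rng(0).uniform(0.0, 1.0, size=1000)   # hypothetical study
    for eps in (0.01, 0.1, 1.0):
        errors = [abs(laplace_mean(data, eps) - data.mean()) for _ in range(200)]
        print(f"epsilon={eps:5.2f}  mean abs error={np.mean(errors):.4f}")

Smaller ε means stronger privacy and visibly larger error; the paper's contribution is a principled model for where on this curve a study should sit.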

 

Tan, A.Z.Y.; Wen Yong Chua; Chang, K.T.T., “Location Based Services and Information Privacy Concerns among Literate and Semi-literate Users,” System Sciences (HICSS), 2014 47th Hawaii International Conference on, vol., no., pp. 3198, 3206, 6-9 Jan. 2014. doi:10.1109/HICSS.2014.394
Abstract: Location-based service mobile applications are becoming increasingly prevalent among the large population of semi-literate users living in emerging economies due to their low cost and ubiquity. However, usage of location-based services is still threatened by information privacy concerns. Studies have typically addressed only how to mitigate information privacy concerns for literate users, not semi-literate users. To fill that gap and better understand information privacy concerns among different communities, this study draws upon theories of perceptual control and familiarity to identify the antecedents of information privacy concerns related to location-based services and user literacy. The proposed research model is empirically tested in a laboratory experiment. The findings show that the two location-based service channels (push and pull) affect the degree of information privacy concerns between literate and semi-literate users. Implications for enhancing usage intentions and mitigating information privacy concerns for different types of mobile applications are discussed.
Keywords: data privacy; mobile computing; social aspects of automation; emerging economies; information privacy concerns; laboratory experiment; location-based service channels; mobile applications; pull channel; push channel; semiliterate users; usage intentions; user literacy; Analysis of variance; Educational institutions; Mobile communication; Mobile handsets; Privacy; Standards (ID#: 15-5745)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758998&isnumber=6758592

 

Zheng Yan; Xueyun Li; Kantola, R., “Personal Data Access Based on Trust Assessment in Mobile Social Networking,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 989, 994, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.131
Abstract: Trustworthy personal data access control at a semi-trusted or distrusted Cloud Service Provider (CSP) remains a practical issue even though cloud computing is now widely deployed. Many existing solutions suffer from high computation and communication costs, and are impractical to deploy in reality due to usability issues. With the rapid growth and popularity of mobile social networking, trust relationships in different contexts can be assessed based on mobile social networking activities, behaviors and experiences. Such trust cues extracted from social networking are clearly helpful in automatically managing personal data access at the cloud with sound usability. In this paper, we propose a scheme to secure personal data access at the CSP according to trust assessed in mobile social networking. Security and performance evaluations show the efficiency and effectiveness of our scheme for practical adoption.
Keywords: authorisation; cloud computing; mobile computing; social networking (online); trusted computing; CSP; cloud computing; cloud service provider; mobile social networking; trust assessment; trustworthy personal data access control; Access control; Complexity theory; Context; Cryptography; Mobile communication; Mobile computing; Social network services; Trust; access control; cloud computing; reputation; social networking; trust assessment (ID#: 15-5746)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011357&isnumber=7011202

 

Ta-Chih Yang; Ming-Huang Guo, “An A-RBAC Mechanism for a Multi-Tenancy Cloud Environment,” Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE), 2014 4th International Conference on, vol., no., pp. 1, 5, 11-14 May 2014. doi:10.1109/VITAE.2014.6934436
Abstract: With the evolution of software technology, companies require more high-performance hardware to enhance their competitiveness. Cloud computing, the outgrowth of distributed computing and grid computing, is gradually being seen as the solution for these companies. Cloud computing can virtualize existing software and hardware to reduce costs, so companies require only high Internet bandwidth and suitable devices to access cloud services on the Internet. This decreases many overhead costs and the number of IT staff required. When many companies rent a cloud service simultaneously, it is called a multi-tenancy cloud service. However, safe access to resources is essential when adopting multi-tenancy cloud computing technology, and the cloud computing environment is vulnerable to network-related attacks. This research improves the role-based access control authorization mechanism and combines it with an attribute check mechanism to determine which tenants a user can access. The enhanced authorization can improve the safety of cloud computing services and protect data privacy.
Keywords: authorisation; cloud computing; data privacy; grid computing; A-RBAC mechanism; IT staff; attribute check mechanism; cloud computing; cloud service; data privacy; distributed computing; grid computing processes; high Internet bandwidth; high-performance hardware; multitenancy cloud computing technology; multitenancy cloud environment; network-related attacks; role-based access control authorization mechanism; software technology; Authentication; Authorization; Cloud computing; Companies; Cryptography; Servers; Attribute; Authorization; Multi-tenancy; Role-based access control (ID#: 15-5747)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934436&isnumber=6934393

 

Boyang Zhou; Wen Gao; Shanshan Zhao; Xinjia Lu; Zhong Du; Chunming Wu; Qiang Yang, “Virtual Network Mapping for Multi-Domain Data Plane in Software-Defined Networks,” Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE), 2014 4th International Conference on, vol., no., pp. 1, 5, 11-14 May 2014. doi:10.1109/VITAE.2014.6934439
Abstract: Software-Defined Networking (SDN) separates the control plane from the data plane to improve control flexibility, supporting multiple services with isolated physical resources. In SDN, virtual network (VN) mapping is required by network services for allocating these resources in the multi-domain SDN. This mapping problem is challenged by the NP-completeness of the mapping and by business privacy requirements that protect each domain's topology. We propose a novel multi-domain mapping algorithm for SDN that uses a distributed architecture to achieve better efficiency and flexibility than the traditional PolyViNE approach, while protecting privacy. In simulations on large synthesized topologies with 10 to 40 domains, our approach runs 25% and 15% faster than PolyViNE and balances load across multiple controllers 30% better.
Keywords: computational complexity; computer network security; data protection; resource allocation; telecommunication network topology; virtual private networks; NP-complete; PolyViNE approach; SDN; VN mapping; business privacy; control plane; data plane; distributed architecture; domain topology protection; load balancing; multidomain data plane; multidomain mapping algorithm; resource allocation; software-defined network; virtual network mapping; Bandwidth; Computer architecture; Control systems; Heuristic algorithms; Network topology; Partitioning algorithms; Topology; Network Management; Software-Defined Networking; Virtual Network Mapping (ID#: 15-5748)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934439&isnumber=6934393

 

Kia, S.S.; Cortes, J.; Martinez, S., “Periodic and Event-Triggered Communication for Distributed Continuous-Time Convex Optimization,” American Control Conference (ACC), 2014, vol., no., pp. 5010, 5015, 4-6 June 2014. doi:10.1109/ACC.2014.6859122
Abstract: We propose a distributed continuous-time algorithm to solve a network optimization problem where the global cost function is a strictly convex function composed of the sum of the local cost functions of the agents. We establish that our algorithm, when implemented over strongly connected and weight-balanced directed graph topologies, converges exponentially fast when the local cost functions are strongly convex and their gradients are globally Lipschitz. We also characterize the privacy preservation properties of our algorithm and extend the convergence guarantees to the case of time-varying, strongly connected, weight-balanced digraphs. When the network topology is a connected undirected graph, we show that exponential convergence is still preserved if the gradients of the strongly convex local cost functions are locally Lipschitz, while it is asymptotic if the local cost functions are convex. We also study discrete-time communication implementations. Specifically, we provide an upper bound on the stepsize of a synchronous periodic communication scheme that guarantees convergence over connected undirected graph topologies and, building on this result, design a centralized event-triggered implementation that is free of Zeno behavior. Simulations illustrate our results.
Keywords: convex programming; directed graphs; network theory (graphs); Zeno behavior; connected undirected graph; convex function; cost functions; distributed continuous-time algorithm; distributed continuous-time convex optimization; event-triggered communication; global cost function; network optimization problem; periodic communication; privacy preservation properties; strongly connected weight-balanced directed graph; synchronous periodic communication scheme; Algorithm design and analysis; Convergence; Convex functions; Cost function; Privacy; Topology; Control of networks; Optimization algorithms (ID#: 15-5749)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859122&isnumber=6858556
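
The flavor of dynamics analyzed here can be sketched in a few lines. The snippet below is a plain Euler discretization of one consensus-plus-gradient algorithm of this type (not necessarily the authors' exact dynamics or gains); the quadratic local costs and the cycle topology are hypothetical. All agents converge to the minimizer of the summed cost:

    import numpy as np

    # Hypothetical setup: 5 agents on a cycle; local costs f_i(x) = (x - a_i)^2,
    # so the minimizer of sum_i f_i(x) is mean(a).
    a = np.array([1.0, 3.0, 5.0, 2.0, 4.0])
    n = len(a)
    W = np.zeros((n, n))
    for i in range(n):                       # undirected (hence weight-balanced) cycle
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0
    L = np.diag(W.sum(axis=1)) - W           # graph Laplacian

    x = np.zeros(n)                          # local estimates
    z = np.zeros(n)                          # auxiliary disagreement-integral states
    dt = 0.01                                # Euler step for the continuous-time dynamics
    for _ in range(20000):
        grad = 2.0 * (x - a)                 # local gradients (globally Lipschitz)
        x, z = x + dt * (-grad - L @ x - z), z + dt * (L @ x)

    print(x, "-> all entries near", a.mean())

Initializing z at zero keeps its sum conserved, which is what pins the consensus value to the true optimizer; the event-triggered and directed-graph refinements are the subject of the paper itself.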

 

Tams, B.; Rathgeb, C., “Towards Efficient Privacy-Preserving Two-Stage Identification for Fingerprint-Based Biometric Cryptosystems,” Biometrics (IJCB), 2014 IEEE International Joint Conference on, vol., no., pp. 1, 8, Sept. 29 2014 - Oct. 2 2014. doi:10.1109/BTAS.2014.6996241
Abstract: Biometric template protection schemes, in particular biometric cryptosystems, bind secret keys to biometric data, i.e. complex key retrieval processes are performed at each authentication attempt. Focusing on biometric identification, exhaustive 1:N comparisons are required for identifying a biometric probe. As a consequence, comparison time frequently dominates the overall computational workload, preventing biometric cryptosystems from being operated in identification mode. In this paper we propose a computationally efficient two-stage identification system for fingerprint-based biometric cryptosystems. Employing the concept of adaptive Bloom filter-based cancelable biometrics, pseudonymous binary prescreeners are extracted, based on which top candidates are returned from the database. Thereby the number of required key-retrieval processes is reduced to a fraction of the total. Experimental evaluations confirm that, by employing the proposed technique, biometric cryptosystems, e.g. the fuzzy vault scheme, can be enhanced to enable real-time privacy-preserving identification while biometric performance is maintained.
Keywords: biometrics (access control); data privacy; data structures; fingerprint identification; fuzzy set theory; image retrieval; private key cryptography; adaptive Bloom filter-based cancelable biometrics; biometric performance analysis; biometric probe identification; biometric template protection schemes; comparison time; complex key retrieval processes; computational efficient two-stage identification system; computational workload; data authentication; fingerprint-based biometric cryptosystems; fuzzy vault scheme; privacy-preserving two-stage identification; pseudonymous binary prescreener extraction; real-time privacy preserving identification; secret keys; Authentication; Cryptography; Databases; Fingerprint recognition; Measurement; Privacy (ID#: 15-5750)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6996241&isnumber=6996217
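
A minimal sketch of the two-stage idea follows, with plain hashed bit arrays standing in for the paper's adaptive Bloom filters and synthetic integer features standing in for minutiae (all parameters hypothetical):

    import hashlib

    def prescreener(features, m=256):
        """Hash a set of quantized biometric features into an m-bit array,
        a cheap stand-in for the adaptive Bloom filters used in the paper."""
        bits = [0] * m
        for f in features:
            h = int(hashlib.sha256(str(f).encode()).hexdigest(), 16)
            bits[h % m] = 1
        return bits

    def dissimilarity(a, b):
        """Normalized Hamming distance between two bit arrays."""
        return sum(x != y for x, y in zip(a, b)) / len(a)

    # Hypothetical enrolled database: subject id -> quantized feature set
    db = {i: {(i * 7 + k) % 101 for k in range(20)} for i in range(100)}
    gallery = {i: prescreener(f) for i, f in db.items()}

    probe_feats = db[42] ^ {3, 5}      # subject 42's probe with two features dropped
    probe = prescreener(probe_feats)

    # Stage 1: rank by prescreener distance and keep the top 5 candidates.
    shortlist = sorted(gallery, key=lambda i: dissimilarity(probe, gallery[i]))[:5]
    print("candidates for full key retrieval:", shortlist)
    # Stage 2 (not shown): run the expensive fuzzy-vault key retrieval on these 5 only.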

 

Krombi, W.; Erradi, M.; Khoumsi, A., “Automata-Based Approach to Design and Analyze Security Policies,” Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, vol., no., pp. 306, 313, 23-24 July 2014. doi:10.1109/PST.2014.6890953
Abstract: Information systems must be controlled by security policies to protect them from undue accesses. Security policies are often designed by rules expressed using informal text, which implies ambiguities and inconsistencies in security rules. Our objective in this paper is to develop a formal approach to design and analyze security policies. We propose a procedure that synthesizes an automaton which implements a given security policy. Our automata-based approach can be a common basis to analyze several aspects of security policies. We use our automata-based approach to develop three analysis procedures to: verify completeness of a security policy, detect anomalies in a security policy, and detect functional discrepancies between several implementations of a security policy. We illustrate our approach using examples of security policies for a firewall.
Keywords: automata theory; data protection; firewalls; information systems; anomaly detection; automata synthesis; automata-based approach; firewall security policies; formal approach; functional discrepancy detection; information system protection; security policy analysis; security policy completeness verification; security policy design; Automata; Boolean functions; Data structures; Educational institutions; Firewalls (computing); Protocols (ID#: 15-5751)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890953&isnumber=6890911
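
Two of the paper's analyses, completeness checking and detection of never-matched (shadowed) rules, can be illustrated on a toy firewall policy. The brute-force enumeration below is only a stand-in for the authors' automata-based procedures, which avoid enumerating packets; the rules and field domains are hypothetical:

    from itertools import product

    # Toy first-match firewall policy; field domains are tiny on purpose so the
    # whole packet space can be checked exhaustively.
    SRCS, DSTS, PORTS = ("lan", "wan"), ("dmz", "lan"), (22, 80, 443)

    rules = [
        (lambda s, d, p: s == "wan" and p == 22, "deny"),    # no external SSH
        (lambda s, d, p: d == "dmz",             "accept"),  # DMZ reachable
        (lambda s, d, p: s == "lan",             "accept"),  # LAN egress
    ]

    def first_match(pkt):
        for i, (pred, action) in enumerate(rules):
            if pred(*pkt):
                return i, action
        return None, None

    uncovered, shadowed = [], set(range(len(rules)))
    for pkt in product(SRCS, DSTS, PORTS):
        i, action = first_match(pkt)
        if i is None:
            uncovered.append(pkt)     # completeness violation: no rule applies
        else:
            shadowed.discard(i)       # rule i is reachable by at least one packet

    print("packets matching no rule:", uncovered)
    print("shadowed (never-matched) rules:", shadowed)

Here the policy is incomplete (WAN-to-LAN web traffic matches no rule), which is exactly the kind of defect the synthesized automaton is meant to expose without the exponential enumeration.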

 

Anggorojati, B.; Prasad, N.R.; Prasad, R., “Secure Capability-Based Access Control in the M2M Local Cloud Platform,” Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE), 2014 4th International Conference on, vol., no., pp. 1, 5, 11-14 May 2014. doi:10.1109/VITAE.2014.6934469
Abstract: Protection of and access control to resources play a critical role in distributed computing systems like Machine-to-Machine (M2M) and cloud platforms. The M2M local cloud platform considered in this paper consists of multiple distributed M2M gateways that form a local cloud, presenting a unique challenge to existing access control systems. The most prominent access control systems, such as ACL and RBAC, lack the scalability and flexibility to manage access from users or entities that belong to different authorization domains, and are thus unsuitable for the presented platform. The access control approach based on API keys and OAuth used by existing M2M cloud platforms fails to provide fine-grained and flexible access right delegation, even when both methods are used together. The proposed approach is built upon capability-based access control, which has been specifically designed to provide flexible, yet restricted, access rights delegation. A number of use cases are provided to show the usage of capability creation, delegation, and access provision, particularly in the way applications access services provided by the platform.
Keywords: application program interfaces; authorisation; cloud computing; computer network security; internetworking; network servers; private key cryptography; API key; M2M local cloud platform; OAuth; application programming interface; authorization domain; distributed computing system; machine-to-machine computing system; multiple distributed M2M gateway; secure capability based access control system; Access control; Buildings; Context; Permission; Privacy; Public key; M2M; access control; capability; cloud; delegation; security (ID#: 15-5752)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934469&isnumber=6934393

 

Lugini, L.; Marasco, E.; Cukic, B.; Dawson, J., “Removing Gender Signature from Fingerprints,” Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on, vol., no., pp. 1283, 1287, 26-30 May 2014. doi:10.1109/MIPRO.2014.6859765
Abstract: The need to share fingerprint image data in many emerging applications raises concerns about the protection of privacy. It has become possible to use automated algorithms for inferring soft biometrics from fingerprint images. Even if we cannot uniquely match a person to an existing fingerprint, revealing their age or gender may lead to undesirable consequences. Our research is focused on de-identifying fingerprint images in order to obfuscate soft biometrics. In this paper, we first discuss a general framework for soft-biometric fingerprint de-identification. We implemented the framework to reduce the risk of successful estimation of gender from fingerprint images using ad hoc image filtering. We evaluate the proposed approach through experiments using a data set of rolled fingerprints collected at West Virginia University. Results show the proposed method is effective in preventing gender estimation from fingerprint images.
Keywords: data privacy; filtering theory; fingerprint identification; ad-hoc image filtering; gender estimation prevention; gender signature removal; privacy protection; rolled fingerprints; soft biometrics fingerprint deidentification; Biometrics (access control); Estimation; Feature extraction; Fingerprint recognition; Frequency-domain analysis; Privacy; Probes; Fingerprint Recognition; Gender Estimation; Image De-Identification (ID#: 15-5753)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859765&isnumber=6859515

 

Premarathne, U.S.; Khalil, I., “Multiplicative Attributes Graph Approach for Persistent Authentication in Single-Sign-On Mobile Systems,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 221, 228, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.33
Abstract: Single-sign-on (SSO) has been proposed as a more efficient and convenient authentication method. Classic SSO systems re-authenticate a user to different applications based on a fixed set of attributes (e.g. username-password combinations). However, a fixed set of attributes fails to account for mobility and contextual variations in user activities. Thus, in an SSO-based system, robust persistent authentication and secure session termination management are vital for ensuring secure operations. In this paper we propose a novel persistent authentication technique using a multiplicative attribute graph model. We use a multiple-attribute persistent authentication model based on facial biometrics, location and activity-specific information. We propose a novel membership (or group affiliation) based session management technique for user-initiated SSO global logout management. The significance and viability of these methods are demonstrated by security, complexity and numerical analyses. In conclusion, our model provides meaningful insights and a more pragmatic approach to persistent authentication and session termination management in implementing SSO-based mobile collaborative applications.
Keywords: authorisation; biometrics (access control); graph theory; mobile computing; SSO based mobile collaborative applications; SSO global logout management; activity specific information; contextual variations; facial biometrics; location information; membership based session management technique; mobility variations; multiple attribute based persistent authentication model; multiplicative attribute graph approach; robust persistent authentications; secure session termination management; single-sign-on mobile systems; Authentication; Biological system modeling; Biometrics (access control); Collaboration; Face; Mobile communication; mobile systems; multiplicative attribute graph; persistent authentication; single sign on (ID#: 15-5754)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011254&isnumber=7011202

 

Jianming Fu; Yan Lin; Xu Zhang; Pengwei Li, “Computation Integrity Measurement Based on Branch Transfer,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 590, 597, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.75
Abstract: Tasks are selectively migrated to the cloud with the widespread adoption of the cloud computing platform, but users cannot know whether their tasks have been tampered with in the cloud, so verifying the execution integrity of a program in the cloud is an urgent demand for cloud users. Computation integrity measurement based on behavior has difficulty detecting carefully crafted shellcode. Based on the properties of shellcode, this paper proposes a computation integrity measurement based on branch transfer, called CIMB, which is a fine-grained instruction-level integrity measurement. In this approach, all branches at the user level are recorded, effectively covering the entire execution control flow of a program, and CIMB can detect control-flow hijacking attacks such as Return-oriented Programming (ROP) and Jump-oriented Programming (JOP) without the support of source code. Meanwhile, the distance between two instruction addresses and the machine code of each instruction mask the measurement inconsistency derived from address space layout randomization of the program and shared libraries. Finally, we implemented CIMB with the dynamic binary instrumentation tool Pin on the x86 32-bit version of Ubuntu 12.04. The experimental results show that CIMB is feasible and produces a relatively stable measurement; the advantages of CIMB and the factors affecting the measurement results are analyzed and discussed.
Keywords: cloud computing; data integrity; trusted computing; CIMB; Pin dynamic binary instrumentation tool; address space layout randomization; branch transfer; cloud computing platform; cloud users; computation integrity measurement; control-flow hijacking attack detection; fine-grained instruction-level integrity measurement; instruction addresses; instruction machine code; measurement inconsistency; program execution control flow; program execution integrity verification; shellcode detection; tampered tasks; ubuntu12.04; user-level; Complexity theory; Current measurement; Fluid flow measurement; Instruments; Libraries; Linux; Software measurement; computation integrity; control flow; dynamic binary instrumentation; integrity measurement; trusted computing (ID#: 15-5755)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011299&isnumber=7011202
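
The measurement idea, folding branch transfers into a digest while using address deltas so that address space layout randomization does not perturb the result, can be sketched as follows (illustrative only; the trace format is hypothetical and this is not the authors' Pin tool):

    import hashlib

    def measure(branch_trace, image_base):
        """Fold a sequence of (source, target) branch addresses into one digest.
        Offsets relative to the module base are hashed instead of absolute
        addresses, so ASLR does not change the measurement between runs."""
        h = hashlib.sha256()
        for src, dst in branch_trace:
            h.update((src - image_base).to_bytes(8, "little", signed=True))
            h.update((dst - src).to_bytes(8, "little", signed=True))
        return h.hexdigest()

    # The same control flow loaded at two randomized base addresses yields
    # the same measurement.
    trace = [(0x1000, 0x1040), (0x1048, 0x1010), (0x1018, 0x1100)]
    for base in (0x400000, 0x7F3A00000000):
        relocated = [(s + base, d + base) for s, d in trace]
        print(measure(relocated, base))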

 

Sefer, E.; Kingsford, C., “Diffusion Archaeology for Diffusion Progression History Reconstruction,” Data Mining (ICDM), 2014 IEEE International Conference on, vol., no., pp. 530, 539, 14-17 Dec. 2014. doi:10.1109/ICDM.2014.135
Abstract: Diffusion through graphs can be used to model many real-world processes, such as the spread of diseases, social network memes, computer viruses, or water contaminants. Often, a real-world diffusion cannot be directly observed while it is occurring: perhaps it is not noticed until some time has passed, continuous monitoring is too costly, or privacy concerns limit data access. This leads to the need to reconstruct how the present state of the diffusion came to be from partial diffusion data. Here, we tackle the problem of reconstructing a diffusion history from one or more snapshots of the diffusion state. This ability can be invaluable to learn when certain computer nodes are infected or which people are the initial disease spreaders to control future diffusions. We formulate this problem over discrete-time SEIRS-type diffusion models in terms of maximum likelihood. We design methods that are based on submodularity and a novel prize-collecting dominating-set vertex cover (PCDSVC) relaxation that can identify likely diffusion steps with some provable performance guarantees. Our methods are the first to be able to reconstruct complete diffusion histories accurately in real and simulated situations. As a special case, they can also identify the initial spreaders better than existing methods for that problem. Our results for both meme and contaminant diffusion show that the partial diffusion data problem can be overcome with proper modeling and methods, and that hidden temporal characteristics of diffusion can be predicted from limited data.
Keywords: data handling; diffusion; discrete time systems; graph theory; maximum likelihood estimation; PCDSVC relaxation; contaminant diffusion; continuous monitoring; data access; diffusion archaeology; diffusion history reconstruction; diffusion progression history reconstruction; diffusion state; discrete-time SEIRS-type diffusion model; disease spreader; graph; maximum likelihood; partial diffusion data problem; performance guarantee; prize-collecting dominating-set vertex cover relaxation; real-world diffusion; real-world process; temporal characteristics; Approximation methods; Computational modeling; Computers; History; Integrated circuit modeling; Mathematical model; Silicon; diffusion; epidemics; history (ID#: 15-5756)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023370&isnumber=7023305

 

Wen Zeng; Koutny, M.; Van Moorsel, A., “Performance Modelling and Evaluation of Enterprise Information Security Technologies,” Computer and Information Technology (CIT), 2014 IEEE International Conference on, vol., no., pp. 504, 511, 11-13 Sept. 2014. doi:10.1109/CIT.2014.18
Abstract: By providing effective access control mechanisms, enterprise information security technologies have been proven successful in protecting the confidentiality of sensitive information in business organizations. However, such security mechanisms typically reduce staff productivity by making staff spend time on non-project-related tasks. Therefore, organizations have to invest a significant amount of capital in information security technologies and then continue to incur additional costs. In this study, we investigate the performance of administrators at an information help desk, and the non-productive time (NPT) in an organization resulting from the implementation of information security technologies. An approximate analytical solution is discussed first, and the loss of staff productivity is quantified using non-productive time. Stochastic Petri nets are then used to provide simulation results. The presented study can help information security managers make investment decisions and take actions toward reducing the cost of information security technologies, so that a balance is kept between information security expense, resource drain and the effectiveness of security technologies.
Keywords: Petri nets; authorisation; business data processing; cost reduction; data privacy; decision making; investment; productivity; stochastic processes; NPT; access control mechanisms; business organizations; cost reduction enterprise information security technologies; information help desk; investment decision making; nonproductive time; performance evaluation; performance modelling; sensitive information confidentiality; staff member productivity; stochastic Petri nets; work productivity; Information security; Mathematical model; Organizations; Servers; Stochastic processes; Non-productive Time; Queuing Theory; Security Investment Decision; Stochastic Petri Nets (ID#: 15-5757)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984703&isnumber=6984594
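
Before reaching for Petri-net simulation, the non-productive time in question can be approximated with elementary queueing formulas. A back-of-envelope M/M/1 sketch follows; all rates are hypothetical, and the paper's stochastic Petri-net model is considerably richer:

    # M/M/1 approximation of staff time lost waiting on a security help desk.
    lam = 4.0    # security-related requests arriving per hour (assumed)
    mu = 5.0     # requests the administrator resolves per hour (assumed)

    rho = lam / mu                  # utilization; must be < 1 for stability
    W = 1.0 / (mu - lam)            # mean hours a request spends in the system
    npt_per_day = lam * 8 * W       # staff-hours lost per 8-hour day (Little's law)

    print(f"utilization        : {rho:.0%}")
    print(f"hours per request  : {W:.2f}")
    print(f"NPT per 8-hour day : {npt_per_day:.1f} staff-hours")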


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Control Theory and Privacy, 2014, Part 2

 

 
SoS Logo

Control Theory and Privacy, 2014

Part 2


In the Science of Security, control theory offers methods and approaches to potentially solve hard problems. The research work presented here specifically addresses issues in privacy. The work was presented in 2014.


Le Ny, J.; Mohammady, M., “Differentially Private MIMO Filtering for Event Streams and Spatio-Temporal Monitoring,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 2148, 2153, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039716
Abstract: Many large-scale systems such as intelligent transportation systems, smart grids or smart buildings collect data about the activities of their users to optimize their operations. In a typical scenario, signals originate from many sensors capturing events involving these users, and several statistics of interest need to be continuously published in real-time. Moreover, in order to encourage user participation, privacy issues need to be taken into consideration. This paper considers the problem of providing differential privacy guarantees for such multi-input multi-output systems operating continuously. We show in particular how to construct various extensions of the zero-forcing equalization mechanism, which we previously proposed for single-input single-output systems. We also describe an application to privately monitoring and forecasting occupancy in a building equipped with a dense network of motion detection sensors, which is useful for example to control its HVAC system.
Keywords: MIMO systems; filtering theory; sensors; HVAC system; differential privacy; differentially private MIMO filtering; event streams; intelligent transportation systems; large-scale systems; motion detection sensors; single-input single-output systems; smart buildings; smart grids; spatio temporal monitoring; zero-forcing equalization mechanism; Buildings; MIMO; Monitoring; Noise; Privacy; Sensitivity; Sensors (ID#: 15-5758)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039716&isnumber=7039338

 

Distl, B.; Hossmann, T., “Privacy in Opportunistic Network Contact Graphs,” A World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2014 IEEE 15th International Symposium on, vol., no., pp. 1, 3, 19-19 June 2014. doi:10.1109/WoWMoM.2014.6919020
Abstract: Opportunistic networks are formed by people carrying mobile devices with wireless capabilities. When in mutual transmission range, the nodes of such networks use device-to-device communication to automatically exchange data, without requiring fixed infrastructure. To solve challenging opportunistic networking problems like routing, nodes exchange information about whom they have met in the past and form a contact graph, which encodes the social structure of past meetings. This contact graph is then used to assign a utility to each node (e.g., based on its centrality), thereby defining a ranking of the nodes' value for carrying a message. However, while being a useful tool, the contact graph represents a privacy risk to the users, as it allows an attacker to learn about social links. In this paper, we investigate the trade-off between privacy and utility in the contact graph. By transforming the graph through adding and removing edges, we are able to control the amount of link privacy. The evaluation of a greedy approach shows that it maintains the node ranking very well, even if many links are changed.
Keywords: data privacy; graph theory; mobile computing; smart phones; telecommunication network routing; link privacy; node ranking; opportunistic network contact graphs; opportunistic network routing; past meeting recording; privacy risk; social structure recording; Approximation algorithms; Correlation; Greedy algorithms; Measurement; Mobile handsets; Privacy; Routing (ID#: 15-5759)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6919020&isnumber=6918912
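
The trade-off can be reproduced in miniature: perturb the edges of a synthetic contact graph and check how well the node ranking survives. Degree centrality and random edge flips below are simple stand-ins for the utility measure and the greedy transformation studied in the paper:

    import random

    random.seed(1)
    n = 30
    edges = {(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < 0.2}

    def ranking(edge_set):
        """Rank nodes by degree centrality (stand-in for the paper's utility)."""
        deg = {v: 0 for v in range(n)}
        for i, j in edge_set:
            deg[i] += 1
            deg[j] += 1
        return sorted(range(n), key=lambda v: -deg[v])

    # Perturb: flip a fraction of node pairs (adds link privacy, erodes true links).
    flipped = set(edges)
    all_pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for pair in random.sample(all_pairs, k=len(all_pairs) // 10):
        flipped ^= {pair}            # toggle edge membership

    before, after = ranking(edges), ranking(flipped)
    top10_kept = len(set(before[:10]) & set(after[:10]))
    print(f"top-10 ranking overlap after flipping 10% of pairs: {top10_kept}/10")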

 

Han Vinck, A.J.; Jivanyan, A.; Winzen, J., “Gaussian Fuzzy Commitment,” Information Theory and Its Applications (ISITA), 2014 International Symposium on, vol., no., pp. 571, 574, 26-29 Oct. 2014. doi:(not provided)
Abstract: We discuss the protection of Gaussian biometric templates. We first introduce the Juels-Wattenberg scheme for binary biometrics, where the binary biometrics result from hard-quantized Gaussian biometrics. The Juels-Wattenberg scheme adds a random binary code word to the biometric for privacy reasons and to allow errors in the biometric at authentication. We modify the Juels-Wattenberg scheme in such a way that we do not have to quantize the biometrics. We investigate and compare the performance of both approaches.
Keywords: Gaussian processes; authorisation; biometrics (access control); data privacy; fuzzy set theory; Gaussian biometric template protection; Gaussian fuzzy commitment; Juels-Wattenberg scheme; binary biometrics; hard-quantized Gaussian biometrics; random binary code word; Australia; Authentication; Decoding; Error analysis; Error correction codes; Noise; Vectors (ID#: 15-5760)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979908&isnumber=6979787
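
For reference, the binary Juels-Wattenberg baseline that this paper generalizes can be sketched with a repetition code standing in for the error-correcting code (all parameters hypothetical; the paper's Gaussian variant avoids the hard quantization assumed here):

    import random

    def encode(bit, r=15):           # r-fold repetition code
        return [bit] * r

    def decode(block):               # majority vote corrects up to (r-1)//2 errors
        return int(sum(block) > len(block) // 2)

    random.seed(7)
    key = [random.randint(0, 1) for _ in range(8)]             # secret key bits
    codeword = [b for bit in key for b in encode(bit)]         # 120-bit codeword
    biometric = [random.randint(0, 1) for _ in range(len(codeword))]

    # Commit: the public helper data reveals neither key nor biometric alone.
    helper = [c ^ b for c, b in zip(codeword, biometric)]

    # Authenticate with a noisy probe of the same biometric (8% bit errors).
    probe = [b ^ (random.random() < 0.08) for b in biometric]
    noisy_cw = [h ^ p for h, p in zip(helper, probe)]
    recovered = [decode(noisy_cw[i:i + 15]) for i in range(0, len(noisy_cw), 15)]
    print("key recovered:", recovered == key)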

 

Prasad, M.; Chou, K.P.; Saxena, A.; Kawrtiya, O.P.; Li, D.L.; Lin, C.T., “Collaborative Fuzzy Rule Learning for Mamdani Type Fuzzy Inference System with Mapping of Cluster Centers,” Computational Intelligence in Control and Automation (CICA), 2014 IEEE Symposium on, vol., no., pp. 1, 6, 9-12 Dec. 2014. doi:10.1109/CICA.2014.7013227
Abstract: This paper demonstrates a novel model for a Mamdani-type fuzzy inference system that uses the knowledge learning ability of collaborative fuzzy clustering and the rule learning capability of FCM. The collaboration process finds consistency between different datasets; these datasets may be generated at different places, or at the same place under diverse environments, share a common feature space, and are brought together to find the features common to them. For any kind of collaboration or integration of datasets, privacy and security need to be maintained at some level. The collaboration process helps the fuzzy inference system define an accurate number of rules for structure learning and keeps the performance of the system at a satisfactory level while preserving the privacy and security of the given datasets.
Keywords: fuzzy reasoning; fuzzy set theory; learning (artificial intelligence); pattern clustering; Mamdani type fuzzy inference system; cluster centers mapping; collaboration process; collaborative fuzzy clustering; collaborative fuzzy rule learning; knowledge learning ability; Brain modeling; Collaboration; Data models; Fuzzy logic; Knowledge based systems; Mathematical model; Prototypes; collaboration process; collaborative fuzzy clustering (CFC); fuzzy c-means (FCM); fuzzy inference system; privacy and security; structure learning (ID#: 15-5761)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013227&isnumber=7013220

 

Ignatenko, T.; Willems, F.M.J., “Privacy-Leakage Codes for Biometric Authentication Systems,” Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, vol., no., pp. 1601, 1605, 4-9 May 2014. doi:10.1109/ICASSP.2014.6853868
Abstract: In biometric privacy-preserving authentication systems that are based on key-binding, two terminals observe two correlated biometric sequences. The first terminal selects a secret key, which is independent of the biometric data, binds this secret key to the observed biometric sequence and communicates it to the second terminal by sending a public message. This message should only contain a negligible amount of information about the secret key, but also leak as little as possible about the biometric data. Current approaches to realize such biometric systems use fuzzy commitment with codes that, given a secret-key rate, can only achieve the corresponding privacy-leakage rate equal to one minus this secret-key rate. However, the results in Willems and Ignatenko [2009] indicate that lower privacy leakage can be achieved if vector quantization is used at the encoder. In this paper we study the use of convolutional and turbo codes applied in fuzzy commitment and its modifications that realize this.
Keywords: biometrics (access control); convolutional codes; correlation theory; data privacy; fuzzy set theory; message authentication; sequential codes; turbo codes; vector quantisation; biometric authentication system; biometric privacy preserving authentication system; biometric sequence; convolutional codes; correlated biometric sequences; encoder; fuzzy commitment; privacy leakage codes; privacy leakage rate; public message sending; secret key rate; turbo codes; vector quantization; Authentication; Biometrics (access control); Convolutional codes; Decoding; Privacy; Quantization (signal); Signal to noise ratio; BCH codes; Biometric authentication; convolutional codes; privacy; turbo codes (ID#: 15-5762)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853868&isnumber=6853544

 

Al-Abdulkarim, L.; Molin, E.; Lukszo, Z.; Fens, T., “Acceptance of ICT-Intensive Socio-Technical Infrastructure Systems: Smart Metering Case in the Netherlands,” Networking, Sensing and Control (ICNSC), 2014 IEEE 11th International Conference on, vol., no., pp. 399, 404, 7-9 April 2014. doi:10.1109/ICNSC.2014.6819659
Abstract: There are several initiatives worldwide to deploy smart meters (SMs). SM systems offer services aimed at achieving many goals beyond metering the electricity consumption of households. Despite the advantages gained by SMs, there are serious issues that may prevent the system from reaching its goals. One obstacle, which can lead to social rejection of SMs, is perceived security and privacy violations of consumers' information. This poses a significant threat to a successful rollout and operation of the system, as consumers are a cornerstone in the fulfillment of goals such as energy efficiency and savings through their active interaction with SMs. To investigate consumers' perception of SMs, theories and models from the technology acceptance literature can be used to understand consumers' behaviors and explore factors that can have a significant impact on consumers' acceptance and usage of an SM. In this paper, a hybrid, extended model of two well-known technology acceptance theories is presented: the Unified Theory of Acceptance and Use of Technology (UTAUT) and Innovation Diffusion Theory (IDT). The hybrid model is further extended with acceptance determinants derived from the smart metering case in the Dutch context. The model aims to investigate determinants that can shed light on consumers' perception and acceptance of SMs.
Keywords: consumer behaviour; domestic appliances; electricity supply industry; energy conservation; innovation management; power consumption; power system security; smart meters; Dutch context; ICT-intensive socio-technical infrastructure system; IDT; Netherlands; SM systems; UTAUT; acceptance determinants; consumer acceptance; consumer behaviors; consumer information; consumer perception; consumer usage; electricity consumption metering; energy efficiency; energy savings; households; innovation diffusion theory; privacy violations; security violations; smart metering case; social rejection; technology acceptance literature; technology acceptance theories; unified theory of acceptance and usage of technology; Reliability; System-on-chip; Critical infrastructures; Information security and privacy; Smart metering; Social acceptance; Socio-technical systems (ID#: 15-5763)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6819659&isnumber=6819588

 

Chi Chen; Chaogang Wang; Tengfei Yang; Dongdai Lin; Song Wang; Jiankun Hu, “Optional Multi-Biometric Cryptosystem Based on Fuzzy Extractor,” Fuzzy Systems and Knowledge Discovery (FSKD), 2014 11th International Conference on, vol., no., pp. 989, 994, 19-21 Aug. 2014. doi:10.1109/FSKD.2014.6980974
Abstract: With the wide use of smart devices, biometric cryptosystems are used to protect users' private data. However, biometric cryptosystems are rarely used in mobile cloud scenarios, because biometric sensors differ across devices. In this paper, an optional multi-biometric cryptosystem based on a fuzzy extractor and secret sharing is proposed. Each enrolled biometric modality generates a feature vector, and the feature vector is put into a fuzzy extractor to get a stable codeword, namely a bit string. All the codewords are used to bind a random key based on a secret sharing method, and the key can be used to encrypt users' private data. During the verification phase, a subset of the enrolled biometric modalities is enough to recover the random key. Therefore, the proposed scheme can provide a user with the same biometric key on different devices. In addition, experiments on a virtual multi-biometric database show that the novel optional multi-biometric cryptosystem is better than the corresponding uni-biometric cryptosystem in both matching accuracy and key entropy.
Keywords: biometrics (access control); cloud computing; cryptography; entropy; fuzzy set theory; mobile computing; vectors; bit-string; codewords; feature vector; fuzzy extractor; key entropy; mobile cloud; optional multibiometric cryptosystem; smart devices; users privacy data; Accuracy; Cryptography; Databases; Feature extraction; Fingerprint recognition; Iris recognition; cryptosystem; fuzzy extractor; key generation; mobile cloud; multi-biometric; secret share (ID#: 15-5764)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980974&isnumber=6980796
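
The "optional" property, that any k of the n enrolled modalities recover the key, is threshold secret sharing. A minimal Shamir sketch over a prime field follows (field and parameters hypothetical; in the paper each share is additionally bound to a modality's fuzzy-extractor codeword rather than stored in the clear):

    import random

    P = 2**61 - 1                      # prime field for Shamir sharing

    def split(secret, n, k):
        """Split `secret` into n shares, any k of which reconstruct it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def combine(shares):
        """Lagrange interpolation of the sharing polynomial at x = 0."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    key = random.randrange(P)          # key that encrypts the user's private data
    shares = split(key, n=3, k=2)      # one share per modality: face, iris, finger
    print(combine(shares[:2]) == key)              # any two modalities suffice
    print(combine([shares[0], shares[2]]) == key)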

 

Barber, R.F.; Duchi, J., “Privacy: A Few Definitional Aspects and Consequences for Minimax Mean-Squared Error,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 1365, 1369, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039572
Abstract: We explore several definitions of “privacy” in statistical estimation and data analysis. We present and review definitions that attempt to capture what, intuitively, it should mean to limit disclosures from the output of a statistical estimation task, providing minimax upper and lower bounds on mean squared error for estimation problems under several common (and some new) definitions of privacy.
Keywords: data analysis; data privacy; estimation theory; mean square error methods; minimax techniques; statistical analysis; data analysis; data privacy; minimax mean-squared error; statistical estimation; Computer science; Convergence; Data analysis; Data privacy; Estimation; Privacy; Testing (ID#: 15-5765)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039572&isnumber=7039338

 

Singh, K.; Jian Zhong; Batten, L.; Bertok, P., “A Solution for Privacy-Preserving, Remote Access to Sensitive Data,” Information Theory and its Applications (ISITA), 2014 International Symposium on, vol., no., pp. 309, 313, 26-29 Oct. 2014. doi:(not provided)
Abstract: Sharing data containing sensitive information, such as medical records, always has privacy and security implications. In situations such as health environments, accurate individual data needs to be provided while at the same time, mass data release for medical research may also be required. This paper outlines a solution for maintaining the privacy of data released en masse in a controlled manner as well as for providing secure access to the original data for authorized users. Our solution maintains privacy in a more efficient manner than do previous solutions.
Keywords: data privacy; data sharing; remote access; sensitive data; sensitive information; Computer architecture; Data privacy; Encryption; Privacy; Protocols (ID#: 15-5766)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979854&isnumber=6979787

 

Pradhan, P.; Venkitasubramaniam, P., “Under the Radar Attacks in Dynamical Systems: Adversarial Privacy Utility Tradeoffs,” Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 242, 246, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970829
Abstract: Cyber physical systems, which integrate physical system dynamics with digital cyber infrastructure, are envisioned to transform our core infrastructural frameworks, such as the smart electricity grid, transportation networks and advanced manufacturing. This integration, however, exposes the physical system's functioning to the security vulnerabilities of cyber communication. Both scientific studies and real-world examples have demonstrated the impact of data injection attacks on state estimation mechanisms in the smart electricity grid. In this work, an abstract theoretical framework is proposed to study data injection/modification attacks on Markov modeled dynamical systems from the perspective of an adversary. Typical data injection attacks focus on one-shot attacks by an adversary and the non-detectability of such attacks under static assumptions. In this work we study dynamic data injection attacks where the adversary is capable of modifying a temporal sequence of data and the physical controller is equipped with prior statistical knowledge about the data arrival process to detect the presence of an adversary. The goal of the adversary is to modify the arrivals to minimize a utility function of the controller while minimizing the detectability of his presence, as measured by the KL divergence between the prior and posterior distributions of the arriving data. Adversarial policies and tradeoffs between utility and detectability are characterized analytically using linearly solvable control optimization.
Keywords: Markov processes; radar; telecommunication security; Markov modeled dynamical systems; advanced manufacturing; adversarial privacy utility tradeoffs; core infrastructural frameworks; cyber communication; cyber physical systems; data arrival process; data injection attacks; digital cyber infrastructure; dynamic data injection attacks; dynamical systems; physical system dynamics; radar attacks; security vulnerabilities; smart electricity grid; state estimation mechanisms; temporal sequence; transportation networks; Markov processes; Mathematical model; Power system dynamics; Privacy; Process control; Smart grids; State estimation (ID#: 15-5767)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970829&isnumber=6970773
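
The detectability measure used here is directly computable. A toy Bernoulli arrival model (distributions hypothetical, not taken from the paper) shows how the KL divergence between the post-attack and prior distributions grows with injection intensity:

    import math

    def kl(p, q):
        """KL divergence D(p || q) between two discrete distributions."""
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    prior = [0.7, 0.3]                  # controller's model: P(no event), P(event)
    for injected in (0.0, 0.05, 0.15):  # fraction of slots the adversary flips
        post = [prior[0] - injected, prior[1] + injected]
        print(f"injection {injected:4.2f}: detectability D = {kl(post, prior):.4f}")

More aggressive modification buys the adversary more utility but raises D, which is the tradeoff the paper characterizes analytically.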

 

Jiyun Yao; Venkitasubramaniam, P., “The Privacy Analysis of Battery Control Mechanisms in Demand Response: Revealing State Approach and Rate Distortion Bounds,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 1377, 1382, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039594
Abstract: Perfect knowledge of a user's power consumption profile by a utility is a violation of privacy and can be detrimental to the successful implementation of demand response systems. It has been shown that an in-home energy storage system, which provides a viable means to achieve the cost savings of instantaneous electricity pricing without inconvenience, can also be used to maintain the privacy of a user's power profile. Optimizing the tradeoff between privacy, as measured by Shannon entropy, and the cost savings that can be provided by a finite-capacity battery with zero tolerance for delay is known to be equivalent to a Partially Observable Markov Decision Process with nonlinear belief-dependent rewards; solutions to such systems suffer from high computational complexity. In this paper, we propose a “revealing state” approach to enable computation of a class of battery control policies that aim to maximize the achievable privacy of in-home demands. In addition, a rate-distortion approach is presented to derive upper bounds on the privacy-cost savings tradeoff of battery control policies. These bounds are derived for a discrete model, where demand and price follow i.i.d. uniform distributions. Numerical results show that the derived bounds are quite close to each other, demonstrating the efficacy of the proposed class of strategies.
Keywords: data privacy; demand side management; energy storage; rate distortion theory; secondary cells; stochastic systems; battery control mechanisms; demand response; in-home demands; in-home energy storage system; privacy analysis; privacy-cost savings tradeoff; rate distortion bounds; rate-distortion approach; revealing state approach; stochastic control; uniform distributions; Batteries; Electricity; Entropy; Optimization; Privacy; Upper bound; Demand Response; Entropy; Privacy; Random Walk; Scheduling; Storage; Utility (ID#: 15-5768)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039594&isnumber=7039338
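
A naive battery policy illustrates why this optimization is nontrivial. In the sketch below (a greedy unit-capacity policy and i.i.d. binary demand, both assumptions of ours, not the paper's model), the grid-visible load looks unrelated to the same slot's demand yet leaks the previous slot's demand exactly:

    import random

    random.seed(3)
    demand = [random.randint(0, 1) for _ in range(100000)]   # i.i.d. binary load

    def grid_load(demand, capacity=1):
        """Naive greedy policy: discharge to hide demand, recharge when idle."""
        soc, out = 0, []
        for d in demand:
            if d == 1 and soc > 0:
                soc -= 1; out.append(0)      # serve demand from the battery
            elif d == 0 and soc < capacity:
                soc += 1; out.append(1)      # recharge from the grid
            else:
                out.append(d)                # battery cannot help this slot
        return out

    load = grid_load(demand)
    same_slot = sum(l == d for l, d in zip(load, demand)) / len(demand)
    lag_one = sum(l == d for l, d in zip(load[1:], demand)) / (len(demand) - 1)
    print(f"P(load_t   = demand_t) = {same_slot:.3f}   (looks private)")
    print(f"P(load_t+1 = demand_t) = {lag_one:.3f}   (leaks everything)")

This kind of delayed leakage is precisely what entropy-based policy design and the paper's bounds are meant to control.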

 

Pequito, S.; Kar, S.; Sundaram, S.; Aguiar, A.P., “Design of Communication Networks for Distributed Computation with Privacy Guarantees,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 1370, 1376, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039593
Abstract: In this paper we address a communication network design problem for distributed computation with privacy guarantees. More precisely, given a possible communication graph between different agents in a network, the objective is to design a protocol, by proper selection of the weights in the dynamics induced by the communication graph, such that 1) weighted average consensus of the initial states of all the agents will be reached; and 2) there are privacy guarantees, where each agent is not able to retrieve the initial states of non-neighbor agents, with the exception of a small subset of agents (that will be precisely characterized). In this paper, we assume that the network is cooperative, i.e., each agent is passive in the sense that it executes the protocol correctly and does not provide incorrect information to its neighbors, but may try to retrieve the initial states of non-neighbor agents. Furthermore, we assume that each agent knows the communication protocol.
Keywords: cooperative communication; graph theory; multi-agent systems; protocols; communication graph; communication network design; communication protocol; cooperative network; distributed computation; network agent; nonneighbor agent; privacy guarantee; weighted average consensus; Bipartite graph; Computational modeling; Computers; Educational institutions; Privacy; Protocols; Tin (ID#: 15-5769)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039593&isnumber=7039338

 

Papadopoulos, A.; Czap, L.; Fragouli, C., “Secret Message Capacity of a Line Network,” Communication, Control, and Computing (Allerton), 2014 52nd Annual Allerton Conference on, vol., no., pp. 1341, 1348, Sept. 30 2014 - Oct. 3 2014. doi:10.1109/ALLERTON.2014.7028611
Abstract: We investigate the problem of information theoretically secure communication in a line network with erasure channels and state feedback. We consider a spectrum of cases for the private randomness that intermediate nodes can generate, ranging from having intermediate nodes generate unlimited private randomness, to having intermediate nodes generate no private randomness, and all cases in between. We characterize the secret message capacity when either only one of the channels is eavesdropped or all of the channels are eavesdropped, and we develop polynomial time algorithms that achieve these capacities. We also give an outer bound for the case where an arbitrary number of channels is eavesdropped. Our work is the first to characterize the secrecy capacity of a network of arbitrary size, with imperfect channels and feedback.
Keywords: channel capacity; computational complexity; data privacy; network theory (graphs); state feedback; telecommunication security; erasure channels; imperfect channels; information theoretically secure communication problem; intermediate nodes; line network; polynomial time algorithms; private randomness; secret message capacity; state feedback; Automatic repeat request; Random variables; Receivers; Relays; Security; State feedback; Vectors (ID#: 15-5770)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028611&isnumber=7028426

 

Bounagui, Y.; Hafiddi, H.; Mezrioui, A., “Challenges for IT Based Cloud Computing Governance,” Intelligent Systems: Theories and Applications (SITA-14), 2014 9th International Conference on, vol., no., pp. 1, 8, 7-8 May 2014. doi:10.1109/SITA.2014.6847289
Abstract: For some years now, the concept of Cloud Computing (CC) has been presented as the new revolution of information technology. It offers not only a technical innovation (better IT system flexibility, improved working methods and cost control) but also a new economic model, built around the concept of IT services that are identifiable, classifiable and countable for end users, who can benefit by paying for use without having to make huge investments. In this paper, we show that despite these advantages, implementing such a concept has an impact on enterprise stakeholders (IT direction, business direction, suppliers direction, etc.). Many aspects must be managed differently from traditional systems: availability, security, privacy and compliance are just some of the aspects that must be monitored and managed more effectively. Thus, IT-based CC governance is a necessity for defining good management practices, especially because an adapted governance framework is lacking. Current IT governance practices and standards (ITIL, COBIT, ISO2700x, etc.) still have many limitations: they are far from covering end-to-end governance, they are difficult to use and maintain, and they have many overlapping points. It becomes mandatory for companies to address these challenges, control the capabilities offered by CC, develop cloud-oriented policies that reflect their exact needs, and adopt a flexible, coherent and global IT-based CC governance framework.
Keywords: business data processing; cloud computing; information technology; CC; IT based cloud computing governance; IT direction; IT services; IT system flexibility; adapted governance framework; business direction; cost control; economic model; enterprise stakeholders; information technology; suppliers direction; technical innovation; Automation; Computational modeling; Organizations; Reliability; Software; Standards organizations; Cloud Computing; Framework ; IT Governance; Security; Standards (ID#: 15-5771)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847289&isnumber=6846554

 

Tiits, M.; Kalvet, T.; Mikko, K.-L., “Social Acceptance of ePassports,” Biometrics Special Interest Group (BIOSIG), 2014 International Conference of the, vol., no., pp. 1, 6, 10-12 Sept. 2014. doi:(not provided)
Abstract: Using a large-scale web survey in six countries, we study the societal readiness for and acceptance of specific technology options in relation to the potential next generation of ePassports. We find that the public has only limited knowledge of the electronic data and functions ePassports include, and often has no clear opinion on various potential uses for ePassports and related personal data. Still, the public expects ePassports to improve protection from document forgery, the accuracy and reliability of the identification of persons, and protection from identity theft. The main risks the public associates with ePassports include the possible use of personal information for purposes other than those initially stated, and covert surveillance. Compared to earlier studies, our research shows that issues of possible privacy invasion and abuse of information are much more strongly perceived by the public. There is a weak correlation between a person's level of knowledge about ePassports and their willingness to accept the use of advanced biometrics, such as fingerprints or eye iris images, in different identity management and identity checking scenarios. Furthermore, the public becomes more undecided about ePassport applications as we move from the basic state of the art towards more advanced biometric technologies in various scenarios. The successful pathway to greater acceptability of the use of advanced biometrics in ePassports should start from the introduction of perceivably high-benefit and low-risk applications. As public awareness is low, citizens' belief in government benevolence, i.e. the belief that the government acts in citizens' best interest, emerges as an important factor in the overall context.
Keywords: biometrics (access control); data privacy; government data processing; social aspects of automation; biometrics; ePassports social acceptance; government benevolence; identity checking scenarios; identity management; information abuse; privacy invasion; Context; Fingerprint recognition; Government; Iris recognition; Logic gates; Security; ePassports; social acceptance; unified theory of acceptance and use of technology (ID#: 15-5773)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7029408&isnumber=7029401

 

Xi Chen; Luping Zheng; Zengli Liu; Jiashu Zhang, “Privacy-Preserving Biometrics Using Matrix Random Low-Rank Approximation Approach,” Biometrics and Security Technologies (ISBAST), 2014 International Symposium on, vol., no., pp. 6, 12, 26-27 Aug. 2014. doi:10.1109/ISBAST.2014.7013085
Abstract: In this paper, we propose a matrix random low-rank approximation (MRLRA) approach to generate cancelable biometric templates for privacy preservation. MRLRA constructs a random low-rank matrix to approximate the hybridization of a biometric feature and a random matrix. Theoretical analysis shows that the distance between a cancelable low-rank biometric template produced by MRLRA and its original template is very small, so the verification and authentication performance of MRLRA is close to that of the original templates. Cancelable biometric templates produced by MRLRA overcome the weakness of random-projection-based cancelable templates, whose performance deteriorates considerably under the same tokens. Experiments have verified that (i) cancelable biometric templates produced by MRLRA are sensitive to the user-specific tokens used to construct the random matrix; (ii) MRLRA can reduce the noise of biometric templates; and (iii) even when the same tokens are used, the performance of MRLRA's cancelable biometric templates does not deteriorate much.
Keywords: approximation theory; biometrics (access control); data privacy; formal verification; matrix algebra; MRLRA approach; authentication; hybridization; matrix random low-rank approximation approach; privacy-preserving biometrics; verification; Approximation methods; Authentication; Biometrics (access control); Databases; Face; Feature extraction; Vectors; Cancelable biometric templates; Matrix random low-rank approximation; Privacy-preserving (ID#: 15-5774)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013085&isnumber=7013076
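
The abstract above names MRLRA's ingredients -- a biometric feature, a token-seeded random matrix, and a low-rank approximation -- without giving the construction. The toy Python sketch below shows one plausible shape of such a scheme; the mixing step, the rank, and all parameter choices are illustrative assumptions, not the paper's algorithm. Revoking the token revokes the template, which is what makes it cancelable.

    import numpy as np

    def cancelable_template(feature, token, rank=4):
        # The user-specific token seeds all randomness, so issuing a new
        # token yields a new, unlinkable template (illustrative only).
        rng = np.random.default_rng(token)
        feat = np.asarray(feature, dtype=float).reshape(16, -1)
        random_matrix = rng.normal(scale=0.1, size=feat.shape)
        mixed = feat + random_matrix              # hybridize feature and randomness
        u, s, vt = np.linalg.svd(mixed, full_matrices=False)
        s[rank:] = 0.0                            # keep only the top singular values
        return u @ np.diag(s) @ vt                # random low-rank approximation

    template = cancelable_template(np.random.rand(256), token=42)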

 

Zheng Yan; Mingjun Wang; Peng Zhang, “A Scheme to Secure Instant Community Data Access Based on Trust and Contexts,” Computer and Information Technology (CIT), 2014 IEEE International Conference on, vol., no., pp. 646, 651, 11-13 Sept. 2014. doi:10.1109/CIT.2014.136
Abstract: Mobile Ad Hoc Networks provide a generic platform for instant social networking (ISN), such as instant communities (IC). For a crucial talk in an instant community, it is important to set up a secure communication channel among trustworthy members in order to avoid malicious eavesdropping or to narrow down the member communication scope. Previous work has not yet considered how to control social communication data access based on trust and other attributes, and it suffers from high complexity. In this paper, we propose a scheme to secure instant community data access based on trust levels, contexts and time in a fine-grained manner by applying Attribute-Based Encryption. Any community member can select other members with at least a minimum level of trust for secure ISN communications. The advantages, security and performance of the proposed scheme are evaluated and justified through extensive analysis, security proof and implementation. The results show the efficiency and effectiveness of our scheme.
Keywords: cryptography; mobile ad hoc networks; mobile computing; social networking (online); trusted computing; ISN; attribute-based encryption; data access security; instant social networking; mobile ad hoc networks; trust levels; Access control; Communities; Complexity theory; Encryption; Integrated circuits; Privacy preserving; data mining; data perturbation; k-anonymity (ID#: 15-5775)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984726&isnumber=6984594
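
To make the policy side of the scheme above concrete, the toy predicate below checks a minimum trust level, a context, and a time bound. All names are invented; in the actual scheme such a policy is enforced cryptographically by Attribute-Based Encryption, so that only members whose attribute keys satisfy it can decrypt, rather than by a runtime check like this.

    import time
    from dataclasses import dataclass

    @dataclass
    class Member:
        name: str
        trust_level: int
        context: str

    def may_access(member, min_trust, required_context, not_after):
        # Illustrative policy predicate only: trust level, context and
        # time window, as named in the abstract.
        return (member.trust_level >= min_trust
                and member.context == required_context
                and time.time() <= not_after)

    alice = Member("alice", trust_level=3, context="community-42")
    print(may_access(alice, 2, "community-42", time.time() + 3600))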
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Cyber-Physical System Security and Privacy, 2014, Part 1

 

 
SoS Logo

Cyber-Physical System

Security and Privacy, 2014

Part 1


Cyber-Physical systems generally are systems where computers control physical entities. They exist in areas as diverse as automobiles, manufacturing, energy, transportation, chemistry, and computer appliances. In this bibliography, the primary focus of published research is in smart grid technologies—the use of cyber-physical systems to coordinate the generation, transmission, and use of electrical power and its sources. Because of its strategic importance and the consequences of intrusion, smart grid is of particular importance to the Science of Security. The work presented here was published in 2014.


Armin Wasicek, Patricia Derler, Edward A. Lee. “Aspect-oriented Modeling of Attacks in Automotive Cyber-Physical Systems.” DAC '14 Proceedings of the 51st Annual Design Automation Conference, June 2014, Pages 1-6. doi:10.1145/2593069.2593095
Abstract: This paper introduces aspect-oriented modeling (AOM) as a powerful, model-based design technique to assess the security of Cyber-Physical Systems (CPS). Particularly in safety-critical CPS such as automotive control systems, the protection against malicious design and interaction faults is paramount to guaranteeing correctness and reliable operation. Essentially, attack models are associated with the CPS in an aspect-oriented manner to evaluate the system under attack. This modeling technique requires minimal changes to the model of the CPS. Using application-specific metrics, the designer can gain insights into the behavior of the CPS under attack.
Keywords: Aspect-oriented Modeling, Cyber-Physical Systems, Security (ID#: 15-5832)
URL: http://doi.acm.org/10.1145/2593069.2593095

 

Sven Wohlgemuth. “Is Privacy Supportive for Adaptive ICT Systems?” iiWAS '14 Proceedings of the 16th International Conference on Information Integration and Web-based Applications & Services, December 2014, Pages 559-570. doi:10.1145/2684200.2684363
Abstract: Adaptive ICT systems promise to improve resilience by re-using and sharing ICT services and information related to electronic identities and the real-time requirements of business networking applications. The aim is to improve the welfare and security of a society, e.g. a "smart" city. Even though adaptive ICT systems technically enable everyone to participate both as service consumer and as provider without running the required technical infrastructure oneself, uncertain knowledge about the enforcement of legal, business, and social requirements impedes taking advantage of adaptive ICT systems. With the current trust infrastructure, IT risks to confidentiality and accountability are undecidable due to a lack of control, and IT risks to integrity and availability are undecidable due to a lack of transparency. The reasons are insufficient quantification of IT risk as well as unacceptable knowledge of cause-and-effect relationships and accountability. This work introduces adaptive identity management to improve control and transparency for trustworthy spontaneous information exchange, the critical activity of adaptive ICT systems.
Keywords: Adaptive ICT System, Game Theory, IT Risk Management, Identity Management, Multilateral IT Security, Privacy, Resilience, Security (ID#: 15-5833)
URL: http://doi.acm.org/10.1145/2684200.2684363

 

Jin Dong, Seddik M. Djouadi, James J. Nutaro, Teja Kuruganti. “Secure Control Systems with Application to Cyber-Physical Systems.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 9-12. doi:10.1145/2602087.2602094
Abstract: Control systems are computer-based systems with networked units consisting of sensors, actuators, control processing units, and communication devices. The role of a control system is to interact with, monitor, and control physical processes. Reactive power control is a fundamental issue in ensuring the security of the power network. It is claimed that Synchronous Condensers (SC) have been used at both distribution and transmission voltage levels to improve stability and to maintain voltages within desired limits under changing load conditions and contingency situations. The performance of a PI controller under various tripping faults is analyzed for SC systems. Most of the effort in protecting these systems has gone into protection against random failures, i.e., reliability. Besides failures, however, these systems are subject to various signal attacks, for which new analyses are discussed here. When a breach does occur, it is necessary to react in a time commensurate with the physical dynamics of the system as it responds to the attack. Failure to act swiftly enough may result in undesirable, and possibly irreversible, physical effects. Therefore, it is meaningful to evaluate the security of a cyber-physical system, especially to protect it from cyber-attack. Illustrative numerical examples are provided, together with an application to SC systems.
Keywords: SCADA systems, cyber-physical systems, secure control, security (ID#: 15-5834)
URL: http://doi.acm.org/10.1145/2602087.2602094
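
To make the notion of a "signal attack" on such a loop concrete, the discrete PI sketch below regulates the measurement it sees, so a bias injected into the sensor signal silently shifts the true operating point. The gains, time step, and per-unit values are invented for illustration and are not taken from the paper's SC model.

    def pi_step(setpoint, measurement, state, kp=1.0, ki=0.1, dt=0.01):
        # One step of a discrete PI controller (illustrative gains/step).
        error = setpoint - measurement
        state["integral"] += error * dt
        return kp * error + ki * state["integral"]

    state = {"integral": 0.0}
    true_voltage, attack_bias = 0.98, 0.05    # hypothetical per-unit values
    # The controller acts on the biased measurement, so the closed loop
    # settles where measurement == setpoint, i.e. where the *true*
    # voltage is 1.0 - 0.05 = 0.95 p.u. rather than 1.0 p.u.
    u = pi_step(1.0, true_voltage + attack_bias, state)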

 

Andrei Costin, Aurélien Francillon. “Short Paper: A Dangerous ‘Pyrotechnic Composition’: Fireworks, Embedded Wireless and Insecurity-by-Design.” WiSec '14 Proceedings of the 2014 ACM Conference on Security and Privacy In Wireless & Mobile Networks, July 2014, Pages 57-62. doi:10.1145/2627393.2627401
Abstract: Fireworks are used around the world to salute popular events such as festivals, weddings, and public or private celebrations. Beyond their entertainment value, fireworks are essentially colored explosives, which are sometimes directly used as weapons. Modern fireworks systems rely heavily on 'wireless pyrotechnic firing systems'. These 'embedded cyber-physical systems' (ECPS) are able to remotely control the ignition of pyrotechnic compositions. The failure to properly secure these computer sub-systems may have disastrous, if not deadly, consequences. They rely on standardized wireless communications, off-the-shelf embedded hardware and custom firmware. In this short paper, we describe our experience in discovering and exploiting a wireless firing system in a short amount of time without any prior knowledge of such systems. We demonstrate our methodology, starting from firmware analysis, through the discovery of vulnerabilities, to the demonstration of a real-world attack. Finally, we stress that the security of pyrotechnic firing systems should be taken seriously, which could be achieved through improved safety compliance requirements and controls.
Keywords: embedded, exploitation, firing systems, security, vulnerabilities, wireless (ID#: 15-5835)
URL: http://doi.acm.org/10.1145/2627393.2627401

 

Marco Balduzzi, Alessandro Pasta, Kyle Wilhoit. “A Security Evaluation of AIS Automated Identification System.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 436-445. doi:10.1145/2664243.2664257
Abstract: AIS, the Automatic Identification System, is an application of cyber-physical systems (CPS) to smart transportation at sea. Primarily used for collision avoidance and traffic monitoring by ship captains and maritime authorities, AIS has been a mandatory installation on over 300,000 vessels worldwide since 2002. Other promoted benefits are accident investigation, aids to navigation, and search and rescue (SAR) operations. In this paper, we present a unique security evaluation of AIS, introducing threats that affect both the implementation in online providers and the protocol specification. Using a novel software-based AIS transmitter that we designed, we show that our findings affect all transponders deployed globally on vessels and other maritime stations like lighthouses, buoys, AIS gateways, vessel traffic services and aircraft involved in SAR operations. Our concerns have been acknowledged by online providers and international standards organizations, and we are currently and actively working together to improve the overall security.
Keywords: (not provided) (ID#: 15-5836)
URL: http://doi.acm.org/10.1145/2664243.2664257

 

Shivam Bhasin, Jean-Luc Danger, Tarik Graba, Yves Mathieu, Daisuke Fujimoto, Makoto Nagata. “Physical Security Evaluation at an Early Design-Phase: A Side-Channel Aware Simulation Methodology.” ES4CPS '14 Proceedings of International Workshop on Engineering Simulations for Cyber-Physical Systems, March 2014, Pages 13. doi:10.1145/2559627.2559628
Abstract: Cyber-Physical Systems (CPS) are often deployed in critical domains like health, traffic management, etc. Therefore, security is one of the major driving factors in the development of CPS. In this paper, we focus on cryptographic hardware embedded in CPS and propose a simulation methodology to evaluate the security of these cryptographic hardware cores. Designers are often concerned about attacks like Side-Channel Analysis (SCA), which target the physical implementation of cryptography to compromise its security. SCA considers the physical "leakage" of a well-chosen intermediate variable correlated with the secret. Certain countermeasures can be deployed, like dual-rail logic or masking, to resist SCA. However, to design an effective countermeasure or to fix the vulnerable sources in a circuit, it is of prime importance for a designer to know the main leaking sources in the device. In practice, the security of a circuit is evaluated only after the chip is fabricated, followed by a certification process. If the circuit has security concerns, it must pass through all the design phases again, from RTL to fabrication, which increases time-to-market. In such a scenario, it is very helpful if a designer can determine the vulnerabilities early in the design cycle and fix them. In this paper, we present an evaluation of different strategies to verify the SCA robustness of a cryptographic circuit at different design steps, from the RTL to the final layout. We compare evaluation based on digital and electrical simulations in terms of speed and accuracy in a side-channel context. We show that a low-level digital simulation can be fast and sufficiently accurate for side-channel analysis.
Keywords: Design-Time security Evaluation, Side-Channel Analysis (ID#: 15-5837)
URL: http://doi.acm.org/10.1145/2559627.2559628
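
For readers unfamiliar with SCA, the sketch below shows the core of a correlation-based attack on simulated traces: for each key guess, hypothesize a leakage (here the Hamming weight of a chosen intermediate) and correlate it with the recorded samples. This is a generic textbook CPA skeleton, not the paper's evaluation flow; a real attack would target a nonlinear intermediate such as an AES S-box output.

    import numpy as np

    HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming weights

    def cpa_best_guesses(traces, plaintext_bytes):
        # traces: (n, samples) simulated power traces; plaintext_bytes:
        # (n,) known input byte per trace. The plain XOR leakage model
        # keeps the sketch short; see the caveat above about S-boxes.
        t = traces - traces.mean(axis=0)
        t_norm = np.linalg.norm(t, axis=0)
        scores = np.zeros(256)
        for guess in range(256):
            h = HW[plaintext_bytes ^ guess].astype(float)
            h -= h.mean()
            r = (h @ t) / (np.linalg.norm(h) * t_norm)  # Pearson r per sample
            scores[guess] = np.abs(r).max()
        return np.argsort(scores)[::-1]                 # likeliest keys first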

 

Lujo Bauer, Florian Kerschbaum. “What are the Most Important Challenges for Access Control in New Computing Domains, such as Mobile, Cloud and Cyber-Physical Systems?” SACMAT '14 Proceedings of the 19th ACM Symposium on Access Control Models and Technologies, June 2014, Pages 127-128. doi:10.1145/2613087.2613090
Abstract: We are seeing a significant shift in the types and characteristics of computing devices that are commonly used. Today, more smartphones are sold than personal computers. Cloud systems are another area of rapid growth, and our everyday lives are invaded by sensors like smart meters and electronic tickets. The days when most computing resources were managed directly by a computer's operating system are over—data and computation are distributed, and devices are typically always connected via the Internet. In light of this shift, it is important to revisit the basic security properties we desire of computing systems and the mechanisms that we use to provide them. A building block of most of the security we enjoy in today's systems is access control. This panel will examine the challenges we face in adapting the access control models, techniques, and tools produced thus far to today's and tomorrow's computing environments. Key characteristics of these new systems that may require our approach to access control to change are that in many (e.g., cloud) systems users do not directly control their data; that a vast population of users operating mobile and other new devices has very little education in their use; and that cyber-physical systems permeate our environment to the point where they are often invisible to their users. Access control comprises enforcement systems, specification languages, and policy-management tools or approaches. In each of these areas the shifting computing landscape leaves us examining how current technology can be applied to new contexts or looking for new technology to fill the gap. Enforcement of access-control policy based on a trusted operating system, for example, does not cleanly translate to massively distributed, heterogeneous computing environments; to environments with many devices that are minimally administered or administered with minimal expertise; or to potentially untrusted clouds that hold sensitive data and computations that belong to entities other than the cloud owner. What technologies or system components should be the building blocks of enforcement in these settings?
Keywords: access control, challenges, panel (ID#: 15-5838)
URL: http://doi.acm.org/10.1145/2613087.2613090

 

Mayur Naik. “Large-Scale Configurable Static Analysis.” SOAP '14 Proceedings of the 3rd ACM SIGPLAN International Workshop on the State of the Art in Java Program Analysis, June 2014, Pages 1-1. doi:10.1145/2614628.2614635
Abstract: Program analyses developed over the last three decades have demonstrated the ability to prove non-trivial properties of real-world programs. This ability in turn has applications to emerging software challenges in security, software-defined networking, cyber-physical systems, and beyond. The diversity of such applications necessitates adapting the underlying program analyses to client needs, in aspects of scalability, applicability, and accuracy. Today's program analyses, however, do not provide useful tuning knobs. This talk presents a general computer-assisted approach to effectively adapt program analyses to diverse clients. The approach has three key ingredients. First, it poses optimization problems that expose a large set of choices to adapt various aspects of an analysis, such as its cost, the accuracy of its result, and the assumptions it makes about missing information. Second, it solves those optimization problems by new search algorithms that efficiently navigate large search spaces, reason in the presence of noise, interact with users, and learn across programs. Third, it comprises a program analysis platform that facilitates users to specify and compose analyses, enables search algorithms to reason about analyses, and allows using large-scale computing resources to parallelize analyses.
Keywords: (not provided) (ID#: 15-5839)
URL: http://doi.acm.org/10.1145/2614628.2614635

 

Anis Ben Aissa, Latifa Ben Arfa Rabai, Robert K. Abercrombie, Ali Mili, Frederick T. Sheldon. “Quantifying Availability in SCADA Environments Using the Cyber Security Metric MFC.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 81-84. doi:10.1145/2602087.2602103
Abstract: Supervisory Control and Data Acquisition (SCADA) systems are distributed networks dispersed over large geographic areas that aim to monitor and control industrial processes from remote areas and/or a centralized location. They are used in the management of critical infrastructures such as electric power generation, transmission and distribution, water and sewage, industrial manufacturing, as well as oil and gas production. The availability of SCADA systems is tantamount to assuring safety, security and profitability. SCADA systems are the backbone of the national cyber-physical critical infrastructure. Herein, we explore the definition and quantification of an econometric measure of availability as it applies to SCADA systems; our metric is a specialization of the generic measure of mean failure cost.
Keywords: MFC, SCADA, availability, dependability, security measures, security requirements, threats (ID#: 15-5840)
URL:  http://doi.acm.org/10.1145/2602087.2602103
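
As described in the authors' related mean-failure-cost work, the MFC metric is typically computed as a chain of matrix products linking stakeholder stakes, requirement-component dependencies, component-threat impacts, and threat probabilities; the numpy sketch below assumes that formulation and uses invented numbers purely to show the mechanics. Real SCADA values would be elicited from operators, and availability would enter through requirement-specific stakes.

    import numpy as np

    ST = np.array([[120.0, 40.0],   # $/h each stakeholder loses if
                   [ 80.0, 60.0]])  # requirement R1/R2 fails
    DP = np.array([[0.9, 0.2],      # P(requirement fails | component fails)
                   [0.3, 0.8]])
    IM = np.array([[0.7, 0.1],      # P(component fails | threat materializes)
                   [0.2, 0.6]])
    PT = np.array([0.01, 0.05])     # P(threat materializes) per hour

    MFC = ST @ DP @ IM @ PT         # mean failure cost per stakeholder ($/h)
    print(MFC)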

 

Teklemariam Tsegay Tesfay, Jean-Pierre Hubaux, Jean-Yves Le Boudec, Philippe Oechslin. “Cyber-Secure Communication Architecture for Active Power Distribution Networks.” SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 545-552. doi:10.1145/2554850.2555082
Abstract: Active power distribution networks require sophisticated monitoring and control strategies for efficient energy management and automatic adaptive reconfiguration of the power infrastructure. Such requirements are realised by deploying a large number of various electronic automation and communication field devices, such as Phasor Measurement Units (PMUs) or Intelligent Electronic Devices (IEDs), and a reliable two-way communication infrastructure that facilitates the transfer of sensor data and control signals. In this paper, we perform a detailed threat analysis in a typical active distribution network's automation system. We also propose mechanisms by which we can design a secure and reliable communication network for an active distribution network that is resilient to insider and outsider malicious attacks, natural disasters, and other unintended failures. The proposed security solution also guarantees that an attacker is not able to install a rogue field device by exploiting an emergency situation during islanding.
Keywords: PKI, active distribution network, authentication, islanding, smart grid, smart grid security, unauthorised access (ID#: 15-5841)
URL: http://doi.acm.org/10.1145/2554850.2555082

 

Mahdi Azimi, Ashkan Sami, Abdullah Khalili. “A Security Test-Bed for Industrial Control Systems.” MoSEMInA 2014 Proceedings of the 1st International Workshop on Modern Software Engineering Methods for Industrial Automation, May 2014, Pages 26-31. doi:10.1145/2593783.2593790
Abstract: Industrial Control Systems (ICS) such as Supervisory Control And Data Acquisition (SCADA), Distributed Control Systems (DCS) and Distributed Automation Systems (DAS) control and monitor critical infrastructures. In recent years, the proliferation of cyber-attacks on ICS has revealed that a large number of security vulnerabilities exist in such systems. Numerous security solutions have been proposed to remove the vulnerabilities and improve the security of ICS. However, to the best of our knowledge, none of them presents or develops a security test-bed, which is vital for evaluating the security of ICS tools and products. In this paper, a test-bed is proposed for evaluating the security of industrial applications by providing different metrics for static testing, dynamic testing and network testing in industrial settings. Using these metrics and the results of the three tests, industrial applications can be compared with each other from a security point of view. Experimental results on several real-world applications indicate that the proposed test-bed can be successfully employed to evaluate and compare the security level of industrial applications.
Keywords: Dynamic Test, Industrial Control Systems, Network Test, Security, Static Test, Test-bed (ID#: 15-5842)
URL: http://doi.acm.org/10.1145/2593783.2593790

 

Bogdan D. Czejdo, Michael D. Iannacone, Robert A. Bridges, Erik M. Ferragut, John R. Goodall. “Integration of External Data Sources with Cyber Security Data Warehouse.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 49-52. doi:10.1145/2602087.2602098
Abstract: In this paper we discuss problems related to integration of external knowledge and data components with a cyber security data warehouse to improve situational understanding of enterprise networks. More specifically, network assessment and trend analysis can be enhanced by knowledge about most current vulnerabilities and external network events. The cyber security data warehouse can be modeled as a hierarchical graph of aggregations that captures data at multiple scales. Nodes of the graph, which are summarization tables, can be linked to external sources of information. We discuss problems related to timely information about vulnerabilities and how to integrate vulnerability ontology with cyber security network data.
Keywords: aggregation, anomaly detection, cyber security, natural language processing, network intrusion, situational understanding, vulnerability, vulnerability ontology (ID#: 15-5843)
URL: http://doi.acm.org/10.1145/2602087.2602098

 

Dina Hadžiosmanović, Robin Sommer, Emmanuele Zambon, Pieter H. Hartel. “Through the Eye of the PLC: Semantic Security Monitoring for Industrial Processes.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 126-135. doi:10.1145/2664243.2664277
Abstract: Off-the-shelf intrusion detection systems prove an ill fit for protecting industrial control systems, as they do not take their process semantics into account. Specifically, current systems fail to detect recent process control attacks that manifest as unauthorized changes to the configuration of a plant's programmable logic controllers (PLCs). In this work we present a detector that continuously tracks updates to corresponding process variables to then derive variable-specific prediction models as the basis for assessing future activity. Taking a specification-agnostic approach, we passively monitor plant activity by extracting variable updates from the devices' network communication. We evaluate the capabilities of our detection approach with traffic recorded at two operational water treatment plants serving a total of about one million people in two urban areas. We show that the proposed approach can detect direct attacks on process control, and we further explore its potential to identify more sophisticated indirect attacks on field device measurements as well.
Keywords: (not provided) (ID#: 15-5844)
URL: http://doi.acm.org/10.1145/2664243.2664277
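
The detector above derives a prediction model per process variable and flags updates that deviate from it. A first-order autoregressive model, as sketched below, is one simple instance of such a variable-specific model; the paper's actual model derivation is richer, and the threshold factor k is an invented placeholder.

    import numpy as np

    def fit_ar1(series):
        # Fit x[t+1] = a*x[t] + b to the history of one process variable.
        x, y = np.asarray(series[:-1], float), np.asarray(series[1:], float)
        a, b = np.polyfit(x, y, 1)
        sigma = (y - (a * x + b)).std()
        return a, b, sigma

    def suspicious(a, b, sigma, last_value, new_value, k=4.0):
        # Flag a variable update that deviates from its own model.
        return abs(new_value - (a * last_value + b)) > k * sigma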

 

Ting Liu, Yuhong Gui, Yanan Sun, Yang Liu, Yao Sun, Feng Xiao. “SEDE: State Estimation-Based Dynamic Encryption Scheme for Smart Grid Communication.” SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 539-544. doi:10.1145/2554850.2555033
Abstract: The vision of the smart grid relies heavily on communication technologies, as they provide a desirable infrastructure for real-time measurement, transmission, decision and control. But various attacks, such as eavesdropping, information tampering and malicious control command injection, that hamper communication in the Internet would pose a great threat to the security and stability of smart grids. In this paper, a State Estimation-based Dynamic Encryption (SEDE) scheme is proposed to secure communication in the smart grid. Several states of the power system are employed as common secrets to generate a symmetric key at both sides; they are measured on the terminals and calculated at the control center using state estimation. The advantages of SEDE are: 1) the common secrets used to generate the symmetric key are never exchanged in the network, thanks to state estimation, which considerably improves the security of SEDE; 2) measurement and state estimation are essential functions of the terminals and the control center in a power system; 3) the functions applied to encrypt and decrypt data are simple and easily implemented, such as XOR, hashing and rounding. Thus, SEDE is considered an inherent, light-weight and high-security encryption scheme for the smart grid. In the experiments, SEDE is simulated on a 4-bus power system to demonstrate the process of state estimation, key generation and error correction.
Keywords: dynamic encryption, security, smart grid, state estimation (ID#: 15-5845)
URL: http://doi.acm.org/10.1145/2554850.2555033
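
The abstract states that SEDE derives its symmetric key from power-system states known to both sides and uses simple primitives such as XOR, hashing and rounding. The sketch below illustrates that recipe; the rounding precision, state values and function names are illustrative, and the paper's full scheme additionally includes error correction for residual key mismatch.

    import hashlib

    def derive_key(state_estimates, precision=1):
        # Both ends know the same states (measured on the terminal,
        # computed by state estimation at the control center); rounding
        # absorbs small estimation error before hashing.
        rounded = ",".join(f"{v:.{precision}f}" for v in state_estimates)
        return hashlib.sha256(rounded.encode()).digest()

    def xor_cipher(data, key):
        # XOR keystream: encryption and decryption are the same step.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    key = derive_key([1.02, 0.98, 231.5])     # hypothetical bus states
    ct = xor_cipher(b"trip breaker 7", key)
    assert xor_cipher(ct, key) == b"trip breaker 7"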

 

Amel Bennaceur, Arosha K. Bandara, Michael Jackson, Wei Liu, Lionel Montrieux, Thein Than Tun, Yijun Yu, Bashar Nuseibeh. “Requirements-Driven Mediation for Collaborative Security.” SEAMS 2014 Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, June 2014, Pages 37-42. doi:10.1145/2593929.2593938
Abstract: Security is concerned with the protection of assets from intentional harm. Secure systems provide capabilities that enable such protection to satisfy some security requirements. In a world increasingly populated with mobile and ubiquitous computing technology, the scope and boundary of security systems can be uncertain and can change. A single functional component, or even multiple components individually, are often insufficient to satisfy complex security requirements on their own.  Adaptive security aims to enable systems to vary their protection in the face of changes in their operational environment. Collaborative security, which we propose in this paper, aims to exploit the selection and deployment of multiple, potentially heterogeneous, software-intensive components to collaborate in order to meet security requirements in the face of changes in the environment, changes in assets under protection and their values, and the discovery of new threats and vulnerabilities. However, the components that need to collaborate may not have been designed and implemented to interact with one another collaboratively. To address this, we propose a novel framework for collaborative security that combines adaptive security, collaborative adaptation and an explicit representation of the capabilities of the software components that may be needed in order to achieve collaborative security. We elaborate on each of these framework elements, focusing in particular on the challenges and opportunities afforded by (1) the ability to capture, represent, and reason about the capabilities of different software components and their operational context, and (2) the ability of components to be selected and mediated at runtime in order to satisfy the security requirements. We illustrate our vision through a collaborative robotic implementation, and suggest some areas for future work.
Keywords: Security requirements, collaborative adaptation, mediation (ID#: 15-5846)
URL: http://doi.acm.org/10.1145/2593929.2593938

 

Liliana Pasquale, Carlo Ghezzi, Claudio Menghi, Christos Tsigkanos, Bashar Nuseibeh. “Topology Aware Adaptive Security.” SEAMS 2014 Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, June 2014, Pages 43-48. doi:10.1145/2593929.2593939
Abstract: Adaptive security systems aim to protect valuable assets in the face of changes in their operational environment. They do so by monitoring and analysing this environment, and deploying security functions that satisfy some protection (security, privacy, or forensic) requirements. In this paper, we suggest that a key characteristic for engineering adaptive security is the topology of the operational environment, which represents a physical and/or a digital space - including its structural relationships, such as containment, proximity, and reachability. For adaptive security, topology expresses a rich representation of context that can provide a system with both structural and semantic awareness of important contextual characteristics. These include the location of assets being protected or the proximity of potentially threatening agents that might harm them. Security-related actions, such as the physical movement of an actor from a room to another in a building, may be viewed as topological changes. The detection of a possible undesired topological change (such as an actor possessing a safe’s key entering the room where the safe is located) may lead to the decision to deploy a particular security control to protect the relevant asset. This position paper advocates topology awareness for more effective engineering of adaptive security. By monitoring changes in topology at runtime one can identify new or changing threats and attacks, and deploy adequate security controls accordingly. The paper elaborates on the notion of topology and provides a vision and research agenda on its role for systematically engineering adaptive security systems.
Keywords: Topology, adaptation, digital forensics, privacy, security (ID#: 15-5847)
URL: http://doi.acm.org/10.1145/2593929.2593939
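
The paper's own example -- an actor holding a safe's key entering the room where the safe is located -- can be pictured as a tiny topology-plus-rule model. The sketch below is only a caricature of that idea; the room names and the deployed control are invented.

    # Rooms are nodes; agent and asset locations are state over them.
    location = {"safe": "vault_room", "alice": "hallway"}
    key_holders = {"alice"}

    def deploy_control(name):
        print("deploying security control:", name)

    def on_move(agent, destination):
        location[agent] = destination             # a topological change
        if destination == location["safe"] and agent in key_holders:
            deploy_control("secondary_lock_on_safe")

    on_move("alice", "vault_room")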

 

Steven D. Fraser, Djenana Campara, Michael C. Fanning, Gary McGraw, Kevin Sullivan. “Privacy and Security in a Networked World.” SPLASH '14 Proceedings of the companion publication of the 2014 ACM SIGPLAN conference on Systems, Programming, and Applications: Software for Humanity, October 2014, Pages 43-45. doi:10.1145/2660252.2661294
Abstract: As news stories continue to demonstrate, ensuring adequate security and privacy in a networked "always on" world is a challenge; and while open source software can mitigate problems, it is not a panacea. This panel will bring together experts from industry and academia to debate, discuss, and offer opinions -- questions might include: What are the "costs" of "good enough" security and privacy for developers and customers? What is the appropriate trade-off between the price of providing security and the cost of poor security? How can the consequences of poor design and implementation be managed? Can systems be enabled to fail "security-safe"? What are the trade-offs for increased adoption of privacy and security best practices? How can the "costs" of privacy and security -- both tangible and intangible -- be reduced?
Keywords: cost, design, privacy, security, soft issues (ID#: 15-5848)
URL: http://doi.acm.org/10.1145/2660252.2661294

 

Qi Zhu, Peng Deng. “Design Synthesis and Optimization for Automotive Embedded Systems.” ISPD '14 Proceedings of the 2014 on International Symposium on Physical Design, March 2014, Pages 141-148. doi:10.1145/2560519.2565873
Abstract: Embedded software and electronics are major contributors of values in vehicles, and play a dominant role in vehicle innovations. The design of automotive embedded systems has become more and more challenging, with the rapid increase of system complexity and more requirements on various design objectives. Methodologies such as model-based design are being adopted to improve design quality and productivity through the usage of functional models. However, there is still a significant lack of design automation tools, in particular synthesis and optimization tools, that can turn complex functional specifications to correct and optimal software implementations on distributed embedded platforms. In this paper, we discuss some of the major technical challenges and the problems to be solved in automotive embedded systems design, especially for the synthesis and optimization of embedded software.
Keywords: automotive embedded systems, design automation, software synthesis and optimization (ID#: 15-5849)
URL: http://doi.acm.org/10.1145/2560519.2565873

 

Chen Liu, Chengmo Yang, Yuanqi Shen. “Leveraging Microarchitectural Side Channel Information to Efficiently Enhance Program Control Flow Integrity.” CODES '14 Proceedings of the 2014 International Conference on Hardware/Software Codesign and System Synthesis, October 2014, Article No. 5. doi:10.1145/2656075.2656092
Abstract: Stack buffer overflow is a serious security threat to program execution. A malicious attacker may overwrite the return address of a procedure to alter its control flow and hence change its functionality. While a number of hardware- and/or software-based protection schemes have been developed, these countermeasures introduce sizable overhead in performance and energy, thus limiting their applicability to embedded systems. To reduce such overhead, our goal is to develop a low-cost scheme to "filter out" potential stack buffer overflow attacks. Our observation is that attacks on control flow will trigger certain microarchitectural events, such as mispredictions in the return address stack or misses in the instruction cache. We therefore propose a hardware-based scheme to monitor these events. Only upon detecting suspicious behavior is a more precise but costly diagnosis scheme invoked to thoroughly check control flow integrity. Meanwhile, to further reduce the rate of false positives of the security filter, we propose three enhancements to the return address stack, the instruction prefetch engine and the instruction cache, respectively. The results show that these enhancements effectively remove more than 95% of false positives with almost no false negatives introduced.
Keywords: instruction cache, return address stack, security, stack buffer overflow (ID#: 15-5850)
URL: http://doi.acm.org/10.1145/2656075.2656092
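
A software caricature of the proposed filter is sketched below: return-address-stack mispredictions are counted as a cheap signal, and only when they accumulate is the costly precise control-flow-integrity diagnosis invoked. The threshold and method names are illustrative; the paper implements this at the microarchitectural level in hardware.

    class RASMonitor:
        def __init__(self, threshold=3):
            self.ras, self.mispredictions, self.threshold = [], 0, threshold

        def on_call(self, return_address):
            self.ras.append(return_address)       # shadow the call stack

        def on_return(self, actual_target):
            predicted = self.ras.pop() if self.ras else None
            if predicted != actual_target:        # e.g. overwritten return slot
                self.mispredictions += 1
                if self.mispredictions >= self.threshold:
                    self.precise_cfi_check()      # costly, precise diagnosis

        def precise_cfi_check(self):
            print("suspicious control flow: run full integrity check")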

 

Jakob Axelsson, Avenir Kobetski. “Architectural Concepts for Federated Embedded Systems.” ECSAW '14 Proceedings of the 2014 European Conference on Software Architecture Workshops, August 2014, Article No. 25. doi:10.1145/2642803.2647716
Abstract: Federated embedded systems (FES) is an approach for systems-of-systems engineering in the domain of cyber-physical systems. It is based on the idea to allow dynamic addition of plug-in software in the embedded system of a product, and through communication between the plug-ins in different products, it becomes possible to build services on the level of a federation of products. In this paper, architectural concerns for FES are elicited, and are used as rationale for a number of decisions in the architecture of products that are enabled for FES, as well as in the application architecture of a federation. A concrete implementation of a FES from the automotive domain is also described, as a validation of the architectural concepts presented.
Keywords: Systems-of-systems, cyber-physical systems, federated embedded systems, system architecture (ID#: 15-5851)
URL: http://doi.acm.org/10.1145/2642803.2647716

 

Jurgo Preden. “Generating Situation Awareness in Cyber-Physical Systems: Creation and Exchange of Situational Information.” CODES '14 Proceedings of the 2014 International Conference on Hardware/Software Codesign and System Synthesis, October 2014, Article No. 21. doi:10.1145/2656075.2661647
Abstract: Cyber-physical systems depend on good situation awareness in order to cope with changes in the physical world and in the configuration of the system while fulfilling their goal functions. Being aware of the situation in the physical world enables a cyber-physical system to adapt its behaviour according to the actual state of the world as it perceives it. Understanding the situation of the cyber-physical system itself enables adaptation of the system's behaviour according to its current capabilities and state, e.g., providing fewer features, or features with limited functionality, in case some system components are not functional. In order to build resilient cyber-physical systems, we need to build systems that are able to consider both of these aspects in their operation.
Keywords: cyber physical system, situation awareness (ID#: 15-5852)
URL: http://doi.acm.org/10.1145/2656075.2661647

 

Kaliappa Ravindran, Ramesh Sethu. “Model-Based Design of Cyber-Physical Software Systems for Smart Worlds: A Software Engineering Perspective.” MoSEMInA 2014 Proceedings of the 1st International Workshop on Modern Software Engineering Methods for Industrial Automation, May 2014, Pages 62-71. doi:10.1145/2593783.2593785
Abstract: The paper discusses the design of cyber-physical systems software around intelligent physical worlds (IPW). An IPW is the embodiment of control software functions wrapped around the external world processes, exhibiting self-adaptive behavior over a limited operating region of the system. This is in contrast with traditional models where the physical world is basically dumb. Self-adaptation of an IPW is feasible when certain system properties hold: function separability and piece-wise linearity of system behavioral models. The IPW interacts with an intelligent computational world (ICW) to work over a wide range of operating conditions, by patching itself with suitable control parameters and with rules and procedures relevant to a changed condition. The modular decomposition of a complex adaptive system into IPW and ICW has many advantages: lowering overall software complexity, simplifying system verification, and supporting easier evolution of system features. The paper illuminates our concept of IPW with a software engineering-oriented case study of an industrial application: an automotive system.
Keywords: Cyber-physical system, Hierarchical control, Self-managing system, Software module reuse, System feature evolution (ID#: 15-5853)
URL: http://doi.acm.org/10.1145/2593783.2593785

 

Nikola Trcka, Mark Moulin, Shaunak Bopardikar, Alberto Speranzon. “A Formal Verification Approach to Revealing Stealth Attacks on Networked Control Systems.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 67-76. doi:10.1145/2566468.2566484
Abstract: We develop methods to determine if networked control systems can be compromised by stealth attacks, and derive design strategies to secure these systems. A stealth attack is a form of a cyber-physical attack where the adversary compromises the information between the plant and the controller, with the intention to drive the system into a bad state and at the same time stay undetected. We define the discovery problem as a formal verification problem, where generated counterexamples (if any) correspond to actual attack vectors. The analysis is entirely performed in Simulink, using Simulink Design Verifier as the verification engine. A small case study is presented to illustrate the results, and a branch-and-bound algorithm is proposed to perform optimal system securing.
Keywords: control system, cyber-physical security, formal verification (ID#: 15-5854)
URL: http://doi.acm.org/10.1145/2566468.2566484

 

Jakub Szefer, Pramod Jamkhedkar, Diego Perez-Botero, Ruby B. Lee. “Cyber Defenses for Physical Attacks and Insider Threats in Cloud Computing.” ASIA CCS '14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 519-524. doi:10.1145/2590296.2590310
Abstract: In cloud computing, most of the computations and data in the data center do not belong to the cloud provider. This leaves owners of applications and data concerned about cyber and physical attacks which may compromise the confidentiality, integrity or availability of their applications or data. While much work has looked at protection from software (cyber) threats, very little has looked at physical attacks and physical security in data centers. In this work, we present a novel set of cyber defense strategies for physical attacks in data centers. We capitalize on the fact that physical attackers are constrained by the physical layout and other features of a data center, which provide a time delay before an attacker can reach a server to launch a physical attack, even by an insider. We describe how a number of cyber defense strategies can be activated when an attack is detected, some of which can even take effect before the actual attack occurs. The defense strategies provide improved security and are more cost-effective than always-on protections in light of the fact that, on average, physical attacks will not happen often -- but can be very damaging when they do occur.
Keywords: cloning, cloud computing, data center security, insider threats, migration, physical attacks (ID#: 15-5855)
URL: http://doi.acm.org/10.1145/2590296.2590310 
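
The key observation above is that the data-center layout buys a known physical time budget between detecting an intruder and the intruder reaching a server. A minimal sketch of one such cyber response, assuming a hypothetical migration hook and an invented time budget:

    import time

    ATTACKER_REACH_SECONDS = 120.0   # hypothetical physical time budget

    def on_physical_alarm(vms, migrate):
        # Evacuate the most sensitive workloads from the threatened rack
        # while the layout still keeps the intruder away from it.
        deadline = time.monotonic() + ATTACKER_REACH_SECONDS
        for vm in sorted(vms, key=lambda v: v["sensitivity"], reverse=True):
            if time.monotonic() >= deadline:
                break
            migrate(vm)

    on_physical_alarm([{"id": "vm1", "sensitivity": 9},
                       {"id": "vm2", "sensitivity": 2}],
                      migrate=lambda vm: print("migrating", vm["id"]))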


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

 

Cyber-Physical System Security and Privacy, 2014, Part 2

 

 
SoS Logo

Cyber-Physical System

Security and Privacy, 2014

Part 2


Cyber-Physical systems generally are systems where computers control physical entities. They exist in areas as diverse as automobiles, manufacturing, energy, transportation, chemistry, and computer appliances. In this bibliography, the primary focus of published research is in smart grid technologies—the use of cyber-physical systems to coordinate the generation, transmission, and use of electrical power and its sources. Because of its strategic importance and the consequences of intrusion, smart grid is of particular importance to the Science of Security. The work presented here was published in 2014.


Francisco Javier Acosta Padilla, Frederic Weis, Johann Bourcier. “Towards a Model@Runtime Middleware for Cyber Physical Systems.” MW4NG '14 Proceedings of the 9th Workshop on Middleware for Next Generation Internet Computing, December 2014, Article No. 6. doi:10.1145/2676733.2676741
Abstract: Cyber Physical Systems (CPS), or Internet of Things systems, are typically formed by a myriad of small interconnected devices. This underlying hardware infrastructure raises new challenges in the way we administer the software layer of these systems. Indeed, the limited computing power and battery life of each node, combined with the very distributed nature of these systems, greatly add complexity to the management of the distributed software layer. In this paper we propose a new middleware dedicated to CPS that enables the management of software deployment and the dynamic reconfiguration of these systems. Our middleware is inspired by Component-Based Systems and the model@runtime paradigm, which has been adapted to the context of Cyber Physical Systems. We have conducted an initial evaluation on a typical Cyber Physical Systems hardware infrastructure, which demonstrates the feasibility of providing a model@runtime middleware for these systems.
Keywords: MDE, adaptive systems, cyber physical systems, middleware, models (ID#: 15-5856)
URL:  http://doi.acm.org/10.1145/2676733.2676741 

 

Mohammad Ashiqur Rahman, Ehab Al-Shaer, Rakesh B. Bobba. “Moving Target Defense for Hardening the Security of the Power System State Estimation.” MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 59-68. doi:10.1145/2663474.2663482
Abstract: State estimation plays a critically important role in ensuring the secure and reliable operation of the electric grid. Recent works have shown that the state estimation process is vulnerable to stealthy attacks where an adversary can alter certain measurements to corrupt the solution of the process, but evade the existing bad data detection algorithms and remain invisible to the system operator. Since the state estimation result is used to compute optimal power flow and perform contingency analysis, incorrect estimation can undermine economic and secure system operation. However, an adversary needs sufficient resources as well as necessary knowledge to achieve a desired attack outcome. The knowledge that is required to launch an attack mainly includes the measurements considered in state estimation, the connectivity among the buses, and the power line admittances. Uncertainty in information limits the potential attack space for an attacker. This advantage of uncertainty enables us to apply moving target defense (MTD) strategies for developing a proactive defense mechanism for state estimation. In this paper, we propose an MTD mechanism for securing state estimation, which has several characteristics: (i) increase the knowledge uncertainty for attackers, (ii) reduce the window of attack opportunity, and (iii) increase the attack cost. In this mechanism, we apply controlled randomization on the power grid system properties, mainly on the set of measurements that are considered in state estimation, and the topology, especially the line admittances. We thoroughly analyze the performance of the proposed mechanism on the standard IEEE 14- and 30-bus test systems.
Keywords: false data injection attack, moving target defense, power grid, state estimation (ID#: 15-5857)
URL:  http://doi.acm.org/10.1145/2663474.2663482
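
A minimal sketch of one MTD round as described above: randomly choose which measurements feed state estimation and slightly perturb line admittances so that an attacker's previously learned grid model goes stale. The 5% perturbation bound and all names are invented placeholders; the paper analyzes the real trade-offs on the IEEE 14- and 30-bus test systems.

    import random

    def mtd_round(measurements, line_admittances, k, seed=None):
        rng = random.Random(seed)
        active = rng.sample(measurements, k)      # measurements used this round
        perturbed = {line: y * (1 + rng.uniform(-0.05, 0.05))
                     for line, y in line_admittances.items()}
        return active, perturbed

    active, admittances = mtd_round(["m1", "m2", "m3", "m4"],
                                    {"1-2": 12.5, "2-3": 9.8}, k=3, seed=7)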

 

Fahad Javed, Usman Ali, Muhammad Nabeel, Qasim Khalid, Naveed Arshad, Jahangir Ikram. “SmartDSM: A Layered Model for Development of Demand Side Management in Smart Grids.” SE4SG 2014 Proceedings of the 3rd International Workshop on Software Engineering Challenges for the Smart Grid, June 2014, Pages 15-20. doi:10.1145/2593845.2593848
Abstract: Growing power demand and carbon emissions are motivating utility providers to introduce smart power systems. One of the most promising technologies for delivering cheaper and smarter electricity is demand side management (DSM). A DSM solution controls the devices at user premises in order to achieve the overall goals of lower cost for the consumer and the utility. To achieve this, various technologies from different domains come into play, from power electronics to sensor networks to machine learning and distributed systems design. The eventual system is a large, distributed software system over a heterogeneous environment. Whereas various algorithms to plan the DSM schedule have been proposed, no concerted effort has been made to propose models and architectures for developing such a complex software system. This lack of models makes for a haphazard landscape for researchers and practitioners, leading to confused requirements and overlapping domain concerns. The authors observed this while developing a DSM system for their lab and faculty housing. To this end, in this paper we present a model for developing software systems that deliver DSM. In addition to the model, we present a road map of software engineering research to aid the development of future DSM systems. This is based on our observations and insights from the developed DSM systems.
Keywords: Smart grids, demand side management, model driven design, software engineering (ID#: 15-5858)
URL: http://doi.acm.org/10.1145/2593845.2593848

 

Rafael Oliveira Vasconcelos, Igor Vasconcelos, Markus Endler. “A Middleware for Managing Dynamic Software Adaptation.” ARM '14 Proceedings of the 13th Workshop on Adaptive and Reflective Middleware, December 2014, Article No. 5. doi:10.1145/2677017.2677022
Abstract: The design and development of adaptive systems brings new challenges, since the dynamism of such systems is a multifaceted concern that ranges from mechanisms enabling adaptation at the software level to the (self-)management of the entire system using, for instance, adaptation plans or a system administrator. Networked and mobile embedded systems are examples of systems where dynamic adaptation becomes even more necessary, as the applications must be capable of discovering the computing resources in their near environment. While most current research is concerned with low-level adaptation techniques (i.e., how to dynamically deploy new components or change parameters), we focus on providing management of distributed dynamic adaptation and facilitating the development of adaptation plans. In this paper, we present a middleware tailored for mobile embedded systems that supports distributed dynamic software adaptation, in transactional and non-transactional fashion, among mobile devices. We also present the results of an initial evaluation.
Keywords: adaptability, dynamic adaptation, middleware, mobile communication, self-adaptive systems (ID#: 15-5859)
URL: http://doi.acm.org/10.1145/2677017.2677022

 

Wei Gong, Yunhao Liu, Amiya Nayak, Cheng Wang. “Wise Counting: Fast and Efficient Batch Authentication for Large-Scale RFID Systems.” MobiHoc '14 Proceedings of the 15th ACM International Symposium on Mobile Ad Hoc Networking and Computing, August 2014, Pages 347-356. doi:10.1145/2632951.2632963
Abstract: Radio Frequency Identification (RFID) technology is widely used in many applications, such as asset monitoring, e-passports and electronic payment, and is becoming one of the most effective solutions in cyber-physical systems. Since identification alone does not provide any guarantee that a tag corresponds to a genuine identity, authentication of tag information is needed in most RFID systems. Meanwhile, as the number of tags has grown rapidly in recent years, per-tag methods suffer from severely low efficiency and thus give way to probabilistic batch authentication. Most previous methods, however, share a common drawback from a statistical perspective: they fail to exploit correlation information, i.e., they do not comprehensively utilize all the information in authentication data structures. In addition, those schemes do not scale well when multiple tag sets need to be verified simultaneously. In this paper, we propose a fast and efficient batch authentication scheme, Wise Counting (WIC), for large-scale RFID systems. We are the first to formally introduce the general batch authentication problem with multiple tag sets and to give a highly efficient counterfeit estimation scheme. By employing a novel hierarchical authentication structure, we show that WIC is able to authenticate both a single tag set and multiple tag sets quickly and efficiently, in an easy, intuitive way. Through detailed theoretical analysis and extensive simulations, we validate the design of WIC and demonstrate its large superiority over state-of-the-art approaches.
Keywords: RFID tags, batch authentication, counterfeits estimation, hierarchical data structure (ID#: 15-5860)
URL: http://doi.acm.org/10.1145/2632951.2632963
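
The abstract does not disclose WIC's hierarchical structure, so the sketch below shows only the generic idea behind probabilistic batch authentication: precompute the response pattern genuine tags must produce, and treat responses outside that pattern as statistical evidence of counterfeits rather than verifying each tag. The slotting scheme and all names are illustrative, not WIC's.

    import hashlib

    def slot(tag_id, seed, frame_size):
        digest = hashlib.sha256(f"{seed}:{tag_id}".encode()).digest()
        return int.from_bytes(digest[:4], "big") % frame_size

    def estimate_counterfeits(genuine_ids, observed_slots, seed, frame_size):
        # Replies in slots no genuine tag can occupy must come from tags
        # outside the genuine set; their count is a fast indicator.
        expected = {slot(t, seed, frame_size) for t in genuine_ids}
        return len(observed_slots - expected)

    observed = {slot(t, 99, 64) for t in ["tagA", "tagB", "evil-tag"]}
    print(estimate_counterfeits(["tagA", "tagB"], observed, 99, 64))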

 

Ze Ni, Avenir Kobetski, Jakob Axelsson. “Design and Implementation of a Dynamic Component Model for Federated AUTOSAR Systems.”  DAC '14 Proceedings of the 51st Annual Design Automation Conference, June 2014, Pages 1-6. doi:10.1145/2593069.2593121
Abstract: The automotive industry has recently agreed upon the embedded software standard AUTOSAR, which structures an application into reusable components that can be deployed using a configuration scheme. However, this configuration takes place at design time, with no provision for dynamically installing components to reconfigure the system. In this paper, we present the design and implementation of a dynamic component model that extends AUTOSAR with the possibility to add plug-in components at runtime. This opens up shorter deployment times for new functions, opportunities for vehicles to participate in federated embedded systems, and the involvement of third-party software developers.
Keywords: AUTOSAR, Dynamically Reconfigurable Software, Federated Embedded Systems, Software Components (ID#: 15-5861)
URL: http://doi.acm.org/10.1145/2593069.2593121

 

Stefan Wagner. “Scrum for Cyber-Physical Systems: A Process Proposal.” RCoSE 2014 Proceedings of the 1st International Workshop on Rapid Continuous Software Engineering, June 2014, Pages 51-56. doi:10.1145/2593812.2593819
Abstract: Agile development processes and especially Scrum are changing the state of the practice in software development. Many companies in the classical IT sector have adopted them to successfully tackle various challenges from the rapidly changing environments and increasingly complex software systems. Companies developing software for embedded or cyber-physical systems, however, are still hesitant to adopt such processes. Despite successful applications of Scrum and other agile methods for cyber-physical systems, there is still no complete process that maps their specific challenges to practices in Scrum. We propose to fill this gap by treating all design artefacts in such a development in the same way: In software development, the final design is already the product, in hardware and mechanics it is the starting point of production. We sketch the Scrum extension Scrum CPS by showing how Scrum could be used to develop all design artefacts for a cyber physical system. Hardware and mechanical parts that might not be available yet are simulated. With this approach, we can directly and iteratively build the final software and produce detailed models for the hardware and mechanics production in parallel. We plan to further detail Scrum CPS and apply it first in a series of student projects to gather more experience before testing it in an industrial case study.
Keywords: Agile, Cyber-physical, Scrum (ID#: 15-5862)
URL:  http://doi.acm.org/10.1145/2593812.2593819

 

Kasper Luckow, Corina S. Păsăreanu, Matthew B. Dwyer, Antonio Filieri, Willem Visser. “Exact and Approximate Probabilistic Symbolic Execution for Nondeterministic Programs.” ASE '14 Proceedings of the 29th ACM/IEEE International Conference on Automated Software Engineering, September 2014, Pages 575-586. doi:10.1145/2642937.2643011
Abstract: Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
Keywords: nondeterministic programs, probabilistic software analysis, symbolic execution (ID#: 15-5863)
URL:  http://doi.acm.org/10.1145/2642937.2643011

 

Philipp Diebold, Constanza Lampasona, Sergey Zverlov, Sebastian Voss. “Practitioners' and Researchers' Expectations on Design Space Exploration for Multicore Systems in the Automotive and Avionics Domains: A Survey.” EASE '14 Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering, May 2014, Article No. 1. doi:10.1145/2601248.2601250
Abstract: Background: The mobility domains are moving towards the adoption of multicore technology. Appropriate methods, techniques, and tools need to be developed or adapted in order to fulfill the existing requirements. This is the case for design space exploration methods and tools. Objective: Our goal was to understand the importance of different design space exploration goals with respect to their relevance, frequency of use, and the tool support required in the development of multicore systems, from the point of view of the ARAMiS project members. Our aim was to use the results to guide further work in the project. Method: We conducted a survey regarding the current state of the art in design space exploration in industry and research and collected the expectations of project members regarding design space exploration goals. Results: The results show that design space exploration is an important topic in industry as well as in research. It is used frequently, with a variety of important goals, to optimize the system. Conclusions: Current tools provide only partial solutions for design space exploration. Our results can be used to improve them and to guide their development according to the priorities explained in this contribution.
Keywords: automotive, avionics, design space exploration, industry, multicore, research, survey (ID#: 15-5864)
URL:  http://doi.acm.org/10.1145/2601248.2601250

 

Sandeep Neema, Gabor Simko, Tihamer Levendovszky, Joseph Porter, Akshay Agrawal, Janos Sztipanovits. “Formalization of Software Models for Cyber-Physical Systems.” FormaliSE 2014 Proceedings of the 2nd FME Workshop on Formal Methods in Software Engineering, June 2014, Pages 45-51. doi:10.1145/2593489.2593495
Abstract: The involvement of formal methods is indispensable for modern software engineering. This especially holds for Cyber-Physical Systems (CPS). In order to deal with the complexity and heterogeneity of the design, model-based engineering is widely used. The complexity of detailed verification of the final source code makes it imperative to introduce formal methods earlier in the design process. Because of the widespread use of customized modeling languages (domain-specific modeling languages, DSMLs), it is crucial to formally specify the DSML and verify that the model meets fundamental correctness criteria. This is achieved by specifying the behavioral and structural semantics of the modeling language. Significant model-driven tools have emerged incorporating advanced model checking methods that can provide some assurance regarding the quality and correctness of the models. However, the code generated from these models using auto code generators remains suspect, since the correctness of the code generators cannot be taken as a given and remains intractable to prove. Therefore, we propose a pragmatic approach that, instead of verifying the explicit implementation of the code generator, verifies the correctness of the generated code with respect to a specific set of user-defined properties, establishing that the code generators are property-preserving. In order to make the verification workflow conducive to domain engineers, who are not often trained in formal methods, we include a mechanism for high-level specification of temporal properties using pattern-based verification templates. The presented toolchain leverages state-of-the-art verification tools, and a small case study illustrates the approach.
Keywords: Cyber-Physical Systems, Model-Integrated Computing, Semantic Specification (ID#: 15-5865)
URL:  http://doi.acm.org/10.1145/2593489.2593495

 

Ivan Ruchkin, Dionisio De Niz, David Garlan, Sagar Chaki. “Contract-Based Integration of Cyber-Physical Analyses.” EMSOFT '14 Proceedings of the 14th International Conference on Embedded Software, October 2014, Article No. 23. doi:10.1145/2656045.2656052
Abstract: Developing cyber-physical systems involves multiple engineering domains, e.g., timing, logical correctness, thermal resilience, and mechanical stress. In today's industrial practice, these domains rely on multiple analyses to obtain and verify critical system properties. Domain differences make the analyses abstract away interactions among themselves, potentially invalidating the results. Specifically, one challenge is to ensure that an analysis is never applied to a model that violates the assumptions of the analysis. Since such violation can originate from the updating of the model by another analysis, analyses must be executed in the correct order. Another challenge is to apply diverse analyses soundly and scalably over models of realistic complexity. To address these challenges, we develop an analysis integration approach that uses contracts to specify dependencies between analyses, determine their correct orders of application, and specify and verify applicability conditions in multiple domains. We implement our approach and demonstrate its effectiveness, scalability, and extensibility through a verification case study for thread and battery cell scheduling.
Keywords: analysis, analysis contracts, battery scheduling, cyber-physical systems, model checking, real-time scheduling, thermal runaway, virtual integration (ID#: 15-5866)
URL: http://doi.acm.org/10.1145/2656045.2656052

 

Tomas Bures, Petr Hnetynka, Frantisek Plasil. “Strengthening Architectures of Smart CPS by Modeling Them as Runtime Product-Lines.” CBSE '14 Proceedings of the 17th International ACM Sigsoft Symposium on Component-Based Software Engineering, June 2014, Pages 91-96. doi:10.1145/2602458.2602478
Abstract: Smart Cyber-Physical Systems (CPS) are complex distributed decentralized systems of cooperating mobile and stationary devices which closely interact with the physical environment. Although Component-Based Development (CBD) might seem a viable solution for targeting the complexity of smart CPS, existing component models scarcely cope with their open-ended and very dynamic nature. This is especially true for design-time modeling using hierarchical explicit architectures, which traditionally provide an excellent means of coping with complexity by providing multiple levels of abstraction and explicitly specifying communication links between component instances. In this paper we propose a modeling method (materialized in the SOFA NG component model) which conveys the benefits of explicit architectures of hierarchical components to the design of smart CPS. Specifically, we base our method on modeling systems as reference architectures of Software Product Lines (SPL). Contrary to traditional SPL, which is a fully design-time approach, we create SPL configurations at runtime. We do so in a decentralized way by translating the configuration process into the process of establishing component ensembles (i.e., dynamic cooperation groups of components) of our DEECo component model.
Keywords: component model, component-based development, cyber-physical systems, software architecture, software components (ID#: 15-5867)
URL: http://doi.acm.org/10.1145/2602458.2602478

 

Ashish Tiwari, Bruno Dutertre, Dejan Jovanović, Thomas de Candia, Patrick D. Lincoln, John Rushby, Dorsa Sadigh, Sanjit Seshia. “Safety Envelope for Security.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 85-94. doi:10.1145/2566468.2566483
Abstract: We present an approach for detecting sensor spoofing attacks on a cyber-physical system. Our approach consists of two steps. In the first step, we construct a safety envelope of the system. Under nominal conditions (that is, when there are no attacks), the system always stays inside its safety envelope. In the second step, we build an attack detector: a monitor that executes synchronously with the system and raises an alarm whenever the system state falls outside the safety envelope. We synthesize safety envelopes using a modified machine learning procedure applied to data collected from the system when it is not under attack. We present experimental results that show the effectiveness of our approach, and also validate several novel features that we introduced in our learning procedure.
Keywords: hybrid systems, invariants, safety envelopes, security (ID#: 15-5868)
URL: http://doi.acm.org/10.1145/2566468.2566483
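
The two-step structure described in this abstract (learn an envelope from attack-free data, then monitor online) lends itself to a minimal sketch. The interval-based envelope below is a hypothetical simplification for illustration; the paper's actual synthesis uses a modified machine learning procedure.

    import numpy as np

    def learn_envelope(nominal_states, margin=0.05):
        # Hypothetical stand-in for the paper's learning step: take
        # per-dimension [lo, hi] bounds from attack-free traces and
        # widen them by a safety margin.
        lo = nominal_states.min(axis=0)
        hi = nominal_states.max(axis=0)
        pad = margin * (hi - lo)
        return lo - pad, hi + pad

    def monitor(state, lo, hi):
        # Attack detector: alarm whenever the state leaves the envelope.
        return bool(np.any(state < lo) or np.any(state > hi))

    nominal = np.random.normal(0.0, 1.0, size=(1000, 3))  # attack-free traces
    lo, hi = learn_envelope(nominal)
    alarm = monitor(np.array([0.1, 8.0, -0.2]), lo, hi)   # True: out of envelope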

 

Zhi Li, Lu Chen. “System-Level Testing of Cyber-Physical Systems Based on Problem Concerns.” EAST 2014 Proceedings of the 2014 3rd International Workshop on Evidential Assessment of Software Technologies, May 2014, Pages 60-62. doi:10.1145/2627508.2627511
Abstract: In this paper we propose a problem-oriented approach to system-level testing of cyber-physical systems based on Jackson’s notion of problem concerns. Some close associations between problem concerns and potential faults in the problem space are made, which necessitates system-level testing. Finally, a research agenda has been put forward with the goal of building a repository of system faults and mining particular problem concerns for system-level testing.
Keywords: Problem Frames, problem concerns, system-level testing (ID#: 15-5869)
URL:  http://doi.acm.org/10.1145/2627508.2627511

 

Carlos Barreto, Alvaro A. Cárdenas, Nicanor Quijano, Eduardo Mojica-Nava. “CPS: Market Analysis of Attacks Against Demand Response in the Smart Grid.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 136-145. doi:10.1145/2664243.2664284
Abstract: Demand response systems assume an electricity retail-market with strategic electricity consuming agents. The goal in these systems is to design load shaping mechanisms to achieve efficiency of resources and customer satisfaction. Recent research efforts have studied the impact of integrity attacks in simplified versions of the demand response problem, where neither the load consuming agents nor the adversary are strategic. In this paper, we study the impact of integrity attacks considering strategic players (a social planner or a consumer) and a strategic attacker. We identify two types of attackers: (1) a malicious attacker who wants to damage the equipment in the power grid by producing sudden overloads, and (2) a selfish attacker that wants to defraud the system by compromising and then manipulating control (load shaping) signals. We then explore the resiliency of two different demand response systems to these fraudsters and malicious attackers. Our results provide guidelines for system operators deciding which type of demand-response system they want to implement, how to secure them, and directions for detecting these attacks.
Keywords: (not provided) (ID#: 15-5870)
URL:  http://doi.acm.org/10.1145/2664243.2664284

 

Bader Alwasel, Stephen D. Wolthusen. “Reconstruction of Structural Controllability over Erdős-Rényi Graphs via Power Dominating Sets.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 57-60. doi:10.1145/2602087.2602095
Abstract: Controllability, or informally the ability to force a system into a desired state in a finite time or number of steps, is a fundamental problem studied extensively in control systems theory, with structural controllability recently gaining renewed interest. In distributed control systems, possible control relations are limited by the underlying network (graph) transmitting the control signals from a single controller or set of controllers. Attackers may seek to disrupt these relations or compromise intermediate nodes, thereby gaining partial or total control. For a defender to regain full or partial control, it is therefore critical to rapidly reconstruct the control graph as far as possible. Failing to achieve this may allow the attacker to cause further disruptions, and may, as in the case of electric power networks, also violate real-time constraints, leading to catastrophic loss of control. However, as this problem is known to be computationally hard, approximations are required, particularly for larger graphs. We therefore propose a reconstruction algorithm for (directed) control graphs of bounded tree width embedded in Erdős-Rényi random graphs based on recent work by Aazami and Stilp as well as Guo et al.
Keywords: power dominating sets, recovery from attacks, robustness of control systems and networks, structural controllability (ID#: 15-5871)
URL:  http://doi.acm.org/10.1145/2602087.2602095

 

Der-Yeuan Yu, Aanjhan Ranganathan, Thomas Locher, Srdjan Capkun, David Basin. “Short Paper: Detection of GPS Spoofing Attacks in Power Grids.” WiSec '14 Proceedings of the 2014 ACM Conference on Security and Privacy in Wireless & Mobile Networks. July 2014, Pages 99-104. doi:10.1145/2627393.2627398
Abstract: Power companies are deploying a multitude of sensors to monitor the energy grid. Measurements at different locations should be aligned in time to obtain the global state of the grid, and the industry therefore uses GPS as a common clock source. However, these sensors are exposed to GPS time spoofing attacks that cause misaligned aggregated measurements, leading to inaccurate monitoring that affects power stability and line fault contingencies. In this paper, we analyze the resilience of phasor measurement sensors, which record voltages and currents, to GPS spoofing performed by an adversary external to the system. We propose a solution that leverages the characteristics of multiple sensors in the power grid to limit the feasibility of such attacks. In order to increase the robustness of wide-area power grid monitoring, we evaluate mechanisms that allow collaboration among GPS receivers to detect spoofing attacks. We apply multilateration techniques to allow a set of GPS receivers to locate a false GPS signal source. Using simulations, we show that receivers sharing a local clock can locate nearby spoofing adversaries with sufficient confidence.
Keywords: clock synchronization, gps spoofing, power grids (ID#: 15-5872)
URL: http://doi.acm.org/10.1145/2627393.2627398
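
As a rough illustration of the collaborative-detection idea, receivers that share a local clock can cross-check the time offsets their GPS units report; a spoofed subset will disagree with the rest. The spread test below is a hypothetical simplification for illustration, not the multilateration scheme the paper evaluates for locating the false signal source.

    def gps_offsets_consistent(gps_times, local_times, threshold=1e-6):
        # Each receiver reports (GPS-derived time, shared local-clock time).
        # Under a single genuine GPS constellation the offsets agree closely;
        # a nearby spoofer skews the offsets of the receivers it captures.
        offsets = [g - l for g, l in zip(gps_times, local_times)]
        return max(offsets) - min(offsets) <= threshold  # False -> alarm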


Ayan Banerjee, Sandeep K. S. Gupta. “Model Based Code Generation for Medical Cyber Physical Systems.” MMA '14 Proceedings of the 1st Workshop on Mobile Medical Applications, November 2014, Pages 22-27. doi:10.1145/2676431.2676646
Abstract: Deployment of medical devices on the human body in unsupervised environments makes their operation safety-critical. Software errors such as unbounded memory access or unreachable critical alarms can cause life-threatening consequences in these medical cyber-physical systems (MCPSes), where software in medical devices monitors and controls human physiology. Further, implementation of complex control strategies in inherently resource-constrained medical devices requires careful evaluation of the runtime characteristics of the software. Such stringent requirements cause errors in manual implementation, which can only be detected by static analysis tools, possibly inflicting a high cost of redesign. To avoid such inefficiencies, this paper proposes an automatic code generator with assurance of safety from errors such as out-of-bound memory access, unreachable code, and race conditions. The proposed code generator was evaluated against manually written code from BSNBench, a software benchmark for sensors, in terms of possible optimizations using conditional X propagation. The generated code was found to be 9.3% more optimized than the BSNBench code. The generated code was also tested using the static analysis tool Frama-C and showed no errors.
Keywords: code synthesis, model based code generation, sensor networks, software errors, static analysis for sensors (ID#: 15-5873)
URL: http://doi.acm.org/10.1145/2676431.2676646

 

Sabine Theis, Thomas Alexander, Matthias Wille. “The Nexus of Human Factors in Cyber-Physical Systems: Ergonomics of Eyewear for Industrial Applications.” ISWC '14 Adjunct Proceedings of the 2014 ACM International Symposium on Wearable Computers: Adjunct Program, September 2014, Pages 217-220. doi:10.1145/2641248.2645639
Abstract: Smart eyewear devices may serve as advanced interfaces between cyber-physical systems (CPS) and workers by integrating digital information into the visual field. We addressed ergonomic issues related to the use of a ruggedized head-mounted display (HMD) (Liteye 750A, see-through and look-around modes) and a conventional screen during a half-day working shift (N=60). We found only minor physiological effects of the HMD: an inflexible head posture, higher muscle activity over time in the left M. splenius capitis, and lower performance in its look-around mode.
Keywords: cyber-physical systems (CPS), wearable computing (ID#: 15-5874)
URL: http://dl.acm.org/citation.cfm?id=2645639

 

Radha Poovendran. “Passivity Framework for Modeling, Mitigating, and Composing Attacks on Networked Systems.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 29-30. doi:10.1145/2566468.2566470
Abstract: Cyber-physical systems (CPS) consist of a tight coupling between cyber (sensing and computation) and physical (actuation and control) components. As a result of this coupling, CPS are vulnerable to both known and emerging cyber attacks, which can degrade the safety, availability, and reliability of the system. A key step towards guaranteeing CPS operation in the presence of threats is developing quantitative models of attacks and their impact on the system, and expressing them in the language of CPS. Traditionally, such models have been introduced within the framework of formal methods and verification. In this talk, we present a control-theoretic modeling framework. We demonstrate that the control-theoretic approach can capture the adaptive and time-varying strategic interaction between the adversary and the targeted system. Furthermore, control theory provides a common language in which to describe both the physical dynamics of the system and the impact of the attack and defense. In particular, we provide a passivity-based approach for modeling and mitigating jamming and wormhole attacks. We demonstrate that passivity enables composition of multiple attack and defense mechanisms, allowing characterization of the overall performance of the system under attack. Our view is that the formal methods and the control-based approaches are complementary.
Keywords: cyber physical systems, network security, passivity (ID#: 15-5875)
URL: http://doi.acm.org/10.1145/2566468.2566470

 

Ye Li, Richard West, Eric Missimer. “A Virtualized Separation Kernel for Mixed Criticality Systems.” VEE '14 Proceedings of the 10th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, March 2014, Pages 201-212. doi:10.1145/2674025.2576206
Abstract: Multi- and many-core processors are becoming increasingly popular in embedded systems. Many of these processors now feature hardware virtualization capabilities, such as the ARM Cortex A15, and x86 processors with Intel VT-x or AMD-V support. Hardware virtualization offers opportunities to partition physical resources, including processor cores, memory and I/O devices amongst guest virtual machines. Mixed criticality systems and services can then co-exist on the same platform in separate virtual machines. However, traditional virtual machine systems are too expensive because of the costs of trapping into hypervisors to multiplex and manage machine physical resources on behalf of separate guests. For example, hypervisors are needed to schedule separate VMs on physical processor cores. In this paper, we discuss the design of the Quest-V separation kernel, which partitions services of different criticalities in separate virtual machines, or sandboxes. Each sandbox encapsulates a subset of machine physical resources that it manages without requiring intervention of a hypervisor. Moreover, a hypervisor is not needed for normal operation, except to bootstrap the system and establish communication channels between sandboxes.
Keywords: chip-level distributed system, separation kernel (ID#: 15-5876)
URL: http://doi.acm.org/10.1145/2674025.2576206

 

David Formby, Sang Shin Jung, John Copeland, Raheem Beyah. “An Empirical Study of TCP Vulnerabilities in Critical Power System Devices.” SEGS '14 Proceedings of the 2nd Workshop on Smart Energy Grid Security, November 2014, Pages 39-44. doi:10.1145/2667190.2667196
Abstract: Implementations of the TCP/IP protocol suite have been patched for decades to reduce the threat of TCP sequence number prediction attacks. TCP, in particular, has been adopted by many devices in the power grid as a transport layer for their applications, since it provides reliability. Even though this threat has been well known for almost three decades, the same cannot be said of power grid networks: weak TCP sequence number generation can still be found in many devices used throughout the power grid. Although our analysis covers only one substation, we believe that this is without loss of generality given: 1) the pervasiveness of the flaws throughout the substation devices; and 2) the prominence of the vendors. In this paper, we show the extent to which TCP initial sequence numbers (ISNs) are still predictable and how strongly TCP ISN generation is correlated with time. We collected power grid network traffic from a live substation for six months, and we measured TCP ISN differences and their time differences between TCP connection establishments. In the live substation, we found three unique vendors (135 devices, 68%), out of a total of eight vendors (196 devices) running TCP, that show strongly predictable patterns of TCP ISN generation.
Keywords: dnp3, power grid, scada, tcp sequence number, tcp sequence prediction (ID#: 15-5877)
URL:  http://doi.acm.org/10.1145/2667190.2667196
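
The correlation measurement the authors describe (ISN deltas versus time deltas between connection establishments) can be sketched briefly; the record format below is an assumption made for illustration, not the paper's tooling.

    import numpy as np

    def isn_time_correlation(records):
        # records: (timestamp_seconds, isn) pairs from successive TCP
        # connection establishments to one device. A correlation near 1.0
        # between time deltas and ISN deltas suggests a predictable,
        # time-driven ISN generator.
        records = sorted(records)
        dt = np.diff([t for t, _ in records])
        d_isn = np.diff([isn for _, isn in records]) % 2**32
        return np.corrcoef(dt, d_isn)[0, 1]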

 

Gerold Hoelzl, Alois Ferscha, Peter Halbmayer, Welma Pereira. “Goal Oriented Smart Watches for Cyber Physical Superorganisms.” UbiComp '14 Adjunct Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, September 2014, Pages 1071-1076. doi:10.1145/2638728.2659395
Abstract: We didn't start the fire. It has been burning since technology became integrated into wearable things, which can be traced back to the early 1500s. The earliest forms of wearable technology were manifested as pocket watches. Of course technology has changed and evolved, but again it might be the watch, now in the form of a wrist-worn smart watch, that could pave the way towards an always-on, large-scale, planet-spanning body sensor network. The challenge is how to handle the enormous scale of upcoming smart watches and the data they produce. This work highlights a strategy for making use of the massive number of smart watches in building goal-oriented, dynamically evolving network structures that autonomously adapt to changes in the smart watch ecosystem, as cells do in the human organism.
Keywords: (Not provided) (ID#: 15-5878)
URL: http://doi.acm.org/10.1145/2638728.2659395

 

Zhenqi Huang, Yu Wang, Sayan Mitra, Geir E. Dullerud. “On the Cost of Differential Privacy in Distributed Control Systems.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 105-114. doi:10.1145/2566468.2566474
Abstract: Individuals sharing information can improve the cost or performance of a distributed control system. But sharing may also violate privacy. We develop a general framework for studying the cost of differential privacy in systems where a collection of agents, with coupled dynamics, communicate for sensing their shared environment while pursuing individual preferences. First, we propose a communication strategy that relies on adding carefully chosen random noise to agent states and show that it preserves differential privacy. Of course, the higher the standard deviation of the noise, the higher the cost of privacy. For linear distributed control systems with quadratic cost functions, the standard deviation becomes independent of the number of agents and decays with the maximum eigenvalue of the dynamics matrix. Furthermore, for stable dynamics, the noise to be added is independent of the number of agents as well as the time horizon up to which privacy is desired. Finally, we show that the cost of ε-differential privacy up to time T, for a linear stable system with N agents, is upper bounded by O(T³/(Nε²)).
Keywords: cyber-physical security, differential privacy, distributed control (ID#: 15-5879)
URL: http://doi.acm.org/10.1145/2566468.2566474
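
The communication strategy in the abstract, perturbing each agent's state with random noise before sharing it, can be sketched minimally. The abstract does not specify the noise distribution; Laplace noise and the free noise_scale parameter below are assumptions for illustration, whereas the paper derives the scale from the dynamics matrix and the desired ε.

    import numpy as np

    def private_broadcast(state, noise_scale):
        # Add zero-mean Laplace noise to an agent's state vector before
        # sharing it with neighbors. A larger noise_scale means stronger
        # privacy but a higher control cost; the abstract bounds that
        # cost by O(T^3 / (N * epsilon^2)) for stable linear systems.
        return state + np.random.laplace(0.0, noise_scale, size=state.shape)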
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
 

Differential Privacy, 2014, Part 1

 

 
SoS Logo

Differential Privacy, 2014

Part 1


The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. The work here looks at big data and cyber-physical systems, as well as theoretical approaches. Citations are for articles published in 2014.


Xiaojing Liao; Formby, D.; Day, C.; Beyah, R.A., “Towards Secure Metering Data Analysis via Distributed Differential Privacy,” Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, vol., no., pp. 780, 785, 23-26 June 2014. doi:10.1109/DSN.2014.82
Abstract: The future electrical grid, i.e., smart grid, will utilize appliance-level control to provide sustainable power usage and flexible energy utilization. However, load trace monitoring for appliance-level control poses privacy concerns with inferring private information. In this paper, we introduce a privacy-preserving and fine-grained power load data analysis mechanism for appliance-level peak-time load balance control in the smart grid. The proposed technique provides rigorous provable privacy and an accuracy guarantee based on distributed differential privacy. We simulate the scheme as privacy modules in the smart meter and the concentrator, and evaluate its performance under a real-world power usage dataset, which validates the efficiency and accuracy of the proposed scheme.
Keywords: data analysis; data privacy; domestic appliances; load (electric); power engineering computing; smart meters; smart power grids; appliance-level control; appliance-level peak-time load balance control; concentrator; distributed differential privacy; electrical grid; fine-grained power load data analysis mechanism; flexible energy utilization; load trace monitoring; metering data analysis; performance evaluation; privacy-preserving load data analysis mechanism; smart grid; smart meter; sustainable power usage; Accuracy; Home appliances; Noise; Power demand; Privacy; Smart grids; Smart meters (ID#: 15-5909)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903641&isnumber=6903544
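
Although the abstract does not spell out the mechanism, one standard construction for distributed differential privacy of this kind lets each of n meters add a small gamma-distributed noise share, so that the shares sum to a single Laplace variable at the concentrator. The sketch below assumes that construction purely for illustration.

    import numpy as np

    def meter_noise_share(n_meters, sensitivity, epsilon):
        # Per-meter noise share. Summed over all n meters, the shares are
        # distributed as Laplace(sensitivity / epsilon), so the aggregate
        # reading is epsilon-differentially private even though no single
        # meter's share reveals (or destroys) much on its own.
        lam = sensitivity / epsilon
        return (np.random.gamma(1.0 / n_meters, lam)
                - np.random.gamma(1.0 / n_meters, lam))

    readings = np.random.uniform(0.0, 5.0, size=100)   # 100 meters (kW)
    noisy_total = readings.sum() + sum(meter_noise_share(100, 1.0, 0.5)
                                       for _ in range(100))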

 

Ren Hongde; Wang Shuo; Li Hui, “Differential Privacy Data Aggregation Optimizing Method and Application to Data Visualization,” Electronics, Computer and Applications, 2014 IEEE Workshop on, vol., no., pp. 54, 58, 8-9 May 2014. doi:10.1109/IWECA.2014.6845555
Abstract: This article explores the challenges in data privacy within the big data era, with specific focus on differential privacy for social media data and its geospatial realization within a Cloud-based research environment. Using the differential privacy method, this paper achieves distortion of the data by adding noise to protect data privacy. Furthermore, this article presents the IDP k-means Aggregation Optimizing Method to decrease the overlap and superposition of massive data visualization. Finally, this paper combines the IDP k-means Aggregation Optimizing Method with the differential privacy method to protect data privacy. The outcome of this research is a set of underpinning formal models of differential privacy that reflect the challenges geospatial tools face with location-based information, and the implementation of a suite of Cloud-based tools illustrating how these tools support an extensive range of data privacy demands.
Keywords: Big Data; cloud computing; data privacy; data visualisation; IDP k-means aggregation optimizing method; cloud-based research environment; differential privacy data aggregation; differential privacy method; formal models; geospatial realization; geospatial tools; location-based information; social media data; Algorithm design and analysis; Visualization; Data Visualization; aggregation optimizing; differential privacy; massive data  (ID#: 15-5910)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845555&isnumber=6845536

 

Barthe, G.; Gaboardi, M.; Gallego Arias, E.J.; Hsu, J.; Kunz, C.; Strub, P.-Y., “Proving Differential Privacy in Hoare Logic,” Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, vol., no., pp. 411, 424, 19-22 July 2014. doi:10.1109/CSF.2014.36
Abstract: Differential privacy is a rigorous, worst-case notion of privacy-preserving computation. Informally, a probabilistic program is differentially private if the participation of a single individual in the input database has a limited effect on the program's distribution on outputs. More technically, differential privacy is a quantitative 2-safety property that bounds the distance between the output distributions of a probabilistic program on adjacent inputs. Like many 2-safety properties, differential privacy lies outside the scope of traditional verification techniques. Existing approaches to enforce privacy are based on intricate, non-conventional type systems, or customized relational logics. These approaches are difficult to implement and often cumbersome to use. We present an alternative approach that verifies differential privacy by standard, non-relational reasoning on non-probabilistic programs. Our approach transforms a probabilistic program into a non-probabilistic program which simulates two executions of the original program. We prove that if the target program is correct with respect to a Hoare specification, then the original probabilistic program is differentially private. We provide a variety of examples from the differential privacy literature to demonstrate the utility of our approach. Finally, we compare our approach with existing verification techniques for privacy.
Keywords: data privacy; formal logic; Hoare logic; Hoare specification; differential privacy literature; many 2-safety properties; nonprobabilistic programs; nonrelational reasoning; privacy-preserving computation; quantitative 2-safety property; verification techniques; worst-case notion; Data privacy; Databases; Privacy; Probabilistic logic; Safety; Standards; Synchronization; differential privacy; hoare logic; privacy; probabilistic hoare logic; relational hoare logic; verification (ID#: 15-5911)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957126&isnumber=6957090

 

Yilin Shen; Hongxia Jin, “Privacy-Preserving Personalized Recommendation: An Instance-Based Approach via Differential Privacy,” Data Mining (ICDM), 2014 IEEE International Conference on, vol., no., pp. 540, 549, 14-17 Dec. 2014. doi:10.1109/ICDM.2014.140
Abstract: Recommender systems become increasingly popular and widely applied nowadays. The release of users' private data is required to provide users accurate recommendations, yet this has been shown to put users at risk. Unfortunately, existing privacy-preserving methods are either developed under trusted server settings with impractical private recommender systems or lack of strong privacy guarantees. In this paper, we develop the first lightweight and provably private solution for personalized recommendation, under untrusted server settings. In this novel setting, users' private data is obfuscated before leaving their private devices, giving users greater control on their data and service providers less responsibility on privacy protections. More importantly, our approach enables the existing recommender systems (with no changes needed) to directly use perturbed data, rendering our solution very desirable in practice. We develop our data perturbation approach on differential privacy, the state-of-the-art privacy model with lightweight computation and strong but provable privacy guarantees. In order to achieve useful and feasible perturbations, we first design a novel relaxed admissible mechanism enabling the injection of flexible instance-based noises. Using this novel mechanism, our data perturbation approach, incorporating the noise calibration and learning techniques, obtains perturbed user data with both theoretical privacy and utility guarantees. Our empirical evaluation on large-scale real-world datasets not only shows its high recommendation accuracy but also illustrates the negligible computational overhead on both personal computers and smart phones. As such, we are able to meet two contradictory goals, privacy preservation and recommendation accuracy. This practical technology helps to gain user adoption with strong privacy protection and benefit companies with high-quality personalized services on perturbed user data.
Keywords: calibration; data privacy; personal computing; recommender systems; trusted computing; computational overhead; data perturbation; differential privacy; high quality personalized services; noise calibration; perturbed user data; privacy preservation; privacy protections; privacy-preserving methods; privacy-preserving personalized recommendation; private recommender systems; provable privacy guarantees; recommendation accuracy; smart phones; strong privacy protection; theoretical privacy; untrusted server settings; user adoption; user private data; utility guarantees; Aggregates; Data privacy; Noise; Privacy; Sensitivity; Servers; Vectors; Data Perturbation; Differential Privacy; Learning and Optimization; Probabilistic Analysis; Recommender System (ID#: 15-5912)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023371&isnumber=7023305

 

Jing Zhao; Taeho Jung; Yu Wang; Xiangyang Li, “Achieving Differential Privacy of Data Disclosure in the Smart Grid,” INFOCOM, 2014 Proceedings IEEE, vol., no., pp. 504, 512, April 27 2014 - May 2 2014. doi:10.1109/INFOCOM.2014.6847974
Abstract: The smart grid introduces new privacy implications for individuals and their families due to fine-grained usage data collection. For example, smart metering data could reveal highly accurate real-time home appliance energy loads, which may be used to infer human activities inside houses. One effective way to hide actual appliance loads from outsiders is Battery-based Load Hiding (BLH), in which a battery is installed for each household and smartly controlled to store and supply power to the appliances. Even though such techniques have been demonstrated to be useful and can prevent certain types of attacks, none of the existing BLH works can provide provably privacy-preserving mechanisms. In this paper, we investigate the privacy of smart meters via differential privacy. We first analyze the existing BLH methods and show that they cannot guarantee differential privacy in the BLH problem. We then propose a novel randomized BLH algorithm which successfully assures differential privacy, and further propose the Multitasking-BLH-Exp3 algorithm, which adaptively updates the BLH algorithm based on the context and the constraints. Results from extensive simulations show the efficiency and effectiveness of the proposed method over existing BLH methods.
Keywords: data acquisition; domestic appliances; smart meters; smart power grids; BLH methods; battery-based load hiding; data disclosure; fine-grained usage data collection; multitasking-BLH-Exp3 algorithm; privacy-preserving mechanisms; real-time home appliance energy load; smart grid; smart metering data; smart meters via differential privacy; Batteries; Data privacy; Energy consumption; Home appliances; Noise; Privacy; Smart meters; Data Disclosure; Differential Privacy; Smart Grid; Smart Meter (ID#: 15-5913)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847974&isnumber=6847911
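
The essence of battery-based load hiding is that the battery absorbs the difference between actual demand and the load the meter is allowed to see. The constant-target policy sketched below is a deliberately naive illustration, not the randomized, differentially private algorithm the paper proposes.

    def blh_step(demand, target, charge, capacity, max_rate):
        # One control step of a naive BLH policy: the battery supplies or
        # absorbs power so that the grid sees `target`, whenever its state
        # of charge and rate limits allow. Positive flow = discharging.
        flow = demand - target
        flow = max(-max_rate, min(max_rate, flow))        # power rate limit
        flow = max(charge - capacity, min(charge, flow))  # energy limits
        return demand - flow, charge - flow               # (meter load, new charge)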

 

Hsu, J.; Gaboardi, M.; Haeberlen, A.; Khanna, S.; Narayan, A.; Pierce, B.C.; Roth, A., “Differential Privacy: An Economic Method for Choosing Epsilon,” Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, vol., no., pp. 398, 410, 19-22 July 2014. doi:10.1109/CSF.2014.35
Abstract: Differential privacy is becoming a gold standard notion of privacy; it offers a guaranteed bound on loss of privacy due to release of query results, even under worst-case assumptions. The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. However, the question of when differential privacy works in practice has received relatively little attention. In particular, there is still no rigorous method for choosing the key parameter ε, which controls the crucial tradeoff between the strength of the privacy guarantee and the accuracy of the published results. In this paper, we examine the role of these parameters in concrete applications, identifying the key considerations that must be addressed when choosing specific values. This choice requires balancing the interests of two parties with conflicting objectives: the data analyst, who wishes to learn something about the data, and the prospective participant, who must decide whether to allow their data to be included in the analysis. We propose a simple model that expresses this balance as formulas over a handful of parameters, and we use our model to choose ε on a series of simple statistical studies. We also explore a surprising insight: in some circumstances, a differentially private study can be more accurate than a non-private study for the same cost, under our model. Finally, we discuss the simplifying assumptions in our model and outline a research agenda for possible refinements.
Keywords: data analysis; data privacy; Epsilon; data analyst; differential privacy; differentially private algorithms; economic method; privacy guarantee; Accuracy; Analytical models; Cost function; Data models; Data privacy; Databases; Privacy; Differential Privacy (ID#: 15-5914)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957125&isnumber=6957090
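
For readers weighing the tradeoff this paper prices, the standard Laplace mechanism makes the role of ε concrete: for a query f with sensitivity Δf, the released answer is

    M(D) = f(D) + \mathrm{Lap}\!\left(\frac{\Delta f}{\varepsilon}\right),
    \qquad
    \Pr[\text{noise} = z] \propto \exp\!\left(-\frac{\varepsilon\,|z|}{\Delta f}\right),

so halving ε doubles the expected noise magnitude Δf/ε. This is the accuracy cost that must be balanced against each participant's privacy interest in the authors' economic model.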

 

Weina Wang; Lei Ying; Junshan Zhang, “On the Relation Between Identifiability, Differential Privacy, and Mutual-Information Privacy,” Communication, Control, and Computing (Allerton), 2014 52nd Annual Allerton Conference on, vol., no., pp. 1086, 1092, Sept. 30 2014 - Oct. 3 2014. doi:10.1109/ALLERTON.2014.7028576
Abstract: This paper investigates the relation between three different notions of privacy: identifiability, differential privacy, and mutual-information privacy. Under a privacy-distortion framework, where the distortion is defined to be the expected Hamming distance between the input and output databases, we establish some fundamental connections between these three privacy notions. Given a maximum distortion D, let ε_i*(D) denote the smallest (best) identifiability level, and ε_d*(D) the smallest differential privacy level. Then we characterize ε_i*(D) and ε_d*(D), and prove that ε_i*(D) − ε_X ≤ ε_d*(D) ≤ ε_i*(D) for D in some range, where ε_X is a constant depending on the distribution of the original database X, and diminishes to zero when the distribution of X is uniform. Furthermore, we show that identifiability and mutual-information privacy are consistent in the sense that given a maximum distortion D in some range, there is a mechanism that optimizes the identifiability level and also achieves the best mutual-information privacy.
Keywords: data privacy; database management systems; Hamming distance; differential privacy level; identifiability level; input databases; maximum distortion; mutual-information privacy; output databases; privacy-distortion framework; Data analysis; Data privacy; Databases; Mutual information; Privacy; Random variables (ID#: 15-5915)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028576&isnumber=7028426

 

Shrivastva, K.M.P.; Rizvi, M.A.; Singh, S., “Big Data Privacy Based on Differential Privacy a Hope for Big Data,” Computational Intelligence and Communication Networks (CICN), 2014 International Conference on, vol., no., pp. 776, 781, 14-16 Nov. 2014. doi:10.1109/CICN.2014.167
Abstract: In the information age, data from electronic, information, and communication technology devices and processes, such as sensors, clouds, individual archives, social networks, Internet activities, and enterprise systems, are growing exponentially. The most challenging issue is how to effectively manage these large and heterogeneous data; big data is the term coined for data of this scale and variety. Due to its extraordinary scale, privacy and security are among the critical challenges of big data. At every stage of managing big data there are chances that privacy may be disclosed. Many techniques have been suggested and implemented for privacy preservation of large data sets, such as anonymization-based and encryption-based approaches, but unfortunately, due to the characteristics of big data (large volume, high velocity, and unstructured data), these techniques are not fully suitable. In this paper we analyze and discuss in depth how an existing approach, differential privacy, is suitable for big data. We first discuss differential privacy and then analyze its suitability for big data.
Keywords: Big Data; cryptography; data privacy; anonymization based data set; big data privacy; big data security; differential privacy; electronic devices; encryption based data set; information age; information and communication technology devices; privacy preservation; Big data; Data privacy; Databases; Encryption; Noise; Privacy; Anonymization; Big data privacy; Differential privacy; Privacy approaches (ID#: 15-5916)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7065587&isnumber=7065338

 

Quan Geng; Viswanath, P., “The Optimal Mechanism in Differential Privacy,” Information Theory (ISIT), 2014 IEEE International Symposium on, vol., no., pp. 2371, 2375, June 29 2014 – July 4 2014. doi:10.1109/ISIT.2014.6875258
Abstract: Differential privacy is a framework to quantify to what extent individual privacy in a statistical database is preserved while releasing useful aggregate information about the database. In this work we study the fundamental tradeoff between privacy and utility in differential privacy. We derive the optimal ε-differentially private mechanism for single real-valued query function under a very general utility-maximization (or cost-minimization) framework. The class of noise probability distributions in the optimal mechanism has staircase-shaped probability density functions, which can be viewed as a geometric mixture of uniform probability distributions. In the context of ℓ1 and ℓ2 utility functions, we show that the standard Laplacian mechanism, which has been widely used in the literature, is asymptotically optimal in the high privacy regime, while in the low privacy regime, the staircase mechanism performs exponentially better than the Laplacian mechanism. We conclude that the gains of the staircase mechanism are more pronounced in the moderate-low privacy regime.
Keywords: Laplace equations; minimisation; statistical databases; statistical distributions; ℓ1 utility functions; ℓ2 utility functions; Laplacian mechanism; aggregate information; cost-minimization framework; differential privacy; geometric mixture; high privacy regime; low privacy regime; noise probability distributions; optimal ε-differentially private mechanism; real-valued query function; staircase-shaped probability density functions; statistical database; uniform probability distributions; utility-maximization framework; Data privacy; Databases; Laplace equations; Noise; Privacy; Probability density function; Probability distribution (ID#: 15-5917)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875258&isnumber=6874773

 

Zuxing Li; Oechtering, T.J., “Differential Privacy in Parallel Distributed Bayesian Detections,” Information Fusion (FUSION), 2014 17th International Conference on, vol., no., pp. 1, 7, 7-10 July 2014. doi:(not provided)
Abstract: In this paper, the differential privacy problem in parallel distributed detections is studied in the Bayesian formulation. The privacy risk is evaluated by the minimum detection cost for the fusion node to infer the private random phenomenon. Different from the privacy-unconstrained distributed Bayesian detection problem, the optimal operation point of a remote decision maker can be on the boundary of the privacy-unconstrained operation region or in the intersection of privacy constraint hyperplanes. Therefore, for a remote decision maker in the optimal privacy-constrained distributed detection design, it is sufficient to consider a deterministic linear likelihood combination test or a randomized decision strategy of two linear likelihood combination tests which achieves the optimal operation point in each case. Such an insight indicates that the existing algorithm can be reused by incorporating the privacy constraint. The trade-off between detection and privacy metrics will be illustrated in a numerical example.
Keywords: Bayes methods; data privacy; decision making; deterministic algorithms; parallel algorithms; random processes; Bayesian formulation; deterministic linear likelihood combination test; differential privacy problem; fusion node; minimum detection cost; optimal privacy-constrained distributed detection design; parallel distributed detections; privacy constraint hyperplanes; privacy risk; privacy-unconstrained distributed Bayesian detection problem; privacy-unconstrained operation region; private random phenomenon; randomized decision strategy; remote decision maker; Data privacy; Integrated circuits; Measurement; Optimization; Phase frequency detector; Privacy (ID#: 15-5918)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916169&isnumber=6915967

 

Yu Wang; Zhenqi Huang; Mitra, S.; Dullerud, G.E., “Entropy-Minimizing Mechanism for Differential Privacy of Discrete-Time Linear Feedback Systems,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 2130, 2135, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039713
Abstract: The concept of differential privacy stems from the study of private queries of datasets. In this work, we apply this concept to metric spaces to study a mechanism that randomizes a deterministic query by adding mean-zero noise to keep differential privacy. For one-shot queries, we show that ε-differential privacy of an n-dimensional input implies a lower bound n − n ln(ε/2) on the entropy of the randomized output, and this lower bound is achieved by adding Laplacian noise. We then consider the ε-differential privacy of a discrete-time linear feedback system in which noise is added to the system output at each time. The adversary estimates the system states from the output history. We show that, to keep the system ε-differentially private, the output entropy is bounded below, and this lower bound is achieved by an explicit mechanism.
Keywords: discrete time systems; feedback; linear systems; ∈-differential privacy; Laplacian noise; deterministic query; discrete-time linear feedback systems; entropy-minimizing mechanism; mean-zero noise; metric space; n-dimensional input; one-shot query; private query; randomized output; system output; system states; Entropy; History; Measurement; Noise; Privacy; Probability distribution; Random variables (ID#: 15-5919)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039713&isnumber=7039338
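
The one-shot result quoted in the abstract can be written out explicitly: for an ε-differentially private randomized answer Y to an n-dimensional query,

    H(Y) \;\ge\; n - n \ln\!\left(\frac{\varepsilon}{2}\right),

with the bound met by adding independent Laplacian noise to each coordinate.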

 

Shang Shang; Wang, T.; Cuff, P.; Kulkarni, S., “The Application of Differential Privacy for Rank Aggregation: Privacy and Accuracy,” Information Fusion (FUSION), 2014 17th International Conference on, vol., no., pp. 1, 7, 7-10 July 2014. doi:(not provided)
Abstract: The potential risk of privacy leakage prevents users from sharing their honest opinions on social platforms. This paper addresses the problem of privacy preservation if the query returns the histogram of rankings. The framework of differential privacy is applied to rank aggregation. The error probability of the aggregated ranking is analyzed as a result of noise added in order to achieve differential privacy. Upper bounds on the error rates for any positional ranking rule are derived under the assumption that profiles are uniformly distributed. Simulation results are provided to validate the probabilistic analysis.
Keywords: data privacy; probability; social networking (online); differential privacy; error probability; honest opinions; positional ranking rule; privacy leakage; privacy preservation; probabilistic analysis; rank aggregation; ranking histogram; social platforms; Algorithm design and analysis; Error analysis; Histograms; Noise; Privacy; Upper bound; Vectors; Accuracy; Privacy; Rank Aggregation (ID#: 15-5920)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916096&isnumber=6915967
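
A minimal sketch of the setting analyzed here: perturb the histogram of rankings with Laplace noise, then apply a positional rule to the noisy counts. The Borda-style scoring and the sensitivity-1 convention are assumptions made for illustration.

    import numpy as np

    def private_rank_aggregate(histogram, epsilon):
        # histogram: dict mapping a ranking (tuple of candidates, best
        # first) to its count. Perturb each cell with Laplace(1/epsilon)
        # noise, then score candidates with a Borda-style positional rule
        # on the noisy histogram.
        noisy = {r: c + np.random.laplace(0.0, 1.0 / epsilon)
                 for r, c in histogram.items()}
        scores = {}
        for ranking, count in noisy.items():
            for pos, cand in enumerate(ranking):
                scores[cand] = scores.get(cand, 0.0) \
                               + count * (len(ranking) - 1 - pos)
        return sorted(scores, key=scores.get, reverse=True)

    # Three voters prefer a > b > c; one prefers c > b > a.
    print(private_rank_aggregate({("a", "b", "c"): 3, ("c", "b", "a"): 1}, 1.0))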

 

Sarwate, A.D.; Sankar, L., “A Rate-Distortion Perspective on Local Differential Privacy,” Communication, Control, and Computing (Allerton), 2014 52nd Annual Allerton Conference on, vol., no., pp. 903, 908, Sept. 30 2014 - Oct. 3 2014. doi:10.1109/ALLERTON.2014.7028550
Abstract: Local differential privacy is a model for privacy in which an untrusted statistician collects data from individuals who mask their data before revealing it. While randomized response has been shown to be a good strategy when the statistician's goal is to estimate a parameter of the population, we consider instead the problem of locally private data publishing, in which the data collector must publish a version of the data it has collected. We model utility by a distortion measure and consider privacy mechanisms that act via a memoryless channel operating on the data. If we consider the source distribution to be unknown but within a class of distributions, we arrive at a robust rate-distortion model for the privacy-distortion tradeoff. We show that under Hamming distortions, the differential privacy risk is lower bounded for all nontrivial distortions, and that the lower bound grows logarithmically in the alphabet size.
Keywords: data privacy; statistical analysis; Hamming distortion; local differential privacy risk; locally private data publishing; memoryless channel; privacy mechanism; privacy-distortion tradeoff; rate-distortion; Data models; Data privacy; Databases; Distortion measurement; Mutual information; Privacy; Rate-distortion (ID#: 15-5921)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028550&isnumber=7028426
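
Randomized response, which the abstract treats as the baseline local-privacy strategy, is easy to state concretely. The binary-attribute sketch below is the textbook version.

    import math, random

    def randomized_response(bit, epsilon):
        # Report the true bit with probability e^eps / (e^eps + 1),
        # otherwise flip it. The ratio of output likelihoods under the
        # two possible true bits is at most e^eps, so the report
        # satisfies epsilon-local differential privacy.
        p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
        return bit if random.random() < p_truth else 1 - bit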

 

Qihong Yu; Ruonan Rao, “An Improved Approach of Data Integration Based on Differential Privacy,” Progress in Informatics and Computing (PIC), 2014 International Conference on, vol., no., pp. 395, 399, 16-18 May 2014. doi:10.1109/PIC.2014.6972364
Abstract: Multiset operations and data transmission are the key operations for privacy-preserving data integration because they involve interaction among participants. This paper proposes an approach that combines an anonymous multiset operation with distributed noise generation, building on existing research, and we apply it to data integration. Analysis shows that the improved approach provides security for data integration and has lower overhead than existing approaches.
Keywords: data integration; data privacy; anonymous multiset operation; data integration approach; data transmission; differential privacy; distributed noise generation; privacy preserving; Data integration; Data privacy; Data warehouses; Distributed databases; Encryption; Noise; data integration; multiset operation; noise generation (ID#: 15-5922)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6972364&isnumber=6972283

 

Niknami, N.; Abadi, M.; Deldar, F., “SpatialPDP: A Personalized Differentially Private Mechanism for Range Counting Queries over Spatial Databases,” Computer and Knowledge Engineering (ICCKE), 2014 4th International eConference on, vol., no., pp. 709, 715, 29-30 Oct. 2014. doi:10.1109/ICCKE.2014.6993414
Abstract: Spatial databases are rapidly growing due to the large amount of geometric data obtained from geographic information systems, geomarketing, traffic control, and so on. Range counting queries are among the most common queries over spatial databases. They allow us to describe a region in a geometric space and then retrieve statistics about geometric objects falling within it. Quadtree-based spatial indices are usually used by spatial databases to speed up range counting queries. Privacy protection is a major concern when answering these queries. The reason is that an adversary observing changes in query answers could infer the presence or absence of a particular geometric object in a spatial database. Differential privacy addresses this problem by guaranteeing that the presence or absence of a geometric object has little effect on the query answers. However, the existing differentially private algorithms for spatial databases ignore the fact that different subregions of a geometric space may require different amounts of privacy protection. Consequently, the same privacy budget is applied to all subregions, resulting in a significant increase in error for subregions with low privacy protection requirements or a major reduction in privacy for subregions with high privacy protection requirements. In this paper, we address these shortcomings by presenting SpatialPDP, a personalized differentially private mechanism for range counting queries over spatial databases. It uses a so-called personalized geometric budgeting strategy to allocate different privacy budgets to subregions with different privacy protection requirements. Our experimental results show that SpatialPDP can achieve a reasonable trade-off between error and differential privacy, in accordance with the privacy requirements of different subregions.
Keywords: data privacy; quadtrees; question answering (information retrieval); visual databases; SpatialPDP; differential privacy; error measure; geographic information system; geomarketing; geometric data; geometric objects; personalized differentially private mechanism; personalized geometric budgeting strategy; privacy budget; privacy protection requirement; private algorithms; quadtree-based spatial indices; query answers; range counting query; spatial databases; traffic control; Data privacy; Measurement uncertainty; Noise; Noise measurement; Privacy; Spatial databases; personalized geometric budgeting; personalized privacy; spatial database (ID#: 15-5923)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6993414&isnumber=6993332
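
The core idea of allocating different privacy budgets to different subregions can be pictured with noisy range counts whose Laplace scale varies per region. The region names, counts, and budget values below are invented for illustration; SpatialPDP's quadtree-based budgeting strategy itself is not reproduced.

    # Illustrative sketch: per-subregion privacy budgets for noisy counts.
    # A stricter requirement (smaller epsilon) yields a noisier answer.
    import numpy as np

    rng = np.random.default_rng(1)

    def noisy_count(true_count, epsilon, sensitivity=1.0):
        return true_count + rng.laplace(0.0, sensitivity / epsilon)

    subregions = {"park": (120, 1.0),       # (true count, epsilon), hypothetical
                  "hospital": (45, 0.1),    # high privacy requirement
                  "mall": (300, 0.5)}
    for name, (count, eps) in subregions.items():
        print(name, round(noisy_count(count, eps), 1))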

 

Hill, R.; Hansen, M.; Janssen, E.; Sanders, S.A.; Heiman, J.R.; Li Xiong, “A Quantitative Approach for Evaluating the Utility of a Differentially Private Behavioral Science Dataset,” Healthcare Informatics (ICHI), 2014 IEEE International Conference on, vol., no., pp. 276, 284, 15-17 Sept. 2014. doi:10.1109/ICHI.2014.45
Abstract: Social scientists who collect large amounts of medical data value the privacy of their survey participants. As they follow participants through longitudinal studies, they develop unique profiles of these individuals. A growing challenge for these researchers is to maintain the privacy of their study participants, while sharing their data to facilitate research. Differential privacy is a new mechanism which promises improved privacy guarantees for statistical databases. We evaluate the utility of a differentially private dataset. Our results align with the theory of differential privacy and show when the number of records in the database is sufficiently larger than the number of cells covered by a database query, the number of statistical tests with results close to those performed on original data increases.
Keywords: data privacy; medical information systems; statistical analysis; database query; differential privacy; medical data; private behavioral science dataset; statistical database; statistical test; Data privacy; Databases; Histograms;Logistics; Noise; Privacy; Sensitivity; Behavioral Science; Data Privacy; Differential Privacy
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7052500&isnumber=7052453

 

Le Ny, J.; Touati, A.; Pappas, G.J., “Real-Time Privacy-Preserving Model-Based Estimation of Traffic Flows,” Cyber-Physical Systems (ICCPS), 2014 ACM/IEEE International Conference on, vol., no., pp. 92, 102, 14-17 April 2014. doi:10.1109/ICCPS.2014.6843714
Abstract: Road traffic information systems rely on data streams provided by various sensors, e.g., loop detectors, cameras, or GPS, containing potentially sensitive location information about private users. This paper presents an approach to enhance real-time traffic state estimators using fixed sensors with a privacy-preserving scheme providing formal guarantees to the individuals traveling on the road network. Namely, our system implements differential privacy, a strong notion of privacy that protects users against adversaries with arbitrary side information. In contrast to previous privacy-preserving schemes for trajectory data and location-based services, our procedure relies heavily on a macroscopic hydrodynamic model of the aggregated traffic in order to limit the impact on estimation performance of the privacy-preserving mechanism. The practicality of the approach is illustrated with a differentially private reconstruction of a day of traffic on a section of I-880 North in California from raw single-loop detector data.
Keywords: data privacy; real-time systems; road traffic; state estimation; traffic information systems; data streams; real-time privacy-preserving model-based estimation; real-time traffic state estimators; road network; road traffic information systems; traffic flow estimation; Data privacy; Density measurement; Detectors; Privacy; Roads; Vehicles; Differential privacy; intelligent transportation systems; privacy-preserving data assimilation (ID#: 15-5924)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843714&isnumber=6843703


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

 

Differential Privacy, 2014, Part 2

 

 
SoS Logo

Differential Privacy, 2014

Part 2


The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. The work here looks at big data and cyber-physical systems, as well as theoretical approaches. Citations are for articles published in 2014.
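
Most of the work cited below builds on the same primitive: perturbing a query answer with noise calibrated to the query's sensitivity and the privacy parameter ε. As a grounding point, here is a minimal sketch of the Laplace mechanism; the data and parameter values are invented for illustration.

    # Minimal sketch of the Laplace mechanism: add Laplace(sensitivity/epsilon)
    # noise to a query answer to obtain epsilon-differential privacy.
    import numpy as np

    def laplace_mechanism(query_answer, sensitivity, epsilon, rng):
        return query_answer + rng.laplace(0.0, sensitivity / epsilon)

    rng = np.random.default_rng(42)
    ages = [34, 29, 51, 46, 38]              # toy records
    # A counting query has sensitivity 1: one person changes the count by 1.
    print(laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5, rng=rng))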


Anjum, Adeel; Anjum, Adnan, “Differentially Private K-Anonymity,” Frontiers of Information Technology (FIT), 2014 12th International Conference on, vol., no., pp. 153, 158, 17-19 Dec. 2014. doi:10.1109/FIT.2014.37
Abstract: Research in privacy-preserving data publication can be broadly categorized into two classes. Syntactic privacy definitions have been under the scrutiny of the research community for many years, and much research has been dedicated to developing algorithms and notions of syntactic privacy that thwart re-identification attacks. Sweeney and Samarati proposed a well-known syntactic privacy definition coined K-anonymity for thwarting linking attacks using quasi-identifiers. Thanks to its conceptual simplicity, K-anonymity has been widely adopted as a practicable definition of syntactic privacy, and owing to algorithmic advances for producing K-anonymous versions of micro-data, it has attained considerable popularity. Semantic privacy definitions do not take into account the adversary's background knowledge, but rather force the sanitization algorithms (mechanisms) to satisfy a strong semantic property by way of random processes. Though semantic privacy definitions are theoretically immune to any kind of adversarial attack, their applicability in real-life scenarios has come under criticism. In order to make the semantic definitions more practical, the research community has focused on combining the practicality of syntactic privacy with the strength of semantic approaches [7], so that both research tracks may be of benefit in the near future.
Keywords: Data models; Data privacy; Noise measurement; Partitioning algorithms; Privacy; Semantics; Syntactics; Data Privacy; Differential Privacy; K-anonymity; Semantic Privacy; Syntactic Privacy (ID#: 15-6083)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7118391&isnumber=7118353

 

Zhigang Zhou; Hongli Zhang; Qiang Zhang; Yang Xu; Panpan Li, “Privacy-Preserving Granular Data Retrieval Indexes for Outsourced Cloud Data,” Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 601, 606, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7036873
Abstract: Storage as a service has become an important paradigm in cloud computing for its great flexibility and economic savings. Since data owners no longer physically possess their data, it also brings many new challenges for data security and management. Several techniques have been investigated, including encryption and fine-grained access control, for enabling such services. However, these techniques only answer the "Yes or No" question of whether a user has permission to access the corresponding data. In this paper, we investigate how to provide different granular information views to different users. Our mechanism first constructs the relationship between keywords and data files based on a Galois connection. We then build data retrieval indexes with a variable threshold, so that granular data retrieval can be supported by adjusting the threshold for different users. Moreover, to prevent privacy disclosure, we propose a differentially private release scheme based on the proposed index technique. We prove the privacy-preserving guarantee of the proposed mechanism, and extensive experiments further demonstrate its validity.
Keywords: cloud computing; data privacy; granular computing; information retrieval; outsourcing; Galois connection; access permissions; data files; data management; data owners; data security; differentially privacy release scheme; granular data retrieval service; granular information; outsourced cloud data; privacy disclosure prevention; privacy-preserving granular data retrieval indexes; privacy-preserving guarantee; storage-as-a-service; variable threshold; Access control; Cloud computing; Data privacy; Indexes; Lattices; Privacy; cloud computing; data indexes; differential privacy; fuzzy retrieval; granular data retrieval (ID#: 15-6084)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036873&isnumber=7036769

 

Saravanan, M.; Thoufeeq, A.M.; Akshaya, S.; Jayasre Manchari, V.L., “Exploring New Privacy Approaches in a Scalable Classification Framework,” Data Science and Advanced Analytics (DSAA), 2014 International Conference on, vol., no., pp. 209, 215, Oct. 30 2014 - Nov. 1 2014. doi:10.1109/DSAA.2014.7058075
Abstract: Recent advancements in Information and Communication Technologies (ICT) enable many organizations to collect, store, and control massive amounts of various types of details of individuals from their regular transactions (credit card, mobile phone, smart meter etc.). While using this wealth of information for personalized recommendations provides enormous opportunities for applying data mining (or machine learning) tasks, there is a need to address the challenge of preserving individuals' privacy while running predictive analytics on big data. Privacy Preserving Data Mining (PPDM) in these applications is particularly challenging, because it involves processing large volumes of complex, heterogeneous, and dynamic details of individuals. Ensuring that privacy-protected data remains useful in intended applications, such as building accurate data mining models or enabling complex analytic tasks, is essential. Differential privacy has been tried with a few of the PPDM methods and is immune to attacks with auxiliary information. In this paper, we propose a distributed implementation based on the MapReduce computing model for the C4.5 decision tree algorithm and run extensive experiments on three different datasets using a Hadoop cluster. The novelty of this work is to experiment with two different privacy methods: the first uses perturbed data in the decision tree algorithm for prediction in privacy-preserving data sharing, and the second applies raw data to a privacy-preserving decision tree algorithm for private data analysis. In addition, we propose combining the methods as a hybrid technique to maintain accuracy (utility) and privacy at an acceptable level. The proposed privacy approaches have two potential benefits in the context of data mining tasks: they allow service providers to outsource data mining tasks without exposing the raw data, and they allow data providers to share data access with third parties while limiting privacy risks.
Keywords: data mining; data privacy; decision trees; learning (artificial intelligence); C4.5 decision tree algorithm; Hadoop Cluster; ICT; big data; differential privacy; information and communication technologies; machine learning; map reduce computing model; personalized recommendation; privacy preserving data mining; private data analysis; scalable classification; Big data; Classification algorithms; Data privacy; Decision trees; Noise; Privacy; Scalability; Hybrid data privacy; Map Reduce Framework; Privacy Approaches; Privacy Preserving data Mining; Scalability (ID#: 15-6085)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058075&isnumber=7058031

 

Paverd, A.; Martin, A.; Brown, I., “Privacy-Enhanced Bi-Directional Communication in the Smart Grid Using Trusted Computing,” Smart Grid Communications (SmartGridComm), 2014 IEEE International Conference on, vol., no., pp. 872, 877, 3-6 Nov. 2014. doi:10.1109/SmartGridComm.2014.7007758
Abstract: Although privacy concerns in smart metering have been widely studied, relatively little attention has been given to privacy in bi-directional communication between consumers and service providers. Full bi-directional communication is necessary for incentive-based demand response (DR) protocols, such as demand bidding, in which consumers bid to reduce their energy consumption. However, this can reveal private information about consumers. Existing proposals for privacy-enhancing protocols do not support bi-directional communication. To address this challenge, we present a privacy-enhancing communication architecture that incorporates all three major information flows (network monitoring, billing and bi-directional DR) using a combination of spatial and temporal aggregation and differential privacy. The key element of our architecture is the Trustworthy Remote Entity (TRE), a node that is singularly trusted by mutually distrusting entities. The TRE differs from a trusted third party in that it uses Trusted Computing approaches and techniques to provide a technical foundation for its trustworthiness. An automated formal analysis of our communication architecture shows that it achieves its security and privacy objectives with respect to a previously-defined adversary model. This is therefore the first application of privacy-enhancing techniques to bi-directional smart grid communication between mutually distrusting agents.
Keywords: data privacy; energy consumption; incentive schemes; invoicing; power engineering computing; power system measurement; protocols; smart meters; smart power grids; trusted computing; TRE; automated formal analysis; bidirectional DR information flow; billing information flow; differential privacy; energy consumption reduction; incentive-based demand response protocol; network monitoring information flow; privacy-enhanced bidirectional smart grid communication architecture; privacy-enhancing protocol; smart metering; spatial aggregation; temporal aggregation; trusted computing; trustworthy remote entity; Bidirectional control; Computer architecture; Monitoring; Privacy; Protocols; Security; Smart grids (ID#: 15-6086)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7007758&isnumber=7007609

 

Jun Yang; Yun Li, “Differentially Private Feature Selection,” Neural Networks (IJCNN), 2014 International Joint Conference on, vol., no., pp. 4182, 4189, 6-11 July 2014. doi:10.1109/IJCNN.2014.6889613
Abstract: Privacy-preserving data analysis has gained significant interest across several research communities. Current research mainly focuses on privacy-preserving classification and regression. However, feature selection is also an essential component of data analysis, which can be used to reduce the data dimensionality and to discover knowledge, such as inherent variables in the data. In this paper, in order to efficiently mine sensitive data, a privacy-preserving feature selection algorithm based on local learning and differential privacy is proposed and analyzed in theory. We also conduct experiments on benchmark data sets. The experimental results show that our algorithm can preserve data privacy to some extent.
Keywords: data analysis; data mining; data privacy; learning (artificial intelligence); data dimensionality reduction; differential privacy; differentially private feature selection; feature selection; knowledge discovery; local learning; privacy preserving feature selection algorithm; privacy-preserving classification; privacy-preserving data analysis; privacy-preserving regression; Accuracy; Algorithm design and analysis; Computational modeling; Data privacy; Logistics; Privacy; Vectors (ID#: 15-6087)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889613&isnumber=6889358
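
One simple way to make a feature selection step differentially private, in the spirit of (though not identical to) the local-learning algorithm described above, is to perturb each feature's relevance score before ranking. The scores, budget split, and feature names below are assumptions for illustration.

    # Illustrative sketch: pick the top-k features after adding Laplace noise
    # to each score, splitting the privacy budget across the k selections.
    import numpy as np

    def private_top_k(scores, k, epsilon, rng, sensitivity=1.0):
        eps_each = epsilon / k               # simple composition-style split
        noisy = {f: s + rng.laplace(0.0, sensitivity / eps_each)
                 for f, s in scores.items()}
        return sorted(noisy, key=noisy.get, reverse=True)[:k]

    rng = np.random.default_rng(7)
    scores = {"f1": 0.9, "f2": 0.2, "f3": 0.7, "f4": 0.1}    # hypothetical
    print(private_top_k(scores, k=2, epsilon=1.0, rng=rng))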

 

Koufogiannis, F.; Shuo Han; Pappas, G.J., “Computation of Privacy-Preserving Prices in Smart Grids,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 2142, 2147, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039715
Abstract: Demand management through pricing is a modern approach that can improve the efficiency of modern power networks. However, computing optimal prices requires access to data that individuals consider private. We present a novel approach for computing prices while providing privacy guarantees under the differential privacy framework. Differentially private prices are computed through a distributed utility maximization problem with each individual perturbing their own utility function. Privacy concerning temporal localization and monitoring of an individual's activity is enforced in the process. The proposed scheme provides formal privacy guarantees and its performance-privacy trade-off is evaluated quantitatively.
Keywords: power system control; pricing; smart power grids; computation; demand management; differential privacy framework; distributed utility maximization problem; formal privacy; modern power networks; performance-privacy trade-off; pricing; privacy-preserving prices; smart grids; temporal localization; utility function; Electricity; Monitoring; Optimization; Power demand; Pricing; Privacy; Smart grids (ID#: 15-6088)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039715&isnumber=7039338

 

Wentian Lu; Miklau, G.; Gupta, V., “Generating Private Synthetic Databases for Untrusted System Evaluation,” Data Engineering (ICDE), 2014 IEEE 30th International Conference on, vol., no., pp. 652, 663, March 31 2014 - April 4 2014. doi:10.1109/ICDE.2014.6816689
Abstract: Evaluating the performance of database systems is crucial when database vendors or researchers are developing new technologies. But such evaluation tasks rely heavily on actual data and query workloads that are often unavailable to researchers due to privacy restrictions. To overcome this barrier, we propose a framework for the release of a synthetic database which accurately models selected performance properties of the original database. We improve on prior work on synthetic database generation by providing a formal, rigorous guarantee of privacy. Accuracy is achieved by generating synthetic data using a carefully selected set of statistical properties of the original data which balance privacy loss with relevance to the given query workload. An important contribution of our framework is an extension of standard differential privacy to multiple tables.
Keywords: data privacy; database management systems; statistical analysis; trusted computing; balance privacy loss; database researchers; database vendors; differential privacy; privacy guarantee; privacy restrictions; private synthetic database generation; query workloads; statistical properties; synthetic data generation; untrusted system evaluation; Aggregates; Data privacy; Databases; Noise; Privacy; Sensitivity; Standards (ID#: 15-6089)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816689&isnumber=6816620

 

Riboni, D.; Bettini, C., “Differentially-Private Release of Check-in Data for Venue Recommendation,” Pervasive Computing and Communications (PerCom), 2014 IEEE International Conference on, vol., no., pp. 190, 198, 24-28 March 2014. doi:10.1109/PerCom.2014.6813960
Abstract: Recommender systems suggesting venues offer very useful services to people on the move and a great business opportunity for advertisers. These systems suggest venues by matching the current context of the user with the venue features, and consider the popularity of venues, based on the number of visits (“check-ins”) that they received. Check-ins may be explicitly communicated by users to geo-social networks, or implicitly derived by analysing location data collected by mobile services. In general, the visibility of explicit check-ins is limited to friends in the social network, while the visibility of implicit check-ins limited to the service provider. Exposing check-ins to unauthorized users is a privacy threat since recurring presence in given locations may reveal political opinions, religious beliefs, or sexual orientation, as well as absence from other locations where the user is supposed to be. Hence, on one side mobile app providers host valuable information that recommender system providers would like to buy and use to improve their systems, and on the other we recognize serious privacy issues in releasing that information. In this paper, we solve this dilemma by providing formal privacy guarantees to users and trusted mobile providers while preserving the utility of check-in information for recommendation purposes. Our technique is based on the use of differential privacy methods integrated with a pre-filtering process, and protects against both an untrusted recommender system and its users, willing to infer the venues and sensitive locations visited by other users. Extensive experiments with a large dataset of real users' check-ins show the effectiveness of our methods.
Keywords: data privacy; mobile computing; recommender systems; social networking (online); advertisers; business opportunity; check-in data; differential privacy methods; differentially-private release; explicit check-ins; formal privacy; geo-social networks; implicit check-ins; location data analysis; mobile app providers; mobile services; political opinions; prefiltering process; religious beliefs; sexual orientation; untrusted recommender system; venue recommendation; Context; Data privacy; Mobile communication; Pervasive computing; Privacy; Recommender systems; Sensitivity (ID#: 15-6090)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6813960&isnumber=6813930

 

Patil, A.; Singh, S., “Differential Private Random Forest,” Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on, vol., no., pp. 2623, 2630, 24-27 Sept. 2014. doi:10.1109/ICACCI.2014.6968348
Abstract: Organizations, be they private or public, often collect personal information about the individuals who are their customers or clients. This personal information is private and sensitive, and has to be secured against data mining algorithms that an adversary may apply to gain access to it. In this paper we consider the problem of securing this private and sensitive information when it is used in a random forest classifier, within the framework of differential privacy. We incorporate the concept of differential privacy into the classical random forest algorithm. Experimental results show that quality functions such as information gain, the max operator, and the Gini index give almost equal accuracy regardless of their sensitivity to the noise. The accuracy of the classical random forest and the differentially private random forest is also almost equal across different dataset sizes. The proposed algorithm works for datasets with categorical as well as continuous attributes.
Keywords: data mining; data privacy; learning (artificial intelligence); Gini index; data mining algorithm; differential privacy; differential private random forest; information gain; max operator; personal information; private information; sensitive information; Accuracy; Data privacy; Indexes; Noise; Privacy; Sensitivity; Vegetation (ID#: 15-6091)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6968348&isnumber=6968191
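
Differentially private tree induction commonly selects split attributes with the exponential mechanism, sampling in proportion to a quality function such as information gain or the Gini index. The sketch below shows that primitive with invented gain values; it is not the cited paper's exact algorithm.

    # Illustrative sketch: exponential mechanism for choosing a split attribute.
    # Each candidate is sampled with probability proportional to
    # exp(epsilon * quality / (2 * sensitivity)).
    import math, random

    def exponential_mechanism(quality, epsilon, sensitivity, rng):
        items = list(quality)
        weights = [math.exp(epsilon * quality[c] / (2.0 * sensitivity))
                   for c in items]
        r = rng.random() * sum(weights)
        for c, w in zip(items, weights):
            r -= w
            if r <= 0:
                return c
        return items[-1]

    rng = random.Random(3)
    gain = {"age": 0.30, "income": 0.25, "zip": 0.05}    # hypothetical gains
    print(exponential_mechanism(gain, epsilon=1.0, sensitivity=1.0, rng=rng))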

 

Bassily, R.; Smith, A.; Thakurta, A., “Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds,” Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, vol., no., pp. 464, 473, 18-21 Oct. 2014. doi:10.1109/FOCS.2014.56
Abstract: Convex empirical risk minimization is a basic tool in machine learning and statistics. We provide new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex. Our algorithms run in polynomial time, and in some cases even match the optimal nonprivate running time (as measured by oracle complexity). We give separate algorithms (and lower bounds) for (ε, 0) and (ε, δ)-differential privacy; perhaps surprisingly, the techniques used for designing optimal algorithms in the two cases are completely different. Our lower bounds apply even to very simple, smooth function families, such as linear and quadratic functions. This implies that algorithms from previous work can be used to obtain optimal error rates, under the additional assumption that the contributions of each data point to the loss function is smooth. We show that simple approaches to smoothing arbitrary loss functions (in order to apply previous techniques) do not yield optimal error rates. In particular, optimal algorithms were not previously known for problems such as training support vector machines and the high-dimensional median.
Keywords: computational complexity; convex programming; learning (artificial intelligence); minimisation; (ε, δ)-differential privacy; (ε, 0)-differential privacy; Lipschitz loss function; arbitrary loss function smoothing; machine learning; optimal nonprivate running time; oracle complexity; polynomial time; private convex empirical risk minimization; smooth function families; statistics; Algorithm design and analysis; Convex functions; Noise measurement; Optimization; Privacy; Risk management; Support vector machines (ID#: 15-6092)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979031&isnumber=6978973
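
A representative approach in this line of work is gradient perturbation: run (stochastic) gradient descent but add calibrated noise to each gradient before the update. The toy objective, constants, and noise scale below are assumptions for illustration, not the paper's analyzed parameters.

    # Illustrative sketch: noisy gradient descent for empirical risk minimization.
    import numpy as np

    def noisy_gradient_descent(grad_fn, w0, steps, lr, noise_scale, rng):
        w = w0.copy()
        for _ in range(steps):
            g = grad_fn(w) + rng.laplace(0.0, noise_scale, size=w0.shape)
            w = w - lr * g
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5])       # toy regression data
    grad = lambda w: X.T @ (X @ w - y) / len(y) + 0.1 * w   # L2-regularized loss
    print(noisy_gradient_descent(grad, np.zeros(3), 200, 0.1, 0.05, rng))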

 

Le Ny, J.; Mohammady, M., “Differentially Private MIMO Filtering for Event Streams and Spatio-Temporal Monitoring,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 2148, 2153, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039716
Abstract: Many large-scale systems such as intelligent transportation systems, smart grids or smart buildings collect data about the activities of their users to optimize their operations. In a typical scenario, signals originate from many sensors capturing events involving these users, and several statistics of interest need to be continuously published in real-time. Moreover, in order to encourage user participation, privacy issues need to be taken into consideration. This paper considers the problem of providing differential privacy guarantees for such multi-input multi-output systems operating continuously. We show in particular how to construct various extensions of the zero-forcing equalization mechanism, which we previously proposed for single-input single-output systems. We also describe an application to privately monitoring and forecasting occupancy in a building equipped with a dense network of motion detection sensors, which is useful for example to control its HVAC system.
Keywords: MIMO systems; filtering theory; sensors; HVAC system; differential privacy; differentially private MIMO filtering; event streams; intelligent transportation systems; large-scale systems; motion detection sensors; single-input single-output systems; smart buildings; smart grids; spatio temporal monitoring; zero-forcing equalization mechanism; Buildings; MIMO; Monitoring; Noise; Privacy; Sensitivity; Sensors (ID#: 15-6093)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039716&isnumber=7039338

 

Chunchun Wu; Zuying Wei; Fan Wu; Guihai Chen, “DIARY: A Differentially Private and Approximately Revenue Maximizing Auction Mechanism for Secondary Spectrum Markets,” Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 625, 630, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7036877
Abstract: There is an urgent need to resolve the tension between limited spectrum resources and the increasing demand from ever-growing wireless networks. Spectrum redistribution is a powerful way to mitigate spectrum scarcity. In contrast to existing truthful mechanisms for spectrum redistribution, which aim to maximize spectrum utilization and social welfare, we propose DIARY in this paper, which not only achieves approximate revenue maximization but also guarantees bid privacy via differential privacy. Extensive simulations show that DIARY has substantial competitive advantages over existing mechanisms.
Keywords: data privacy; electronic commerce; radio networks; radio spectrum management; telecommunication industry; DIARY; approximately revenue maximization auction mechanism; differential privacy; differentially private mechanism; ever-growing wireless network; limited spectrum resource; secondary spectrum market; social welfare; spectrum redistribution; spectrum scarcity; spectrum utilization maximization; Cost accounting; Information systems; Interference; Privacy; Resource management; Security; Vectors (ID#: 15-6094)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036877&isnumber=7036769

 

Tiwari, P.K.; Chaturvedi, S., “Publishing Set Valued Data via M-Privacy,” Advances in Engineering and Technology Research (ICAETR), 2014 International Conference on, vol., no., pp. 1, 6, 1-2 Aug. 2014. doi:10.1109/ICAETR.2014.7012814
Abstract: It is very important to achieve security of data in distributed databases. As the use of distributed databases increases, the security issues surrounding them become more complex. M-privacy is a very effective technique that may be used to secure distributed databases. Set-valued data provide huge opportunities for a variety of data mining tasks. Most existing data publishing techniques for set-valued data rely on horizontal-division-based privacy models. The differential privacy method stands in contrast to horizontal-division-based privacy methods: it provides a stronger privacy guarantee and is independent of an adversary's background information and computational capability. Set-valued data have high dimensionality, so no single existing data publishing approach for differential privacy can be applied to achieve both utility and scalability. This work provides detailed information about this new threat and offers some assistance in resolving it. We first introduce the concept of m-privacy, which guarantees that the anonymized data satisfy a given privacy check against any group of up to m colluding data providers. We then present a heuristic approach that exploits the monotonicity of confidentiality constraints to efficiently inspect m-privacy given a cluster of records. Next, we present a data-provider-aware anonymization approach with adaptive m-privacy inspection strategies to guarantee high usefulness and m-privacy of the anonymized data. Finally, we propose secure multi-party computation protocols for set-valued data publishing with m-privacy.
Keywords: data mining; data privacy; distributed databases; adaptive m-privacy inspection strategies; anonymous data; computational capability; confidentiality constraints monotonicity; data mining tasks; data provider-aware anonymization approach; data security; distributed database security; environment information; heuristic approach; horizontal division based privacy models; privacy check; privacy guarantee; privacy method; secured multiparty calculation protocols; set-valued data publishing techniques; threat; Algorithm design and analysis; Computational modeling; Data privacy; Distributed databases; Privacy; Publishing; Taxonomy; data mining; privacy; set-valued dataset (ID#: 15-6095)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7012814&isnumber=7012782

 

Shuo Han; Topcu, U.; Pappas, G.J., “Differentially Private Convex Optimization with Piecewise Affine Objectives,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 2160, 2166, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039718
Abstract: Differential privacy is a recently proposed notion of privacy that provides strong privacy guarantees without any assumptions on the adversary. The paper studies the problem of computing a differentially private solution to convex optimization problems whose objective function is piecewise affine. Such problems are motivated by applications in which the affine functions that define the objective function contain sensitive user information. We propose several privacy preserving mechanisms and provide an analysis on the trade-offs between optimality and the level of privacy for these mechanisms. Numerical experiments are also presented to evaluate their performance in practice.
Keywords: data privacy; optimisation; affine functions; convex optimization problems; differentially private convex optimization; differentially private solution; piecewise affine objectives; privacy guarantees; privacy preserving mechanisms; sensitive user information; Convex functions; Data privacy; Databases; Linear programming; Optimization; Privacy; Sensitivity (ID#: 15-6096)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039718&isnumber=7039338

 

Jia Dong Zhang; Ghinita, G.; Chi Yin Chow, “Differentially Private Location Recommendations in Geosocial Networks,” Mobile Data Management (MDM), 2014 IEEE 15th International Conference on, vol. 1, no., pp. 59, 68, 14-18 July 2014. doi:10.1109/MDM.2014.13
Abstract: Location-tagged social media have an increasingly important role in shaping behavior of individuals. With the help of location recommendations, users are able to learn about events, products or places of interest that are relevant to their preferences. User locations and movement patterns are available from geosocial networks such as Foursquare, mass transit logs or traffic monitoring systems. However, disclosing movement data raises serious privacy concerns, as the history of visited locations can reveal sensitive details about an individual's health status, alternative lifestyle, etc. In this paper, we investigate mechanisms to sanitize location data used in recommendations with the help of differential privacy. We also identify the main factors that must be taken into account to improve accuracy. Extensive experimental results on real-world datasets show that a careful choice of differential privacy technique leads to satisfactory location recommendation results.
Keywords: data privacy; recommender systems; social networking (online); differentially private location recommendations; geosocial networks; location data sanitation; Data privacy; History; Indexes; Markov processes; Privacy; Trajectory; Vegetation (ID#: 15-6097)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916904&isnumber=6916883

 

Shuo Han; Topcu, U.; Pappas, G.J., “Differentially Private Distributed Protocol for Electric Vehicle Charging,” Communication, Control, and Computing (Allerton), 2014 52nd Annual Allerton Conference on, vol., no., pp. 242, 249, Sept. 30 2014 - Oct. 3 2014. doi:10.1109/ALLERTON.2014.7028462
Abstract: In distributed electric vehicle (EV) charging, an optimization problem is solved iteratively between a central server and the charging stations by exchanging coordination signals that are publicly available to all stations. The coordination signals depend on user demand reported by charging stations and may reveal private information of the users at the stations. From the public signals, an adversary can potentially decode private user information and put user privacy at risk. This paper develops a distributed EV charging algorithm that preserves differential privacy, which is a notion of privacy recently introduced and studied in theoretical computer science. The algorithm is based on the so-called Laplace mechanism, which perturbs the public signal with Laplace noise whose magnitude is determined by the sensitivity of the public signal with respect to changes in user information. The paper derives the sensitivity and analyzes the suboptimality of the differentially private charging algorithm. In particular, we obtain a bound on suboptimality by viewing the algorithm as an implementation of stochastic gradient descent. In the end, numerical experiments are performed to investigate various aspects of the algorithm when being used in practice, including the number of iterations and tradeoffs between privacy level and suboptimality.
Keywords: electric vehicles; gradient methods; protocols; stochastic programming; Laplace mechanism; Laplace noise; central server; differential private charging algorithm; differential private distributed protocol; distributed EV charging algorithm; distributed electric vehicle charging station; optimization problem; public signal sensitivity; stochastic gradient descent; theoretical computer science; user demand; Charging stations; Data privacy; Databases; Optimization; Privacy; Sensitivity; Vehicles (ID#: 15-6098)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028462&isnumber=7028426
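
The Laplace mechanism the abstract describes can be pictured as perturbing the public coordination signal before it is broadcast. The sketch below illustrates that single step with invented demands and a made-up sensitivity bound; the iterative optimization and the paper's derived sensitivity are omitted.

    # Illustrative sketch: perturb the aggregate demand signal with Laplace
    # noise before publishing it to the charging stations.
    import numpy as np

    rng = np.random.default_rng(5)

    def private_signal(demands, sensitivity, epsilon):
        return float(np.sum(demands)) + rng.laplace(0.0, sensitivity / epsilon)

    demands = np.array([3.2, 7.1, 5.5, 6.0])    # kW per station (hypothetical)
    # Sensitivity bounds how much one user's profile can move the aggregate.
    print(private_signal(demands, sensitivity=10.0, epsilon=0.5))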

 

Jianwei Chen; Huadong Ma, “Privacy-Preserving Aggregation for Participatory Sensing with Efficient Group Management,” Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 2757, 2762, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7037225
Abstract: Participatory sensing applications can learn aggregate statistics over personal data to produce useful knowledge about the world. Since personal data may be privacy-sensitive, the aggregator should gain only the desired statistics without learning anything about the personal data. To guarantee differential privacy of personal data under an untrusted aggregator, existing approaches encrypt the noisy personal data and allow the aggregator to obtain a noisy sum. However, these approaches suffer from high computation overhead, a lack of efficient group management to support dynamic joins and leaves, or an inability to handle node failures. In this paper, we propose a novel privacy-preserving aggregation scheme to address these issues in participatory sensing applications. In our scheme, we first design an efficient group management protocol to deal with participants' dynamic joins and leaves. Specifically, when a participant joins or leaves, only three participants need to update their encryption keys. Moreover, we leverage a future-ciphertext buffering mechanism to deal with node failures, combined with the group management protocol to keep communication overhead low. The analysis indicates that our scheme achieves the desired properties, and the performance evaluation demonstrates the scheme's efficiency in terms of communication and computation overhead.
Keywords: cryptographic protocols; data privacy; ciphertext buffering mechanism; group management protocol; noisy personal data; participatory sensing; personal data privacy; privacy-preserving aggregation scheme; untrusted aggregator; Aggregates; Fault tolerance; Fault tolerant systems; Noise; Noise measurement; Privacy; Sensors  (ID#: 15-6099)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7037225&isnumber=7036769

 

Jongho Won; Ma, C.Y.T.; Yau, D.K.Y.; Rao, N.S.V., “Proactive Fault-Tolerant Aggregation Protocol for Privacy-Assured Smart Metering,” INFOCOM, 2014 Proceedings IEEE, vol., no., pp. 2804, 2812, April 27 2014 - May 2 2014. doi:10.1109/INFOCOM.2014.6848230
Abstract: Smart meters are integral to demand response in emerging smart grids, by reporting the electricity consumption of users to serve application needs. But reporting real-time usage information for individual households raises privacy concerns. Existing techniques to guarantee differential privacy (DP) of smart meter users either are not fault tolerant or achieve (possibly partial) fault tolerance at high communication overheads. In this paper, we propose a fault-tolerant protocol for smart metering that can handle general communication failures while ensuring DP with significantly improved efficiency and lower errors compared with the state of the art. Our protocol handles fail-stop faults proactively by using a novel design of future ciphertexts, and distributes trust among the smart meters by sharing secret keys among them. We prove the DP properties of our protocol and analyze its advantages in fault tolerance, accuracy, and communication efficiency relative to competing techniques. We illustrate our analysis by simulations driven by real-world traces of electricity consumption.
Keywords: fault tolerance; smart meters; ciphertexts; communication efficiency; electricity consumption; fail-stop faults; privacy-assured smart metering; proactive fault-tolerant aggregation protocol; secret key sharing; Bandwidth; Fault tolerance; Fault tolerant systems; Noise; Privacy; Protocols; Smart meters (ID#: 15-6100)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848230&isnumber=6847911
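
The trust distribution the abstract mentions is often realized with additive masks (shared secret keys) that cancel when the aggregator sums all reports, so only the total is revealed. The toy sketch below shows that cancellation only; the paper's fault handling, future ciphertexts, and DP noise are omitted, and real schemes work over a finite group rather than plain integers.

    # Illustrative sketch: masks summing to zero let the aggregator learn only
    # the total of the meter readings, not any individual reading.
    import random

    rng = random.Random(11)
    readings = [5, 8, 3, 6]                          # private meter readings
    masks = [rng.randrange(-1000, 1000) for _ in readings[:-1]]
    masks.append(-sum(masks))                        # masks sum to zero

    reports = [r + m for r, m in zip(readings, masks)]   # what meters send
    print(sum(reports) == sum(readings))             # aggregator gets the sum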

 

Pritha, P.V.G.R.; Suresh, N., “Implementation of Hummingbird 1s Cryptographic Algorithm for Low Cost RFID Tags Using LabVIEW,” Information Communication and Embedded Systems (ICICES), 2014 International Conference on, vol., no., pp. 1, 4, 27-28 Feb. 2014. doi:10.1109/ICICES.2014.7034182
Abstract: Hummingbird is a novel ultra-lightweight cryptographic encryption scheme for RFID applications of privacy-preserving identification and mutual authentication protocols, motivated by the well-known Enigma machine. Hummingbird has a precise response time, and its small block size reduces power consumption requirements. The algorithm is shown to resist common attacks such as linear and differential cryptanalysis. The properties of privacy-preserving identification and mutual authentication are investigated together in this algorithm, which is implemented using the LabVIEW software.
Keywords: cryptographic protocols; data privacy; radiofrequency identification; virtual instrumentation; Hummingbird 1s cryptographic algorithm; LabVIEW software; RFID tags; differential cryptanalysis; enigma machine; linear cryptanalysis; mutual authentication protocols; privacy-preserving identification; ultra-light weight cryptographic encryption scheme; Algorithm design and analysis; Authentication; Ciphers; Encryption; Radiofrequency identification; Software; lightweight cryptography scheme; mutual authentication protocols; privacy-preserving identification (ID#: 15-6101)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7034182&isnumber=7033740


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

 

Game Theoretic Security, 2014

 

 
SoS Logo

Game Theoretic Security, 2014


Game theory has historically been the province of social sciences such as economics, political science, and psychology. It has since developed into an umbrella term for the logical side of science that includes both human and non-human actors such as computers. It has been used extensively in wireless networks research to develop an understanding of stable operation points for networks made of autonomous/selfish nodes, where the nodes are considered the players and utility functions are often chosen to correspond to achieved connection rate or similar technical metrics. In security, the game-theoretic framework is used to anticipate and analyze the concurrent interactions of intruders and administrators within the network. Research cited here was presented in 2013 and 2014.


Jinghao Shi, Zhangyu Guan, Chunming Qiao, Tommaso Melodia, Dimitrios Koutsonikolas, Geoffrey Challen. “Crowdsourcing Access Network Spectrum Allocation Using Smartphones.” HotNets-XIII Proceedings of the 13th ACM Workshop on Hot Topics in Networks, October 2014, Pages 17. doi:10.1145/2670518.2673866
Abstract: The hundreds of millions of deployed smartphones provide an unprecedented opportunity to collect data to monitor, debug, and continuously adapt wireless networks to improve performance. In contrast with previous mobile devices, such as laptops, smartphones are always on but mostly idle, making them available to perform measurements that help other nearby active devices make better use of available network resources. We present the design of PocketSniffer, a system delivering wireless measurements from smartphones both to network administrators for monitoring and debugging purposes and to algorithms performing realtime network adaptation. By collecting data from smartphones, PocketSniffer supports novel adaptation algorithms designed around common deployment scenarios involving both cooperative and self-interested clients and networks. We present preliminary results from a prototype and discuss challenges to realizing this vision.
Keywords: Smartphones, crowdsourcing, monitoring (ID#: 15-5880)
URL: http://doi.acm.org/10.1145/2670518.2673866

 

Gilles Barthe, Cédric Fournet, Benjamin Grégoire, Pierre-Yves Strub, Nikhil Swamy, Santiago Zanella-Béguelin. “Probabilistic Relational Verification for Cryptographic Implementations.” POPL '14 Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, January 2014, Pages 193-205. doi:10.1145/2535838.2535847
Abstract: Relational program logics have been used for mechanizing formal proofs of various cryptographic constructions. With an eye towards scaling these successes towards end-to-end security proofs for implementations of distributed systems, we present RF*, a relational extension of F*, a general-purpose higher-order stateful programming language with a verification system based on refinement types. The distinguishing feature of F* is a relational Hoare logic for a higher-order, stateful, probabilistic language. Through careful language design, we adapt the F* typechecker to generate both classic and relational verification conditions, and to automatically discharge their proofs using an SMT solver. Thus, we are able to benefit from the existing features of F*, including its abstraction facilities for modular reasoning about program fragments. We evaluate RF* experimentally by programming a series of cryptographic constructions and protocols, and by verifying their security properties, ranging from information flow to unlinkability, integrity, and privacy. Moreover, we validate the design of RF* by formalizing in Coq a core probabilistic λ-calculus and a relational refinement type system and proving the soundness of the latter against a denotational semantics of the probabilistic λ-calculus.
Keywords: probabilistic programming, program logics (ID#: 15-5881)
URL: http://doi.acm.org/10.1145/2535838.2535847

 

Patrick McDaniel, Trent Jaeger, Thomas F. La Porta, Nicolas Papernot, Robert J. Walls, Alexander Kott, Lisa Marvel, Ananthram Swami, Prasant Mohapatra, Srikanth V. Krishnamurthy, Iulian Neamtiu. “Security and Science of Agility.” MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 13-19. doi:10.1145/2663474.2663476
Abstract: Moving target defenses alter the environment in response to adversarial action and perceived threats. Such defenses are a specific example of a broader class of system management techniques called system agility. In its fullest generality, agility is any reasoned modification to a system or environment in response to a functional, performance, or security need. This paper details a recently launched 10-year Cyber-Security Collaborative Research Alliance effort focused in-part on the development of a new science of system agility, of which moving target defenses are a central theme. In this context, the consortium seeks to address the questions of when, what, and how to employ changes to improve the security of an environment, as well as consider how to measure and weigh the effectiveness of different approaches to agility. We discuss several fundamental challenges in developing and using MTD maneuvers, and outline several broad classes of mechanisms that can be used to implement them. We conclude by detailing specific MTD mechanisms used to adaptively quarantine vulnerable code in Android applications, and consider ways of comparing cost and payout of its use.
Keywords: agility, moving target defenses (ID#: 15-5882)
URL: http://doi.acm.org/10.1145/2663474.2663476

 

Prabhu Natarajan, Trong Nghia Hoang, Yongkang Wong, Kian Hsiang Low, Mohan Kankanhalli. “Scalable Decision-Theoretic Coordination and Control for Real-time Active Multi-Camera Surveillance.” ICDSC '14 Proceedings of the International Conference on Distributed Smart Cameras, November 2014, Article No. 38. doi:10.1145/2659021.2659042
Abstract: This paper presents an overview of our novel decision-theoretic multi-agent approach for controlling and coordinating multiple active cameras in surveillance. In this approach, a surveillance task is modeled as a stochastic optimization problem, where the active cameras are controlled and coordinated to achieve the desired surveillance goal in the presence of uncertainties. We enumerate the practical issues in active camera surveillance and discuss how these issues are addressed in our decision-theoretic approach. We focus on two novel surveillance tasks: maximizing the number of targets observed in active cameras with guaranteed image resolution, and improving the fairness of observation of multiple targets. We give an overview of our novel decision-theoretic frameworks: Markov Decision Process and Partially Observable Markov Decision Process frameworks for coordinating active cameras in uncertain and partially occluded environments.
Keywords: Active camera networks, Multi-camera coordination and control, Smart camera networks, Surveillance and security (ID#: 15-5883)
URL: http://doi.acm.org/10.1145/2659021.2659042

 

Koen Claessen, Michał H. Pałka. “Splittable Pseudorandom Number Generators Using Cryptographic Hashing.” Haskell '13 Proceedings of the 2013 ACM SIGPLAN Symposium on Haskell, September 2013, Pages 47-58. doi:10.1145/2503778.2503784
Abstract: We propose a new splittable pseudorandom number generator (PRNG) based on a cryptographic hash function. Splittable PRNGs, in contrast to linear PRNGs, allow the creation of two (seemingly) independent generators from a given random number generator. Splittable PRNGs are very useful for structuring purely functional programs, as they avoid the need for threading around state. We show that the currently known and used splittable PRNGs are either not efficient enough, have inherent flaws, or lack formal arguments about their randomness. In contrast, our proposed generator can be implemented efficiently, and comes with a formal statement and proofs that quantify how 'random' the generated results are. The provided proofs give strong randomness guarantees under assumptions commonly made in cryptography.
Keywords: haskell, provable security, splittable pseudorandom number generators (ID#: 15-5884)
URL: http://dl.acm.org/citation.cfm?doid=2503778.2503784
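
The idea of deriving independent child generators from a cryptographic hash can be sketched by hashing the seed together with the path of left/right splits taken so far. This is a hedged illustration of the general approach, not the paper's construction or its security argument.

    # Illustrative sketch: a splittable PRNG that derives output by hashing
    # seed || split-path || counter with SHA-256.
    import hashlib

    class SplittablePRNG:
        def __init__(self, seed: bytes, path: bytes = b""):
            self.seed, self.path, self.counter = seed, path, 0

        def split(self):
            # Two children with distinct path extensions; no shared state.
            return (SplittablePRNG(self.seed, self.path + b"L"),
                    SplittablePRNG(self.seed, self.path + b"R"))

        def next_u64(self) -> int:
            data = self.seed + self.path + self.counter.to_bytes(8, "big")
            self.counter += 1
            return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

    g = SplittablePRNG(b"seed")
    left, right = g.split()
    print(left.next_u64(), right.next_u64())    # independent-looking streams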

 

Fatemeh Vafaee. “Learning the Structure of Large-Scale Bayesian Networks Using Genetic Algorithm.” GECCO '14 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, July 2014, Pages 855-862. doi:10.1145/2576768.2598223
Abstract: Bayesian networks are probabilistic graphical models representing conditional dependencies among a set of random variables. Due to their concise representation of the joint probability distribution, Bayesian networks are becoming increasingly popular models for knowledge representation and reasoning in various problem domains. However, learning the structure of Bayesian networks is an NP-hard problem, since the number of structures grows super-exponentially as the number of variables increases. This work therefore proposes a new hybrid structure learning algorithm that uses mutual dependencies to reduce the search space complexity and recruits the genetic algorithm to effectively search over the reduced space of possible structures. The proposed method is best suited for problems with a medium to large number of variables and a limited dataset. It is shown that the proposed method achieves higher model accuracy compared to a series of popular structure learning algorithms, particularly when the data size is small.
Keywords: Bayesian networks, genetic algorithms, structure learning (ID#: 15-5885)
URL: http://doi.acm.org/10.1145/2576768.2598223

 

Yitao Duan. “Distributed Key Generation for Encrypted Deduplication: Achieving the Strongest Privacy.” CCSW '14 Proceedings of the 6th edition of the ACM Workshop on Cloud Computing Security, November 2014, Pages 57-68. doi:10.1145/2664168.2664169
Abstract: Large-scale cloud storage systems often attempt to achieve two seemingly conflicting goals: (1) the systems need to reduce the copies of redundant data to save space, a process called deduplication; and (2) users demand encryption of their data to ensure privacy. Conventional encryption makes deduplication on ciphertexts ineffective, as it destroys data redundancy. A line of work, originating from Convergent Encryption [27] and evolving into Message Locked Encryption [13] and the latest DupLESS architecture [12], strives to solve this problem. DupLESS relies on a key server to help the clients generate encryption keys that result in convergent ciphertexts. In this paper, we first introduce a new security notion appropriate for the setting of deduplication and show that it is strictly stronger than all relevant notions. We then provide a rigorous proof of security against this notion, in the random oracle model, for the DupLESS architecture, which was lacking in the original paper. Our proof shows that using an additional secret, other than the data itself, for generating encryption keys achieves the best possible security under the current deduplication paradigm. We also introduce a distributed protocol that eliminates the need for the key server. This not only provides better protection but also allows less managed systems such as P2P systems to enjoy the high security level. Implementation and evaluation show that the scheme is both robust and practical.
Keywords: cloud computing security, deduplication, deterministic encryption (ID#: 15-5886)
URL: http://doi.acm.org/10.1145/2664168.2664169
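
The baseline these schemes build on, convergent encryption, derives the key from the message itself, so identical plaintexts produce identical ciphertexts and can be deduplicated. The sketch below illustrates only that property; the XOR keystream is a stand-in rather than a secure cipher, and DupLESS's key server and the paper's distributed protocol are not shown.

    # Illustrative sketch of convergent encryption: key = H(message), so equal
    # files yield equal ciphertexts. NOT a secure cipher; for illustration only.
    import hashlib

    def keystream(key: bytes, n: int) -> bytes:
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def convergent_encrypt(message: bytes):
        key = hashlib.sha256(message).digest()       # message-derived key
        ct = bytes(m ^ k for m, k in zip(message, keystream(key, len(message))))
        return key, ct

    _, c1 = convergent_encrypt(b"same file")
    _, c2 = convergent_encrypt(b"same file")
    print(c1 == c2)      # True: duplicates are detectable on ciphertexts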

 

Javier Cámara, Gabriel A. Moreno, David Garlan. “Stochastic Game Analysis and Latency Awareness for Proactive Self-Adaptation.” SEAMS 2014 Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, June 2014, Pages 155-164. doi:10.1145/2593929.2593933
Abstract: Although different approaches to decision-making in self-adaptive systems have shown their effectiveness in the past by factoring in predictions about the system and its environment (e.g., resource availability), no proposal considers the latency associated with the execution of tactics upon the target system. However, different adaptation tactics can take different amounts of time until their effects can be observed. In reactive adaptation, ignoring adaptation tactic latency can lead to suboptimal adaptation decisions (e.g., activating a server that takes more time to boot than the transient spike in traffic that triggered its activation). In proactive adaptation, taking adaptation latency into account is necessary to get the system into the desired state to deal with an upcoming situation. In this paper, we introduce a formal analysis technique based on model checking of stochastic multiplayer games (SMGs) that enables us to quantify the potential benefits of employing different types of algorithms for self-adaptation. In particular, we apply this technique to show the potential benefit of considering adaptation tactic latency in proactive adaptation algorithms. Our results show that factoring in tactic latency in decision making improves the outcome of adaptation. We also present an algorithm for proactive adaptation that considers tactic latency, and show that it achieves higher utility than an algorithm that is optimal under the assumption of no latency.
Keywords: Latency, Proactive adaptation, Stochastic multiplayer games (ID#: 15-5887)
URL: http://doi.acm.org/10.1145/2593929.2593933

 

Chunyao Song, Tingjian Ge. “Aroma: A New Data Protection Method with Differential Privacy and Accurate Query Answering.” CIKM '14 Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, November 2014, Pages 1569-1578. doi:10.1145/2661829.2661886
Abstract: We propose a new local data perturbation method called Aroma. We first show that Aroma is sound in its privacy protection. For that, we devise a realistic privacy game, called the exposure test. We prove that the αβ algorithm, a previously proposed method that is most closely related to Aroma, performs poorly under the exposure test and fails to provide sufficient privacy in practice. Moreover, any data protection method that satisfies ε-differential privacy will succeed in the test. By proving that Aroma satisfies ε-differential privacy, we show that Aroma offers strong privacy protection. We then demonstrate the utility of Aroma by proving that its estimator has significantly smaller errors than the previous state-of-the-art algorithms such as αβ, AM, and FRAPP. We carry out a systematic empirical study using real-world data to evaluate Aroma, which shows its clear advantages over previous methods.
Keywords: data perturbation, differential privacy, query (ID#: 15-5888)
URL: http://doi.acm.org/10.1145/2661829.2661886
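
Editor's note: For readers unfamiliar with the guarantee being proved here, the Laplace mechanism is the canonical way a perturbation scheme satisfies ε-differential privacy. The Python sketch below is a generic illustration of that standard mechanism, not the Aroma algorithm itself.

import numpy as np

def laplace_mechanism(true_answer: float, sensitivity: float, epsilon: float) -> float:
    # Adding noise drawn from Laplace(0, sensitivity / epsilon)
    # makes the released answer epsilon-differentially private.
    return true_answer + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query (sensitivity 1) released with epsilon = 0.5.
noisy_count = laplace_mechanism(true_answer=42.0, sensitivity=1.0, epsilon=0.5)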

 

Florian Hahn, Florian Kerschbaum. “Searchable Encryption with Secure and Efficient Updates.” CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 310-320.  doi:10.1145/2660267.2660297
Abstract: Searchable (symmetric) encryption allows encryption while still enabling search for keywords. Its immediate application is cloud storage, where a client outsources its files while the (cloud) service provider should be able to search and selectively retrieve them. Searchable encryption is an active area of research and a number of schemes with different efficiency and security characteristics have been proposed in the literature. Any scheme for practical adoption should be efficient -- i.e. have sub-linear search time -- dynamic -- i.e. allow updates -- and semantically secure to the greatest extent possible. Unfortunately, efficient, dynamic searchable encryption schemes suffer from various drawbacks. Either they deteriorate from semantic security to the security of deterministic encryption under updates, they require storing information on the client and for deleted files and keywords, or they have very large index sizes. All of this is a problem, since we can expect the majority of data to be later added or changed. Since these schemes are also less efficient than deterministic encryption, they are currently an unfavorable choice for encryption in the cloud. In this paper we present the first searchable encryption scheme whose updates leak no more information than the access pattern, that still has asymptotically optimal search time, linear, very small and asymptotically optimal index size and can be implemented without storage on the client (except the key). Our construction is based on the novel idea of learning the index for efficient access from the access pattern itself. Furthermore, we implement our system and show that it is highly efficient for cloud storage.
Keywords: dynamic searchable encryption, searchable encryption, secure index, update (ID#: 15-5889)
URL: http://doi.acm.org/10.1145/2660267.2660297
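
Editor's note: A useful mental model for the secure-index setting is a deterministic per-keyword trapdoor: the client derives a token from its secret key and a keyword, and the server matches tokens without ever seeing plaintext keywords. The sketch below shows only that baseline flavour (deterministic tokens leak the search pattern; the paper's contribution is an update mechanism that leaks no more than the access pattern).

import hmac
import hashlib

def search_token(secret_key: bytes, keyword: str) -> bytes:
    # Deterministic trapdoor: the server can index and look up
    # documents under this value without learning the keyword.
    return hmac.new(secret_key, keyword.encode("utf-8"), hashlib.sha256).digest()

# The server stores {search_token(k, w): [document ids]} and answers
# a query with a plain dictionary lookup on the submitted token.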

 

Itay Berman, Iftach Haitner, Aris Tentes. “Coin Flipping of Any Constant Bias Implies One-Way Functions.” STOC '14 Proceedings of the 46th Annual ACM Symposium on Theory of Computing, May 2014, Pages 398-407. doi:10.1145/2591796.2591845
Abstract: We show that the existence of a coin-flipping protocol safe against any non-trivial constant bias (e.g., .499) implies the existence of one-way functions. This improves upon a recent result of Haitner and Omri [FOCS '11], who proved this implication for protocols with bias (√2 - 1)/2 - o(1) ≈ .207. Unlike the result of Haitner and Omri, our result also holds for weak coin-flipping protocols.
Keywords: coin-flipping protocols, minimal hardness assumptions, one-way functions (ID#: 15-5890)
URL: http://doi.acm.org/10.1145/2591796.2591845

 

Hossain Shahriar, Hisham M. Haddad. “Content Provider Leakage Vulnerability Detection in Android Applications.” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 359. doi:10.1145/2659651.2659716
Abstract: Although much research effort has focused on Android malware detection, very little attention has been given to implementation-level vulnerabilities. This paper focuses on the Content Provider Leakage vulnerability, which can be exploited by viewing or editing sensitive data through malware. We present a new technique for detecting content provider leakage vulnerability. We propose Kullback-Leibler Divergence (KLD) as a measure to detect the content provider leakage vulnerability. In particular, our contribution includes the development of a set of elements and mapping the elements to programming principles for secure implementation of content provider classes. These elements are captured from the implementation to form the initial population set. The population set is used to measure the divergence of a newly implemented application with a content provider to identify potential vulnerabilities. We also apply a back-off smoothing technique to compute the KLD value. We implement a Java prototype tool to evaluate a set of content provider implementations to show the effectiveness of the proposed approach. The initial results show that by choosing an appropriate threshold level, KLD is an effective method for detecting content provider leakage vulnerability.
Keywords: Android Application, Content Provider Vulnerability, Kullback-Leibler Divergence, SQL Injection, Secure Programming (ID#: 15-5891)
URL: http://doi.acm.org/10.1145/2659651.2659716
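
Editor's note: The detection idea is a distance between token distributions: a reference population of securely implemented content providers defines a distribution P, a new implementation defines Q, and a large KL divergence flags a likely leak. In the minimal sketch below, additive smoothing stands in for the paper's back-off scheme, and tokenisation and thresholds are left to the analyst.

import math
from collections import Counter

def kld_with_smoothing(reference_tokens, sample_tokens, alpha=0.01):
    # D(P || Q) over the joint vocabulary; smoothing gives unseen
    # tokens non-zero probability so the divergence stays finite.
    vocab = set(reference_tokens) | set(sample_tokens)
    p_counts, q_counts = Counter(reference_tokens), Counter(sample_tokens)
    p_total = len(reference_tokens) + alpha * len(vocab)
    q_total = len(sample_tokens) + alpha * len(vocab)
    kld = 0.0
    for t in vocab:
        p = (p_counts[t] + alpha) / p_total
        q = (q_counts[t] + alpha) / q_total
        kld += p * math.log(p / q)
    return kld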

 

Yunhua He, Limin Sun, Zhi Li, Hong Li, Xiuzhen Cheng. “An Optimal Privacy-Preserving Mechanism for Crowdsourced Traffic Monitoring.” FOMC '14 Proceedings of the 10th ACM International Workshop on Foundations of Mobile Computing, August 2014, Pages 11-18. doi:10.1145/2634274.2634275
Abstract: Crowdsourced traffic monitoring employs ubiquitous smartphone users to upload their GPS samples for traffic estimation and prediction. The accuracy of traffic estimation and prediction depends on the number of uploaded samples; but more samples from a user increase the probability of the user being tracked or identified, which raises a significant privacy concern. In this paper, we propose a privacy-preserving upload mechanism that can meet users' diverse privacy requirements while guaranteeing the traffic estimation quality. In this mechanism, the user upload decision process is formalized as a mutual objective optimization problem (user location privacy and traffic service quality) based on an incomplete information game model, in which each player can autonomously decide whether to upload or not to balance the live traffic service quality and its own location privacy for utility maximization. We theoretically prove the incentive compatibility of our proposed mechanism, which can motivate users to follow the game rules. The effectiveness of the proposed mechanism is verified by a simulation study based on real world traffic data.
Keywords: crowdsourcing, game theory, location privacy (ID#: 15-5892)
URL: http://doi.acm.org/10.1145/2634274.2634275

 

Kevin M. Carter, James F. Riordan, Hamed Okhravi. “A Game Theoretic Approach to Strategy Determination for Dynamic Platform Defenses.” MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 21-30. doi:10.1145/2663474.2663478
Abstract: Moving target defenses based on dynamic platforms have been proposed as a way to make systems more resistant to attacks by changing the properties of the deployed platforms. Unfortunately, little work has been done on discerning effective strategies for the utilization of these systems, instead relying on two generally false premises: simple randomization leads to diversity and platforms are independent. In this paper, we study the strategic considerations of deploying a dynamic platform system by specifying a relevant threat model and applying game theory and statistical analysis to discover optimal usage strategies. We show that preferential selection of platforms based on optimizing platform diversity approaches the statistically optimal solution and significantly outperforms simple randomization strategies. Counter to popular belief, this deterministic strategy leverages fewer platforms than may be generally available, which increases system security.
Keywords: game theory, moving target, system diversity (ID#: 15-5893)
URL: http://doi.acm.org/10.1145/2663474.2663478
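
Editor's note: The paper's headline finding, that preferring the platform most unlike those recently deployed beats uniform randomisation, can be illustrated with a small greedy selector. This is a sketch only: the paper derives its strategy game-theoretically, and the Jaccard distance over invented attribute sets below is a stand-in for a real platform-similarity measure.

def jaccard_distance(a: set, b: set) -> float:
    # 1 - |intersection| / |union| over platform attribute sets
    # (OS family, instruction set, C library, ...).
    return 1.0 - len(a & b) / len(a | b)

def next_platform(platforms: dict, history: list) -> str:
    # Pick the platform maximising its minimum distance to the
    # recently deployed ones, instead of sampling uniformly.
    def diversity(name):
        if not history:
            return 1.0
        return min(jaccard_distance(platforms[name], platforms[h]) for h in history)
    return max(platforms, key=diversity)

platforms = {
    "linux-x86": {"linux", "x86", "glibc"},
    "linux-arm": {"linux", "arm", "glibc"},
    "bsd-x86": {"bsd", "x86", "bsd-libc"},
}
print(next_platform(platforms, history=["linux-x86"]))  # -> bsd-x86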

 

Eli A. Meirom, Shie Mannor, Ariel Orda. “Network Formation Games with Heterogeneous Players and the Internet Structure.” EC '14 Proceedings of the Fifteenth ACM Conference on Economics and Computation, June 2014, Pages 735-752. doi:10.1145/2600057.2602862
Abstract: We study the structure and evolution of the Internet's Autonomous System (AS) interconnection topology as a game with heterogeneous players. In this network formation game, the utility of a player depends on the network structure, e.g., the distances between nodes and the cost of links. We analyze static properties of the game, such as the prices of anarchy and stability and provide explicit results concerning the generated topologies. Furthermore, we discuss dynamic aspects, demonstrating linear convergence rate and showing that only a restricted subset of equilibria is feasible under realistic dynamics. We also consider the case where utility (or monetary) transfers are allowed between the players.
Keywords: dynamic network formation games, game theory, inter-AS topology, AS heterogeneity, internet evolution (ID#: 15-5894)
URL: http://doi.acm.org/10.1145/2600057.2602862

 

Jianye Hao, Eunsuk Kang, Daniel Jackson, Jun Sun. “Adaptive Defending Strategy for Smart Grid Attacks.” SEGS '14 Proceedings of the 2nd Workshop on Smart Energy Grid Security, November 2014, Pages 23-30. doi:10.1145/2667190.2667195
Abstract: One active area of research in smart grid security focuses on applying game-theoretic frameworks to analyze interactions between a system and an attacker and formulate effective defense strategies. In previous work [7, 9], a Nash equilibrium (NE) solution is chosen as the optimal defense strategy, which implies that the attacker has complete knowledge of the system and would also employ the corresponding NE strategy. In practice, however, the attacker may have limited knowledge and resources, and thus employ an attack which is less than optimal, allowing the defender to devise more efficient strategies. We propose a novel approach called an adaptive Markov Strategy (AMS) for defending a system against attackers with unknown, dynamic behaviors. The algorithm for computing an AMS is theoretically guaranteed to converge to a best response strategy against any stationary attacker, and also converge to a Nash equilibrium if the attacker is sufficiently intelligent to employ the AMS to launch the attack. To evaluate the effectiveness of an AMS in smart grid systems, we study a class of data integrity attacks that involve injecting false voltage information into a substation, with the goal of causing load shedding (and potentially a blackout). Our preliminary results show that the amount of load shedding costs can be significantly reduced by employing an AMS over a NE strategy.
Keywords: adaptive learning, data injection, markov games, smart grid security (ID#: 15-5895)
URL: http://doi.acm.org/10.1145/2667190.2667195

 

Euijin Choo, Jianchun Jiang, Ting Yu. “COMPARS: Toward an Empirical Approach for Comparing the Resilience of Reputation Systems.” CODASPY '14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 87-98. doi:10.1145/2557547.2557565
Abstract: Reputation is a primary mechanism for trust management in decentralized systems. Many reputation-based trust functions have been proposed in the literature. However, picking the right trust function for a given decentralized system is a non-trivial task. One has to consider and balance a variety of factors, including computation and communication costs, scalability and resilience to manipulations by attackers. Although the former two are relatively easy to evaluate, the evaluation of resilience of trust functions is challenging. Most existing work bases evaluation on static attack models, which is unrealistic as it fails to reflect the adaptive nature of adversaries (who are often real human users rather than simple computing agents). In this paper, we highlight the importance of the modeling of adaptive attackers when evaluating reputation-based trust functions, and propose an adaptive framework -- called COMPARS -- for the evaluation of resilience of reputation systems. Given the complexity of reputation systems, it is often difficult, if not impossible, to exactly derive the optimal strategy of an attacker. Therefore, COMPARS takes a practical approach that attempts to capture the reasoning process of an attacker as it decides its next action in a reputation system. Specifically, given a trust function and an attack goal, COMPARS generates an attack tree to estimate the possible outcomes of an attacker's action sequences up to certain points in the future. Through attack trees, COMPARS simulates the optimal attack strategy for a specific reputation function f, which will be used to evaluate the resilience of f. By doing so, COMPARS allows one to conduct a fair and consistent comparison of different reputation functions.
Keywords: evaluation framework, reputation system, resilience, trust functions (ID#: 15-5896)
URL: http://doi.acm.org/10.1145/2557547.2557565

 

Ryan M. Rogers, Aaron Roth. “Asymptotically Truthful Equilibrium Selection in Large Congestion Games.” EC '14 Proceedings of the Fifteenth ACM Conference on Economics and Computation, June 2014, Pages 771-782. doi:10.1145/2600057.2602856
Abstract: Studying games in the complete information model makes them analytically tractable. However, large n-player interactions are more realistically modeled as games of incomplete information, where players may know little to nothing about the types of other players. Unfortunately, games in incomplete information settings lose many of the nice properties of complete information games: the quality of equilibria can become worse, the equilibria lose their ex-post properties, and coordinating on an equilibrium becomes even more difficult. Because of these problems, we would like to study games of incomplete information, but still implement equilibria of the complete information game induced by the (unknown) realized player types. This problem was recently studied by Kearns et al. [Kearns et al. 2014], and solved in large games by means of introducing a weak mediator: their mediator took as input reported types of players, and output suggested actions which formed a correlated equilibrium of the underlying game. Players had the option to play independently of the mediator, or ignore its suggestions, but crucially, if they decided to opt-in to the mediator, they did not have the power to lie about their type. In this paper, we rectify this deficiency in the setting of large congestion games. We give, in a sense, the weakest possible mediator: it cannot enforce participation, verify types, or enforce its suggestions. Moreover, our mediator implements a Nash equilibrium of the complete information game. We show that it is an (asymptotic) ex-post equilibrium of the incomplete information game for all players to use the mediator honestly, and that when they do so, they end up playing an approximate Nash equilibrium of the induced complete information game. In particular, truthful use of the mediator is a Bayes-Nash equilibrium in any Bayesian game for any prior.
Keywords: algorithms, differential privacy, mechanism design (ID#: 15-5897)
URL: http://doi.acm.org/10.1145/2600057.2602856

 

Minzhe Guo, Prabir Bhattacharya. “Diverse Virtual Replicas for Improving Intrusion Tolerance in Cloud.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 41-44. doi:10.1145/2602087.2602116
Abstract: Intrusion tolerance is important for services in cloud to continue functioning while under attack. Byzantine fault-tolerant replication is considered a fundamental component of intrusion tolerant systems. However, the monoculture of replicas can render the theoretical properties of Byzantine fault-tolerant system ineffective, even when proactive recovery techniques are employed. This paper exploits the design diversity available from off-the-shelf operating system products and studies how to diversify the configurations of virtual replicas for improving the resilience of the service in the presence of attacks. A game-theoretic model is proposed for studying the optimal diversification strategy for the system defender and an efficient algorithm is designed to approximate the optimal defense strategies in large games.
Keywords: diversity, intrusion tolerance, virtual replica (ID#: 15-5898)
URL: http://doi.acm.org/10.1145/2602087.2602116

 

Gilles Barthe, François Dupressoir, Pierre-Alain Fouque, Benjamin Grégoire, Jean-Christophe Zapalowicz. “Synthesis of Fault Attacks on Cryptographic Implementations.” CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1016-1027. doi:10.1145/2660267.2660304
Abstract: Fault attacks are attacks in which an adversary with physical access to a cryptographic device, say a smartcard, tampers with the execution of an algorithm to retrieve secret material. Since the seminal Bellcore attack on modular exponentiation, there has been extensive work to discover new fault attacks against cryptographic schemes and develop countermeasures against such attacks. Originally focused on high-level algorithmic descriptions, these efforts increasingly focus on concrete implementations. While lowering the abstraction level leads to new fault attacks, it also makes their discovery significantly more challenging. In order to face this trend, it is therefore desirable to develop principled, tool-supported approaches that allow a systematic analysis of the security of cryptographic implementations against fault attacks. We propose, implement, and evaluate a new approach for finding fault attacks against cryptographic implementations. Our approach is based on identifying implementation-independent mathematical properties, or fault conditions. We choose fault conditions so that it is possible to recover secret data purely by computing on sufficiently many data points that satisfy them. Fault conditions capture the essence of a large number of attacks from the literature, including lattice-based attacks on RSA. Moreover, they provide a basis for discovering automatically new attacks: using fault conditions, we specify the problem of finding faulted implementations as a program synthesis problem. Using a specialized form of program synthesis, we discover multiple faulted attacks on RSA and ECDSA. Several of the attacks found by our tool are new, and of independent interest.
Keywords: automated proofs, fault attacks, program synthesis, program verification (ID#: 15-5899)
URL: http://doi.acm.org/10.1145/2660267.2660304

 

Christian Kroer, Tuomas Sandholm. “Extensive-Form Game Abstraction With Bounds.” EC '14 Proceedings of the Fifteenth ACM Conference on Economics and Computation, June 2014, Pages 621-638. doi:10.1145/2600057.2602905
Abstract: Abstraction has emerged as a key component in solving extensive-form games of incomplete information. However, lossless abstractions are typically too large to solve, so lossy abstraction is needed. All prior lossy abstraction algorithms for extensive-form games either 1) had no bounds on solution quality or 2) depended on specific equilibrium computation approaches, limited forms of abstraction, and only decreased the number of information sets rather than nodes in the game tree. We introduce a theoretical framework that can be used to give bounds on solution quality for any perfect-recall extensive-form game. The framework uses a new notion for mapping abstract strategies to the original game, and it leverages a new equilibrium refinement for analysis. Using this framework, we develop the first general lossy extensive-form game abstraction method with bounds. Experiments show that it finds a lossless abstraction when one is available and lossy abstractions when smaller abstractions are desired. While our framework can be used for lossy abstraction, it is also a powerful tool for lossless abstraction if we set the bound to zero. Prior abstraction algorithms typically operate level by level in the game tree. We introduce the extensive-form game tree isomorphism and action subset selection problems, both important problems for computing abstractions on a level-by-level basis. We show that the former is graph isomorphism complete, and the latter NP-complete. We also prove that level-by-level abstraction can be too myopic and thus fail to find even obvious lossless abstractions.
Keywords: abstraction, equilibrium finding, extensive-form game (ID#: 15-5900)
URL: http://doi.acm.org/10.1145/2600057.2602905

 

Carlos Barreto, Alvaro A. Cárdenas, Nicanor Quijano, Eduardo Mojica-Nava. “CPS: Market Analysis of Attacks Against Demand Response in the Smart Grid.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 136-145. doi:10.1145/2664243.2664284
Abstract: Demand response systems assume an electricity retail-market with strategic electricity consuming agents. The goal in these systems is to design load shaping mechanisms to achieve efficiency of resources and customer satisfaction. Recent research efforts have studied the impact of integrity attacks in simplified versions of the demand response problem, where neither the load consuming agents nor the adversary are strategic. In this paper, we study the impact of integrity attacks considering strategic players (a social planner or a consumer) and a strategic attacker. We identify two types of attackers: (1) a malicious attacker who wants to damage the equipment in the power grid by producing sudden overloads, and (2) a selfish attacker that wants to defraud the system by compromising and then manipulating control (load shaping) signals. We then explore the resiliency of two different demand response systems to these fraudsters and malicious attackers. Our results provide guidelines for system operators deciding which type of demand-response system they want to implement, how to secure them, and directions for detecting these attacks.
Keywords: (not provided) (ID#: 15-5901)
URL: http://doi.acm.org/10.1145/2664243.2664284

 

Hongxin Hu, Gail-Joon Ahn, Ziming Zhao, Dejun Yang. “Game Theoretic Analysis of Multiparty Access Control in Online Social Networks.” SACMAT '14 Proceedings of the 19th ACM Symposium on Access Control Models and Technologies, June 2014, Pages 93-102.  doi:10.1145/2613087.2613097
Abstract: Existing online social networks (OSNs) only allow a single user to restrict access to her/his data but cannot provide any mechanism to enforce privacy concerns over data associated with multiple users. This situation leaves privacy conflicts largely unresolved and leads to the potential disclosure of users' sensitive information. To address such an issue, a MultiParty Access Control (MPAC) model was recently proposed, including a systematic approach to identify and resolve privacy conflicts for collaborative data sharing in OSNs. In this paper, we take another step to further study the problem of analyzing the strategic behavior of rational controllers in multiparty access control, where each controller aims to maximize her/his own benefit by adjusting her/his privacy setting in collaborative data sharing in OSNs. We first formulate this problem as a multiparty control game and show the existence of unique Nash Equilibrium (NE) which is critical because at an NE, no controller has any incentive to change her/his privacy setting. We then present algorithms to compute the NE and prove that the system can converge to the NE in only a few iterations. A numerical analysis is also provided for different scenarios that illustrate the interplay of controllers in the multiparty control game. In addition, we conduct user studies of the multiparty control game to explore the gap between game theoretic approaches and real human behaviors.
Keywords: game theory, multiparty access control, social networks (ID#: 15-5902)
URL: http://doi.acm.org/10.1145/2613087.2613097
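
Editor's note: The convergence claim can be made concrete with generic best-response dynamics: controllers take turns switching to the privacy setting that maximises their own utility, and any fixed point is a Nash equilibrium. The sketch below is generic; the utility function and strategy sets are placeholders, and the paper's own algorithms and convergence proof are specific to its multiparty control game.

def best_response_dynamics(controllers, strategies, utility, rounds=20):
    # strategies: controller -> list of available privacy settings;
    # utility(controller, profile) -> that controller's payoff.
    profile = {c: strategies[c][0] for c in controllers}
    for _ in range(rounds):
        changed = False
        for c in controllers:
            best = max(strategies[c], key=lambda s: utility(c, {**profile, c: s}))
            if best != profile[c]:
                profile[c] = best
                changed = True
        if not changed:  # fixed point reached: a Nash equilibrium
            break
    return profile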

 

Florian Kerschbaum, Axel Schroepfer. “Optimal Average-Complexity Ideal-Security Order-Preserving Encryption.” CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 275-286. doi:10.1145/2660267.2660277
Abstract: Order-preserving encryption enables performing many classes of queries -- including range queries -- on encrypted databases. Popa et al. recently presented an ideal-secure order-preserving encryption (or encoding) scheme, but their cost of insertions (encryption) is very high. In this paper we present an also ideal-secure, but significantly more efficient order-preserving encryption scheme. Our scheme is inspired by Reed's referenced work on the average height of random binary search trees. We show that our scheme improves the average communication complexity from O(n log n) to O(n) under uniform distribution. Our scheme also integrates efficiently with adjustable encryption as used in CryptDB. In our experiments for database inserts we achieve a performance increase of up to 81% in LANs and 95% in WANs.
Keywords: adjustable encryption, efficiency, ideal security, in-memory column database, indistinguishability, order-preserving encryption (ID#: 15-5903)
URL: http://doi.acm.org/10.1145/2660267.2660277
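
Editor's note: The flavour of scheme being improved here can be seen in a toy mutable order-preserving encoding: each new plaintext receives the midpoint of the gap between its neighbours' codes, so code order mirrors plaintext order, and an exhausted gap forces a costly re-encoding ("mutation"). The deliberately naive Python sketch below raises an error where a real scheme would rebalance via a client-server protocol.

class OrderPreservingIndex:
    def __init__(self, lo: int = 0, hi: int = 2**63):
        self.codes = {}  # plaintext value -> order-preserving code
        self.lo, self.hi = lo, hi

    def encode(self, value):
        if value in self.codes:
            return self.codes[value]
        # Midpoint of the gap between the nearest smaller and larger
        # codes keeps code order identical to plaintext order.
        lower = max((c for v, c in self.codes.items() if v < value), default=self.lo)
        upper = min((c for v, c in self.codes.items() if v > value), default=self.hi)
        if upper - lower < 2:
            raise RuntimeError("gap exhausted: re-encode (mutate) the index")
        self.codes[value] = (lower + upper) // 2
        return self.codes[value]

idx = OrderPreservingIndex()
assert idx.encode(10) < idx.encode(20)  # codes preserve plaintext order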

 

Rui Zhuang, Scott A. DeLoach, Xinming Ou. “Towards a Theory of Moving Target Defense.” MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 31-40. doi:10.1145/2663474.2663479
Abstract: The static nature of cyber systems gives attackers the advantage of time. Fortunately, a new approach, called the Moving Target Defense (MTD) has emerged as a potential solution to this problem. While promising, there is currently little research to show that MTD systems can work effectively in real systems. In fact, there is no standard definition of what an MTD is, what is meant by attack surface, or metrics to define the effectiveness of such systems. In this paper, we propose an initial theory that will begin to answer some of those questions. The paper defines the key concepts required to formally talk about MTD systems and their basic properties. It also discusses three essential problems of MTD systems, which include the MTD Problem (or how to select the next system configuration), the Adaptation Selection Problem, and the Timing Problem. We then formalize the MTD Entropy Hypothesis, which states that the greater the entropy of the system's configuration, the more effective the MTD system.
Keywords: computer security, moving target defense, network security, science of security (ID#: 15-5904)
URL: http://doi.acm.org/10.1145/2663474.2663479
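
Editor's note: The MTD Entropy Hypothesis suggests a directly computable quantity: treat the log of deployed configurations as samples and measure the Shannon entropy of their empirical distribution. The estimator below is an illustrative assumption; the paper states the hypothesis rather than prescribing a particular estimator.

import math
from collections import Counter

def configuration_entropy(deployed_configs) -> float:
    # Shannon entropy, in bits, of the empirical configuration
    # distribution; higher entropy means a harder-to-predict target.
    counts = Counter(deployed_configs)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

print(configuration_entropy(["A", "B", "C", "D"]))  # 2.0 bits
print(configuration_entropy(["A", "A", "A", "A"]))  # -0.0, i.e. zero bits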

 

Rattikorn Hewett, Sudeeptha Rudrapattana, Phongphun Kijsanayothin. “Cyber-Security Analysis of Smart Grid SCADA Systems with Game Models.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 109-112. doi:10.1145/2602087.2602089
Abstract: Smart grid SCADA (Supervisory Control and Data Acquisition) systems are key drivers to monitor, control and manage critical processes for the delivery and transmission of electricity in smart grids. Security attacks on such systems can have devastating effects on the functionality of the smart grids, leading to electrical blackouts, economic losses or even fatalities. This paper presents an analytical game-theoretic approach to analyzing the security of SCADA smart grids by constructing a model of a sequential, nonzero-sum, two-player game between an attacker and a security administrator. The distinction of our work is the proposed development of game payoff formulae. A decision analysis can then be obtained by applying the backward induction technique on the game tree derived from the proposed payoffs. The paper describes the development of the game payoffs and illustrates its analysis on a real-world scenario of Sybil and node-compromise attacks at the sensor level of smart grid SCADA systems.
Keywords: SCADA, SCADA security, game theory, payoffs, sequential games, utility function (ID#: 15-5905)
URL: http://doi.acm.org/10.1145/2602087.2602089  
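
Editor's note: Backward induction itself is a short recursion: evaluate the leaves, then let the player who moves at each internal node pick the child that maximises that player's own payoff. The game tree and payoff numbers below are invented for illustration and are not the paper's payoff formulae.

def backward_induction(node):
    # node is ("leaf", (attacker_payoff, defender_payoff)) or
    # ("move", mover, [children]), where mover is 0 (attacker)
    # or 1 (security administrator).
    if node[0] == "leaf":
        return node[1]
    _, mover, children = node
    outcomes = [backward_induction(child) for child in children]
    return max(outcomes, key=lambda payoff: payoff[mover])

# Attacker chooses Sybil vs. node compromise; the administrator
# then chooses how to respond. Payoffs are illustrative.
game = ("move", 0, [
    ("move", 1, [("leaf", (3, -3)), ("leaf", (-1, 1))]),  # Sybil attack
    ("move", 1, [("leaf", (2, -2)), ("leaf", (0, 0))]),   # node compromise
])
print(backward_induction(game))  # -> (0, 0)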

 

Umesh Vazirani, Thomas Vidick. “Robust Device Independent Quantum Key Distribution.” ITCS '14 Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, January 2014, Pages 35-36. doi:10.1145/2554797.2554802
Abstract: Quantum cryptography is based on the discovery that the laws of quantum mechanics allow levels of security that are impossible to replicate in a classical world. Can such levels of security be guaranteed even when the quantum devices on which the protocol relies are untrusted? This fundamental question in quantum cryptography dates back to the early nineties when the challenge of achieving device independent quantum key distribution, or DIQKD, was first formulated [9]. We answer this challenge affirmatively by exhibiting a robust protocol for DIQKD and rigorously proving its security. The protocol achieves a linear key rate while tolerating a constant noise rate in the devices. The security proof assumes only that the devices can be modeled by the laws of quantum mechanics and are spatially isolated from each other and any adversary's laboratory. In particular, we emphasize that the devices may have quantum memory. All previous proofs of security relied either on the use of many independent pairs of devices, or on the absence of noise. To prove security for a DIQKD protocol it is necessary to establish at least that the generated key is truly random even in the presence of a quantum adversary. This is already a challenge, one that was recently resolved. DIQKD is substantially harder, since now the protocol must also guarantee that the key is completely secret from the quantum adversary's point of view, and the entire protocol is robust against noise; this in spite of the substantial amounts of classical information leaked to the adversary throughout the protocol, as part of the error estimation and information reconciliation procedures. Our proof of security builds upon a number of techniques, including randomness extractors that are secure against quantum storage as well as ideas originating in the coding strategy used in the proof of the Holevo-Schumacher-Westmoreland theorem which we apply to bound correlations across multiple rounds in a way not unrelated to information-theoretic proofs of the parallel repetition property for multiplayer games. Our main result can be understood as a new bound on monogamy of entanglement in the type of complex scenario that arises in a key distribution protocol. Precise statements of our results and detailed proofs can be found at arXiv:1210.1810.
Keywords: certified randomness, chsh game, device-independence, monogamy, quantum key distribution (ID#: 15-5906)
URL: http://doi.acm.org/10.1145/2554797.2554802

 

George Theodorakopoulos, Reza Shokri, Carmela Troncoso, Jean-Pierre Hubaux, Jean-Yves Le Boudec. “Prolonging the Hide-and-Seek Game: Optimal Trajectory Privacy for Location-Based Services.” WPES '14 Proceedings of the 13th Workshop on Privacy in the Electronic Society, November 2014, Pages 73-82. doi:10.1145/2665943.2665946
Abstract: Human mobility is highly predictable. Individuals tend to only visit a few locations with high frequency, and to move among them in a certain sequence reflecting their habits and daily routine. This predictability has to be taken into account in the design of location privacy preserving mechanisms (LPPMs) in order to effectively protect users when they expose their whereabouts to location-based services (LBSs) continuously. In this paper, we describe a method for creating LPPMs tailored to a user's mobility profile, taking into account her privacy and quality-of-service requirements. By construction, our LPPMs take into account the sequential correlation across the user's exposed locations, providing the maximum possible trajectory privacy, i.e., privacy for the user's past and present locations and expected future locations. Moreover, our LPPMs are optimal against a strategic adversary, i.e., an attacker that implements the strongest inference attack knowing both the LPPM operation and the user's mobility profile. The optimality of the LPPMs in the context of trajectory privacy is a novel contribution, and it is achieved by formulating the LPPM design problem as a Bayesian Stackelberg game between the user and the adversary. An additional benefit of our formal approach is that the design parameters of the LPPM are chosen by the optimization algorithm.
Keywords: bayesian stackelberg game, location privacy, location transition privacy, optimal location obfuscation, privacy-utility tradeoff, trajectory privacy (ID#: 15-5907)
URL: http://doi.acm.org/10.1145/2665943.2665946

 

Martin Chapman, Gareth Tyson, Peter McBurney, Michael Luck, Simon Parsons. “Playing Hide-and-Seek: An Abstract Game for Cyber Security.” ACySE '14 Proceedings of the 1st International Workshop on Agents and CyberSecurity, May 2014, Article No. 3. doi:10.1145/2602945.2602946
Abstract: In order to begin to solve many of the problems in the domain of cyber security, they must first be transformed into abstract representations, free of complexity and paralysing technical detail. We believe that for many classic security problems, a viable transformation is to consider them as an abstract game of hide-and-seek. The tools required in this game -- such as strategic search and an appreciation of an opponent's likely strategies -- are very similar to the tools required in a number of cyber security applications, and thus developments in strategies for this game can certainly benefit the domain. In this paper we consider hide-and-seek as a formal game, and consider in depth how it is allegorical to the cyber domain, particularly in the problems of attack attribution and attack pivoting. Using this as motivation, we consider the relative performance of several hide and seek strategies using an agent-based simulation model, and present our findings as an initial insight into how to proceed with the solution of real cyber issues.
Keywords: agent-based modelling, cyber security, hide-and-seek games, search games (ID#: 15-5908)
URL: http://doi.acm.org/10.1145/2602945.2602946


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Measurement and Metrics: Testing, 2014

 

 
SoS Logo

Measurement and Metrics: Testing, 2014

 

Measurement and metrics are hard problems in the Science of Security. The research cited here looks at methods and techniques for testing the validity of measurements. This work was presented in 2014.


Awad, F.; Taqieddin, E.; Mowafi, M.; Banimelhem, O.; AbuQdais, A., "A Simulation Testbed to Jointly Exploit Multiple Image Compression Techniques for Wireless Multimedia Sensor Networks," Wireless Communications Systems (ISWCS), 2014 11th International Symposium on, vol., no., pp. 905, 911, 26-29 Aug. 2014. doi:10.1109/ISWCS.2014.6933482
Abstract: As the demand for large-scale wireless multimedia sensor networks increases, so does the need for well-designed protocols that optimize the utilization of available network resources. This requires experimental testing for realistic performance evaluation and design tuning. However, experimental testing of large-scale wireless networks using hardware testbeds is usually very hard to perform due to the need for collecting and monitoring the performance metrics data for multiple sensor nodes all at the same time, especially each node's energy consumption data. On the other hand, pure simulation testing may not accurately replicate real-life scenarios, especially those parameters that are related to the wireless signal behavior in special environments. Therefore, this work attempts to close this gap between experimental and simulation testing. This paper presents a scalable simulation testbed that attempts to mimic our previously designed small-scale hardware testbed for wireless multimedia sensor networks by tuning the simulation parameters to match the real-life measurements obtained via experimental testing. The proposed simulation testbed embeds the JPEG and JPEG2000 image compression algorithms and potentially allows for network-controlled image compression and transmission decisions. The simulation results show a very close match to the small-scale experimental testing as well as to the hypothetical large-scale extensions that were based on the experimental results.
Keywords: data compression; energy consumption; image coding; multimedia communication; protocols; wireless sensor networks; JPEG; JPEG2000 image compression algorithms; hardware testbeds; hypothetical large-scale extensions; jointly exploit multiple image compression techniques; large-scale wireless multimedia sensor networks; multiple sensor nodes; network resources; network-controlled image compression; node energy consumption data; performance metrics data collection; performance metrics data monitoring; scalable simulation testbed; small-scale hardware testbed; transmission decisions; well-designed protocols; wireless signal behavior; Energy consumption; Hardware; Image coding; Multimedia communication; Routing; Transform coding; Wireless sensor networks; Imote2; JPEG; JPEG2000; Simulation; Testbed; WMSN (ID#: 15-6045)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933482&isnumber=6933305

 

Kowtko, M.A., "Biometric Authentication for Older Adults," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, vol., no., pp. 1, 6, 2-2 May 2014. doi:10.1109/LISAT.2014.6845213
Abstract: In recent times, cyber-attacks and cyber warfare have threatened network infrastructures from across the globe. The world has reacted by increasing security measures through the use of stronger passwords, strict access control lists, and new authentication means; however, while these measures are designed to improve security and Information Assurance (IA), they may create accessibility challenges for older adults and people with disabilities. Studies have shown the memory performance of older adults declines with age. Therefore, it becomes increasingly difficult for older adults to remember random strings of characters or passwords that are 12 or more characters long. How are older adults challenged by security measures (passwords, CAPTCHA, etc.) and how does this affect their ability to engage in online activities or with mobile platforms? While username/password authentication, CAPTCHA, and security questions do provide adequate protection, they are still vulnerable to cyber-attacks. Passwords can be compromised from brute force, dictionary, and social engineering style attacks. CAPTCHA, a type of challenge-response test, was developed to ensure that user inputs were not manipulated by machine-based attacks. Unfortunately, CAPTCHAs are now being exploited through new vulnerabilities and exploits. Insecure implementations through code or server interaction have circumvented CAPTCHA. New viruses and malware now utilize character recognition as means to circumvent CAPTCHA [1]. Security questions, another challenge response test that attempts to authenticate users, can also be compromised through social engineering attacks and spyware. Since these common security measures are increasingly being compromised, many security professionals are turning towards biometric authentication. Biometric authentication is any form of human biological measurement or metric that can be used to identify and authenticate an authorized user of a secure system. Biometric authentication can include fingerprint, voice, iris, facial, keystroke, and hand geometry [2]. Biometric authentication is also less affected by traditional cyber-attacks. However, is biometrics completely secure? This research will examine the security challenges and attacks that may put the security of biometric authentication at risk. Recently, medical professionals in the TeleHealth industry have begun to investigate the effectiveness of biometrics. In the United States alone, the population of older adults has increased significantly with nearly 10,000 adults per day reaching the age of 65 and older [3]. Although people are living longer, that does not mean that they are living healthier. Studies have shown the U.S. healthcare system is being inundated by older adults. As security within the healthcare industry increases, many believe that biometric authentication is the answer. However, there are potential problems, especially in the older adult population. The largest problem is authentication of older adults with medical complications. Cataracts, stroke, congestive heart failure, hard veins, and other ailments may challenge biometric authentication. Since biometrics often utilize metrics and measurement between biological features, any one of the following conditions, and more, could potentially affect the verification of users. This research will analyze older adults and the impact of biometric authentication on their verification process.
Keywords: authorisation; biometrics (access control); invasive software; medical administrative data processing; mobile computing; CAPTCHA; Cataracts; IA; TeleHealth industry; US healthcare system; access control lists; authentication means; biometric authentication; challenge-response test; congestive heart failure; cyber warfare; cyber-attacks; dictionary; hard veins; healthcare industry; information assurance; machine-based attacks; medical professionals; mobile platforms; network infrastructures; older adults; online activities; security measures; security professionals; social engineering style attacks; spyware; stroke; username-password authentication; Authentication; Barium; CAPTCHAs; Computers; Heart; Iris recognition; Biometric Authentication; CAPTCHA; Cyber-attacks; Information Security; Older Adults; Telehealth (ID#: 15-6046)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845213&isnumber=6845183

 

Axelrod, C.W., "Reducing Software Assurance Risks for Security-Critical and Safety-Critical Systems," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, vol., no., pp. 1, 6, 2-2 May 2014. doi:10.1109/LISAT.2014.6845212
Abstract: According to the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)), the US Department of Defense (DoD) recognizes that there is a “persistent lack of a consistent approach ... for the certification of software assurance tools, testing and methodologies” [1]. As a result, the ASD(R&E) is seeking “to address vulnerabilities and weaknesses to cyber threats of the software that operates ... routine applications and critical kinetic systems ...” The mitigation of these risks has been recognized as a significant issue to be addressed in both the public and private sectors. In this paper we examine deficiencies in various software-assurance approaches and suggest ways in which they can be improved. We take a broad look at current approaches, identify their inherent weaknesses and propose approaches that serve to reduce risks. Some technical, economic and governance issues are: (1) Development of software-assurance technical standards (2) Management of software-assurance standards (3) Evaluation of tools, techniques, and metrics (4) Determination of update frequency for tools, techniques (5) Focus on most pressing threats to software systems (6) Suggestions as to risk-reducing research areas (7) Establishment of models of the economics of software-assurance solutions, and testing and certifying software. We show that, in order to improve current software assurance policy and practices, particularly with respect to security, there has to be a major overhaul in how software is developed, especially with respect to the requirements and testing phases of the SDLC (Software Development Lifecycle). We also suggest that the current preventative approaches are inadequate and that greater reliance should be placed upon avoidance and deterrence. We also recommend that those developing and operating security-critical and safety-critical systems exchange best-of-breed software assurance methods to prevent the vulnerability of components leading to compromise of entire systems of systems. The recent catastrophic loss of a Malaysia Airlines airplane is then presented as an example of possible compromises of physical and logical security of on-board communications and management and control systems.
Keywords: program testing; safety-critical software; software development management; software metrics; ASD(R&E); Assistant Secretary of Defense for Research and Engineering; Malaysia Airlines airplane; SDLC; US Department of Defense; US DoD; component vulnerability prevention; control systems; critical kinetic systems; cyber threats; economic issues; governance issues; logical security; management systems; on-board communications; physical security; private sectors; public sectors; risk mitigation; safety-critical systems; security-critical systems; software assurance risk reduction; software assurance tool certification; software development; software development lifecycle; software methodologies; software metric evaluation; software requirements; software system threats; software technique evaluation; software testing; software tool evaluation; software-assurance standard management; software-assurance technical standard development; technical issues; update frequency determination; Measurement; Organizations; Security; Software systems; Standards; Testing; cyber threats; cyber-physical systems; governance; risk; safety-critical systems; security-critical systems; software assurance; technical standards; vulnerabilities; weaknesses (ID#: 15-6047)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845212&isnumber=6845183

 

Yihai Zhu; Jun Yan; Yufei Tang; Sun, Y.L.; Haibo He, "Coordinated Attacks Against Substations and Transmission Lines in Power Grids," Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 655, 661, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7036882
Abstract: Vulnerability analysis on the power grid has been widely conducted from the substation-only and transmission-line-only perspectives. In other words, it is considered that attacks can occur on substations or transmission lines separately. In this paper, we naturally extend the existing two perspectives and introduce the joint-substation-transmission-line perspective, which means attacks can concurrently occur on substations and transmission lines. Vulnerabilities refer to those multiple-component combinations that can yield large damage to the power grid. One such combination consists of substations, transmission lines, or both. The new perspective is promising to discover more power grid vulnerabilities. In particular, we conduct the vulnerability analysis on the IEEE 39 bus system. Compared with known substation-only/transmission-line-only vulnerabilities, joint-substation-transmission-line vulnerabilities account for the largest percentage. Among three-component vulnerabilities, for instance, joint-substation-transmission-line vulnerabilities account for 76.06%; substation-only and transmission-line-only vulnerabilities account for 10.96% and 12.98%, respectively. In addition, we adopt two existing metrics, degree and load, to study the joint-substation-transmission-line attack strategy. Generally speaking, the joint-substation-transmission-line attack strategy based on the load metric has better attack performance than comparison schemes.
Keywords: power grids; power transmission reliability; substations; IEEE 39 bus system; coordinated attacks; joint-substation-transmission-line perspective; joint-substation-transmission-line vulnerabilities; load metric; multiple-component combinations; power grid vulnerabilities; vulnerability analysis; Benchmark testing; Measurement; Power grids; Power system faults; Power system protection; Power transmission lines; Substations; Attack; Cascading failures; Power grid security; Vulnerability analysis (ID#: 15-6048)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036882&isnumber=7036769

 

Duncan, I.; De Muijnck-Hughes, J., "Security Pattern Evaluation," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, vol., no., pp. 428, 429, 7-11 April 2014. doi:10.1109/SOSE.2014.61
Abstract: Current Security Pattern evaluation techniques are demonstrated to be incomplete with respect to quantitative measurement and comparison. A proposal for a dynamic testbed system is presented as a potential mechanism for evaluating patterns within a constrained environment.
Keywords: pattern classification; security of data; dynamic testbed system; security pattern evaluation; Complexity theory; Educational institutions; Measurement; Security; Software; Software reliability; Testing; evaluation; metrics; security patterns; testing (ID#: 15-6049)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830943&isnumber=6825948

 

Sanchez, A.B.; Segura, S.; Ruiz-Cortes, A., "A Comparison of Test Case Prioritization Criteria for Software Product Lines," Software Testing, Verification and Validation (ICST), 2014 IEEE Seventh International Conference on, vol., no., pp. 41, 50, March 31 2014 - April 4 2014. doi:10.1109/ICST.2014.15
Abstract: Software Product Line (SPL) testing is challenging due to the potentially huge number of derivable products. To alleviate this problem, numerous contributions have been proposed to reduce the number of products to be tested while still maintaining good coverage. However, not much attention has been paid to the order in which the products are tested. Test case prioritization techniques reorder test cases to meet a certain performance goal. For instance, testers may wish to order their test cases so as to detect faults as soon as possible, which would translate into faster feedback and earlier fault correction. In this paper, we explore the applicability of test case prioritization techniques to SPL testing. We propose five different prioritization criteria based on common metrics of feature models and we compare their effectiveness in increasing the rate of early fault detection, i.e. a measure of how quickly faults are detected. The results show that different orderings of the same SPL suite may lead to significant differences in the rate of early fault detection. They also show that our approach may contribute to accelerating the detection of faults of SPL test suites based on combinatorial testing.
Keywords: fault diagnosis; program testing; SPL test suites; SPL testing; combinatorial testing; fault detection; software product line testing; test case prioritization criteria comparison; test case prioritization techniques; Analytical models; Complexity theory; Fault detection; Feature extraction; Measurement; Security; Testing; Software product lines; automated analysis; feature models. (ID#: 15-6050)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823864&isnumber=6823846

 

Zabasta, A.; Casaliccio, E.; Kunicina, N.; Ribickis, L., "A Numerical Model for Evaluation Power Outages Impact on Water Infrastructure Services Sustainability," Power Electronics and Applications (EPE'14-ECCE Europe), 2014 16th European Conference on, vol., no., pp. 1, 10, 26-28 Aug. 2014. doi:10.1109/EPE.2014.6910703
Abstract: The security, stability and reliability of critical infrastructures (CI) (electricity, heat, water, information and communication technology networks) are closely related to the interaction phenomenon. As the amount of transferred data grows, dependence on telecommunications and internet services increases, and data integrity and security are becoming very important aspects for utility service providers and energy suppliers. In such circumstances, there is a growing need for methods and tools that enable infrastructure managers to evaluate and predict their critical infrastructure operations as failures, emergencies or service degradation occur in other related infrastructures. Using a simulation model, a method is experimentally tested that allows exploring how the average downtime of water supply network nodes depends on battery lifetime and battery replacement time cross-correlations, within the given parameter set, when outages arise in the power infrastructure, taking into account also the impact of telecommunication nodes. The model studies the real case of the Latvian city of Ventspils. The proposed approach to the analysis of critical infrastructure interdependencies will be useful for the practical adoption of methods, models and metrics by CI operators and stakeholders.
Keywords: critical infrastructures; polynomial approximation; power system reliability; power system security; power system stability; water supply; CI operators; average down time dependence; battery life time; battery replacement time cross-correlations; critical infrastructure operations; critical infrastructure security; critical infrastructures interdependencies; data integrity; data security; energy suppliers; infrastructure managers; interaction phenomenon; internet services; power infrastructure outages; stakeholders; telecommunication nodes; utility services providers; water supply network nodes; Analytical models; Batteries; Mathematical model; Measurement; Power supplies; Telecommunications; Unified modeling language; Estimation technique; Fault tolerance; Modelling; Simulation (ID#: 15-6051)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6910703&isnumber=6910682

 

Hemanidhi, A.; Chimmanee, S.; Sanguansat, P., "Network Risk Evaluation from Security Metric of Vulnerability Detection Tools," TENCON 2014 - 2014 IEEE Region 10 Conference, vol., no., pp. 1, 6, 22-25 Oct. 2014. doi:10.1109/TENCON.2014.7022358
Abstract: Network security is always a major concern in any organization. To ensure that the organization's network is well protected from attackers, vulnerability assessment and penetration testing are implemented regularly. However, auditing and analysing these testing results is a highly time-consuming procedure that depends on the administrator's expertise. Thus, security professionals prefer proactive, automatic vulnerability detection tools to identify vulnerabilities before they are exploited by an adversary. Although these vulnerability detection tools are very useful for security professionals, allowing much faster and more accurate audit and analysis, they have some important weaknesses as well. They only identify surface vulnerabilities and are unable to address the overall risk level of the scanned network. Also, they often use different standards for network risk level classification, which are habitually tied to particular organizations or vendors. Thus, these vulnerability detection tools are likely to produce more or less biased risk evaluations. This article presents a generic idea of a “Network Risk Metric” as an unbiased risk evaluation from several vulnerability detection tools. In this paper, NetClarity (hardware-based), Nessus (software-based), and Retina (software-based) are implemented on two networks from an IT department of the Royal Thai Army (RTA). The proposed metric is applied to evaluate the overall network risk from these three vulnerability detection tools. The result is a more accurate risk evaluation for each network.
Keywords: business data processing; computer crime; computer network performance evaluation; computer network security; IT department; Nessus; NetClarity; RTA; Retina; Royal Thai Army; attackers; hardware-based; network risk evaluation; network risk level classification; network risk metric; network security; organization network; proactive-automatic vulnerability detection tools; security metric; security professionals; software-based; unbiased risk evaluation; vulnerabilities identification; vulnerability assessment; vulnerability penetration testing; Equations; Measurement; Retina; Security; Servers; Software; Standards organizations; Network Security; Risk Evaluation; Security Metrics; Vulnerability Detection (ID#: 15-6052)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7022358&isnumber=7021863
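
Editor's note: One simple way to see the "unbiased combination" idea is to normalise each tool's score against that tool's own scale before aggregating, so that no single vendor's classification scheme dominates. Equal weighting and the example scale maxima below are illustrative assumptions, not the paper's actual metric.

def network_risk(tool_reports: dict) -> float:
    # tool_reports maps tool name -> (raw risk score, scale maximum);
    # dividing by each tool's own maximum puts all scores on [0, 1].
    normalised = [score / scale for score, scale in tool_reports.values()]
    return sum(normalised) / len(normalised)

print(network_risk({
    "NetClarity": (7.0, 10.0),  # hardware-based
    "Nessus": (5.9, 10.0),      # software-based
    "Retina": (62.0, 100.0),    # software-based
}))  # ~0.637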

 

Shittu, R.; Healing, A.; Ghanea-Hercock, R.; Bloomfield, R.; Muttukrishnan, R., "OutMet: A New Metric for Prioritising Intrusion Alerts Using Correlation and Outlier Analysis," Local Computer Networks (LCN), 2014 IEEE 39th Conference on, vol., no., pp. 322, 330, 8-11 Sept. 2014. doi:10.1109/LCN.2014.6925787
Abstract: In a medium-sized network, an Intrusion Detection System (IDS) could produce thousands of alerts a day, many of which may be false positives. Among the vast number of triggered intrusion alerts, identifying those to prioritise is highly challenging. Alert correlation and prioritisation are both viable analytical methods which are commonly used to understand and prioritise alerts. However, to the authors' knowledge, very few dynamic prioritisation metrics exist. In this paper, a new prioritisation metric, OutMet, is proposed, based on measuring the degree to which an alert belongs to anomalous behaviour. OutMet combines alert correlation and prioritisation analysis. We illustrate the effectiveness of OutMet by testing its ability to prioritise alerts generated from a 2012 red-team cyber-range experiment that was carried out as part of the BT Saturn programme. In one of the scenarios, OutMet significantly reduced the false positives by 99.3%.
Keywords: computer network security; correlation methods; graph theory; BT Saturn programme; IDS; OutMet; alert correlation and prioritisation analysis; correlation analysis; dynamic prioritisation metrics; intrusion alerts; intrusion detection system; medium sized network; outlier analysis; red-team cyber-range experiment; Cities and towns; Complexity theory; Context; Correlation; Educational institutions; IP networks; Measurement; Alert Correlation; Attack Scenario; Graph Mining; IDS Logs; Intrusion Alert Analysis; Intrusion Detection; Pattern Detection (ID#: 15-6053)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6925787&isnumber=6925725
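
OutMet's exact formulation is not given in the abstract. A minimal sketch of the general pattern, scoring each correlated alert by how far it lies from the rest of its group and ranking by that outlier score, might look as follows; the simple feature vectors and centroid-distance outlier measure are assumptions, not the paper's method.

```python
# Sketch: prioritizing correlated alerts by how anomalous they look within
# their group. Assumes alerts are already correlated into groups and uses
# feature-space distance from the group centroid as a basic outlier score.
import math

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def outlier_scores(groups):
    """groups: {group_id: [(alert_id, feature_vector), ...]}.
    Returns alerts sorted by descending outlier score."""
    scored = []
    for alerts in groups.values():
        c = centroid([vec for _, vec in alerts])
        for alert_id, vec in alerts:
            scored.append((math.dist(vec, c), alert_id))
    return sorted(scored, reverse=True)

groups = {"scan-cluster": [("a1", [0.1, 0.2]), ("a2", [0.1, 0.3]), ("a3", [5.0, 9.0])]}
for score, alert in outlier_scores(groups):
    print(f"{alert}: {score:.2f}")   # a3 ranks first as the clear outlier
```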

 

Gaurav, C.; Chandramouleeswaran, D.; Khanam, R., "Progressive Testbed Application for Performance Analysis in Real Time Ad Hoc Networks Using SAP HANA," Advances in Computing and Communications (ICACC), 2014 Fourth International Conference on, vol., no., pp. 171, 174, 27-29 Aug. 2014. doi:10.1109/ICACC.2014.48
Abstract: This paper proposes and subsequently delineates the quantification of network security metrics using a software-defined networking approach in real time on a progressive testbed. This comprehensive testbed implements the computation of trust values, which lend sentient decision-making qualities to the participant nodes in a network and fortify it against threats like blackhole and flooding attacks. The AODV and OLSR protocols were tested in real time under ideal and malicious environments using the testbed as the controlling point. With emphasis on reliability, interpreting voluminous data, and monitoring attacks immediately with negligible time lag, the paper concludes by justifying the use of SAP HANA and UI5 for the testbed.
Keywords: ad hoc networks; routing protocols; telecommunication security; AODV protocol; OLSR protocol; SAP HANA; network security metrics; progressive testbed; real time ad hoc networks; sentient decision making; software defined networking; trust values; Ad hoc networks; Equations; Mathematical model; Measurement; Protocols; Routing; Security; Ad-Hoc Network; HANA- High Performance Analytic Appliance; Performance Analysis; Security Metrics; Trust Model; UI5 SAP User Interface Technology (ID#: 15-6054)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906017&isnumber=6905967

 

Renchi Yan; Teng Xu; Potkonjak, M., "Semantic Attacks on Wireless Medical Devices," SENSORS, 2014 IEEE, vol., no., pp. 482, 485, 2-5 Nov. 2014. doi:10.1109/ICSENS.2014.6985040
Abstract: Security of medical embedded systems is of vital importance. Wireless medical devices used in wireless health applications employ a large number of sensors and are particularly susceptible to security attacks. They are often not physically secured and are usually used in hostile environments. We have developed a theoretical and statistical framework for creating semantic attacks in which data is altered in such a way that the consequences include incorrect medical diagnosis and treatment. Our approach maps a semantic attack to an instance of an optimization problem where medical damage is maximized under constraints on the probability of detection and root-cause tracing. We use a popular medical shoe to demonstrate that the low energy and low cost of embedded medical devices increase the probability of successful attacks. We propose two types of semantic attacks, a pressure-based attack and a time-based attack, under two scenarios: a shoe with 99 pressure sensors and a shoe with 20 pressure sensors. We test the effects of the attacks and compare them. Our results indicate that it is surprisingly easy to attack several essential medical metrics and to alter the corresponding medical diagnosis.
Keywords: biomedical communication; data communication; intelligent sensors; optimisation; pressure sensors; security of data; wireless sensor networks; detection probability; low cost embedded medical devices; low energy embedded medical devices; medical embedded system security; medical shoe; optimization problem; pressure based attack; pressure sensors; root cause tracing; semantic attacks; sensor security attacks; time based attack; wireless health applications; wireless medical devices; Measurement; Medical diagnostic imaging; Medical services; Security; Semantics; Sensors; Wireless sensor networks (ID#: 15-6055)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6985040&isnumber=6984913
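
As a toy illustration of the constrained-perturbation idea described above (shift a derived medical metric as far as possible while each altered sample stays below an assumed detection threshold), the following sketch uses invented readings and threshold; the paper's optimization is considerably richer.

```python
# Sketch: a pressure-based semantic attack in the spirit described above.
# Each sample is perturbed by just under an assumed detection threshold,
# so the altered gait data still looks plausible per sample while the
# derived average-pressure metric shifts noticeably.

def attack_readings(readings, detection_threshold, direction=+1):
    """Shift each pressure sample by just under the detectable amount."""
    delta = 0.99 * detection_threshold   # stay below the detector's notice
    return [max(0.0, r + direction * delta) for r in readings]

clean = [12.0, 14.5, 13.2, 15.1]         # hypothetical pressure samples
tampered = attack_readings(clean, detection_threshold=1.0)
print(sum(tampered) / len(tampered) - sum(clean) / len(clean))  # metric shift
```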

 

Riecker, M.; Thies, D.; Hollick, M., "Measuring the Impact of Denial-of-Service Attacks on Wireless Sensor Networks," Local Computer Networks (LCN), 2014 IEEE 39th Conference on, vol., no., pp. 296, 304, 8-11 Sept. 2014. doi:10.1109/LCN.2014.6925784
Abstract: Wireless sensor networks (WSNs) are especially susceptible to denial-of-service attacks due to the resource-constrained nature of motes. We follow a systematic approach to analyze the impacts of these attacks on the network behavior; therefore, we first identify a large number of metrics easily obtained and calculated without incurring too much overhead. Next, we statistically test these metrics to assess whether they exhibit significantly different values under attack when compared to those of the baseline operation. The metrics look into different aspects of the motes and the network, for example, MCU and radio activities, network traffic statistics, and routing related information. Then, to show the applicability of the metrics to different WSNs, we vary several parameters, such as traffic intensity and transmission power. We consider the most common topologies in wireless sensor networks such as central data collection and meshed multi-hop networks by using the collection tree and the mesh protocol. Finally, the metrics are grouped according to their capability of distinction into different classes. In this work, we focus on jamming and blackhole attacks. Our experiments reveal that certain metrics are able to detect a jamming attack on all motes in the testbed, irrespective of the parameter combination, and at the highest significance value. To illustrate these facts, we use a standard testbed consisting of the widely-employed TelosB motes.
Keywords: jamming; telecommunication network routing; telecommunication network topology; telecommunication security; wireless sensor networks; TelosB motes; blackhole attack; central data collection; collection tree; denial-of-service attack; jamming attack; mesh protocol; meshed multihop network; network behavior; network topology; network traffic statistics; routing related information; wireless sensor networks; Computer crime; Jamming; Measurement; Protocols; Routing; Topology; Wireless sensor networks; Denial-of-Service; Measurements; Wireless Sensor Networks (ID#: 15-6056)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6925784&isnumber=6925725
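
The abstract does not name the statistical test used. One plausible choice for comparing a metric's values under attack against the baseline is a nonparametric two-sample test such as Mann-Whitney U, sketched here with made-up radio duty-cycle values.

```python
# Sketch: testing whether a mote metric differs significantly under attack.
# The test choice and values are illustrative assumptions.
from scipy.stats import mannwhitneyu

baseline_radio_duty = [0.12, 0.11, 0.13, 0.12, 0.10, 0.11]  # fraction of time radio on
under_jamming       = [0.41, 0.39, 0.44, 0.40, 0.42, 0.38]

stat, p = mannwhitneyu(baseline_radio_duty, under_jamming, alternative="two-sided")
if p < 0.01:
    print(f"metric distinguishes the attack (p = {p:.4f})")
```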

 

Kundi, M.; Chitchyan, R., "Position on Metrics for Security in Requirements Engineering," Requirements Engineering and Testing (RET), 2014 IEEE 1st International Workshop on, vol., no., pp. 29, 31, 26-26 Aug. 2014. doi:10.1109/RET.2014.6908676
Abstract: A number of well-established software quality metrics are in use in code testing. It is our position that, for many code-testing metrics for security, equivalent requirements-level metrics should be defined. Such requirements-level security metrics should be used to evaluate the quality of software security early on, in order to ensure that the resultant software system possesses the required security characteristics and quality.
Keywords: formal specification; program testing; security of data; software metrics; software quality; code-testing metrics; requirements engineering; requirements-level security metrics; software quality metrics; software security; Conferences; Security; Software measurement; Software systems; Testing (ID#: 15-6057)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6908676&isnumber=6908666

 

Rostami, M.; Wendt, J.B.; Potkonjak, M.; Koushanfar, F., "Quo Vadis, PUF?: Trends and Challenges of Emerging Physical-Disorder Based Security," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, vol., no., pp. 1, 6, 24-28 March 2014. doi:10.7873/DATE.2014.365
Abstract: The physical unclonable function (PUF) has emerged as a popular and widely studied security primitive based on the randomness of the underlying physical medium. To date, most of the research emphasis has been placed on finding new ways to measure randomness, hardware realization and analysis of a few initially proposed structures, and conventional secret-key based protocols. In this work, we present our subjective analysis of the emerging and future trends in this area that aim to change the scope, widen the application domain, and make a lasting impact. We emphasize the development of new PUF-based primitives and paradigms, robust protocols, public-key protocols, digital PUFs, new technologies, implementations, metrics and tests for evaluation/validation, as well as relevant attacks and countermeasures.
Keywords: cryptographic protocols; public key cryptography; PUF-based paradigms; PUF-based primitives; Quo Vadis; application domain; digital PUF; hardware realization; physical medium randomness measurement; physical unclonable function; physical-disorder-based security; public-key protocol; secret-key based protocols; security primitive; structure analysis; subjective analysis; Aging; Correlation; Hardware; NIST; Protocols; Public key (ID#: 15-6058)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800566&isnumber=6800201
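
Among the evaluation metrics the survey alludes to, uniqueness (average inter-chip Hamming distance of responses to the same challenge, ideally near 50%) is one of the most common. A minimal sketch with invented response bitstrings:

```python
# Sketch: the standard PUF uniqueness metric -- average pairwise Hamming
# distance between different chips' responses to the same challenge.
# Response strings below are made up.
from itertools import combinations

def hamming_frac(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

def uniqueness(responses):
    """responses: one response bitstring per chip, same challenge."""
    pairs = list(combinations(responses, 2))
    return sum(hamming_frac(a, b) for a, b in pairs) / len(pairs)

chips = ["10110010", "01100110", "11010001"]
print(f"uniqueness: {uniqueness(chips):.2%}")   # ideally close to 50%
```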

 

Singh, P.; Shivani, S.; Agarwal, S., "A Chaotic Map Based DCT-SVD Watermarking Scheme for Rightful Ownership Verification," Engineering and Systems (SCES), 2014 Students Conference on, vol., no., pp. 1, 4, 28-30 May 2014. doi:10.1109/SCES.2014.6880048
Abstract: A chaotic-map-based hybrid watermarking scheme incorporating the concepts of the Discrete Cosine Transform (DCT) and exploiting the stability of singular values is proposed here. Homogeneity analysis of the cover image is performed to identify appropriate sites for embedding, and thereafter a reference image is obtained from it. The singular values of the reference image are modified to embed the secret information. The chaotic-map-based scrambling enhances the security of the algorithm, as only the rightful owner possessing the secret key can retrieve the actual image. A comprehensive set of attacks has been applied and robustness tested with the Normalized Cross Correlation (NCC) and Peak Signal to Noise Ratio (PSNR) metric values. High values of these metrics signify the appropriateness of the proposed methodology.
Keywords: chaos; discrete cosine transforms; image retrieval; image watermarking; singular value decomposition; NCC; PSNR metric values; chaotic map based DCT-SVD hybrid watermarking scheme; chaotic map based scrambling; cover image; discrete cosine transform; homogeneity analysis; image retrieval; normalized cross correlation; peak signal to noise ratio metric values; reference image; rightful ownership verification; secret information; singular value decomposition; Discrete cosine transforms; Image coding; Measurement; PSNR; Robustness; Transform coding; Watermarking; Chaotic Map; Discrete cosine transformation (DCT); Homogeneity Analysis; Normalized Cross Correlation (NCC); Peak Signal to Noise Ratio (PSNR); Reference Image; Singular values  (ID#: 15-6059)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880048&isnumber=6880039
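
The two robustness metrics named in the abstract are standard and easy to sketch with NumPy; note that a mean-subtracted variant of NCC is also common, but the plain normalized correlation below is a frequent choice in watermarking papers. The data is random placeholder content.

```python
# Sketch: PSNR (fidelity of the watermarked image) and NCC (match between
# embedded and extracted watermark) computed with NumPy on random data.
import numpy as np

def psnr(original, distorted, peak=255.0):
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def ncc(w, w_extracted):
    a, b = w.astype(float).ravel(), w_extracted.astype(float).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

img = np.random.randint(0, 256, (64, 64))
attacked = np.clip(img + np.random.normal(0, 5, img.shape), 0, 255)
wm = np.random.randint(0, 2, 256)
print(psnr(img, attacked), ncc(wm, wm))   # high PSNR; NCC of 1.0
```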

 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Polymorphic Worms, 2014

 

 
SoS Logo

Polymorphic Worms, 2014

 

Polymorphic worms pose a serious threat to Internet security with their ability to rapidly propagate, exploit unknown vulnerabilities, and change their own representations on each new infection or encrypt their payloads with a different key per infection. Because the same worm exhibits many signature variations, fingerprinting it is very difficult. Signature-based defenses and traditional security layers miss these stealthy and persistent threats. The research presented here identifies alternative methods for identifying and responding to these worms. All citations are from 2014.


Ali Zand, Giovanni Vigna, Xifeng Yan, Christopher Kruegel. “Extracting Probable Command and Control Signatures for Detecting Botnets.” SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 1657-1662. doi:10.1145/2554850.2554896
Abstract: Botnets, which are networks of compromised machines under the control of a single malicious entity, are a serious threat to online security. The fact that botnets, by definition, receive their commands from a single entity can be leveraged to fight them. To this end, one requires techniques that can detect command and control (C&C) traffic, as well as the servers that host C&C services. Given the knowledge of a C&C server's IP address, one can use this information to detect all hosts that attempt to contact such a server, and subsequently disinfect, disable, or block the infected machines. This information can also be used by law enforcement to take down the C&C server.  In this paper, we present a new botnet C&C signature extraction approach that can be used to find C&C communication in traffic generated by executing malware samples in a dynamic analysis system. This approach works in two steps. First, we extract all frequent strings seen in the network traffic. Second, we use a function that assigns a score to each string. This score represents the likelihood that the string is indicative of C&C traffic. This function allows us to rank strings and focus our attention on those that likely represent good C&C signatures. We apply our technique to almost 2.6 million network connections produced by running more than 1.4 million malware samples. Using our technique, we were able to automatically extract a set of signatures that are able to identify C&C traffic. Furthermore, we compared our signatures with those used by existing tools, such as Snort and BotHunter.
Keywords: (not provided) (ID#: 15-5967)
URL:  http://doi.acm.org/10.1145/2554850.2554896
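
A minimal sketch of the two-step idea: harvest frequent strings from malware network traces, then score each string by how specific it is to C&C traffic. The scoring function below (malware-traffic frequency penalized by benign-traffic hits) is an illustrative assumption, not the paper's function.

```python
# Sketch: frequent-string extraction and ranking for C&C signatures.
# Flows and the scoring rule are invented for illustration.
from collections import Counter

def frequent_strings(flows, min_count=3, min_len=6):
    counts = Counter(tok for flow in flows for tok in flow.split() if len(tok) >= min_len)
    return {s: c for s, c in counts.items() if c >= min_count}

def score(string, malware_count, benign_flows):
    benign_hits = sum(string in flow for flow in benign_flows)
    return malware_count / (1 + benign_hits)   # penalize strings common in benign traffic

malware_flows = ["GET /gate.php?id=7 HTTP/1.1"] * 5 + ["POST /gate.php?id=9"] * 4
benign_flows = ["GET /index.html HTTP/1.1"] * 10
for s, c in frequent_strings(malware_flows).items():
    print(s, score(s, c, benign_flows))
```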

 

Shahid Alam, Issa Traore, Ibrahim Sogukpinar. “Current Trends and the Future of Metamorphic Malware Detection.”  SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 411. doi:10.1145/2659651.2659670
Abstract: Dynamic binary obfuscation or metamorphism is a technique where a malware never keeps the same sequence of opcodes in the memory. This stealthy mutation technique helps a malware evade detection by today's signature-based anti-malware programs. This paper analyzes the current trends, provides future directions and reasons about some of the basic characteristics of a system for providing real-time detection of metamorphic malware. Our emphasis is on the most recent advancements and the potentials available in metamorphic malware detection, so we only cover some of the major academic research efforts carried out, including and after, the year 2006. The paper not only serves as a collection of recent references and information for easy comparison and analysis, but also as a motivation for improving the current and developing new techniques for metamorphic malware detection.
Keywords: End point security, Malware detection, Metamorphic malware, Obfuscations (ID#: 15-5968)
URL:   http://doi.acm.org/10.1145/2659651.2659670

 

Hongyu Gao, Yi Yang, Kai Bu, Yan Chen, Doug Downey, Kathy Lee, Alok Choudhary. “Spam ain't as Diverse as It Seems: Throttling OSN Spam with Templates Underneath.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 76-85. doi:10.1145/2664243.2664251
Abstract: In online social networks (OSNs), spam originating from friends and acquaintances not only reduces the joy of Internet surfing but also causes damage to less security-savvy users. Prior countermeasures combat OSN spam from different angles. Due to the diversity of spam, there is hardly any existing method that can independently detect the majority or most of OSN spam. In this paper, we empirically analyze the textual pattern of a large collection of OSN spam. An inspiring finding is that the majority (63.0%) of the collected spam is generated with underlying templates. We therefore propose extracting templates of spam detected by existing methods and then matching messages against the templates toward accurate and fast spam detection. We implement this insight through Tangram, an OSN spam filtering system that performs online inspection on the stream of user-generated messages. Tangram automatically divides OSN spam into segments and uses the segments to construct templates to filter future spam. Experimental results show that Tangram is highly accurate and can rapidly generate templates to throttle newly emerged campaigns. Specifically, Tangram detects the most prevalent template-based spam with 95.7% true positive rate, whereas the existing template generation approach detects only 32.3%. The integration of Tangram and its auxiliary spam filter achieves an overall accuracy of 85.4% true positive rate and 0.33% false positive rate.
Keywords: online social networks, spam, spam campaigns (ID#: 15-5969)
URL: http://doi.acm.org/10.1145/2664243.2664251
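
A rough sketch of template matching in the spirit of Tangram, assuming a template is an ordered sequence of fixed segments with arbitrary gaps; Tangram's actual representation and generation pipeline are richer.

```python
# Sketch: does a message instantiate a spam template? A template here is
# a list of fixed segments that must appear in order, with gaps allowed.

def matches_template(message, segments):
    pos = 0
    for seg in segments:
        pos = message.find(seg, pos)
        if pos == -1:
            return False
        pos += len(seg)
    return True

template = ["check out", "amazing deal", "http://"]
print(matches_template("hey, check out this amazing deal at http://spam.example", template))  # True
print(matches_template("lunch tomorrow?", template))                                          # False
```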

 

Blake Anderson, Curtis Storlie, Micah Yates, Aaron McPhall. “Automating Reverse Engineering with Machine Learning Techniques.” AISec '14 Proceedings of the 2014 Workshop on Artificial Intelligent and Security Workshop, November 2014, Pages 103-112. doi:10.1145/2666652.2666665
Abstract: Malware continues to be an ongoing threat, with millions of unique variants created every year. Unlike the majority of this malware, Advanced Persistent Threat (APT) malware is created to target a specific network or set of networks and has a precise objective, e.g. exfiltrating sensitive data. While 0-day malware detectors are a good start, they do not help the reverse engineers better understand the threats attacking their networks. Understanding the behavior of malware is often a time-sensitive task and can take anywhere from several hours to several weeks. Our goal is to automate the task of identifying the general function of the subroutines in the function call graph of the program to aid the reverse engineers. Two approaches to model the subroutine labels are investigated: a multiclass Gaussian process and a multiclass support vector machine. The output of these methods is the probability that the subroutine belongs to a certain class of functionality (e.g., file I/O, exploit, etc.). Promising initial results, illustrating the efficacy of this method, are presented on a sample of 201 subroutines taken from two malicious families.
Keywords: computer security, gaussian processes, machine learning, malware, multiple kernel learning, support vector machines (ID#: 15-5970)
URL: http://doi.acm.org/10.1145/2666652.2666665

 

Yiming Jing, Ziming Zhao, Gail-Joon Ahn, Hongxin Hu. “Morpheus: Automatically Generating Heuristics to Detect Android Emulators.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 216-225. doi:10.1145/2664243.2664250
Abstract: Emulator-based dynamic analysis has been widely deployed in Android application stores. While it has been proven effective in vetting applications on a large scale, it can be detected and evaded by recent Android malware strains that carry detection heuristics. Using such heuristics, an application can check the presence or contents of certain artifacts and infer the presence of emulators. However, there exists little work that systematically discovers those heuristics that would be eventually helpful to prevent malicious applications from bypassing emulator-based analysis. To cope with this challenge, we propose a framework called Morpheus that automatically generates such heuristics. Morpheus leverages our insight that an effective detection heuristic must exploit discrepancies observable by an application. To this end, Morpheus analyzes the application sandbox and retrieves observable artifacts from both Android emulators and real devices. Afterwards, Morpheus further analyzes the retrieved artifacts to extract and rank detection heuristics. The evaluation of our proof-of-concept implementation of Morpheus reveals more than 10,000 novel detection heuristics that can be utilized to detect existing emulator-based malware analysis tools. We also discuss the discrepancies in Android emulators and potential countermeasures.
Keywords: Android, emulator, malware (ID#: 15-5971)
URL:  http://doi.acm.org/10.1145/2664243.2664250
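
The core insight, that a useful heuristic is any artifact whose observable value disagrees between emulator and device, reduces at its simplest to a keyed set comparison. A sketch with hypothetical artifact names and values:

```python
# Sketch of Morpheus's core insight: candidate detection heuristics are
# artifacts whose values differ between emulators and real devices.
# Artifact names and values below are hypothetical examples.

def candidate_heuristics(emulator_artifacts, device_artifacts):
    """Return artifacts whose values disagree between the two environments."""
    return {
        key: (emulator_artifacts[key], device_artifacts[key])
        for key in emulator_artifacts.keys() & device_artifacts.keys()
        if emulator_artifacts[key] != device_artifacts[key]
    }

emu = {"ro.hardware": "goldfish", "imei": "000000000000000", "battery.level": "50"}
dev = {"ro.hardware": "qcom", "imei": "356938035643809", "battery.level": "73"}
for artifact, (e, d) in candidate_heuristics(emu, dev).items():
    print(f"{artifact}: emulator={e!r} device={d!r}")
```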

 

Jannik Pewny, Felix Schuster, Lukas Bernhard, Thorsten Holz, Christian Rossow. “Leveraging Semantic Signatures for Bug Search in Binary Programs.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 406-415. doi:10.1145/2664243.2664269
Abstract: Software vulnerabilities still constitute a high security risk and there is an ongoing race to patch known bugs. However, especially in closed-source software, there is no straightforward way (in contrast to source code analysis) to find buggy code parts, even if the bug was publicly disclosed. To tackle this problem, we propose a method called Tree Edit Distance Based Equational Matching (TEDEM) to automatically identify binary code regions that are "similar" to code regions containing a reference bug. We aim to find bugs both in the same binary as the reference bug and in completely unrelated binaries (even compiled for different operating systems). Our method even works on proprietary software systems, which lack source code and symbols. The analysis task is split into two phases. In a preprocessing phase, we condense the semantics of a given binary executable by symbolic simplification to make our approach robust against syntactic changes across different binaries. Second, we use tree edit distances as a basic block-centric metric for code similarity. This allows us to find instances of the same bug in different binaries and even spotting its variants (a concept called vulnerability extrapolation). To demonstrate the practical feasibility of the proposed method, we implemented a prototype of TEDEM that can find real-world security bugs across binaries and even across OS boundaries, such as in MS Word and the popular messengers Pidgin (Linux) and Adium (Mac OS).
Keywords: (not provided) (ID#: 15-5972)
URL: http://doi.acm.org/10.1145/2664243.2664269
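
A minimal illustration of the basic-block metric, using the third-party zss package (an implementation of Zhang-Shasha ordered-tree edit distance, assumed available); the toy trees stand in for the symbolically simplified basic-block expressions the paper compares.

```python
# Sketch: tree edit distance between two expression trees, as a stand-in
# for TEDEM's basic-block similarity metric. Trees are invented examples.
from zss import Node, simple_distance

# add(reg, mul(x, 2)) vs. add(reg, shl(x, 1)) -- similar shape, two relabels
tree_a = Node("add").addkid(Node("reg")).addkid(
    Node("mul").addkid(Node("x")).addkid(Node("2")))
tree_b = Node("add").addkid(Node("reg")).addkid(
    Node("shl").addkid(Node("x")).addkid(Node("1")))

print(simple_distance(tree_a, tree_b))   # small distance -> candidate bug match
```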

 

Yaniv David, Eran Yahav. “Tracelet-Based Code Search in Executables.” PLDI '14 Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2014, Pages 349-360. doi:10.1145/2594291.2594343
Abstract: We address the problem of code search in executables. Given a function in binary form and a large code base, our goal is to statically find similar functions in the code base. Towards this end, we present a novel technique for computing similarity between functions. Our notion of similarity is based on decomposition of functions into tracelets: continuous, short, partial traces of an execution. To establish tracelet similarity in the face of low-level compiler transformations, we employ a simple rewriting engine. This engine uses constraint solving over alignment constraints and data dependencies to match registers and memory addresses between tracelets, bridging the gap between tracelets that are otherwise similar. We have implemented our approach and applied it to find matches in over a million binary functions. We compare tracelet matching to approaches based on n-grams and graphlets and show that tracelet matching obtains dramatically better precision and recall.
Keywords: static binary analysis, x86, x86-64 (ID#: 15-5973)
URL:  http://doi.acm.org/10.1145/2594291.2594343

 

Yinzhi Cao, Xiang Pan, Yan Chen, Jianwei Zhuge. “JShield: Towards Real-Time and Vulnerability-Based Detection of Polluted Drive-By Download Attacks.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 466-475. doi:10.1145/2664243.2664256
Abstract: Drive-by download attacks, which exploit vulnerabilities of web browsers to control client computers, have become a major venue for attackers. To detect such attacks, researchers have proposed many approaches such as anomaly-based [22, 23] and vulnerability-based [44, 50] detections. However, anomaly-based approaches are vulnerable to data pollution, and existing vulnerability-based approaches cannot accurately describe the vulnerability condition of all the drive-by download attacks.  In this paper, we propose a vulnerability-based approach, namely JShield, which uses a novel opcode vulnerability signature, a deterministic finite automaton (DFA) with a variable pool at the opcode level, to match drive-by download vulnerabilities. We investigate all the JavaScript engine vulnerabilities of web browsers from 2009 to 2014, as well as those of portable document file (PDF) readers from 2007 to 2014. JShield is able to match all of those vulnerabilities; furthermore, the overall evaluation shows that JShield is so lightweight that it adds only 2.39 percent overhead to the original execution as the median among the top 500 Alexa web sites.
Keywords: (not provided) (ID#: 15-5974)
URL: http://doi.acm.org/10.1145/2664243.2664256
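
A toy sketch of matching an opcode stream against a DFA-style vulnerability signature follows; the states and transitions are invented, and for brevity unmatched opcodes leave the state unchanged rather than following explicit self-loop transitions as a strict DFA would.

```python
# Sketch: matching an opcode stream against a DFA-like signature. The toy
# automaton accepts an alloc-free-use sequence, a use-after-free shape;
# JShield's actual signatures also carry a variable pool, omitted here.

def dfa_match(opcodes, transitions, start, accepting):
    state = start
    for op in opcodes:
        state = transitions.get((state, op), state)  # implicit self-loop
        if state in accepting:
            return True
    return False

transitions = {("S0", "alloc"): "S1", ("S1", "free"): "S2", ("S2", "use"): "BAD"}
stream = ["alloc", "mov", "free", "mov", "use"]
print(dfa_match(stream, transitions, "S0", {"BAD"}))  # True -> vulnerable pattern
```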

 

Smita Naval, Vijay Laxmi, Neha Gupta, Manoj Singh Gaur, Muttukrishnan Rajarajan. “Exploring Worm Behaviors using DTW.” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 379. doi:10.1145/2659651.2659737
Abstract: Worms are becoming a potential threat to Internet users across the globe. The financial damage due to computer worms has increased significantly in the past few years. Analyzing these hazardous worm attacks has become a crucial issue to be addressed. Given that worm analysts prefer to analyze classes of worms rather than individual files, grouping worms into classes significantly reduces their task. In this paper, we propose a dynamic host-based worm categorization approach to segregate worms. The resulting groups indicate that worm samples exhibit different behavior according to their infection and anti-detection vectors. Our proposed approach utilizes system-call traces and computes a distance matrix using the Dynamic Time Warping (DTW) algorithm to form these groups. In conjunction with that, the proposed approach also discriminates between worm and benign executables. The constructed model is further evaluated with unknown instances of real-world worms.
Keywords: Behavior Monitoring, DTW, System-calls (ID#: 15-5975)
URL:  http://doi.acm.org/10.1145/2659651.2659737
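
The DTW computation at the heart of the approach is compact enough to sketch directly; the 0/1 match cost between system calls and the traces themselves are illustrative assumptions.

```python
# Sketch: DTW distance between two system-call traces, the building block
# of the paper's distance matrix. Classic O(n*m) dynamic programming.

def dtw(a, b):
    inf = float("inf")
    d = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else 1.0   # assumed 0/1 cost
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[len(a)][len(b)]

trace1 = ["open", "read", "read", "socket", "send"]
trace2 = ["open", "read", "socket", "socket", "send"]
print(dtw(trace1, trace2))   # low distance suggests related behavior
```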

 

Battista Biggio, Konrad Rieck, Davide Ariu, Christian Wressnegger, Igino Corona, Giorgio Giacinto, Fabio Roli. “Poisoning Behavioral Malware Clustering.” AISec '14 Proceedings of the 2014 Workshop on Artificial Intelligent and Security Workshop, November 2014, Pages 27-36. doi:10.1145/2666652.2666666
Abstract: Clustering algorithms have become a popular tool in computer security to analyze the behavior of malware variants, identify novel malware families, and generate signatures for antivirus systems. However, the suitability of clustering algorithms for security-sensitive settings has been recently questioned by showing that they can be significantly compromised if an attacker can exercise some control over the input data. In this paper, we revisit this problem by focusing on behavioral malware clustering approaches, and investigate whether and to what extent an attacker may be able to subvert these approaches through a careful injection of samples with poisoning behavior. To this end, we present a case study on Malheur, an open-source tool for behavioral malware clustering. Our experiments not only demonstrate that this tool is vulnerable to poisoning attacks, but also that it can be significantly compromised even if the attacker can only inject a very small percentage of attacks into the input data. As a remedy, we discuss possible countermeasures and highlight the need for more secure clustering algorithms.
Keywords: adversarial machine learning, clustering, computer security, malware detection, security evaluation, unsupervised learning (ID#: 15-5976)
URL:  http://doi.acm.org/10.1145/2666652.2666666 

 

Shahid Alam, Ibrahim Sogukpinar, Issa Traore, Yvonne Coady. “In-Cloud Malware Analysis and Detection: State of the Art.” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 473. doi:10.1145/2659651.2659730
Abstract: With the advent of the Internet of Things, we are facing another wave of malware attacks that encompass intelligent embedded devices. Because of limited energy resources, running a complete malware detector on these devices is quite challenging. There is a need to devise new techniques to detect malware on these devices. Malware detection is one of the services that can be provided as an in-cloud service. This paper reviews current such systems, discusses their pros and cons, and recommends an improved in-cloud malware analysis and detection system. We introduce a new three-layered hybrid system with a lightweight anti-malware engine. These features can provide faster malware detection response time, shield the client from malware, and reduce the bandwidth between the client and the cloud, compared to other such systems. The paper serves as a motivation for improving current techniques and developing new ones for in-cloud malware analysis and detection.
Keywords: Cloud computing, In-cloud services, Malware analysis, Malware detection (ID#: 15-5977)
URL: http://doi.acm.org/10.1145/2659651.2659730

 

M. Zubair Rafique, Ping Chen, Christophe Huygens, Wouter Joosen. “Evolutionary Algorithms for Classification of Malware Families Through Different Network Behaviors.” GECCO '14 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, July 2014, Pages 1167-1174. doi:10.1145/2576768.2598238
Abstract: The staggering increase of malware families and their diversity poses a significant threat and creates a compelling need for automatic classification techniques. In this paper, we first analyze the role of network behavior as a powerful technique to automatically classify malware families and their polymorphic variants. Afterwards, we present a framework to efficiently classify malware families by modeling their different network behaviors (such as HTTP, SMTP, UDP, and TCP). We propose protocol-aware and state-space modeling schemes to extract features from malware network behaviors. We analyze the applicability of various evolutionary and non-evolutionary algorithms for our malware family classification framework. To evaluate our framework, we collected a real-world dataset of 6,000 unique and active malware samples belonging to 20 different malware families. We provide a detailed analysis of network behaviors exhibited by these prevalent malware families. The results of our experiments show that evolutionary algorithms, like the sUpervised Classifier System (UCS), can effectively classify malware families through different network behaviors in real time. To the best of our knowledge, the current work is the first malware classification framework based on an evolutionary classifier that uses different network behaviors.
Keywords: machine learning, malware classification, network behaviors (ID#: 15-5978)
URL: http://doi.acm.org/10.1145/2576768.2598238

 

Luke Deshotels, Vivek Notani, Arun Lakhotia. “DroidLegacy: Automated Familial Classification of Android Malware.” PPREW'14 Proceedings of ACM SIGPLAN on Program Protection and Reverse Engineering Workshop, January 2014, Article No. 3. doi:10.1145/2556464.2556467
Abstract: We present an automated method for extracting familial signatures for Android malware, i.e., signatures that identify malware produced by piggybacking potentially different benign applications with the same (or similar) malicious code. The APK classes that constitute malware code in a repackaged application are separated from the benign code and the Android API calls used by the malicious modules are extracted to create a signature. A piggybacked malicious app can be detected by first decomposing it into loosely coupled modules and then matching the Android API calls called by each of the modules against the signatures of the known malware families. Since the signatures are based on Android API calls, they are related to the core malware behavior, and thus are more resilient to obfuscations.  In triage, AV companies need to automatically classify large number of samples so as to optimize assignment of human analysts. They need a system that gives low false negatives even if it is at the cost of higher false positives. Keeping this goal in mind, we fine tuned our system and used standard 10 fold cross validation over a dataset of 1,052 malicious APKs and 48 benign APKs to verify our algorithm. Results show that we have 94% accuracy, 97% precision, and 93% recall when separating benign from malware. We successfully classified our entire malware dataset into 11 families with 98% accuracy, 87% precision, and 94% recall.
Keywords: Android malware, class dependence graphs, familial classification, malware detection, module generation, piggybacked malware, signature generation, static analysis (ID#: 15-5979)
URL:  http://doi.acm.org/10.1145/2556464.2556467
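
At a high level, matching a decomposed module against familial signatures can be sketched as set similarity over Android API calls; the signatures, family names, and threshold below are invented stand-ins, not DroidLegacy's actual data.

```python
# Sketch: classifying a module by Jaccard similarity between its Android
# API-call set and per-family signature sets. All values are hypothetical.

def jaccard(a, b):
    return len(a & b) / len(a | b)

def classify_module(module_apis, family_signatures, threshold=0.6):
    best = max(family_signatures,
               key=lambda fam: jaccard(module_apis, family_signatures[fam]))
    return best if jaccard(module_apis, family_signatures[best]) >= threshold else "unknown"

signatures = {
    "DroidDreamLike": {"sendTextMessage", "getDeviceId", "getSubscriberId"},
    "FakeInstLike":   {"sendTextMessage", "abortBroadcast", "getLine1Number"},
}
module = {"sendTextMessage", "getDeviceId", "getSubscriberId", "openConnection"}
print(classify_module(module, signatures))   # -> 'DroidDreamLike'
```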

 

Ashish Saini, Ekta Gandotra, Divya Bansal, Sanjeev Sofat. “Classification of PE Files using Static Analysis.” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 429. doi:10.1145/2659651.2659679
Abstract: Malware is one of the most terrible and major security threats facing the Internet today. Anti-malware vendors are challenged to identify, classify and counter new malwares due to the obfuscation techniques being used by malware authors. In this paper, we present a simple, fast and scalable method of differentiating malwares from cleanwares on the basis of features extracted from Windows PE files. The features used in this work are Suspicious Section Count and Function Call Frequency. After automatically extracting features of executables, we use machine learning algorithms available in the WEKA library to classify them into malwares and cleanwares. Our experimental results provide an accuracy of over 98% for a data set of 3,087 executable files including 2,460 malwares and 627 cleanwares. Based on the results obtained, we conclude that the Function Call Frequency feature derived from the static analysis method plays a significant role in distinguishing malware files from benign ones.
Keywords: Classification, Machine Learning, Static Malware Analysis (ID#: 15-5980)
URL: http://doi.acm.org/10.1145/2659651.2659679
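
A sketch of extracting the Suspicious Section Count feature with the third-party pefile library follows; the suspicion rule (non-standard section name or high entropy, suggesting packing) is an assumption, since the paper's exact rule is not given in the abstract.

```python
# Sketch: counting suspicious PE sections with `pefile`. The rule used
# here is an illustrative assumption, not the paper's definition.
import pefile

STANDARD = {b".text", b".data", b".rdata", b".rsrc", b".reloc", b".idata"}

def suspicious_section_count(path):
    pe = pefile.PE(path)
    count = 0
    for section in pe.sections:
        name = section.Name.rstrip(b"\x00")
        if name not in STANDARD or section.get_entropy() > 7.0:  # likely packed
            count += 1
    return count

print(suspicious_section_count("sample.exe"))   # hypothetical input file
```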

 

Ekta Gandotra, Divya Bansal, Sanjeev Sofat. “Integrated Framework for Classification of Malwares.” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 417. doi:10.1145/2659651.2659738
Abstract: Malware is one of the most terrible and major security threats facing the Internet today. It is evolving, becoming more sophisticated and using new ways to target computers and mobile devices. Traditional defences like antivirus software typically rely on signature-based methods and are unable to detect previously unseen malwares. Machine learning approaches have been adopted to classify malwares based on the features extracted using static or dynamic analysis. Both types of malware analysis have their pros and cons. In this paper, we propose a classification framework which uses an integration of both static and dynamic features for distinguishing malwares from clean files. A real-world corpus of recent malwares is used to validate the proposed approach. The experimental results, based on a dataset of 998 malwares and 428 cleanware files, provide an accuracy of 99.58%, indicating that the hybrid approach enhances the accuracy rate of malware detection and classification over the results obtained when these features are considered separately.
Keywords: Classification, Dynamic Analysis, Machine Learning, Malware, Static Analysis (ID#: 15-5981)
URL: http://doi.acm.org/10.1145/2659651.2659738

 

Jing Qiu, Babak Yadegari, Brian Johannesmeyer, Saumya Debray, Xiaohong Su. “A Framework for Understanding Dynamic Anti-Analysis Defenses.” PPREW-4 Proceedings of the 4th Program Protection and Reverse Engineering Workshop, December 2014, Article No. 2. doi:10.1145/2689702.2689704
Abstract: Malicious code often uses a variety of anti-analysis and anti-tampering defenses to hinder analysis. Researchers trying to understand the internal logic of the malware have to penetrate these defenses. Existing research on such anti-analysis defenses tends to study them in isolation, thereby failing to see underlying conceptual similarities between different kinds of anti-analysis defenses. This paper proposes an information-flow-based framework that encompasses a wide variety of anti-analysis defenses. We illustrate the utility of our approach using two different instances of this framework: self-checksumming-based anti-tampering defenses and timing-based emulator detection. Our approach can provide insights into the underlying structure of various anti-analysis defenses and thereby help devise techniques for neutralizing them.
Keywords: Anti-analysis Defense, Self-checksumming, Taint analysis, Timing defense (ID#: 15-5983)
URL: http://doi.acm.org/10.1145/2689702.2689704

 

Mordechai Guri, Gabi Kedma, Buky Carmeli, Yuval Elovici. “Limiting Access to Unintentionally Leaked Sensitive Documents Using Malware Signatures.” SACMAT '14 Proceedings of the 19th ACM Symposium on Access Control Models and Technologies, June 2014, Pages 129-140. doi:10.1145/2613087.2613103
Abstract: Organizations are repeatedly embarrassed when their sensitive digital documents go public or fall into the hands of adversaries, often as a result of unintentional or inadvertent leakage. Such leakage has been traditionally handled either by preventive means, which are evidently not hermetic, or by punitive measures taken after the main damage has already been done. Yet, the challenge of preventing a leaked file from spreading further among computers and over the Internet is not resolved by existing approaches. This paper presents a novel method, which aims at reducing and limiting the potential damage of a leakage that has already occurred. The main idea is to tag sensitive documents within the organization's boundaries by attaching a benign detectable malware signature (DMS). While the DMS is masked inside the organization, if a tagged document is somehow leaked out of the organization's boundaries, common security services such as Anti-Virus (AV) programs, firewalls or email gateways will detect the file as a real threat and will consequently delete or quarantine it, preventing it from spreading further. This paper discusses various aspects of the DMS, such as signature type and attachment techniques, along with proper design considerations and implementation issues. The proposed method was implemented and successfully tested on various file types including documents, spreadsheets, presentations, images, executable binaries and textual source code. The evaluation results have demonstrated its effectiveness in limiting the spread of leaked documents.
Keywords: anti-virus program, data leakage, detectable malware signature, sensitive document (ID#: 15-5984)
URL: http://doi.acm.org/10.1145/2613087.2613103
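
The tagging step can be illustrated with the EICAR test string, the standard harmless pattern that AV products deliberately flag. Real products only honor EICAR under specific conditions (e.g., at the start of small files), so this stands in for the paper's DMS rather than reproducing it.

```python
# Sketch: tagging a sensitive document with a benign detectable signature.
# The EICAR test string substitutes for the paper's DMS; this illustrates
# only the tagging/checking steps, not a working leak-containment system.
EICAR = rb"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

def tag_document(path):
    with open(path, "ab") as f:          # append the signature to the file
        f.write(b"\n" + EICAR + b"\n")

def is_tagged(path):
    with open(path, "rb") as f:
        return EICAR in f.read()
```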

 

Mu Zhang, Yue Duan, Heng Yin, Zhiruo Zhao. “Semantics-Aware Android Malware Classification Using Weighted Contextual API Dependency Graphs.” CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1105-1116. doi:10.1145/2660267.2660359
Abstract: The drastic increase of Android malware has led to a strong interest in developing methods to automate the malware analysis process. Existing automated Android malware detection and classification methods fall into two general categories: 1) signature-based and 2) machine learning-based. Signature-based approaches can be easily evaded by bytecode-level transformation attacks. Prior learning-based works extract features from application syntax, rather than program semantics, and are also subject to evasion. In this paper, we propose a novel semantic-based approach that classifies Android malware via dependency graphs. To battle transformation attacks, we extract a weighted contextual API dependency graph as program semantics to construct feature sets. To fight against malware variants and zero-day malware, we introduce graph similarity metrics to uncover homogeneous application behaviors while tolerating minor implementation differences. We implement a prototype system, DroidSIFT, in 23 thousand lines of Java code. We evaluate our system using 2200 malware samples and 13500 benign samples. Experiments show that our signature detection can correctly label 93% of malware instances; our anomaly detector is capable of detecting zero-day malware with a low false negative rate (2%) and an acceptable false positive rate (5.15%) for a vetting purpose.
Keywords: android, anomaly detection, graph similarity, malware classification, semantics-aware, signature detection (ID#: 15-5985)
URL:  http://doi.acm.org/10.1145/2660267.2660359


 

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Searchable Encryption, 2014

 

 
SoS Logo

Searchable Encryption

2014

 

The phrase “searchable encryption” refers to techniques for protecting privacy while concurrently allowing searches within data, particularly in the cloud. The research presented here addresses several such approaches. All of the research cited was presented in 2014.



Florian Hahn, Florian Kerschbaum; “Searchable Encryption with Secure and Efficient Updates,” CCS ’14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 310-320. doi:10.1145/2660267.2660297
Abstract: Searchable (symmetric) encryption allows encryption while still enabling search for keywords. Its immediate application is cloud storage, where a client outsources its files while the (cloud) service provider should search and selectively retrieve those. Searchable encryption is an active area of research and a number of schemes with different efficiency and security characteristics have been proposed in the literature. Any scheme for practical adoption should be efficient, i.e. have sub-linear search time, dynamic, i.e. allow updates, and semantically secure to the most possible extent. Unfortunately, efficient, dynamic searchable encryption schemes suffer from various drawbacks. Either they deteriorate from semantic security to the security of deterministic encryption under updates, they require storing information on the client and for deleted files and keywords, or they have very large index sizes. All of this is a problem, since we can expect the majority of data to be later added or changed. Since these schemes are also less efficient than deterministic encryption, they are currently an unfavorable choice for encryption in the cloud. In this paper we present the first searchable encryption scheme whose updates leak no more information than the access pattern, that still has asymptotically optimal search time, linear, very small and asymptotically optimal index size and can be implemented without storage on the client (except the key). Our construction is based on the novel idea of learning the index for efficient access from the access pattern itself. Furthermore, we implement our system and show that it is highly efficient for cloud storage.
Keywords: dynamic searchable encryption, searchable encryption, secure index, update (ID#: 15-6102)
URL: http://doi.acm.org/10.1145/2660267.2660297
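
For readers new to the area, the basic inverted-index pattern that dynamic SSE schemes refine might be sketched as follows: the client derives a deterministic per-keyword token with a keyed PRF (HMAC here), and the server stores and matches only tokens. Real schemes, including this paper's, add considerably more machinery to limit leakage under updates.

```python
# Sketch: the core token flow of searchable symmetric encryption.
# Only the opaque HMAC token ever reaches the server.
import hmac, hashlib

KEY = b"client-secret-key"            # never leaves the client

def token(keyword):
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

server_index = {}                     # token -> list of (encrypted) file ids

def add_document(doc_id, keywords):   # client side: send tokens, not words
    for w in keywords:
        server_index.setdefault(token(w), []).append(doc_id)

def search(keyword):                  # server matches tokens blindly
    return server_index.get(token(keyword), [])

add_document("file-07.enc", ["invoice", "2014"])
print(search("invoice"))              # -> ['file-07.enc']
```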

 

Gabriel Ghinita, Razvan Rughinis; “An Efficient Privacy-Preserving System for Monitoring Mobile Users: Making Searchable Encryption Practical,” CODASPY ’14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 321-332. doi:10.1145/2557547.2557559
Abstract: Monitoring location updates from mobile users has important applications in several areas, ranging from public safety and national security to social networks and advertising. However, sensitive information can be derived from movement patterns, so protecting the privacy of mobile users is a major concern. Users may only be willing to disclose their locations when some condition is met, for instance in proximity of a disaster area, or when an event of interest occurs nearby. Currently, such functionality is achieved using searchable encryption. Such cryptographic primitives provide provable guarantees for privacy, and allow decryption only when the location satisfies some predicate. Nevertheless, they rely on expensive pairing-based cryptography (PBC), and direct application to the domain of location updates leads to impractical solutions.  We propose secure and efficient techniques for private processing of location updates that complement the use of PBC and lead to significant gains in performance by reducing the amount of required pairing operations. We also implement two optimizations that further improve performance: materialization of results to expensive mathematical operations, and parallelization. Extensive experimental results show that the proposed techniques significantly improve performance compared to the baseline, and reduce the searchable encryption overhead to a level that is practical in a computing environment with reasonable resources, such as the cloud.
Keywords: location privacy, pairing-based cryptography (ID#: 15-6103)
URL: http://doi.acm.org/10.1145/2557547.2557559

 

Dalia Khader; “Attribute Based Search in Encrypted Data: ABSE,” WISCS ’14 Proceedings of the 2014 ACM Workshop on Information Sharing & Collaborative Security, November 2014, Pages 31-40. doi:10.1145/2663876.2663878
Abstract: Searchable encryption enables users to delegate search functionalities to third-parties without giving them the ability to decrypt. Existing schemes assume that the sender knows the identity of the receiver. In this paper we relax this assumption by proposing the first Attribute Based Searchable Encryption Scheme (ABSE). An ABSE is a type of public key encryption with keyword search that allows the user encrypting the data to specify a policy that determines, among the users of the system, who is eligible to decrypt and search the data. Each user of the system owns a set of attributes and the policy is a function of these attributes expressed as a predicate. Only members who own sufficient attributes to satisfy that policy can send the server a valid search query. In our work we introduce the concept of a secure ABSE by defining the functionalities and the relevant security notions such as correctness, chosen keyword attacks, and attribute forgeability attacks. Our definitions are based on provable security formalizations. We further propose a secure construction of an ABSE based on bilinear maps. We illustrate the use of our proposed scheme in a shared storage for medical records.
Keywords: attribute based systems, public key cryptography, searchable encryption (ID#: 15-6104)
URL:  http://doi.acm.org/10.1145/2663876.2663878

 

Mehmet Kuzu, Mohammad Saiful Islam, Murat Kantarcioglu; “Efficient Privacy-Aware Search over Encrypted Databases,” CODASPY ’14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 249-256. doi:10.1145/2557547.2557570
Abstract: In recent years, the database as a service (DAS) model, where data management is outsourced to cloud service providers, has become more prevalent. Although the DAS model offers lower cost and flexibility, it necessitates the transfer of potentially sensitive data to untrusted cloud servers. To ensure confidentiality, encryption of sensitive data before its transfer to the cloud emerges as an important option. Encrypted storage provides protection but it complicates data processing, including crucial selective record retrieval. To achieve selective retrieval over encrypted collections, a considerable number of searchable encryption schemes have been proposed in the literature, with distinct privacy guarantees. Among the available approaches, oblivious RAM based ones offer optimal privacy. However, they are computationally intensive and do not scale well to very large databases. On the other hand, almost all efficient schemes leak some information, especially the data access pattern, to the remote servers. Unfortunately, recent evidence on access pattern leakage indicates that an adversary’s background knowledge could be used to infer the contents of the encrypted data and may potentially endanger individual privacy.  In this paper, we introduce a novel construction for practical and privacy-aware selective record retrieval over encrypted databases. Our approach leaks an obfuscated access pattern to enable efficient retrieval while ensuring individual privacy. The applied obfuscation is based on differential privacy, which provides rigorous individual privacy guarantees against adversaries with arbitrary background knowledge.
Keywords: differential privacy, searchable encryption, security (ID#: 15-6105)
URL:  http://doi.acm.org/10.1145/2557547.2557570
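
The differential-privacy building block behind such obfuscation is easy to sketch: perturb per-bucket access counts with Laplace noise of scale sensitivity/epsilon (generated below as the difference of two exponentials). The counts and parameters are invented, and the paper's full mechanism is considerably more involved.

```python
# Sketch: Laplace mechanism applied to an access count. X - Y with
# X, Y ~ Exp(eps/sens) is Laplace(0, sens/eps).
import random

def laplace_noise(sensitivity, epsilon):
    rate = epsilon / sensitivity
    return random.expovariate(rate) - random.expovariate(rate)

true_accesses = 12                    # hypothetical count for one bucket
reported = max(0, round(true_accesses + laplace_noise(sensitivity=1, epsilon=0.5)))
print(reported)
```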

 

Zhangjie Fu, Jiangang Shu, Xingming Sun, Daxing Zhang; “Semantic Keyword Search Based on Tree over Encrypted Cloud Data,” SCC ’14 Proceedings of the 2nd International Workshop on Security in Cloud Computing, June 2014, Pages 59-62. doi:10.1145/2600075.2600081
Abstract: Searchable encryption is a good solution for searching over encrypted cloud data in cloud computing. However, most existing searchable encryption schemes only support exact keyword search. That means they do not support searching for different variants of the query word, which is a significant drawback and greatly affects data usability and user experience. In this paper, we formalize the problem of semantic keyword-based search over encrypted cloud data while preserving privacy. Semantic keyword-based search greatly improves the user experience by returning all the documents containing semantically close keywords related to the query word. In our solution, we use a stemming algorithm to construct the stem set, which reduces the dimension of the index. A symbol-based tree is also adopted in index construction to improve search efficiency. Rigorous privacy analysis and experiments on a real dataset show that our scheme is secure and efficient.
Keywords: cloud computing, searchable encryption, semantic search, stemming algorithm (ID#: 15-6106)
URL:  http://doi.acm.org/10.1145/2600075.2600081
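
A sketch of the stem-set construction: index documents under stems so that query variants collide. A real system would use a standard stemmer (e.g., Porter); the crude suffix stripper below only keeps the example self-contained, and encrypted trapdoors would be derived from stems rather than raw query words.

```python
# Sketch: building a stem-keyed index for semantic keyword search.
# The suffix-stripping "stemmer" is a deliberately crude stand-in.

def stem(word):
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def build_stem_index(docs):
    index = {}
    for doc_id, text in docs.items():
        for w in text.lower().split():
            index.setdefault(stem(w), set()).add(doc_id)
    return index

docs = {"d1": "searching encrypted files", "d2": "she searched the encryption index"}
index = build_stem_index(docs)
print(index[stem("searches")])   # matches both 'searching' and 'searched'
```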

 

Boyang Wang, Yantian Hou, Ming Li, Haitao Wang, Hui Li; “Maple: Scalable Multi-Dimensional Range Search over Encrypted Cloud Data with Tree-Based Index,” ASIA CCS ’14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 111-122.  doi:10.1145/2590296.2590305
Abstract: Cloud computing promises users massive scale outsourced data storage services with much lower costs than traditional methods. However, privacy concerns compel sensitive data to be stored on the cloud server in an encrypted form. This poses a great challenge for effectively utilizing cloud data, such as executing common SQL queries. A variety of searchable encryption techniques have been proposed to solve this issue; yet efficiency and scalability are still the two main obstacles for their adoption in real-world datasets, which are multi-dimensional in general. In this paper, we propose a tree-based public-key Multi-Dimensional Range Searchable Encryption (MDRSE) to overcome the above limitations. Specifically, we first formally define the leakage function and security of a tree-based MDRSE. Then, by leveraging an existing predicate encryption in a novel way, our tree-based MDRSE efficiently indexes and searches over encrypted cloud data with multi-dimensional tree structures (i.e., R-trees). Moreover, our scheme is able to protect single-dimensional privacy while previous efficient solutions fail to achieve this. Our scheme is selectively secure, and through extensive experimental evaluation on a large-scale real-world dataset, we show the efficiency and scalability of our scheme.
Keywords: encrypted cloud data, multiple dimension, range search, tree structures (ID#: 15-6107)
URL: http://doi.acm.org/10.1145/2590296.2590305

 

Florian Kerschbaum; “Client-Controlled Cloud Encryption,” CCS ’14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1542-1543. doi:10.1145/2660267.2660577
Abstract: Customers of cloud service demand control over their data. Next to threats to intellectual property, legal requirements and risks, such as data protection compliance or the possibility of a subpoena of the cloud service provider, also pose restrictions. A commonly proposed and implemented solution is to encrypt the data on the client and retain the key at the client. In this tutorial we will review: the available encryption methods, such deterministic, order-preserving, homomorphic, searchable (functional) encryption and secure multi-party computation; possible attacks on currently deployed systems like dictionary and frequency attacks; architectures integrating these solutions into SaaS and PaaS (DBaaS) applications.
Keywords: cloud, encryption, tutorial (ID#: 15-6108)
URL: http://doi.acm.org/10.1145/2660267.2660577

 

David McGrew; “Privacy vs. Efficacy in Cloud-based Threat Detection,” CCSW ’14 Proceedings of the 6th edition of the ACM Workshop on Cloud Computing Security, November 2014, Pages 3-4. doi:10.1145/2664168.2664183
Abstract: Advanced threats can be detected by monitoring information systems and networks, then applying advanced analytic techniques to the data thus gathered. It is natural to gather, store, and analyze this data in the Cloud, but doing so introduces significant privacy concerns. There are technologies that can protect privacy to some extent, but these technologies reduce the efficacy of threat analytics and forensics, and introduce computation and communication overhead. This talk considers the tension between privacy and efficacy in Cloud threat detection, and analyzes both pragmatic techniques such as data anonymization via deterministic encryption and differential privacy as well as interactive techniques such as private set intersection and searchable encryption, and highlights areas where further research is needed.
Keywords: cloud, privacy, threat monitoring (ID#: 15-6109)
URL: http://doi.acm.org/10.1145/2664168.2664183

 

Florian Kerschbaum, Axel Schroepfer; “Optimal Average-Complexity Ideal-Security Order-Preserving Encryption,” CCS ’14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 275-286. doi:10.1145/2660267.2660277
Abstract: Order-preserving encryption enables performing many classes of queries—including range queries—on encrypted databases. Popa et al. recently presented an ideal-secure order-preserving encryption (or encoding) scheme, but their cost of insertions (encryption) is very high. In this paper we present an also ideal-secure, but significantly more efficient order-preserving encryption scheme. Our scheme is inspired by Reed’s referenced work on the average height of random binary search trees. We show that our scheme improves the average communication complexity from O(n log n) to O(n) under uniform distribution. Our scheme also integrates efficiently with adjustable encryption as used in CryptDB. In our experiments for database inserts we achieve a performance increase of up to 81% in LANs and 95% in WANs.
Keywords: adjustable encryption, efficiency, ideal security, in-memory column database, indistinguishability, order-preserving encryption (ID#: 15-6110)
URL:  http://doi.acm.org/10.1145/2660267.2660277
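The binary-search-tree idea behind such mutable order-preserving encodings is straightforward to sketch. The following Python toy is our own illustration, not the authors' construction: each plaintext is assigned the midpoint of its tree node's interval, so ciphertext order matches plaintext order. A real ideal-secure scheme must additionally mutate (re-encode) the tree when an interval is exhausted, and must not expose plaintext keys as this toy does.

```python
# Minimal sketch: order-preserving codes from a binary search tree.
class Node:
    def __init__(self, key, lo, hi):
        self.key = key
        self.code = (lo + hi) // 2          # order-preserving "ciphertext"
        self.left = self.right = None

def insert(root, key, lo=0, hi=2**32):
    if root is None:
        return Node(key, lo, hi)
    if key < root.key:
        root.left = insert(root.left, key, lo, root.code)
    elif key > root.key:
        root.right = insert(root.right, key, root.code + 1, hi)
    return root                             # equal keys reuse the existing code

def encode(root, key):
    while root is not None:
        if key == root.key:
            return root.code
        root = root.left if key < root.key else root.right
    raise KeyError(key)

root = None
for v in [17, 3, 99, 42]:
    root = insert(root, v)

codes = [encode(root, v) for v in sorted([17, 3, 99, 42])]
assert codes == sorted(codes)   # ciphertext order equals plaintext order
```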

 

Andreas Schaad, Anis Bkakria, Florian Kerschbaum, Frederic Cuppens, Nora Cuppens-Boulahia, David Gross-Amblard; “Optimized and Controlled Provisioning of Encrypted Outsourced Data,” SACMAT ’14 Proceedings of the 19th ACM Symposium on Access Control Models and Technologies, June 2014, Pages 141-152. doi:10.1145/2613087.2613100
Abstract: Recent advances in encrypted outsourced databases support the direct processing of queries on encrypted data. Depending on the functionality (i.e., operators) required in the queries, the database has to use different encryption schemes with different security properties. Next to these functional requirements, a security administrator may have to address security policies that may equally determine the encryption schemes used. We present an algorithm and tool set that determines an optimal balance between security and functionality and helps to identify and resolve possible conflicts. We test our solution on a database benchmark and business-driven security policies.
Keywords: encrypted database, encryption algorithm, policy configuration (ID#: 15-6111)
URL: http://doi.acm.org/10.1145/2613087.2613100

 

Yitao Duan; “Distributed Key Generation for Encrypted Deduplication: Achieving the Strongest Privacy,” CCSW ’14 Proceedings of the 6th edition of the ACM Workshop on Cloud Computing Security, November 2014, Pages 57-68. doi:10.1145/2664168.2664169
Abstract: Large-scale cloud storage systems often attempt to achieve two seemingly conflicting goals: (1) the systems need to reduce the copies of redundant data to save space, a process called deduplication; and (2) users demand encryption of their data to ensure privacy. Conventional encryption makes deduplication on ciphertexts ineffective, as it destroys data redundancy. A line of work, originating from Convergent Encryption [27] and evolving into Message Locked Encryption [13] and the latest DupLESS architecture [12], strives to solve this problem. DupLESS relies on a key server to help the clients generate encryption keys that result in convergent ciphertexts. In this paper, we first introduce a new security notion appropriate for the setting of deduplication and show that it is strictly stronger than all relevant notions. We then provide a rigorous proof of security against this notion, in the random oracle model, for the DupLESS architecture, which is lacking in the original paper. Our proof shows that using an additional secret, other than the data itself, for generating encryption keys achieves the best possible security under the current deduplication paradigm. We also introduce a distributed protocol that eliminates the need for the key server. This not only provides better protection but also allows less-managed systems, such as P2P systems, to enjoy the same high security level. Implementation and evaluation show that the scheme is both robust and practical.
Keywords: cloud computing security, deduplication, deterministic encryption (ID#: 15-6112)
URL:  http://doi.acm.org/10.1145/2664168.2664169
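The convergent-encryption baseline that this line of work builds on can be sketched in a few lines: the key is derived from the message itself, so identical files encrypt identically and deduplicate. The Python toy below is a stdlib-only illustration; the hash-counter keystream stands in for a real cipher, and DupLESS itself additionally blinds the key with a key-server secret, which this sketch omits.

```python
# Toy convergent encryption: K = H(M), so equal plaintexts give equal
# ciphertexts and can be deduplicated server-side.
import hashlib

def keystream(key, n):
    # Illustrative counter-mode keystream built from SHA-256 (not a real cipher).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def convergent_encrypt(message):
    key = hashlib.sha256(message).digest()          # key derived from the message
    ct = bytes(m ^ k for m, k in zip(message, keystream(key, len(message))))
    return key, ct

k1, c1 = convergent_encrypt(b"same file contents")
k2, c2 = convergent_encrypt(b"same file contents")
assert c1 == c2   # equal plaintexts -> equal ciphertexts -> dedupable
```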

 

Warren He, Devdatta Akhawe, Sumeet Jain, Elaine Shi, Dawn Song; “ShadowCrypt: Encrypted Web Applications for Everyone,” CCS ’14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1028-1039. doi:10.1145/2660267.2660326
Abstract: A number of recent research and industry proposals discussed using encrypted data in web applications. We first present a systematization of the design space of web applications and highlight the advantages and limitations of current proposals. Next, we present ShadowCrypt, a previously unexplored design point that enables encrypted input/output without trusting any part of the web applications. ShadowCrypt allows users to transparently switch to encrypted input/output for text-based web applications. ShadowCrypt runs as a browser extension, replacing input elements in a page with secure, isolated shadow inputs and encrypted text with secure, isolated cleartext. ShadowCrypt’s key innovation is the use of Shadow DOM, an upcoming primitive that allows low-overhead isolation of DOM trees. Evaluation results indicate that ShadowCrypt has low overhead and is of practical use today. Finally, based on our experience with ShadowCrypt, we present a study of 17 popular web applications across different domains, examining the functionality impact and security advantages of encrypting the data they handle.
Keywords: privacy, shadow dom, web security (ID#: 15-6113)
URL:  http://doi.acm.org/10.1145/2660267.2660326

 

Michael Herrmann, Alfredo Rial, Claudia Diaz, Bart Preneel; “Practical Privacy-Preserving Location-Sharing Based Services with Aggregate Statistics,” WiSec ’14 Proceedings of the 2014 ACM Conference on Security and Privacy in Wireless & Mobile Networks, July 2014, Pages 87-98. doi:10.1145/2627393.2627414
Abstract: Location-sharing-based services (LSBSs) allow users to share their location with their friends in a sporadic manner. In currently deployed LSBSs users must disclose their location to the service provider in order to share it with their friends. This default disclosure of location data introduces privacy risks. We define the security properties that a privacy-preserving LSBS should fulfill and propose two constructions. First, a construction based on identity based broadcast encryption (IBBE) in which the service provider does not learn the user’s location, but learns which other users are allowed to receive a location update. Second, a construction based on anonymous IBBE in which the service provider does not learn the latter either. As advantages with respect to previous work, in our schemes the LSBS provider does not need to perform any operations to compute the reply to a location data request, but only needs to forward IBBE ciphertexts to the receivers. We implement both constructions and present a performance analysis that shows their practicality. Furthermore, we extend our schemes such that the service provider, performing some verification work, is able to collect privacy-preserving aggregate statistics on the locations users share with each other.
Keywords: broadcast encryption, location privacy, vector commitments (ID#: 15-6114)
URL:  http://doi.acm.org/10.1145/2627393.2627414

 

Aikaterina Latsiou, Panagiotis Rizomiliotis; “The Rainy Season of Cryptography,” PCI ’14 Proceedings of the 18th Panhellenic Conference on Informatics, October 2014, Pages 1-6. doi:10.1145/2645791.2645798
Abstract: Cloud Computing (CC) is the new trend in computing and resource management, an architectural shift towards thin clients and conveniently centralized provision of computing and networking resources. Worldwide cloud services revenue reached $148.8 billion in 2014. However, CC introduces security risks that the clients of the cloud have to deal with. More precisely, there are many security concerns related to outsourcing storage and computation to the cloud, mainly attributable to the fact that the clients do not have direct control over the systems that process their data. In this paper, we investigate the new challenges that cryptography faces in the CC era. We introduce a security framework for analysing these challenges, describe the cryptographic techniques that have been proposed to date, and provide a list of open problems along with new directions for research.
Keywords: Cloud Computing, Cryptography, Outsourcing (ID#: 15-6115)
URL:  http://doi.acm.org/10.1145/2645791.2645798

 

Hu Chun, Yousef Elmehdwi, Feng Li, Prabir Bhattacharya, Wei Jiang; “Outsourceable Two-Party Privacy-Preserving Biometric Authentication,” ASIA CCS ’14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 401-412.  doi:10.1145/2590296.2590343
Abstract: Biometric authentication, a key component of many secure protocols and applications, is the process of authenticating a user by matching her biometric data against a biometric database stored at a server managed by an entity. If there is a match, the user can log into her account or obtain the services provided by the entity. Privacy-preserving biometric authentication (PPBA) considers a situation where the biometric data are kept private during the authentication process. That is, the user’s biometric data record is never disclosed to the entity, and the data stored in the entity’s biometric database are never disclosed to the user. Due to the reduction in operational costs and high computing power, it is beneficial for an entity to outsource not only its data but also computations, such as the biometric authentication process, to a cloud. However, due to well-documented security risks faced by a cloud, sensitive data like biometrics should be encrypted first and then outsourced to the cloud. When the biometric data are encrypted and cannot be decrypted by the cloud, the existing PPBA protocols are not applicable. Therefore, in this paper, we propose a two-party PPBA protocol for the case where the biometric data in consideration are fully encrypted and outsourced to a cloud. In the proposed protocol, the security of the biometric data is completely protected, since the encrypted biometric data are never decrypted during the authentication process. In addition, we formally analyze the security of the proposed protocol and provide extensive empirical results to show its runtime complexity.
Keywords: biometric authentication, cloud computing, security (ID#: 15-6116)
URL: http://doi.acm.org/10.1145/2590296.2590343

 

Hua Deng, Qianhong Wu, Bo Qin, Sherman S.M. Chow, Josep Domingo-Ferrer, Wenchang Shi; “Tracing and Revoking Leaked Credentials: Accountability in Leaking Sensitive Outsourced Data,” ASIA CCS ’14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 425-434. doi:10.1145/2590296.2590342
Abstract: Most existing proposals for access control over outsourced data mainly aim at guaranteeing that the data are only accessible to authorized requestors who have the access credentials. This paper proposes TRLAC, an a posteriori approach for tracing and revoking leaked credentials, to complement existing a priori solutions. The tracing procedure of TRLAC can trace, in a black-box manner, at least one traitor who illegally distributed a credential, without any help from the cloud service provider. Once the dishonest users have been found, a revocation mechanism can be called to deprive them of access rights. We formally prove the security of TRLAC, and empirically show that the introduction of the tracing feature incurs little cost to outsourcing.
Keywords: access control, accountability, broadcast encryption, cloud computing, data security, leakage, tracing (ID#: 15-6117)
URL: http://doi.acm.org/10.1145/2590296.2590342

 

Mohammad Saiful Islam, Mehmet Kuzu, Murat Kantarcioglu; “Inference Attack Against Encrypted Range Queries on Outsourced Databases,” CODASPY ’14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 235-246. doi:10.1145/2557547.2557561
Abstract: To mitigate security concerns about outsourced databases, quite a few protocols have been proposed that outsource data in encrypted format and allow encrypted query execution on the server side. Among the more practical protocols, the “bucketization” approach facilitates query execution at the cost of reduced efficiency by allowing some false positives in the query results. Precise Query Protocols (PQPs), on the other hand, enable the server to execute queries without incurring any false positives. Even though these protocols do not reveal the underlying data, they reveal the query access pattern to an adversary. In this paper, we introduce a general attack on PQPs based on access pattern disclosure in the context of secure range queries. Our empirical analysis on several real-world datasets shows that the proposed attack is able to disclose a significant amount of sensitive data with high accuracy, provided that the attacker has a reasonable amount of background knowledge. We further demonstrate that a slight variation of this attack can also be used against imprecise protocols (e.g., bucketization) to disclose a significant amount of sensitive information.
Keywords: database-as-a-service, encrypted range query, inference attack (ID#: 15-6118)
URL: http://doi.acm.org/10.1145/2557547.2557561
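A stripped-down version of an access-pattern attack illustrates the leak. In the Python sketch below, the domain, hidden values, and query model are invented for illustration: the attacker observes only which record identifiers appear in each range-query result set, yet under uniformly random ranges the match frequency reveals how central each hidden value is in the domain.

```python
# Toy access-pattern attack on encrypted range queries: a record with
# value v matches a uniformly random range with probability that grows
# toward the middle of the domain, so match counts leak value positions.
import random

D = 100
secret_values = {"r1": 5, "r2": 50, "r3": 93}   # hidden from the attacker
matches = {r: 0 for r in secret_values}

for _ in range(20000):                          # attacker observes result sets
    a, b = sorted(random.sample(range(D + 1), 2))
    for r, v in secret_values.items():
        if a <= v <= b:
            matches[r] += 1                     # record id seen in the results

# Higher frequency -> value nearer the middle of the domain.
print(sorted(matches.items(), key=lambda kv: kv[1]))
```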

 

Matteo Maffei, Giulio Malavolta, Manuel Reinert, Dominique Schröder; “Brief Announcement: Towards Security and Privacy for Outsourced Data in the Multi-Party Setting,” PODC ’14 Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing, July 2014, Pages 144-146. doi:10.1145/2611462.2611508
Abstract: Cloud storage has rapidly acquired popularity among users, constituting a seamless solution for the backup, synchronization, and sharing of large amounts of data. This technology, however, puts user data in the direct control of cloud service providers, which raises increasing security and privacy concerns related to the integrity of outsourced data, the accidental or intentional leakage of sensitive information, the profiling of user activities and so on. We present GORAM, a cryptographic system that protects the secrecy and integrity of the data outsourced to an untrusted server and guarantees the anonymity and unlinkability of consecutive accesses to such data. GORAM allows the database owner to share outsourced data with other clients, selectively granting them read and write permissions. GORAM is the first system to achieve such a wide range of security and privacy properties for outsourced storage. Technically, GORAM builds on a combination of ORAM to conceal data accesses, attribute-based encryption to rule the access to outsourced data, and zero-knowledge proofs to prove read and write permissions in a privacy-preserving manner. We implemented GORAM and conducted an experimental evaluation to demonstrate its feasibility.
Keywords: GORAM, ORAM, cloud storage, oblivious ram, privacy-enhancing technologies (ID#: 15-6119)
URL:  http://doi.acm.org/10.1145/2611462.2611508

 

Paul Weiser, Simon Scheider; “A Civilized Cyberspace for Geoprivacy,” GeoPrivacy ’14 Proceedings of the 1st ACM SIGSPATIAL International Workshop on Privacy in Geographic Information Collection and Analysis, November 2014, Article No. 5. doi:10.1145/2675682.2676396
Abstract: We argue that current technical and legal attempts aimed at protecting Geoprivacy are insufficient. We propose a novel 2-dimensional model of privacy, which we term a “civilized cyberspace.” On one dimension there are engineering, social, and legal tools, while on the other there are different kinds of interaction with information. We argue that such a civilized cyberspace protects privacy without sacrificing personal freedom on the one hand or opportunities for businesses on the other. We also discuss its realization and propose a technology stack including a permission service for geoprocessing.
Keywords: geoprivacy, geoprocessing, licensing, privacy model (ID#: 15-6120)
URL:  http://doi.acm.org/10.1145/2675682.2676396

 

Xiao Shaun Wang, Kartik Nayak, Chang Liu, T-H. Hubert Chan, Elaine Shi, Emil Stefanov, Yan Huang; “Oblivious Data Structures,” CCS ’14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 215-226. doi:10.1145/2660267.2660314
Abstract: We design novel, asymptotically more efficient data structures and algorithms for programs whose data access patterns exhibit some degree of predictability. To this end, we propose two novel techniques, a pointer-based technique and a locality-based technique. We show that these two techniques are powerful building blocks in making data structures and algorithms oblivious. Specifically, we apply these techniques to a broad range of commonly used data structures, including maps, sets, priority-queues, stacks, deques; and algorithms, including a memory allocator algorithm, max-flow on graphs with low doubling dimension, and shortest-path distance queries on weighted planar graphs. Our oblivious counterparts of the above outperform the best known ORAM scheme both asymptotically and in practice.
Keywords: cryptography, oblivious algorithms, security (ID#: 15-6121)
URL: http://doi.acm.org/10.1145/2660267.2660314

 

Jinsheng Zhang, Wensheng Zhang, Daji Qiao; “S-ORAM: a Segmentation-based Oblivious RAM,” ASIA CCS ’14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 147-158. doi:10.1145/2590296.2590323
Abstract: As outsourcing data to remote storage servers becomes popular, protecting users’ patterns in accessing these data has become a big concern. ORAM constructions are promising solutions to this issue, but their application in practice has been impeded by the high communication and storage overheads incurred. Towards addressing this challenge, this paper proposes a segmentation-based ORAM (S-ORAM). It adopts two segment-based techniques, namely, piece-wise shuffling and segment-based query, to improve the performance of shuffling and query by factoring block size into the design. Extensive security analysis proves that S-ORAM is a highly secure solution with a negligible failure probability of O(N^(-log N)). In terms of communication and storage overheads, S-ORAM outperforms the Balanced ORAM (B-ORAM) and the Path ORAM (P-ORAM), the state-of-the-art hash-based and index-based ORAMs respectively, in both practical and theoretical evaluations. Particularly, under practical settings, the communication overhead of S-ORAM is 12 to 23 times less than B-ORAM when they have the same constant-size user-side storage, and S-ORAM consumes 80% less server-side storage and around 60% to 72% less bandwidth than P-ORAM when they have similar logarithmic-size user-side storage.
Keywords: access pattern, data outsourcing, oblivious RAM, privacy (ID#: 15-6122)
URL:  http://doi.acm.org/10.1145/2590296.2590323

 

Loi Luu, Shweta Shinde, Prateek Saxena, Brian Demsky; “A Model Counter for Constraints over Unbounded Strings,” PLDI ’14 Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2014, Pages 565-576. doi:10.1145/2594291.2594331
Abstract: Model counting is the problem of determining the number of solutions that satisfy a given set of constraints. Model counting has numerous applications in the quantitative analyses of program execution time, information flow, and combinatorial circuit designs, as well as in probabilistic reasoning. We present a new approach to model counting for structured data types, specifically strings in this work. The key ingredient is a new technique that leverages generating functions as a basic primitive for combinatorial counting. Our tool SMC, which embodies this approach, can model count for constraints specified in an expressive string language efficiently and precisely, thereby outperforming previous finite-size analysis tools. SMC is expressive enough to model constraints arising in real-world JavaScript applications and UNIX C utilities. We demonstrate the practical feasibility of performing quantitative analyses arising in security applications, such as determining the comparative strengths of password strength meters and determining the information leakage via side channels.
Keywords: (not provided) (ID#: 15-6123)
URL: http://doi.acm.org/10.1145/2594291.2594331
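The generating-function primitive amounts to counting the strings a constraint admits, length by length. The minimal Python stand-in below uses our own toy constraint, alphabet, and recurrence (SMC's constraint language is far richer): it counts strings avoiding a forbidden substring with the same transfer-style recurrence whose coefficients a generating function would encode.

```python
# Toy string model counter: number of strings of length n over {'a','b'}
# avoiding the substring "ab", via a state-based recurrence (equivalent
# to extracting coefficients of the corresponding generating function).
from functools import lru_cache

def count_avoiding_ab(n):
    @lru_cache(maxsize=None)
    def f(i, last):
        if i == n:
            return 1
        total = 0
        for ch in "ab":
            if last == "a" and ch == "b":   # would create forbidden "ab"
                continue
            total += f(i + 1, ch)
        return total
    return f(0, "")

# Strings avoiding "ab" have the form b...ba...a, so the count is n + 1.
print([count_avoiding_ab(n) for n in range(6)])   # [1, 2, 3, 4, 5, 6]
```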

 

Suman Phangal, Mukesh Kumar; “A Dual Security Scheme Using DNA Key-Based DNA Cryptography,” ICTCS ’14 Proceedings of the 2014 International Conference on Information and Communication Technology for Competitive Strategies, November 2014, Article No. 37. doi:10.1145/2677855.2677882
Abstract: Cryptography is one of the most traditional and secure approaches to providing reliable transmission over the web. The presented work improves on the traditional symmetric cryptography approach by including the concept of DNA sequencing. In this work, a two-stage model is presented to improve DNA cryptography. This cryptography model uses a DNA sequence as the input key to the system, and also uses DNA object-based substitution for cryptography. The work is applied to images, and the analysis is performed using MSE and PSNR values. The obtained results show the effective generation of the encrypted image.
Keywords: Cryptography, DNA, MSE, PSNR, Secure (ID#: 15-6124)
URL: http://doi.acm.org/10.1145/2677855.2677882

 

Hamidreza Ghafghazi, Amr El Mougy, Hussein T. Mouftah, Carlisle Adams; “Classification of Technological Privacy Techniques for LTE-Based Public Safety Networks,” Q2SWinet ’14 Proceedings of the 10th ACM symposium on QoS and Security for Wireless and Mobile Networks, September 2014, Pages 41-50. doi:10.1145/2642687.2642693
Abstract: Public Protection and Disaster Relief (PPDR) organizations emphasize the need for dedicated and broadband Public Safety Networks (PSNs) with the capability of providing a high level of security for critical communications. Considering the preceding fact, Long Term Evolution (LTE) has been chosen as the leading candidate technology for PSNs. However, a study of privacy challenges and requirements in LTE-based PSNs has not yet emerged. This paper aims to highlight those challenges and further discusses possible scenarios in which privacy might be violated in this particular environment. Then, a classification of technological privacy techniques is proposed in order to protect and enhance privacy in LTE-based PSNs. The given classification is a useful means for comparison and assessment of applicable privacy preserving methods. Moreover, our classification highlights further requirements and open problems for which available privacy techniques are not sufficient.
Keywords: long term evolution, privacy, private information retrieval, public safety networks (ID#: 15-6125)
URL:  http://doi.acm.org/10.1145/2642687.2642693

 

Se Eun Oh, Ji Young Chun, Limin Jia, Deepak Garg, Carl A. Gunter, Anupam Datta; “Privacy-Preserving Audit for Broker-Based Health Information Exchange,” CODASPY ’14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 313-320.  doi:10.1145/2557547.2557576
Abstract: Developments in health information technology have encouraged the establishment of distributed systems known as Health Information Exchanges (HIEs) to enable the sharing of patient records between institutions. In many cases, the parties running these exchanges wish to limit the amount of information they are responsible for holding because of sensitivities about patient information. Hence, there is an interest in broker-based HIEs that keep limited information in the exchange repositories. However, it is essential to audit these exchanges carefully due to risks of inappropriate data sharing. In this paper, we consider some of the requirements and present a design for auditing broker-based HIEs in a way that controls the information available in audit logs and regulates their release for investigations. Our approach is based on formal rules for audit and the use of Hierarchical Identity-Based Encryption (HIBE) to support staged release of data needed in audits and a balance between automated and manual reviews. We test our methodology via an extension of a standard for auditing HIEs called the Audit Trail and Node Authentication Profile (ATNA) protocol.
Keywords: audit, formal logic, health information technology, hierarchical identity based encryption (ID#: 15-6126)
URL: http://doi.acm.org/10.1145/2557547.2557576

 

David Koll, Jun Li, Xiaoming Fu; “SOUP: An Online Social Network by the People, for the People,” Middleware ’14 Proceedings of the 15th International Middleware Conference, December 2014, Pages 193-204. doi:10.1145/2663165.2663324
Abstract: Concomitant with the tremendous growth of online social networking (OSN) platforms are increasing concerns from users about their privacy and the protection of their data. As user data management is usually centralized, OSN providers nowadays have the unprecedented privilege to access every user’s private data, which makes large-scale privacy leakage at a single site possible. One way to address this issue is to decentralize user data management and replicate user data at individual end-user machines across the OSN. However, such an approach must address new challenges. In particular, it must achieve high availability of the data of every user with minimal replication overhead and without assuming any permanent online storage. At the same time, it needs to provide mechanisms for encrypting user data, controlling access to the data, and synchronizing the replicas. Moreover, it has to scale with large social networks and be resilient and adaptive in handling both high churn of regular participants and attacks from malicious users. While recent works in this direction only show limited success, we introduce a new, decentralized OSN called the Self-Organized Universe of People (SOUP). SOUP employs a scalable, robust and secure mirror selection design and can effectively distribute and manage encrypted user data replicas throughout the OSN. An extensive evaluation by simulation and a real-world deployment show that SOUP addresses all aforementioned challenges.
Keywords: OSN, decentralized OSN, online social networks, privacy (ID#: 15-6127)
URL:  http://doi.acm.org/10.1145/2663165.2663324

 

Jude C. Nelson, Larry L. Peterson; “Syndicate: Virtual Cloud Storage Through Provider Composition,” BigSystem ’14 Proceedings of the 2014 ACM International Workshop on Software-Defined Ecosystems, June 2014, Pages 1-8. doi:10.1145/2609441.2609639
Abstract: Syndicate is a storage service that builds a coherent storage abstraction from already-deployed commodity components, including cloud storage, edge caches, and dataset providers. It is unique in that it not only offers consistent semantics across multiple providers, but also offers a flexible programming model to applications so they can define their own provider-agnostic storage functionality. In doing so, Syndicate fully decouples applications from providers, allowing applications to choose them based on how well they enhance data locality and durability, instead of whether or not they provide requisite features. This paper presents the motivation and design of Syndicate, and gives the results of a preliminary evaluation showing that separating storage functionality from provider implementation is feasible in practice.
Keywords: service composition, software-defined storage, storage gateway (ID#: 15-6128)
URL:  http://doi.acm.org/10.1145/2609441.2609639

 

Varunya Attasena, Nouria Harbi, Jérôme Darmont; “fVSS: A New Secure and Cost-Efficient Scheme for Cloud Data Warehouses,” DOLAP ’14 Proceedings of the 17th International Workshop on Data Warehousing and OLAP, November 2014, Pages 81-90. doi:10.1145/2666158.2666173
Abstract: Cloud business intelligence is an increasingly popular choice to deliver decision support capabilities via elastic, pay-per-use resources. However, data security issues are one of the top concerns when dealing with sensitive data. In this paper, we propose a novel approach for securing cloud data warehouses by flexible verifiable secret sharing, fVSS. Secret sharing encrypts and distributes data over several cloud service providers, thus enforcing data privacy and availability. fVSS addresses four shortcomings in existing secret sharing-based approaches. First, it allows refreshing the data warehouse when some service providers fail. Second, it allows on-line analysis processing. Third, it enforces data integrity with the help of both inner and outer signatures. Fourth, it helps users control the cost of cloud warehousing by balancing the load among service providers with respect to their pricing policies. To illustrate fVSS’ efficiency, we thoroughly compare it with existing secret sharing-based approaches with respect to security features, querying power and data storage and computing costs.
Keywords: OLAP, cloud computing, data availability, data integrity, data privacy, data warehouses, secret sharing (ID#: 15-6129)
URL:  http://doi.acm.org/10.1145/2666158.2666173

 

Tomáš Pevný, Andrew D. Ker; “Steganographic Key Leakage Through Payload Metadata,” IH&MMSec ’14 Proceedings of the 2nd ACM Workshop on Information Hiding and Multimedia Security, June 2014, Pages 109-114. doi:10.1145/2600918.2600921
Abstract: The only steganalysis attack which can provide absolute certainty about the presence of payload is one which finds the embedding key. In this paper we consider refined versions of the key exhaustion attack exploiting metadata, such as message length or decoding matrix size, which must be stored along with the payload. We show that simple implementation errors lead to leakage of key information and enable powerful inference attacks; furthermore, complete absence of information leakage seems difficult to achieve. This topic has been somewhat neglected in the literature for the last ten years, but must be considered in real-world implementations.
Keywords: bayesian inference, brute-force attack, key leakage, steganographic security (ID#: 15-6130)
URL:  http://doi.acm.org/10.1145/2600918.2600921
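The metadata-exploiting key exhaustion idea can be pictured with a toy decoder. In the Python sketch below, the stego format and decoder are entirely hypothetical: each candidate key decodes the embedded length field, keys yielding implausible lengths are discarded, and every such plausibility check leaks information that shrinks the effective key space.

```python
# Toy key exhaustion via payload metadata: keep only keys whose decoded
# length field is plausible. The "stego format" here is invented.
import hashlib

def decode_length(stego, key):
    # Hypothetical decoder: XOR the first 2 bytes with a key-derived pad.
    pad = hashlib.sha256(key.to_bytes(4, "big")).digest()
    return int.from_bytes(bytes(s ^ p for s, p in zip(stego[:2], pad)), "big")

def surviving_keys(stego, max_len, keyspace):
    # Each implausible decoded length eliminates a key, leaking information.
    return [k for k in keyspace if decode_length(stego, k) <= max_len]

stego = bytes([0x13, 0x37]) + b"..."   # hypothetical stego object
survivors = surviving_keys(stego, 1000, range(2**16))
print(f"{len(survivors)} of {2**16} candidate keys remain")
```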

 

Greig Paul, James Irvine; “Privacy Implications of Wearable Health Devices,” SIN ’14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Page 117. doi:10.1145/2659651.2659683
Abstract: With the recent rise in popularity of wearable personal health monitoring devices, a number of concerns regarding user privacy arise, specifically with regard to how the providers of these devices make use of the data obtained from them, and the protections that user data enjoys. With waterproof monitors intended to be worn 24 hours per day, and companion smartphone applications able to offer analysis and sharing of activity data, we investigate and compare the privacy policies of four services and the extent to which these services protect user privacy, since we find that these services do not fall within the scope of existing legislation regarding the privacy of health data. We then present a set of criteria which would preserve user privacy and avoid the concerns identified within the policies of the services investigated.
Keywords: Health monitoring, privacy, security, wearables (ID#: 15-6131)
URL: http://doi.acm.org/10.1145/2659651.2659683


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

 

 

Security Measurement and Metric Methods, 2014

 

 
SoS Logo

Security Measurement and Metric Methods, 2014


Measurement and metrics are hard problems in the Science of Security. The research cited here looks at methods and techniques of developing valid measurement. This work was presented in 2014.
 


Moeti, M.; Kalema, B.M., "Analytical Hierarchy Process Approach for the Metrics of Information Security Management Framework," Computational Intelligence, Communication Systems and Networks (CICSyN), 2014 Sixth International Conference on, vol., no., pp. 89, 94, 27-29 May 2014. doi:10.1109/CICSyN.2014.31
Abstract: Organizations' information technology systems are increasingly being attacked and exposed to risks that lead to the loss of valuable information and money. The vulnerable systems and applications are, basically, networks, databases, web services, internet-based services and communications, mobile technologies, and the people issues associated with them. The major objective of this study, therefore, was to identify the metrics needed for the development of an information security management framework. From related literature, relevant metrics were identified using textual analysis and grouped into six categories: organizational, environmental, contingency management, security policy, internal control, and information and risk management. These metrics were validated in a framework by using the analytical hierarchical process (AHP) method. Results of the study indicated that environmental metrics play a critical role in information security management compared to the other metrics, whereas the information and risk management metrics were found to be less significant in the rankings. This study contributes to the information security management body of knowledge by providing a single, empirically validated framework that can be used theoretically to extend research in the domain of the study, and practically by management when making decisions relating to security management.
Keywords: Internet; analytic hierarchy process; risk management; security of data; AHP; Internet-based services; Web services; analytical hierarchy process approach; databases; information security management framework metrics; mobile technologies; organizations information technology systems; risk management metrics; security management; Contingency management; Educational institutions; Information security; Measurement; Organizations; Risk management; analytical hierarchical process; information security metrics; integrated system theory; theories of information security (ID#: 15-6060)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7059150&isnumber=7058962

 

Manandhar, K.; Xiaojun Cao; Fei Hu; Yao Liu, "Combating False Data Injection Attacks in Smart Grid Using Kalman Filter," Computing, Networking and Communications (ICNC), 2014 International Conference on, vol., no., pp. 16, 20, 3-6 Feb. 2014. doi:10.1109/ICCNC.2014.6785297
Abstract: The security of the Smart Grid, one of the most important aspects of the Smart Grid system, is studied in this paper. We first discuss different pitfalls in the security of the Smart Grid system, considering the communication infrastructure among the sensors, actuators, and control systems. Following that, we derive a mathematical model of the system and propose a robust security framework for the power grid. To effectively estimate the variables of a wide range of state processes in the model, we adopt a Kalman Filter in the framework. The Kalman Filter estimates and system readings are then fed into the χ2 detectors and the proposed Euclidean detectors, which can detect various attacks and faults in the power system, including False Data Injection Attacks. The χ2-detector is a proven, effective exploratory method used with the Kalman Filter to measure the relationship between dependent variables and a series of predictor variables. The χ2-detector can detect system faults/attacks such as replay and DoS attacks. However, the study shows that the χ2-detector is unable to detect statistically derived False Data Injection Attacks, while the Euclidean distance metrics can identify such sophisticated injection attacks.
Keywords: Kalman filters; computer network security; electric sensing devices; fault diagnosis; power engineering computing; power system faults; power system security; power system state estimation; smart power grids; X2-square detector; DoS attacks; Euclidean detector; Euclidean distance metrics; Kalman filter; actuators; communication infrastructure; control systems; false data injection attack detection; fault detection; mathematical model; power system; predictor variable series; sensors; smart power grid security; state process; Detectors; Equations; Kalman filters; Mathematical model; Security; Smart grids (ID#: 15-6061)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785297&isnumber=6785290
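The detector pairing described here is simple to prototype. The scalar Python sketch below runs a Kalman filter and applies a chi-square test to the innovation; the model, noise levels, and threshold are illustrative choices, not the paper's. A crude injected offset is flagged, although, as the abstract notes, injections crafted to be consistent with the model can evade this test.

```python
# Scalar Kalman filter with a chi-square test on the innovation,
# flagging measurements inconsistent with the model.
import numpy as np

A, H = 1.0, 1.0        # state transition and measurement model
Q, R = 1e-4, 1e-2      # process and measurement noise variances
x, P = 0.0, 1.0        # state estimate and its variance
THRESH = 6.63          # chi-square, 1 dof, ~1% false-alarm rate

def step(z):
    global x, P
    x_pred = A * x                      # predict
    P_pred = A * P * A + Q
    S = H * P_pred * H + R              # innovation variance
    g = (z - H * x_pred) ** 2 / S       # chi-square statistic
    if g > THRESH:
        return x_pred, True             # reject measurement as suspect
    K = P_pred * H / S                  # update
    x = x_pred + K * (z - H * x_pred)
    P = (1 - K * H) * P_pred
    return x, False

rng = np.random.default_rng(0)
for t in range(50):
    z = 1.0 + rng.normal(0, R ** 0.5)   # honest reading near 1.0
    if t == 25:
        z += 5.0                        # injected false datum
    est, flagged = step(z)
    if flagged:
        print(f"t={t}: measurement {z:.2f} flagged as suspect")
```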

 

Karabat, C.; Topcu, B., "How to Assess Privacy Preservation Capability of Biohashing Methods?: Privacy Metrics," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, vol., no., pp. 2217, 2220, 23-25 April 2014. doi:10.1109/SIU.2014.6830705
Abstract: In this paper, we evaluate the privacy preservation capability of biometric hashing methods. Although there is some work in the literature on the privacy evaluation of biometric template protection methods, it fails to cover all such methods. To the best of our knowledge, there is no work on privacy metrics and assessment for biometric hashing methods. In this work, we use several metrics under different threat scenarios to assess the privacy protection level of biometric hashing methods. The simulation results demonstrate that biometric hash vectors may leak private information, especially under advanced threat scenarios.
Keywords: authorisation; biometrics (access control); data protection; biometric hash vectors; biometric hashing methods; biometric template protection methods; privacy metrics; privacy preservation capability assessment; privacy preservation capability evaluation; privacy protection level assessment; private information leakage; threat scenarios; Conferences; Internet; Measurement; Privacy; Security; Signal processing; Simulation; biometric; biometric hash; metrics; privacy (ID#: 15-6062)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830705&isnumber=6830164

 

Hong, J.B.; Dong Seong Kim; Haqiq, A., "What Vulnerability Do We Need to Patch First?," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, vol., no., pp. 684, 689, 23-26 June 2014. doi:10.1109/DSN.2014.68
Abstract: Computing a prioritized set of vulnerabilities to patch is important for system administrators, as it determines the order in which the vulnerabilities most critical to network security should be patched. One way to assess and analyze security to find vulnerabilities to be patched is to use attack representation models (ARMs). However, security solutions using ARMs are optimized only for the current state of the networked system. Therefore, the ARM must reanalyze the network security, causing multiple iterations of the same task to obtain the prioritized set of vulnerabilities to patch. To address this problem, we propose to use importance measures to rank network hosts and vulnerabilities, then combine these measures to prioritize the order of vulnerabilities to be patched. We show that a nearly equivalent prioritized set of vulnerabilities can be computed, in comparison with an exhaustive search method, in various network scenarios, while the performance of computing the set is dramatically improved.
Keywords: security of data; ARM; attack representation models; importance measures; network hosts; network security; networked system; prioritized set; security solutions; system administrators; vulnerability patch; Analytical models; Computational modeling; Equations; Mathematical model; Measurement; Scalability; Security; Attack Representation Model; Network Centrality; Security Analysis; Security Management; Security Metrics; Vulnerability Patch (ID#: 15-6063)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903625&isnumber=6903544

 

Nascimento, Z.; Sadok, D.; Fernandes, S.; Kelner, J., "Multi-Objective Optimization of a Hybrid Model for Network Traffic Classification by Combining Machine Learning Techniques," Neural Networks (IJCNN), 2014 International Joint Conference on, vol., no., pp. 2116, 2122, 6-11 July 2014. doi:10.1109/IJCNN.2014.6889935
Abstract: Considerable effort has been made by researchers in the area of network traffic classification, since the Internet is constantly changing. This characteristic makes the task of traffic identification not a straightforward process. Besides that, encrypted data is being widely used by applications and protocols. There are several methods for classifying network traffic, such as known ports and Deep Packet Inspection (DPI), but they are not effective since many applications constantly randomize their ports and the payload could be encrypted. This paper proposes a hybrid model that makes use of a classifier based on computational intelligence, the Extreme Learning Machine (ELM), along with Feature Selection (FS) and Multi-objective Genetic Algorithms (MOGA), to classify computer network traffic without making use of the payload or port information. The proposed model presented good results when evaluated against the UNIBS data set using four performance metrics: Recall, Precision, Flow Accuracy, and Byte Accuracy, with most rates exceeding 90%. Besides that, we present the best features and feature selection algorithm for the given problem, along with the best ELM parameters.
Keywords: Internet; computer network security; cryptography; feature selection; genetic algorithms; learning (artificial intelligence); pattern classification; protocols; telecommunication traffic; DPI; ELM parameters; Internet; MOGA; UNIBS data set; byte accuracy; computational intelligence; computer network traffic classification; deep packet inspection; encrypted data; extreme learning machine; feature selection algorithm; flow accuracy; hybrid model; machine learning techniques; multiobjective genetic algorithms; multiobjective optimization; payload encryption; precision; protocols; recall; traffic identification; Accuracy; Computational modeling; Genetic algorithms; Measurement; Optimization; Ports (Computers); Protocols (ID#: 15-6064)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889935&isnumber=6889358

 

Hatzivasilis, G.; Papaefstathiou, I.; Manifavas, C.; Papadakis, N., "A Reasoning System for Composition Verification and Security Validation,"  New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, vol., no., pp. 1, 4, March 30 2014-April 2 2014.  doi:10.1109/NTMS.2014.6814001
Abstract: Proving that a system-of-systems is composable and secure is a very difficult task. Formal methods are mathematically-based techniques used for the specification, development and verification of software and hardware systems. This paper presents a model-based framework for dynamic embedded system composition and security evaluation. Event Calculus is applied for modeling the security behavior of a dynamic system and calculating its security level as time progresses. The framework includes two main functionalities: composition validation, and derivation of security and performance metrics and properties. Starting from an initial system state and given a series of further composition events, the framework derives the final system state as well as its security and performance metrics and properties. We implement the proposed framework as an epistemic reasoner, using the rule engine JESS with a DECKT extension for the reasoning process, and the Java programming language.
Keywords: Java; embedded systems; formal specification; formal verification; reasoning about programs; security of data; software metrics; temporal logic; DECKT; JAVA programming language; composition validation; composition verification; dynamic embedded system composition; epistemic reasoner; event calculus; formal methods; model-based framework; performance metrics; reasoning system; rule engine JESS; security evaluation; security validation; system specification; system-of-systems; Cognition; Computational modeling; Embedded systems; Measurement; Protocols; Security; Unified modeling language (ID#: 15-6065)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814001&isnumber=6813963

 

Axelrod, C.W., "Reducing Software Assurance Risks for Security-Critical and Safety-Critical Systems," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, vol., no., pp. 1, 6, 2-2 May 2014. doi:10.1109/LISAT.2014.6845212
Abstract: According to the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)), the US Department of Defense (DoD) recognizes that there is a “persistent lack of a consistent approach ... for the certification of software assurance tools, testing and methodologies” [1]. As a result, the ASD(R&E) is seeking “to address vulnerabilities and weaknesses to cyber threats of the software that operates ... routine applications and critical kinetic systems ...” The mitigation of these risks has been recognized as a significant issue to be addressed in both the public and private sectors. In this paper we examine deficiencies in various software-assurance approaches and suggest ways in which they can be improved. We take a broad look at current approaches, identify their inherent weaknesses and propose approaches that serve to reduce risks. Some technical, economic and governance issues are: (1) development of software-assurance technical standards; (2) management of software-assurance standards; (3) evaluation of tools, techniques, and metrics; (4) determination of update frequency for tools and techniques; (5) focus on the most pressing threats to software systems; (6) suggestions as to risk-reducing research areas; and (7) establishment of models of the economics of software-assurance solutions, and of testing and certifying software. We show that, in order to improve current software assurance policy and practices, particularly with respect to security, there has to be a major overhaul in how software is developed, especially with respect to the requirements and testing phases of the SDLC (Software Development Lifecycle). We also suggest that the current preventative approaches are inadequate and that greater reliance should be placed upon avoidance and deterrence. We also recommend that those developing and operating security-critical and safety-critical systems exchange best-of-breed software assurance methods to prevent the vulnerability of components leading to compromise of entire systems of systems. The recent catastrophic loss of a Malaysia Airlines airplane is then presented as an example of possible compromises of the physical and logical security of on-board communications and management and control systems.
Keywords: program testing; safety-critical software; software development management; software metrics; ASD(R&E);Assistant Secretary of Defense for Research and Engineering; Malaysia Airlines airplane; SDLC; US Department of Defense; US DoD; component vulnerability prevention; control systems; critical kinetic systems; cyber threats; economic issues; governance issues; logical security; management systems; on-board communications; physical security; private sectors; public sectors; risk mitigation; safety-critical systems; security-critical systems; software assurance risk reduction; software assurance tool certification; software development; software development lifecycle; software methodologies; software metric evaluation; software requirements; software system threats; software technique evaluation; software testing; software tool evaluation; software-assurance standard management; software-assurance technical standard development; technical issues; update frequency determination; Measurement; Organizations; Security; Software systems; Standards; Testing; cyber threats; cyber-physical systems; governance; risk; safety-critical systems; security-critical systems; software assurance; technical standards; vulnerabilities; weaknesses (ID#: 15-6066)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845212&isnumber=6845183

 

Chulhee Lee; Jiheon Ok; Guiwon Seo, "Objective Video Quality Measurement Using Embedded VQMs," Heterogeneous Networking for Quality, Reliability, Security and Robustness (QShine), 2014 10th International Conference on, vol., no., pp. 129 ,130, 18-20 Aug. 2014. doi:10.1109/QSHINE.2014.6928671
Abstract: Video quality monitoring has become an important issue as multimedia data is increasingly being transmitted over the Internet and wireless channels, where transmission errors can frequently occur. Although no-reference models are suitable for such applications, current no-reference methods do not provide acceptable performance. In this paper, we propose an objective video quality assessment method using embedded video quality metrics (VQMs). In the proposed method, the video quality of encoded video data is computed at the transmitter during the encoding process. The computed VQMs are embedded in the compressed data. If there are no transmission errors, the video quality at the receiver is identical to that of the transmitting side. If there are transmission errors, the receiver adjusts the embedded VQMs by taking into account the effects of the transmission errors. The proposed method is fast and provides good performance.
Keywords: data compression; video coding; embedded VQM; embedded video quality metric; multimedia data; objective video quality measurement; video data encoding; video quality monitoring; Bit rate; Neural networks; Quality assessment; Receivers; Transmitters; Video recording; Video sequences; embedded VQM; no-reference; quality monitoring; video quality assessment (ID#: 15-6067)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6928671&isnumber=6928645

 

Shangdong Liu; Jian Gong; Jianxin Chen; Yanbing Peng; Wang Yang; Weiwei Zhang; Jakalan, A., "A Flow Based Method to Detect Penetration," Advanced Infocomm Technology (ICAIT), 2014 IEEE 7th International Conference on, vol., no.,  pp. 184, 191, 14-16 Nov. 2014. doi:10.1109/ICAIT.2014.7019551
Abstract: With the rapid expansion of the Internet, network security has become more and more important. An IDS (Intrusion Detection System) is an important technology for coping with network attacks and comes in two main types: network based (NIDS) and host based (HIDS). In this paper, we propose the concept of NFPPB (Network Flow Patterns of Penetrating Behavior) for vulnerable network ports and design a NIDS algorithm to detect attackers' infiltration behaviors. Essentially, NFPPB is a set of metrics calculated from network activities that exploit the vulnerabilities of hosts. The paper investigates the choice, generation, and comparison of NFPPB metrics. Experiments show that the method is effective and highly efficient. Finally, the paper addresses future directions and the points that need to be improved.
Keywords: computer network security; IDS; flow based method; intrusion detection system; network attacks; network flow patterns of penetrating behavior; network security; network vulnerable ports; Educational institutions; IP networks; Law; Measurement; Ports (Computers); Security; Flow Records; IDS; Infiltration Detection; Penetration Detection (ID#: 15-6068)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019551&isnumber=7019521

 

Feng Li; Chin-Tser Huang; Jie Huang; Wei Peng, "Feedback-Based Smartphone Strategic Sampling for BYOD Security," Computer Communication and Networks (ICCCN), 2014 23rd International Conference on , vol., no., pp.1, 8, 4-7 Aug. 2014. doi:10.1109/ICCCN.2014.6911814
Abstract: Bring Your Own Device (BYOD) is an information technology (IT) policy that allows employees to use their own wireless devices to access the internal network at work. Mobile malware is a major security concern that impedes BYOD's further adoption in enterprises. Existing works identify the need for better BYOD security mechanisms that balance the strength of such mechanisms against the costs of implementing them. In this paper, based on the idea of a self-reinforced feedback loop, we propose a periodic smartphone sampling mechanism that significantly improves the effectiveness of BYOD security mechanisms without incurring further costs. We quantify the likelihood that “a BYOD smartphone is infected by malware” by two metrics, vulnerability and uncertainty, and base the iterative sampling process on these two metrics; the updated values of these metrics are fed back into future rounds of the mechanism to complete the feedback loop. We validate the efficiency and effectiveness of the proposed strategic sampling via simulations driven by publicly available, real-world collected traces.
Keywords: invasive software; iterative methods; mobile computing; sampling methods; smart phones; telecommunication security; BYOD security; BYOD smartphone; Bring Your Own Device; IT policy; feedback-based smartphone strategic sampling; information technology; iterative sampling process; mobile malware; periodic smartphone sampling mechanism; self-reinforced feedback loop; wireless device; Feedback loop; Malware; Measurement; Topology; Uncertainty; Wireless communication; Enterprise network; probabilistic algorithm; smartphone security; social network; strategic sampling; uncertainty metric; vulnerability metric (ID#: 15-6069)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911814&isnumber=6911704
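The feedback loop can be sketched as score-weighted sampling. In the Python toy below, the scoring and update rules are invented for illustration, not taken from the paper: devices with high vulnerability and uncertainty scores are sampled more often, and each scan result feeds back into the scores for future rounds.

```python
# Toy feedback-driven sampling over vulnerability and uncertainty scores.
import random

devices = {f"dev{i}": {"vuln": 0.5, "unc": 1.0} for i in range(5)}

def pick():
    # Sampling weight = vulnerability x uncertainty.
    weights = [d["vuln"] * d["unc"] for d in devices.values()]
    return random.choices(list(devices), weights=weights, k=1)[0]

for _ in range(20):
    dev = pick()
    infected = random.random() < 0.1            # stand-in for a malware scan
    s = devices[dev]
    s["unc"] = 0.1                              # just scanned: low uncertainty
    s["vuln"] = min(1.0, s["vuln"] + 0.4) if infected else max(0.05, s["vuln"] - 0.1)
    for other, t in devices.items():            # uncertainty grows while unscanned
        if other != dev:
            t["unc"] = min(1.0, t["unc"] + 0.05)

print(devices)
```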

 

Vaarandi, R.; Pihelgas, M., "Using Security Logs for Collecting and Reporting Technical Security Metrics," Military Communications Conference (MILCOM), 2014 IEEE, vol., no., pp. 294, 299, 6-8 Oct. 2014. doi:10.1109/MILCOM.2014.53
Abstract: During recent years, establishing proper metrics for measuring system security has received increasing attention. Security logs contain vast amounts of information which are essential for creating many security metrics. Unfortunately, security logs are known to be very large, making their analysis a difficult task. Furthermore, recent security metrics research has focused on generic concepts, and the issue of collecting security metrics with log analysis methods has not been well studied. In this paper, we first focus on using log analysis techniques for collecting technical security metrics from security logs of common types (e.g., network IDS alarm logs, workstation logs, and NetFlow data sets). We then describe a production framework for collecting and reporting technical security metrics which is based on novel open-source big data technologies.
Keywords: Big Data; computer network security; big data; log analysis methods; log analysis techniques; open source technology; security logs; technical security metric collection; technical security metric reporting; Correlation; Internet; Measurement; Monitoring; Peer-to-peer computing; Security; Workstations; security log analysis; security metrics (ID#: 15-6070)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956774&isnumber=6956719
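Deriving a metric from a security log is mostly parsing and counting. The Python sketch below computes one simple technical metric, alarms per source host per day, from IDS-style alarm lines; the log format and field layout are hypothetical, and a production framework such as the one described would do this at scale over streaming logs.

```python
# Toy security-metric extraction from an IDS alarm log (format invented).
from collections import Counter

log_lines = [
    "2014-06-01T10:02:11 alert src=10.0.0.5 sig=ET SCAN Nmap",
    "2014-06-01T10:02:15 alert src=10.0.0.5 sig=ET SCAN Nmap",
    "2014-06-02T09:14:02 alert src=10.0.0.9 sig=ET POLICY SSH",
]

metric = Counter()
for line in log_lines:
    day = line.split("T", 1)[0]                                       # date part
    src = next(f.split("=", 1)[1] for f in line.split() if f.startswith("src="))
    metric[(day, src)] += 1                                           # alarms/host/day

for (day, src), n in sorted(metric.items()):
    print(f"{day} {src}: {n} alarm(s)")
```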

 

Scholtz, J.; Endert, A., "User-Centered Design Guidelines for Collaborative Software for Intelligence Analysis," Collaboration Technologies and Systems (CTS), 2014 International Conference on, vol., no., pp. 478, 482, 19-23 May 2014. doi:10.1109/CTS.2014.6867610
Abstract: In this position paper we discuss the necessity of using User-Centered Design (UCD) methods in order to design collaborative software for the intelligence community. We discuss a number of studies of collaboration in the intelligence community and use this information to provide some guidelines for collaboration software.
Keywords: groupware; police data processing; user centred design; UCD methods; collaborative software; intelligence analysis; intelligence community; user-centered design guidelines; Collaborative software; Communities; Guidelines; Integrated circuits; Measurement; Software; Intelligence community; collaboration; evaluation; metrics; user-centered design (ID#: 15-6071)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6867610&isnumber=6867522

 

Keramati, Marjan; Keramati, Mahsa, "Novel Security Metrics for Ranking Vulnerabilities in Computer Networks," Telecommunications (IST), 2014 7th International Symposium on, vol., no., pp. 883, 888, 9-11 Sept. 2014. doi:10.1109/ISTEL.2014.7000828
Abstract: With the daily appearance of new vulnerabilities and of new ways of intruding into networks, network hardening has become one of the most important fields in network security, and it is accomplished by patching vulnerabilities. Patching all vulnerabilities, however, may incur high costs in the network, so we should try to eliminate only the most perilous vulnerabilities of the network. CVSS itself can score vulnerabilities based on the amount of damage they incur in the network, but the main problem with CVSS is that it can only score individual vulnerabilities, without considering their relationships with other vulnerabilities of the network. To help fill this gap, in this paper we define attack graph and CVSS-based security metrics that can help us prioritize vulnerabilities in the network by measuring the probability of exploiting them and also the amount of damage they will impose on the network. The proposed security metrics are defined by considering the interaction between all vulnerabilities of the network, so our method can rank vulnerabilities based on the network they exist in. Results of applying these security metrics to one well-known network example are also shown and demonstrate the effectiveness of our approach.
Keywords: computer network security; matrix algebra; probability; CVSS-based security metrics; common vulnerability scoring system; computer network; intruding network security; probability; ranking vulnerability; Availability; Communication networks; Complexity theory; Computer networks; Educational institutions; Measurement; Security; Attack Graph; CVSS; Exploit; Network hardening; Security Metric; Vulnerability (ID#: 15-6072)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7000828&isnumber=7000650
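The core idea, propagating CVSS-derived exploit probabilities along attack-graph edges so that a vulnerability's rank reflects its network context, can be sketched as follows. The toy graph and probability values are invented for illustration; this is not the authors' exact metric.

    # Toy attack graph: an edge u -> v means exploiting u enables v.
    # p[v] is a CVSS-derived probability of exploiting v in isolation.
    graph = {"v1": ["v2"], "v2": ["v3"], "v3": []}
    p = {"v1": 0.9, "v2": 0.3, "v3": 0.95}

    def path_probability(v, memo=None):
        # Probability of reaching and exploiting v from the attacker's
        # entry point, taking the most likely attack path.
        memo = {} if memo is None else memo
        if v in memo:
            return memo[v]
        parents = [u for u, succs in graph.items() if v in succs]
        best = max((path_probability(u, memo) for u in parents), default=1.0)
        memo[v] = best * p[v]
        return memo[v]

    ranking = sorted(p, key=path_probability, reverse=True)
    print([(v, round(path_probability(v), 3)) for v in ranking])
    # v3 has the highest isolated CVSS-style score but the lowest
    # contextual rank, because it is only reachable through v2.

A network-aware ranking of this kind is exactly the gap the abstract identifies in raw CVSS scores.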

 

Samuvelraj, G.; Nalini, N., "A Survey of Self Organizing Trust Method to Avoid Malicious Peers from Peer to Peer Network," Green Computing Communication and Electrical Engineering (ICGCCEE), 2014 International Conference on, vol., no., pp. 1, 4, 6-8 March 2014. doi:10.1109/ICGCCEE.2014.6921379
Abstract: Networks are subject to attacks from malicious sources, and sending data securely over a network is one of the most tedious processes. A peer-to-peer (P2P) network is a type of decentralized and distributed network architecture in which individual nodes act as both servers and clients of resources. Peer-to-peer systems are incredibly flexible and can be used for a wide range of functions, but a P2P system is also prone to malicious attacks. To provide security over a peer-to-peer system, the self-organizing trust model has been proposed. Here the trustworthiness of peers is calculated based on past interactions and recommendations, which are evaluated based on importance, recentness, and satisfaction parameters. In this way, good peers are able to form trust relationships in their proximity and avoid malicious peers.
Keywords: client-server systems; computer network security; fault tolerant computing; peer-to-peer computing; recommender systems; trusted computing; P2P network; client-server resources; decentralized network architecture; distributed network architecture; malicious attacks; malicious peers; malicious sources; peer to peer network; peer to peer systems; peer trustworthiness; satisfaction parameters; self organizing trust method; self-organizing trust model; Computer science; History; Measurement; Organizing; Peer-to-peer computing; Security; Servers; Metrics; Network Security; Peer to Peer; SORT (ID#: 15-6073)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921379&isnumber=6920919
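The trust computation outlined above can be illustrated with a small sketch: weight each past interaction by its importance and recentness, then average the satisfaction values. The exponential decay below is an invented weighting, not SORT's actual formula.

    def trust_score(interactions, now, half_life=30.0):
        # interactions: list of (time, importance, satisfaction), with
        # satisfaction in [0, 1]. Recent, important interactions count
        # more; the half-life decay is an illustrative choice.
        num = den = 0.0
        for t, importance, satisfaction in interactions:
            weight = importance * 0.5 ** ((now - t) / half_life)
            num += weight * satisfaction
            den += weight
        return num / den if den else 0.0

    history = [(0, 1.0, 0.9), (20, 2.0, 0.8), (58, 1.0, 0.1)]
    print(round(trust_score(history, now=60), 3))  # recent bad behaviour drags trust down

A peer would maintain one such score per neighbour from direct interactions and a second score built from recommendations, preferring neighbours whose combined score stays high.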

 

Desouky, A.F.; Beard, M.D.; Etzkorn, L.H., "A Qualitative Analysis of Code Clones and Object Oriented Runtime Complexity Based on Method Access Points," Convergence of Technology (I2CT), 2014 International Conference for, vol., no., pp. 1, 5, 6-8 April 2014. doi:10.1109/I2CT.2014.7092292
Abstract: In this paper, we present a new object oriented complexity metric based on runtime method access points. Software engineering metrics have traditionally indicated the level of quality present in a software system. However, the analysis and measurement of quality has long been captured at compile time, rendering useful results, although potentially incomplete, since all source code is considered in metric computation, versus the subset of code that actually executes. In this study, we examine the runtime behavior of our proposed metric on an open source software package, Rhino 1.7R4. We compute and validate our metric by correlating it with code clones and bug data. Code clones are considered to make software more complex and harder to maintain. When cloned, a code fragment with an error quickly transforms into two (or more) errors, both of which can affect the software system in unique ways. Thus a larger number of code clones is generally considered to indicate poorer software quality. For this reason, we consider that clones function as an external quality factor, in addition to bugs, for metric validation.
Keywords: object-oriented programming; program verification; public domain software; security of data; software metrics; software quality; source code (software); Rhino 1.7R4; bug data; code clones; metric computation; metric validation; object oriented runtime complexity; open source software package; qualitative analysis; runtime method access points; software engineering metrics; source code; Cloning; Complexity theory; Computer bugs; Correlation; Measurement; Runtime; Software; Code Clones; Complexity; Object Behavior; Object Oriented Runtime Metrics; Software Engineering (ID#: 15-6074)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092292&isnumber=7092013

 

Snigurov, A.; Chakrian, V., "The DoS Attack Risk Calculation Based on the Entropy Method and Critical System Resources Usage," Infocommunications Science and Technology, 2014 First International Scientific-Practical Conference Problems of, vol., no., pp. 186, 187, 14-17 Oct. 2014. doi:10.1109/INFOCOMMST.2014.6992346
Abstract: The paper is focused on algorithm of denial of service risk calculation using the entropy method and considering the additional coefficients of critical system resource usage on the network node. Further the decisions of traffic routing or prevention of attack can be chosen based on the level of risk.
Keywords: computer network security; telecommunication traffic; DoS attack risk calculation; critical system resource usage; denial of service risk calculation; entropy method; traffic routing; Computer crime; Entropy; Information security; Random access memory; Routing; Sockets; Time measurement; DoS attack; entropy; information security risk; routing metrics (ID#: 15-6075)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6992346&isnumber=6992271
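The entropy component of such a risk calculation is easy to make concrete: a DoS attack typically distorts the diversity of some traffic feature (for instance, spoofed source addresses inflate source-IP entropy), and that deviation can be scaled by coefficients reflecting critical resource usage. The combination rule below is an invented illustration, not the authors' algorithm.

    import math
    from collections import Counter

    def shannon_entropy(values):
        # Shannon entropy (bits) of the empirical distribution of values,
        # e.g. source IPs observed in a measurement window.
        counts = Counter(values)
        total = len(values)
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def dos_risk(src_ips, baseline_entropy, cpu_load, mem_load):
        # Illustrative risk score: entropy deviation from the baseline,
        # amplified when critical resources (loads in [0, 1]) are strained.
        deviation = abs(shannon_entropy(src_ips) - baseline_entropy)
        return deviation * (1.0 + cpu_load + mem_load)

    window = ["10.0.0.%d" % (i % 250) for i in range(1000)]  # many spoofed sources
    print(round(dos_risk(window, baseline_entropy=4.0, cpu_load=0.8, mem_load=0.6), 2))

A router could compute such a score per interface and steer or drop traffic once it crosses a threshold, which is the routing decision the abstract mentions.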

 

Zabasta, A.; Casaliccio, E.; Kunicina, N.; Ribickis, L., "A Numerical Model for Evaluation Power Outages Impact on Water Infrastructure Services Sustainability," Power Electronics and Applications (EPE'14-ECCE Europe), 2014 16th European Conference on, vol., no., pp. 1, 10, 26-28 Aug. 2014. doi:10.1109/EPE.2014.6910703
Abstract: The security, stability, and reliability of critical infrastructures (CI) (electricity, heat, water, and information and communication technology networks) are closely related to interaction phenomena. As the amount of data transferred grows, dependence on telecommunications and Internet services increases, and data integrity and security become very important aspects for utility service providers and energy suppliers. In such circumstances there is a growing need for methods and tools that enable infrastructure managers to evaluate and predict their critical infrastructure operations as failures, emergencies, or service degradation occur in other related infrastructures. Using a simulation model, a method is experimentally tested that explores how the average downtime of water supply network nodes depends on the cross-correlations of battery lifetime and battery replacement time, within a given parameter set, when outages arise in the power infrastructure, also taking into account the impact of telecommunication nodes. The model studies the real case of the Latvian city of Ventspils. The proposed approach to the analysis of critical infrastructure interdependencies will be useful for the practical adoption of methods, models, and metrics by CI operators and stakeholders.
Keywords: critical infrastructures; polynomial approximation; power system reliability; power system security; power system stability; water supply; CI operators; average down time dependence; battery life time; battery replacement time cross-correlations; critical infrastructure operations; critical infrastructure security; critical infrastructures interdependencies; data integrity; data security; energy suppliers; infrastructure managers; interaction phenomenon; internet services; power infrastructure outages; stakeholders; telecommunication nodes; utility services providers; water supply network nodes; Analytical models; Batteries; Mathematical model; Measurement; Power supplies; Telecommunications; Unified modeling language; Estimation technique; Fault tolerance; Modelling; Simulation (ID#: 15-6076)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6910703&isnumber=6910682

 

Shittu, R.; Healing, A.; Ghanea-Hercock, R.; Bloomfield, R.; Muttukrishnan, R., "Outmet: A New Metric for Prioritising Intrusion Alerts Using Correlation and Outlier Analysis," Local Computer Networks (LCN), 2014 IEEE 39th Conference on, vol., no., pp. 322, 330, 8-11 Sept. 2014. doi:10.1109/LCN.2014.6925787
Abstract: In a medium sized network, an Intrusion Detection System (IDS) could produce thousands of alerts a day, many of which may be false positives. Among the vast number of triggered intrusion alerts, identifying those to prioritise is highly challenging. Alert correlation and prioritisation are both viable analytical methods which are commonly used to understand and prioritise alerts. However, to the authors' knowledge, very few dynamic prioritisation metrics exist. In this paper, a new prioritisation metric, OutMet, is proposed, based on measuring the degree to which an alert belongs to anomalous behaviour. OutMet combines alert correlation and prioritisation analysis. We illustrate the effectiveness of OutMet by testing its ability to prioritise alerts generated from a 2012 red-team cyber-range experiment that was carried out as part of the BT Saturn programme. In one of the scenarios, OutMet significantly reduced the false positives by 99.3%.
Keywords: computer network security; correlation methods; graph theory; BT Saturn programme; IDS; OutMet; alert correlation and prioritisation analysis; correlation analysis; dynamic prioritisation metrics; intrusion alerts; intrusion detection system; medium sized network; outlier analysis; red-team cyber-range experiment; Cities and towns; Complexity theory; Context; Correlation; Educational institutions; IP networks; Measurement; Alert Correlation; Attack Scenario; Graph Mining; IDS Logs; Intrusion Alert Analysis; Intrusion Detection; Pattern Detection (ID#: 15-6077)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6925787&isnumber=6925725
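OutMet measures the degree to which an alert belongs to anomalous behaviour; a generic stand-in for that idea is an outlier degree computed as a mean absolute z-score over numeric alert features. The features and formula below are invented for illustration and are not the published metric.

    import math

    def outlier_scores(feature_vectors):
        # Mean absolute z-score per alert across its features; higher
        # means more anomalous, hence higher priority (illustrative only).
        dims = len(feature_vectors[0])
        n = len(feature_vectors)
        means = [sum(v[d] for v in feature_vectors) / n for d in range(dims)]
        stds = [max(1e-9, math.sqrt(sum((v[d] - means[d]) ** 2
                                        for v in feature_vectors) / n))
                for d in range(dims)]
        return [sum(abs(v[d] - means[d]) / stds[d] for d in range(dims)) / dims
                for v in feature_vectors]

    # Hypothetical per-alert features: (correlation-cluster size, distinct ports hit)
    alerts = [(3, 1), (4, 1), (3, 2), (40, 25)]
    print([round(s, 2) for s in outlier_scores(alerts)])  # the last alert stands out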

 

Cain, A.A.; Schuster, D., "Measurement of Situation Awareness Among Diverse Agents in Cyber Security," Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2014 IEEE International Inter-Disciplinary Conference on, vol., no., pp. 124, 129, 3-6 March 2014. doi:10.1109/CogSIMA.2014.6816551
Abstract: Development of innovative algorithms, metrics, visualizations, and other forms of automation are needed to enable network analysts to build situation awareness (SA) from large amounts of dynamic, distributed, and interacting data in cyber security. Several models of cyber SA can be classified as taking an individual or a distributed approach to modeling SA within a computer network. While these models suggest ways to integrate the SA contributed by multiple actors, implementing more advanced data center automation will require consideration of the differences and similarities between human teaming and human-automation interaction. The purpose of this paper is to offer guidance for quantifying the shared cognition of diverse agents in cyber security. The recommendations presented can inform the development of automated aids to SA as well as illustrate paths for future empirical research.
Keywords: cognition; security of data; SA; cyber security; data center automation; diverse agents; shared cognition; situation awareness measurement; Automation; Autonomous agents; Cognition; Computer security; Data models; Sociotechnical systems; Situation awareness; cognition; cyber security; information security; teamwork (ID#: 15-6078)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816551&isnumber=6816529

 

Sirsikar, S.; Salunkhe, J., "Analysis of Data Hiding Using Digital Image Signal Processing," Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference on,  vol., no., pp. 134, 139, 9-11 Jan. 2014. doi:10.1109/ICESC.2014.28
Abstract: The data hiding process embeds data into digital media for the purpose of security. A digital image is one of the best media in which to store data: it provides large capacity for hiding secret information, which results in a stego-image imperceptible to human vision. This paper considers a novel steganographic approach based on a data hiding method, pixel-value differencing (PVD), which provides both high embedding capacity and outstanding imperceptibility for the stego-image. Different image processing techniques for data hiding related to pixel-value differencing are described, and a modified PVD-based data hiding method is produced. Hamming coding is an error-correcting method that is useful for hiding information, with lost bits being detected and corrected. OPAP is used to minimize embedding error, so the quality of the stego-image is improved without disturbing the secret data. The ZigZag method enhances the security and quality of the image. In the modified method, the Hamming, OPAP, and ZigZag methods are combined; in the adaptive method, the image is divided into blocks and the data are then hidden. The objective of the proposed work is to increase stego-image quality as well as the capacity of the secret data. Result analysis is compared for BMP images only, with calculation of the evaluation metrics MSE, PSNR, and SSIM.
Keywords: image processing; steganography; BMP images; MSE; OPAP; PSNR; SSIM; ZigZag method; data hiding analysis; data hiding method; data hiding process; digital image signal processing; digital media; embedding capacity; error correcting method; human vision; pixel value differencing; pixel-value differencing; secret information hiding; security; steganographic approach; stego-image imperceptible; stego-image quality; Color; Cryptography; Digital images; Image quality; Measurement; PSNR; Data Hiding; Digital image; Pixel Value Differencing; Steganography (ID#: 15-6079)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6745360&isnumber=6745317
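Of the evaluation metrics listed, MSE and PSNR are simple enough to show concretely. The sketch below applies the standard formulas to a toy cover/stego pixel pair; it illustrates the measurement step only, not the embedding methods themselves.

    import math

    def mse(cover, stego):
        # Mean squared error between two equal-length 8-bit pixel sequences.
        return sum((a - b) ** 2 for a, b in zip(cover, stego)) / len(cover)

    def psnr(cover, stego, max_val=255):
        # Peak signal-to-noise ratio in dB; higher means the stego-image
        # is closer to the cover image (better imperceptibility).
        err = mse(cover, stego)
        return float("inf") if err == 0 else 10 * math.log10(max_val ** 2 / err)

    cover = [120, 121, 119, 200, 201, 202]
    stego = [120, 122, 119, 199, 201, 203]  # small embedding distortion
    print(round(mse(cover, stego), 3), round(psnr(cover, stego), 2))  # 0.5 51.14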

 

Younis, A.A.; Malaiya, Y.K., "Using Software Structure to Predict Vulnerability Exploitation Potential," Software Security and Reliability-Companion (SERE-C), 2014 IEEE Eighth International Conference on, vol., no., pp. 13, 18, June 30 2014-July 2 2014. doi:10.1109/SERE-C.2014.17
Abstract: Most attacks on computer systems are due to the presence of vulnerabilities in software, and recent trends show that the number of newly discovered vulnerabilities continues to be significant. Studies have also shown that the time gap between a vulnerability's public disclosure and the release of an automated exploit is getting smaller. Assessing the exploitability risk of vulnerabilities is therefore critical, as it helps decision-makers prioritize among vulnerabilities, allocate resources, and choose between alternatives. Several methods have recently been proposed in the literature to deal with this challenge. However, these methods are either subjective, require human involvement in assessing exploitability, or do not scale. In this research, our aim is first to identify the vulnerability exploitation risk problem. Then, we introduce a novel vulnerability exploitability metric based on software structure properties, viz. attack entry points, vulnerability location, presence of dangerous system calls, and reachability. Based on our preliminary results, reachability and the presence of dangerous system calls appear to be good indicators of exploitability. Next, we propose using the suggested metric as a feature to construct a model, using machine learning techniques, for automatically predicting the risk of vulnerability exploitation. To build the vulnerability exploitation model, we propose using Support Vector Machines (SVMs). Once the predictor is built, given an unseen vulnerable function and its exploitability features, the model can predict whether the given function is exploitable or not.
Keywords: decision making; learning (artificial intelligence); reachability analysis; software metrics; support vector machines; SVM; attack entry points; computer systems; decision makers; machine learning; reachability; software structure; support vector machines; vulnerabilities exploitability risk; vulnerability exploitability metric; vulnerability exploitation model; vulnerability exploitation potential; vulnerability exploitation risk problem; vulnerability location; vulnerability public disclosure; Feature extraction; Predictive models; Security; Software; Software measurement; Support vector machines; Attack Surface; Machine Learning; Measurement; Risk Assessment; Software Security Metrics; Software Vulnerability (ID#: 15-6080)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901635&isnumber=6901618
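The final step of the proposed pipeline, training an SVM on structural features, can be sketched with scikit-learn (assumed installed). The feature encoding and labels below are fabricated for illustration; they merely mirror the kinds of properties the abstract names (reachability, dangerous system calls, entry-point location).

    from sklearn.svm import SVC

    # One row per vulnerable function:
    # [reachable_from_entry_point, num_dangerous_syscalls, in_entry_point]
    X = [[1, 3, 1], [1, 0, 0], [0, 0, 0], [1, 2, 0], [0, 1, 0], [1, 4, 1]]
    y = [1, 0, 0, 1, 0, 1]  # 1 = known to be exploited (fabricated labels)

    model = SVC(kernel="rbf").fit(X, y)
    # Predicted exploitability labels for two unseen vulnerable functions.
    print(model.predict([[1, 1, 1], [0, 0, 0]]))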

 

Younis, A.A.; Malaiya, Y.K.; Ray, I., "Using Attack Surface Entry Points and Reachability Analysis to Assess the Risk of Software Vulnerability Exploitability," High-Assurance Systems Engineering (HASE), 2014 IEEE 15th International Symposium on, vol., no., pp.1, 8, 9-11 Jan. 2014. doi:10.1109/HASE.2014.10
Abstract: An unpatched vulnerability can lead to security breaches. When a new vulnerability is discovered, it needs to be assessed so that it can be prioritized. A major challenge in software security is the assessment of the potential risk due to vulnerability exploitability. CVSS metrics have become a de facto standard that is commonly used to assess the severity of a vulnerability. The CVSS Base Score measures severity based on exploitability and impact measures. CVSS exploitability is measured based on three metrics: Access Vector, Authentication, and Access Complexity. However, CVSS exploitability measures assign subjective numbers based on the views of experts. Two of its factors, Access Vector and Authentication, are the same for almost all vulnerabilities. CVSS does not specify how the third factor, Access Complexity, is measured, and hence we do not know if it considers software properties as a factor. In this paper, we propose an approach that assesses the risk of vulnerability exploitability based on two software properties - attack surface entry points and reachability analysis. A vulnerability is reachable if it is located in one of the entry points or is located in a function that is called either directly or indirectly by the entry points. The likelihood of an entry point being used in an attack can be assessed by using the damage potential-effort ratio in the attack surface metric and the presence of system calls deemed dangerous. To illustrate the proposed method, five reported vulnerabilities of Apache HTTP server 1.3.0 have been examined at the source code level. The results show that the proposed approach, which uses more detailed information, can yield a risk assessment that can be different from the CVSS Base Score.
Keywords: reachability analysis; risk management; security of data; software metrics; Apache HTTP server 1.3.0; CVSS base score; CVSS exploitability; CVSS metrics; access complexity; access vector; attack surface entry point; attack surface metric; authentication; damage potential-effort ratio; reachability analysis; risk assessment; security breach; severity measurement; software security; software vulnerability exploitability; Authentication; Complexity theory; Measurement; Servers; Software; Vectors; Attack Surface; CVSS Metrics; Measurement; Risk assessment; Software Security Metrics; Software Vulnerability (ID#: 15-6081)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6754581&isnumber=6754569
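The reachability property reduces to a breadth-first search over the program's call graph: a vulnerable function is reachable if it is an attack-surface entry point or is callable, directly or transitively, from one. The call graph below is a toy example, not drawn from the Apache case study.

    from collections import deque

    def reachable_functions(call_graph, entry_points):
        # All functions callable (directly or transitively) from the
        # attack-surface entry points of a program.
        seen = set(entry_points)
        queue = deque(entry_points)
        while queue:
            fn = queue.popleft()
            for callee in call_graph.get(fn, []):
                if callee not in seen:
                    seen.add(callee)
                    queue.append(callee)
        return seen

    call_graph = {"main": ["parse_request"], "parse_request": ["log_entry"],
                  "admin_tool": ["format_disk"]}
    print(reachable_functions(call_graph, {"main"}))
    # 'format_disk' is not reachable from the network-facing entry point,
    # so a vulnerability there would carry lower exploitability risk.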

 

Akbar, M.; Sukmana, H.T.; Khairani, D., "Models and Software Measurement Using Goal/Question/Metric Method and CMS Matrix Parameter (Case Study Discussion Forum)," Cyber and IT Service Management (CITSM), 2014 International Conference on, vol., no., pp. 34, 38, 3-6 Nov. 2014. doi:10.1109/CITSM.2014.7042171
Abstract: CMSs have been used extensively by communities as tools for making websites. Currently, there are many CMSs available as options, especially bulletin board CMSs, and the number of options is an obstacle for someone choosing a suitable CMS to fulfill their needs. Because of the lack of research comparing bulletin board CMSs, this research tries to compare them and find the best bulletin board CMS. It uses metrics for modeling and software measurement to identify the characteristics of existing bulletin board CMSs, with Goal/Question/Metric (GQM) as the modelling method and the CMS Matrix as the parameters. As the bulletin board CMSs, in this study we choose phpBB, MyBB, and SMF. The results of this study indicate that the SMF bulletin board has the best score compared to the MyBB and phpBB bulletin board CMSs.
Keywords: content management; software development management; software metrics; CMS bulletin board; CMS matrix parameter; GQM; MyBB; PhpBB; SMF; Website; goal-question-metric method; software measurement; Browsers; Databases; Operating systems; Security; Software measurement; CMS; Software Measurement (ID#: 15-6082)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042171&isnumber=7042158
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Security Scalability and Big Data, 2014

 

 
SoS Logo

Security Scalability and Big Data, 2014


Scalability is a hard problem in the Science of Security. Applied to Big Data, the problems of scaling security systems are compounded. The work cited here addresses the problem and was presented in 2014.


Eberle, W.; Holder, L., "A Partitioning Approach to Scaling Anomaly Detection in Graph Streams," Big Data (Big Data), 2014 IEEE International Conference on, vol., no., pp. 17, 24, 27-30 Oct. 2014. doi:10.1109/BigData.2014.7004367
Abstract: Due to potentially complex relationships among heterogeneous data sets, recent research efforts have involved the representation of this type of complex data as a graph. For instance, in the case of computer network traffic, a graph representation of the traffic might consist of nodes representing computers and edges representing communications between the corresponding computers. However, computer network traffic is typically voluminous, or acquired in real-time as a stream of information. In previous work on static graphs, we have used a compression-based measure to find normative patterns, and then analyzed the close matches to the normative patterns to indicate potential anomalies. However, while our approach has demonstrated its effectiveness in a variety of domains, the issue of scalability has limited this approach when dealing with domains containing millions of nodes and edges. To address this issue, we propose a novel approach called Pattern Learning and Anomaly Detection on Streams, or PLADS, that is not only scalable to real-world data that is streaming, but also maintains reasonable levels of effectiveness in detecting anomalies. In this paper we present a partitioning and windowing approach that partitions the graph as it streams in over time and maintains a set of normative patterns and anomalies. We then empirically evaluate our approach using publicly available network data as well as a dataset that represents e-commerce traffic.
Keywords: data mining; data structures; graph theory; learning (artificial intelligence); pattern classification; security of data; PLADS approach; anomaly detection scaling; computer network traffic; data representation; e-commerce traffic representation; electronic commerce; graph stream; heterogeneous data set; information stream; normative pattern; partitioning approach; pattern learning and anomaly detection on streams; windowing approach; Big data; Computers; Image edge detection; Internet; Scalability; Telecommunication traffic; Graph-based; anomaly detection; knowledge discovery; streaming data (ID#: 15-5786)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004367&isnumber=7004197
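The partitioning-and-windowing idea can be sketched simply: consume the edge stream in fixed-size windows, assign each edge to a partition (here by hashing one endpoint), and hand every partition's subgraph to a pattern learner or anomaly detector. The window size and hash partitioner are illustrative choices, not the PLADS design.

    from collections import defaultdict

    def window_partitions(edge_stream, window_size, num_partitions):
        # Yield, per window, a mapping: partition id -> list of edges.
        # Each partition could then be mined for normative patterns.
        window = []
        for edge in edge_stream:
            window.append(edge)
            if len(window) == window_size:
                parts = defaultdict(list)
                for src, dst in window:
                    parts[hash(src) % num_partitions].append((src, dst))
                yield dict(parts)
                window = []

    stream = [("hostA", "hostB"), ("hostB", "hostC"), ("hostA", "hostC"),
              ("hostD", "hostA"), ("hostC", "hostD"), ("hostB", "hostA")]
    for i, parts in enumerate(window_partitions(stream, window_size=3, num_partitions=2)):
        print("window", i, {pid: len(edges) for pid, edges in parts.items()})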

 

Sokolov, V.; Alekseev, I.; Mazilov, D.; Nikitinskiy, M., "A Network Analytics System in the SDN," Science and Technology Conference (Modern Networking Technologies) (MoNeTeC), 2014 First International, vol., no., pp. 1, 3, 28-29 Oct. 2014. doi:10.1109/MoNeTeC.2014.6995603
Abstract: The emergence of virtualization and security problems in network services, together with their lack of scalability and flexibility, forces network operators to look for "smarter" tools for network design and management. With the continuous growth in the number of subscribers, the volume of traffic, and competition in the telecommunication market, there is a stable interest in finding new ways to identify weak points of the existing architecture, to prevent the collapse of the network, and to evaluate and predict the risks of problems in the network. To solve the problems of increasing the fail-safety and the efficiency of the network infrastructure, we propose using analytical software in the SDN context.
Keywords: computer network management; computer network security; network analysers; software defined networking; virtualisation; SDN context; analytical software; fail-safety; force network operators; network analytics system; network design; network infrastructure; network management; network services; security problems; smarter tools; software-defined network; telecommunication market; virtualization; Bandwidth; Data models; Monitoring; Network topology; Ports (Computers); Protocols; Software systems; analysis; analytics; application programming interface; big data; dynamic network model; fail-safety; flow; flow table; heuristic; load balancing; monitoring;network statistics; network topology; openflow; protocol; sdn; smart tool; software system; software-defined networking; weighted graph (ID#: 15-5787)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6995603&isnumber=6995568

 

Chenhui Li; Baciu, G., "VALID: A Web Framework for Visual Analytics of Large Streaming Data," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 686, 692, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.89
Abstract: Visual analytics of increasingly large data sets has become a challenge for traditional in-memory and off-line algorithms as well as in the cognitive process of understanding features at various scales of resolution. In this paper, we attempt a new web-based framework for the dynamic visualization of large data. The framework is based on the idea that no physical device can ever catch up to the analytical demand and the physical requirements of large data. Thus, we adopt a data streaming generator model that serializes the original data into multiple streams of data that can be contained on current hardware. Thus, the scalability of the visual analytics of large data is inherent in the streaming architecture supported by our platform. The platform is based on the traditional server-client model. However, the platform is enhanced by effective analytical methods that operate on data streams, such as binned points and bundling lines that reduce and enhance large streams of data for effective interactive visualization. We demonstrate the effectiveness of our framework on different types of large datasets.
Keywords: Big Data; Internet; client-server systems; data analysis; data visualisation; interactive systems; media streaming; Big Data visualization; VALID; Web framework; data streaming generator model; dynamic data visualization; interactive visualization; server-client model; streaming architecture;  Conferences; Privacy; Security; big data; dynamic visualization; streaming data; visual analytics (ID#: 15-5788)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011313&isnumber=7011202

 

Haltas, F.; Uzun, E.; Siseci, N.; Posul, A.; Emre, B., "An Automated Bot Detection System through Honeypots for Large-Scale," Cyber Conflict (CyCon 2014), 2014 6th International Conference on, vol., no., pp. 255, 270, 3-6 June 2014. doi:10.1109/CYCON.2014.6916407
Abstract: One of the purposes of active cyber defense systems is identifying infected machines in enterprise networks that are presumably the root cause and main agent of various cyber-attacks. To achieve this, researchers have suggested many detection systems that rely on host-monitoring techniques and require deep packet inspection, or that are trained on malware samples using machine learning and clustering techniques. To our knowledge, most approaches either cannot be deployed easily in real enterprise networks, because their training systems must be trained on malware samples, or depend on host-based or deep packet inspection analysis, which requires a large amount of storage capacity for an enterprise. Besides this, honeypot systems are mostly used to collect malware samples for analysis purposes and to identify incoming attacks. Rather than keeping experimental results of bot detection techniques as theory and using honeypots only for analysis, in this paper we present a novel automated bot-infected machine detection system, BFH (BotFinder through Honeypots), based on BotFinder, that identifies infected hosts in a real enterprise network through a learning approach. Our solution relies on NetFlow data and is capable of detecting bots infected by the most recent malware, samples of which are caught via 97 different honeypot systems. We train BFH with models created from malware samples provided and updated by the 97 honeypot systems. The BFH system automatically sends caught malware to a classification unit to construct family groups; samples are then automatically given to a training unit for modeling, and detection is performed over NetFlow data. Results are double-checked using a month of full packet capture and through tools that identify rogue domains. Our results show that BFH is able to detect infected hosts with very few false positives and handles the most recent malware families successfully, since it is fed by 97 honeypots; it also supports large networks thanks to the scalability of the Hadoop infrastructure, as deployed in a large-scale enterprise network in Turkey.
Keywords: invasive software; learning (artificial intelligence); parallel processing; pattern clustering; BFH; Hadoop infrastructure; NetFlow data; active cyber defense systems; automated bot detection system; bot detection techniques; bot-infected machine detection system; botfinder through honeypots; clustering technique; cyber-attacks; deep packet inspection; enterprise networks; honeypot systems; host-monitoring techniques; learning approach; machine learning technique; malware; Data models; Feature extraction; Malware; Monitoring; Scalability; Training; Botnet; NetFlow analysis; honeypots; machine learning (ID#: 15-5789)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916407&isnumber=6916383

 

Irudayasamy, A.; Lawrence, A., "Enhanced Anonymization Algorithm to Preserve Confidentiality of Data in Public Cloud," Information Society (i-Society), 2014 International Conference on, vol., no., pp. 86, 91, 10-12 Nov. 2014.  doi:10.1109/i-Society.2014.7009017
Abstract: Cloud computing offers immense computational power and storage volume, permitting users to deploy applications. Many confidential and sensitive applications, such as health services, are built on the cloud for monetary and operational expediency. Generally, information in these applications is masked to safeguard the owner's confidential information, but such information can potentially be compromised when new information is added. Preserving confidentiality over distributed data sets is still a big challenge in the cloud environment, because most of this information is huge and ranges over many storage nodes. Existing methods suffer from reduced scalability and inefficiency, since they integrate information and access all of the data repeatedly whenever updates are made. In this paper, an efficient hash-based quasi-identifier anonymization method is introduced to ensure the confidentiality of sensitive information and achieve high utility over distributed data sets on the cloud. Quasi-identifiers, which represent the groups of anonymized data, are hashed for efficiency, and a procedure is framed to fulfill this methodology. When deployed, the results validate the effectiveness of confidentiality preservation on huge data sets, improved considerably over existing methods.
Keywords: cloud computing; cryptography; computation control; data confidentiality preservation; distributed data sets; enhanced anonymization algorithm; hash centered quasi-identifier anonymization method; prevailing methods; public cloud computing; sensitive information confidentiality; storage nodes; storing volume; Cloud computing; Computer science; Conferences; Distributed databases; Privacy; Societies; Taxonomy; Cloud Computing; Data anonymization; encryption; privacy; security (ID#: 15-5790)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7009017&isnumber=7008990

 

Hassan, S.; Abbas Kamboh, A.; Azam, F., "Analysis of Cloud Computing Performance, Scalability, Availability, & Security," Information Science and Applications (ICISA), 2014 International Conference on, vol., no., pp. 1, 5, 6-9 May 2014. doi:10.1109/ICISA.2014.6847363
Abstract: Cloud computing refers to a relationship among many computers connected through a communication channel such as the Internet. Through cloud computing we send, receive, and store data on the Internet. Cloud computing also gives us the opportunity for parallel computing by using a large number of virtual machines. Nowadays, performance, scalability, availability, and security may represent the big risks in cloud computing. In this paper we highlight security, availability, and scalability issues, and we also identify how to make cloud-computing-based infrastructure more secure and more available. We also highlight the elastic behavior of cloud computing, and some of the characteristics involved in attaining high performance are discussed as well.
Keywords: cloud computing; parallel processing; security of data; virtual machines; Internet; parallel computing; scalability; security; virtual machine; Availability; Cloud computing; Computer hacking; Scalability (ID#: 15-5791)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847363&isnumber=6847317

 

Grolinger, K.; Hayes, M.; Higashino, W.A.; L'Heureux, A.; Allison, D.S.; Capretz, M.A.M., "Challenges for MapReduce in Big Data," Services (SERVICES), 2014 IEEE World Congress on, vol., no., pp. 182, 189, June 27 2014-July 2 2014. doi:10.1109/SERVICES.2014.41
Abstract: In the Big Data community, MapReduce has been seen as one of the key enabling approaches for meeting continuously increasing demands on computing resources imposed by massive data sets. The reason for this is the high scalability of the MapReduce paradigm which allows for massively parallel and distributed execution over a large number of computing nodes. This paper identifies MapReduce issues and challenges in handling Big Data with the objective of providing an overview of the field, facilitating better planning and management of Big Data projects, and identifying opportunities for future research in this field. The identified challenges are grouped into four main categories corresponding to Big Data tasks types: data storage (relational databases and NoSQL stores), Big Data analytics (machine learning and interactive analytics), online processing, and security and privacy. Moreover, current efforts aimed at improving and extending MapReduce to address identified challenges are presented. Consequently, by identifying issues and challenges MapReduce faces when handling Big Data, this study encourages future Big Data research.
Keywords: Big Data; SQL; data analysis; data privacy; learning (artificial intelligence); parallel programming; relational databases; security of data; storage management; Big Data analytics; Big Data community; Big Data project management; Big Data project planning; MapReduce paradigm; NoSQL stores; data security; data storage; interactive analytics; machine learning; massive data sets; massively distributed execution; massively parallel execution; online processing; relational databases; Algorithm design and analysis; Big data; Data models; Data visualization; Memory; Scalability; Security; Big Data Analytics; Interactive Analytics; Machine Learning; MapReduce; NoSQL; Online Processing; Privacy; Security (ID#: 15-5792)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903263&isnumber=6903223
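For readers new to the paradigm under discussion, the core of MapReduce fits in a few lines: a map function emits key/value pairs, the framework shuffles them into groups by key, and a reduce function folds each group. The single-process mock below shows only the data flow; a real deployment (e.g., Hadoop) distributes each phase across many nodes.

    from collections import defaultdict

    def map_reduce(records, mapper, reducer):
        # Single-process mock of MapReduce:
        # map -> shuffle (group by key) -> reduce.
        groups = defaultdict(list)
        for record in records:
            for key, value in mapper(record):
                groups[key].append(value)
        return {key: reducer(key, values) for key, values in groups.items()}

    # The classic word-count example.
    docs = ["big data needs scalable tools", "big data big challenges"]
    counts = map_reduce(docs,
                        mapper=lambda doc: [(w, 1) for w in doc.split()],
                        reducer=lambda key, values: sum(values))
    print(counts["big"], counts["data"])  # 3 2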

 

Balusamy, M.; Muthusundari, S., "Data Anonymization through Generalization Using Map Reduce on Cloud," Computer Communication and Systems (CCCS), 2014 International Conference on, pp. 039, 042, 20-21 Feb. 2014. doi:10.1109/ICCCS.2014.7068164
Abstract: Nowadays, cloud computing provides users with a great deal of computational power and storage capacity through which they can share their private data. Providing security for users' sensitive data is a challenging and difficult task in a cloud environment. The k-anonymity approach has so far been used to provide privacy for users' sensitive data, but data in the cloud can grow greatly in a big-data manner. Existing work uses a top-down specialization approach to protect the privacy of users' sensitive data; when the scale of user data increases, however, the top-down specialization technique has difficulty preserving the sensitive data and providing security. Here we propose a specialization approach through generalization to preserve sensitive data and provide security against scalability in an efficient way with the help of MapReduce. Our approach provides a better solution than the existing approach, securing user data in a scalable and efficient way.
Keywords: cloud computing; data privacy; parallel processing; MapReduce; cloud environment; computation power; data anonymization; generalization; k-anonymity approach; private data sharing; security; storage capacity; top-town specialization approach; user sensitive data privacy; users data scalability; Cloud computing; Computers; Conferences; Data privacy; Privacy; Scalability; Security; Generalization; K-anonymity; Map-Reduce; Specialization; big data (ID#: 15-5793)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7068164&isnumber=7068154
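Generalization for k-anonymity can be sketched in a few lines: coarsen the quasi-identifier values (here, age into decade bands and ZIP code into a prefix) until every quasi-identifier combination occurs at least k times. The generalization functions and toy records are illustrative; the paper's contribution lies in performing such steps at scale with MapReduce.

    from collections import Counter

    def generalize(record):
        # Coarsen quasi-identifiers: age -> decade band, zip -> 3-digit prefix.
        age, zipcode, disease = record
        return ("%d0s" % (age // 10), zipcode[:3] + "**", disease)

    def is_k_anonymous(records, k):
        # True if every quasi-identifier combination appears at least k times
        # (the sensitive attribute, disease, is excluded from the check).
        counts = Counter((age, zipcode) for age, zipcode, _ in records)
        return all(c >= k for c in counts.values())

    data = [(34, "90210", "flu"), (36, "90213", "cold"), (37, "90218", "flu"),
            (52, "10001", "asthma"), (55, "10004", "flu")]
    generalized = [generalize(r) for r in data]
    print(is_k_anonymous(generalized, k=2), generalized[0])
    # True ('30s', '902**', 'flu')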

 

Choi, A.J., "Internet of Things: Evolution towards a Hyper-Connected Society," Solid-State Circuits Conference (A-SSCC), 2014 IEEE Asian, vol., no., pp. 5, 8, 10-12 Nov. 2014. doi:10.1109/ASSCC.2014.7008846
Abstract: The Internet of Things is expected to encompass every aspect of our lives and to generate a paradigm shift towards a hyper-connected society. As more things are connected to the Internet, larger amounts of data are generated and processed into useful actions that can make our lives safer and easier. Since the IoT generates heavy traffic, it poses several challenges to next-generation networks; IoT infrastructure should therefore be designed for flexibility and scalability. In addition, cloud computing and big data analytics are being integrated: they allow the network to adapt itself much faster to service requirements with better operational efficiency and intelligence. The IoT should also be vertically optimized, from device to application, in order to provide ultra-low-power operation, cost-effectiveness, and service reliability while ensuring full security across the entire signal path. In this paper we address IoT challenges and technological requirements from the service provider perspective.
Keywords: Big Data; Internet; Internet of Things; cloud computing; computer network security; data analysis; data integration; next generation networks; reliability; IoT infrastructure; big data analytics; cost-effectiveness; hyper-connected society; next generation network; service reliability; ultra-low power operation; Business; Cloud computing; Intelligent sensors; Long Term Evolution; Security; IoT; flexiblity; scalability; security (ID#: 15-5794)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7008846&isnumber=7008838

 

Ge Ma; Zhen Chen; Junwei Cao; Zhenhua Guo; Yixin Jiang; Xiaobin Guo, "A Tentative Comparison on CDN and NDN," Systems, Man and Cybernetics (SMC), 2014 IEEE International Conference on, vol., no., pp. 2893, 2898, 5-8 Oct. 2014. doi:10.1109/SMC.2014.6974369
Abstract: With the prompt growth in Internet content, the future Internet is emerging with its main usage shifting from the traditional host-to-host model to a content dissemination model; video, for example, makes up more than half of Internet traffic. ISPs, content providers, and other third parties have widely deployed content delivery networks (CDNs) to support digital content distribution. Though CDN is an ad hoc solution to the content dissemination problem, big challenges remain, such as a complicated control plane. By contrast, as a wholly newly designed network architecture, named data networking (NDN) incorporates the content delivery function in its network layer; its stateful routing and forwarding plane can effectively detect and adapt to the dynamic and ever-changing Internet. In this paper, we explore the similarities and differences between CDN and NDN, evaluating their distribution efficiency, network security, and protocol overhead. In the implementation phase, we construct separate testbeds with the same topology to derive their content delivery performance. Summarizing our main results, we find that: 1) NDN has its own advantages in many respects, including security, scalability, and quality of service (QoS); 2) NDN makes full use of surrounding resources and is more adaptive to the dynamic and ever-changing Internet; and 3) though CDN is a commercial and mature architecture, in some scenarios NDN can perform better than CDN under the same topology and caching storage. In short, NDN is well placed to play an even greater role in the evolution of the Internet, based on massive distribution and retrieval in the future.
Keywords: Internet; quality of service; routing protocols; telecommunication traffic; CDN; ISP; Internet content; Internet traffic; NDN; QoS; complicated control plane; content delivery function; content delivery network; content dissemination model; content dissemination problem; content provider; digital content distribution; distribution efficiency; future Internet; host-to-host model; named data networking; network architecture; network security; pretty prompt growth; protocol overhead; quality of service; stateful routing and forwarding plane; usage shifting; Conferences; Cybernetics; Architecture; comparison; evaluation; named data networking (ID#: 15-5795)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6974369&isnumber=6973862
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Signature-Based Defenses, 2014

 

 
SoS Logo

Signature-Based Defenses, 2014

 

Research into the use of malware signatures to inform defensive methods is a standard research exercise for the Science of Security community. The work cited here was published in 2014.


Maria B. Line, Ali Zand, Gianluca Stringhini, Richard Kemmerer. “Targeted Attacks against Industrial Control Systems: Is the Power Industry Prepared?.” SEGS '14 Proceedings of the 2nd Workshop on Smart Energy Grid Security, November 2014, Pages 13-22. doi:10.1145/2667190.2667192
Abstract: Targeted cyber attacks are on the rise, and the power industry is an attractive target. Espionage and causing physical damage are likely goals of these targeted attacks. In the case of the power industry, the worst possible consequences are severe: large areas, including critical societal infrastructures, can suffer from power outages. In this paper, we try to measure the preparedness of the power industry against targeted attacks. To this end, we have studied well-known targeted attacks and created a taxonomy for them. Furthermore, we conduct a study, in which we interview six power distribution system operators (DSOs), to assess the level of cyber situation awareness among DSOs and to evaluate the efficiency and effectiveness of their currently deployed systems and practices for detecting and responding to targeted attacks. Our findings indicate that the power industry is very well prepared for traditional threats, such as physical attacks. However, cyber attacks, and especially sophisticated targeted attacks, where social engineering is one of the strategies used, have not been addressed appropriately so far. Finally, by understanding previous attacks and learning from them, we try to provide the industry with guidelines for improving their situation awareness and defense (both detection and response) capabilities.
Keywords: cyber situation awareness, incident management, industrial control systems, information security, interview study, power industry, preparedness, targeted attacks (ID#: 15-5954)
URL:  http://doi.acm.org/10.1145/2667190.2667192

 

Qian Chen, Sherif Abdelwahed. “Towards Realizing Self-Protecting SCADA Systems.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 105-108. doi:10.1145/2602087.2602113
Abstract: SCADA (supervisory control and data acquisition) systems are prime cyber attack targets due to potential impacts on properties, economies, and human lives. Current security solutions, such as firewalls, access controls, and intrusion detection and response systems, can protect SCADA systems from cyber assaults (e.g., denial of service attacks, SQL injection attacks, and spoofing attacks), but they are far from perfect. A new technology is emerging to enable self-protection in SCADA systems. Self-protecting SCADA systems are typically an integration of system behavior monitoring, attack estimation and prevention, known and unknown attack detection, live forensics analysis, and system behavior regulation with appropriate responses. This paper first discusses the key components of a self-protecting SCADA system and then surveys the state-of-the-art research and techniques to the realization of such systems.
Keywords: autonomic computing, cybersecurity, self-protection (ID#: 15-5955)
URL:  http://doi.acm.org/10.1145/2602087.2602113

 

Vijay Anand. “Intrusion Detection: Tools, Techniques and Strategies.” SIGUCCS '14 Proceedings of the 42nd Annual ACM SIGUCCS Conference on User Services, November 2014, Pages 69-73. doi:10.1145/2661172.2661186
Abstract: Intrusion detection is an important aspect of modern cyber-enabled infrastructure for identifying threats to digital assets. Intrusion detection encompasses tools, techniques, and strategies to recognize evolving threats, thereby contributing to a secure and trustworthy computing framework. There are two primary intrusion detection paradigms: signature pattern matching and anomaly detection. Signature pattern matching encompasses identifying the known sequences of causal events of a threat and matching them against incoming events; if the pattern of incoming events matches the signature of an attack, there is a positive match, which can be labeled for further processing of countermeasures. Anomaly detection is based on the premise that an attack signature is unknown: events can deviate from normal digital behavior or can inadvertently give out information during normal event processing. These stochastic events have to be evaluated by a variety of techniques, such as artificial intelligence and prediction models, before identifying potential threats to the digital assets in a cyber-enabled system. Once a pattern is identified in the evaluation process, after excluding false positives and negatives, it can be classified as a signature pattern. This paper highlights a setup in an educational environment for effectively flagging threats to the digital assets in the system using an intrusion detection framework. Intrusion detection frameworks come in two primary formats: network intrusion detection systems and host intrusion detection systems. In this paper we identify different publicly available intrusion detection tools and their effectiveness in a test environment, and we also look at the mix of tools that can be deployed to effectively flag threats as they evolve. The effect of encryption in such a setup, and threat identification under encryption, is also studied.
Keywords: anomaly, attacks, honeynet, honeypot, intrusion, pattern, sanitization, virtualized (ID#: 15-5956)
URL:  http://doi.acm.org/10.1145/2661172.2661186
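The signature pattern matching paradigm described above reduces, at its core, to checking whether a known attack's causal event sequence occurs in order within the incoming event stream. The sketch below shows that check with invented event names; production tools such as Snort match on packet content and rule options rather than bare event labels.

    def matches_signature(events, signature):
        # True if the signature's events occur in order (not necessarily
        # consecutively) within the observed event stream.
        it = iter(events)
        return all(step in it for step in signature)

    observed = ["port_scan", "login_fail", "login_fail", "priv_escalation"]
    brute_force_sig = ["port_scan", "login_fail", "priv_escalation"]
    print(matches_signature(observed, brute_force_sig))  # True -> raise an alert

Anomaly detection, the other paradigm discussed, inverts this logic: instead of matching known sequences, it flags event streams that deviate from a learned model of normal behaviour.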

 

Vasilis G. Tasiopoulos, Sokratis K. Katsikas. “Bypassing Antivirus Detection with Encryption.” PCI '14 Proceedings of the 18th Panhellenic Conference on Informatics, October 2014, Pages 1-2.  doi:10.1145/2645791.2645857
Abstract: Bypassing an antivirus is a common issue among ethical hackers and penetration testers. Several techniques have been—and are being—used to bypass antivirus software; an effective and efficient one is to encrypt the malware by using special purpose tools, called crypters. In this paper, a novel crypter, which is based on the latest techniques, and can bypass antivirus software is described. The crypter is based on a new architecture that enables it to provide a unique output every time it is used. Testing results indicate that the proposed crypter evades detection by all antivirus in all runs.
Keywords: Antivirus, Crypter, Encryption, Malware (ID#: 15-5957)
URL:  http://doi.acm.org/10.1145/2645791.2645857

 

Joshua Cazalas, J. Todd McDonald, Todd R. Andel, Natalia Stakhanova. “Probing the Limits of Virtualized Software Protection.” PPREW-4 Proceedings of the 4th Program Protection and Reverse Engineering Workshop, December 2014, Article No. 5. doi: 10.1145/2689702.2689707
Abstract: Virtualization is becoming a prominent field of research not only in distributed systems, but also in software protection and obfuscation. Software virtualization has given rise to advanced techniques that may provide intellectual property protection and anti-cloning resilience. We present results of an empirical study that answers whether integrity of execution can be preserved for process-level virtualization protection schemes in the face of adversarial analysis. Our particular approach considers exploits that target the virtual execution environment itself and how it interacts with the underlying host operating system and hardware. We give initial results that indicate such protection mechanisms may be vulnerable at the level where the virtualized code interacts with the underlying operating system. The resolution of whether such attacks can undermine security will help create better detection and analysis methods for malware that also employ software virtualization. Our findings help frame research for additional mitigation techniques using hardware-based integration or hybrid virtualization techniques that can better defend legitimate uses of virtualized software protection.
Keywords: Software protection, obfuscation, process-level virtualization, tamper resistance, virtualized code (ID#: 15-5958)
URL: http://doi.acm.org/10.1145/2689702.2689707

 

Tsung-Hsuan Ho, Daniel Dean, Xiaohui Gu, William Enck. “PREC: Practical Root Exploit Containment for Android Devices.” CODASPY '14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 187-198. doi:10.1145/2557547.2557563
Abstract: Application markets such as the Google Play Store and the Apple App Store have become the de facto method of distributing software to mobile devices. While official markets dedicate significant resources to detecting malware, state-of-the-art malware detection can be easily circumvented using logic bombs or checks for an emulated environment. We present a Practical Root Exploit Containment (PREC) framework that protects users from such conditional malicious behavior. PREC can dynamically identify system calls from high-risk components (e.g., third-party native libraries) and execute those system calls within isolated threads. Hence, PREC can detect and stop root exploits with high accuracy while imposing low interference to benign applications. We have implemented PREC and evaluated our methodology on 140 most popular benign applications and 10 root exploit malicious applications. Our results show that PREC can successfully detect and stop all the tested malware while reducing the false alarm rates by more than one order of magnitude over traditional malware detection algorithms. PREC is light-weight, which makes it practical for runtime on-device root exploit detection and containment.
Keywords: android, dynamic analysis, host intrusion detection, malware, root exploits (ID#: 15-5959)
URL: http://doi.acm.org/10.1145/2557547.2557563

 

Tobias Wüchner, Martín Ochoa, Alexander Pretschner. “Malware Detection with Quantitative Data Flow Graphs.” ASIA CCS '14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 271-282. doi:10.1145/2590296.2590319
Abstract: We propose a novel behavioral malware detection approach based on a generic system-wide quantitative data flow model. We base our data flow analysis on the incremental construction of aggregated quantitative data flow graphs. These graphs represent communication between different system entities such as processes, sockets, files or system registries. We demonstrate the feasibility of our approach through a prototypical instantiation and implementation for the Windows operating system. Our experiments yield encouraging results: in our data set of samples from common malware families and popular non-malicious applications, our approach has a detection rate of 96% and a false positive rate of less than 1.6%. In comparison with closely related data flow based approaches, we achieve similar detection effectiveness with considerably better performance: an average full system analysis takes less than one second.
Keywords: behavioral malware analysis, data flow tracking, intrusion detection, malware detection, quantitative data flows (ID#: 15-5960)
URL: http://doi.acm.org/10.1145/2590296.2590319
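The graph construction at the heart of this approach can be sketched without any of the paper's machinery: nodes are system entities (processes, files, sockets), and each observed transfer adds its byte count to the corresponding edge weight. The event format and entity names below are invented.

    from collections import defaultdict

    def build_qdfg(events):
        # Aggregate (source, destination, bytes) transfer events into a
        # quantitative data flow graph: edge -> total bytes transferred.
        graph = defaultdict(int)
        for src, dst, nbytes in events:
            graph[(src, dst)] += nbytes
        return dict(graph)

    events = [("winword.exe", "report.docx", 4096),
              ("evil.exe", "passwords.txt", 512),
              ("evil.exe", "socket:198.51.100.7:443", 512),
              ("evil.exe", "socket:198.51.100.7:443", 2048)]
    qdfg = build_qdfg(events)
    print(qdfg[("evil.exe", "socket:198.51.100.7:443")])  # 2560 bytes sent out

Quantitative features of such a graph (for example, the ratio of bytes read from files to bytes written to sockets per process) could then feed a behavioral classifier of the kind the authors evaluate.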

 

Mikhail Kazdagli, Ling Huang, Vijay Reddi, Mohit Tiwari. “Morpheus: Benchmarking Computational Diversity in Mobile Malware.” HASP '14 Proceedings of the Third Workshop on Hardware and Architectural Support for Security and Privacy, June 2014, Article No. 3. doi:10.1145/2611765.2611767
Abstract: Computational characteristics of a program can potentially be used to identify malicious programs from benign ones. However, systematically evaluating malware detection techniques, especially when malware samples are hard to run correctly and can adapt their computational characteristics, is a hard problem. We introduce Morpheus—a benchmarking tool that includes both real mobile malware and a synthetic malware generator that can be configured to generate a computationally diverse malware sample-set—as a tool to evaluate computational signatures based malware detection. Morpheus also includes a set of computationally diverse benign applications that can be used to repackage malware into, along with a recorded trace of over 1 hour long realistic human usage for each app that can be used to replay both benign and malicious executions. The current Morpheus prototype targets Android applications and malware samples. Using Morpheus, we quantify the computational diversity in malware behavior and expose opportunities for dynamic analyses that can detect mobile malware. Specifically, the use of obfuscation and encryption to thwart static analyses causes the malicious execution to be more distinctive—a potential opportunity for detection. We also present potential challenges, specifically, minimizing false positives that can arise due to diversity of benign executions.
Keywords: mobile malware, performance counters, security (ID#: 15-5961)
URL:  http://doi.acm.org/10.1145/2611765.2611767

 

Mingshen Sun, Min Zheng, John C. S. Lui, Xuxian Jiang. “Design and Implementation of an Android Host-Based Intrusion Prevention System.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 226-235. doi:10.1145/2664243.2664245
Abstract: Android has a dominating share of the mobile market, and there is a significant rise in mobile malware targeting Android devices. Android malware accounted for 97% of all mobile threats in 2013 [26]. To protect smartphones and prevent privacy leakage, companies have implemented various host-based intrusion prevention systems (HIPS) on their Android devices. In this paper, we first analyze the implementations, strengths and weaknesses of three popular HIPS architectures. We demonstrate a severe loophole and weakness in an existing popular HIPS product which hackers can readily exploit. Then we present the design and implementation of a secure and extensible HIPS platform, "Patronus." Patronus not only provides intrusion prevention without the need to modify the Android system, it can also dynamically detect existing malware based on runtime information. We propose a two-phase dynamic detection algorithm for detecting running malware. Our experiments show that Patronus can prevent intrusive behaviors efficiently and detect malware accurately with very low performance overhead and power consumption.
Keywords: (not provided) (ID#: 15-5962)
URL:  http://doi.acm.org/10.1145/2664243.2664245

 

Sean Whalen, Nathaniel Boggs, Salvatore J. Stolfo. “Model Aggregation for Distributed Content Anomaly Detection.” AISec '14 Proceedings of the 2014 Workshop on Artificial Intelligence and Security, November 2014, Pages 61-71. doi:10.1145/2666652.2666660
Abstract: Cloud computing offers a scalable, low-cost, and resilient platform for critical applications. Securing these applications against attacks targeting unknown vulnerabilities is an unsolved challenge. Network anomaly detection addresses such zero-day attacks by modeling attributes of attack-free application traffic and raising alerts when new traffic deviates from this model. Content anomaly detection (CAD) is a variant of this approach that models the payloads of such traffic instead of higher level attributes. Zero-day attacks then appear as outliers to properly trained CAD sensors. In the past, CAD was unsuited to cloud environments due to the relative overhead of content inspection and the dynamic routing of content paths to geographically diverse sites. We challenge this notion and introduce new methods for efficiently aggregating content models to enable scalable CAD in dynamically-pathed environments such as the cloud. These methods eliminate the need to exchange raw content, drastically reduce network and CPU overhead, and offer varying levels of content privacy. We perform a comparative analysis of our methods using Random Forest, Logistic Regression, and Bloom Filter-based classifiers for operation in the cloud or other distributed settings such as wireless sensor networks. We find that content model aggregation offers statistically significant improvements over non-aggregate models with minimal overhead, and that distributed and non-distributed CAD have statistically indistinguishable performance. Thus, these methods enable the practical deployment of accurate CAD sensors in a distributed attack detection infrastructure.
Keywords: anomaly detection, machine learning, model aggregation (ID#: 15-5963)
URL: http://doi.acm.org/10.1145/2666652.2666660
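For the Bloom-filter-based classifiers in particular, aggregation is cheap because two Bloom filters built with identical parameters merge by bitwise OR of their bit vectors, so sites never need to exchange raw content. The sketch below illustrates that one property; the filter parameters and payload fragments are hypothetical, and the filter is deliberately tiny.

import hashlib

class BloomFilter:
    """Tiny Bloom filter over byte strings (illustrative parameters only)."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

    def merge(self, other):
        """Aggregate two content models without exchanging raw content."""
        assert (self.m, self.k) == (other.m, other.k)
        merged = BloomFilter(self.m, self.k)
        merged.bits = self.bits | other.bits
        return merged

# Two sites train local models on their own (hypothetical) payload fragments,
# then only the bit vectors are aggregated centrally.
site_a, site_b = BloomFilter(), BloomFilter()
site_a.add(b"GET /index")
site_b.add(b"POST /login")
combined = site_a.merge(site_b)
print(b"GET /index" in combined, b"/etc/passwd" in combined)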

 

Tamas K. Lengyel, Steve Maresca, Bryan D. Payne, George D. Webster, Sebastian Vogl, Aggelos Kiayias. “Scalability, Fidelity and Stealth in the DRAKVUF Dynamic Malware Analysis System.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 386-395. doi:10.1145/2664243.2664252
Abstract: Malware is one of the biggest security threats on the Internet today and deploying effective defensive solutions requires the rapid analysis of a continuously increasing number of malware samples. With the proliferation of metamorphic malware the analysis is further complicated as the efficacy of signature-based static analysis systems is greatly reduced. While dynamic malware analysis is an effective alternative, the approach faces significant challenges as the ever increasing number of samples requiring analysis places a burden on hardware resources. At the same time modern malware can both detect the monitoring environment and hide in unmonitored corners of the system. In this paper we present DRAKVUF, a novel dynamic malware analysis system designed to address these challenges by building on the latest hardware virtualization extensions and the Xen hypervisor. We present a technique for improving stealth by initiating the execution of malware samples without leaving any trace in the analysis machine. We also present novel techniques to eliminate blind-spots created by kernel-mode rootkits by extending the scope of monitoring to include kernel internal functions, and to monitor file-system accesses through the kernel's heap allocations. With extensive tests performed on recent malware samples we show that DRAKVUF achieves significant improvements in conserving hardware resources while providing a stealthy, in-depth view into the behavior of modern malware.
Keywords: dynamic malware analysis, virtual machine introspection (ID#: 15-5964)
URL: http://doi.acm.org/10.1145/2664243.2664252

 

David Barrera, Daniel McCarney, Jeremy Clark, Paul C. van Oorschot. “Baton: Certificate Agility for Android's Decentralized Signing Infrastructure.” WiSec '14 Proceedings of the 2014 ACM Conference on Security and Privacy in Wireless & Mobile Networks, July 2014, Pages 1-12. doi:10.1145/2627393.2627397
Abstract: Android's trust-on-first-use application signing model associates developers with a fixed code signing certificate, but lacks a mechanism to enable transparent key updates or certificate renewals. The model allows application updates to be recognized as authorized by a party with access to the original signing key. However, changing keys or certificates requires that end users manually uninstall/reinstall apps, losing all non-backed up user data. In this paper, we show that with appropriate OS support, developers can securely and without user intervention transfer signing authority to a new signing key. Our proposal, Baton, modifies Android's app installation framework enabling key agility while preserving backwards compatibility with current apps and current Android releases. Baton is designed to work consistently with current UID sharing and signature permission requirements. We discuss technical details of the Android-specific implementation, as well as the applicability of the Baton protocol to other decentralized environments.
Keywords: android, application signing, mobile operating systems (ID#: 15-5965)
URL: http://doi.acm.org/10.1145/2627393.2627397
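The general shape of such a key transfer, in which the currently trusted key signs an endorsement of its successor so verifiers accept updates signed by the new key without user action, can be sketched with the third-party Python cryptography package. This illustrates the concept only and is not Baton's actual Android protocol; the token format is invented.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

old_key = Ed25519PrivateKey.generate()  # developer's original signing key
new_key = Ed25519PrivateKey.generate()  # its intended successor

# "Baton pass": the old key endorses the raw bytes of the new public key.
new_pub = new_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
endorsement = old_key.sign(new_pub)

def accept_update(update, update_sig, trusted_old_pub, endorsement, new_pub):
    """Accept an update signed by the new key iff the old key endorsed that key."""
    try:
        trusted_old_pub.verify(endorsement, new_pub)  # verify the key transfer
        Ed25519PublicKey.from_public_bytes(new_pub).verify(update_sig, update)
        return True
    except InvalidSignature:
        return False

update = b"app-update-bytes"
print(accept_update(update, new_key.sign(update), old_key.public_key(),
                    endorsement, new_pub))  # True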

 

Todd R. Andel, Lindsey N. Whitehurst, Jeffrey T. McDonald. “Software Security and Randomization through Program Partitioning and Circuit Variation.” MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 79-86. doi:10.1145/2663474.2663484
Abstract: The commodity status of Field Programmable Gate Arrays (FPGAs) has allowed computationally intensive algorithms, such as cryptographic protocols, to take advantage of faster hardware speed while simultaneously leveraging the reconfigurability and lower cost of software. Numerous security applications have been transitioned into FPGA implementations, allowing security applications such as firewalls and packet scanning to operate at real-time speeds on high speed networks. However, the utilization of FPGAs to directly secure software vulnerabilities is seemingly non-existent. Protecting program integrity and confidentiality is crucial as malicious attacks through injected code are becoming increasingly prevalent. This paper lays the foundation for continuing research into how to protect software by partitioning critical sections using reconfigurable hardware. This approach is similar to a traditional coprocessor approach to scheduling opcodes for execution on specialized hardware as opposed to running on the native processor. However, the partitioned program model gives the programmer the ability to split portions of an application onto reconfigurable hardware at compile time. The fundamental underlying hypothesis is that synthesizing portions of programs onto hardware can mitigate potential software vulnerabilities. Further, this approach provides an avenue for randomization or diversity of software layout and circuit variation.
Keywords: circuit variation, program protection, reconfigurable hardware, secure software, software partitioning (ID#: 15-5966)
URL:  http://doi.acm.org/10.1145/2663474.2663484
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Virtualization Privacy Auditing

 

 
SoS Logo

Virtualization Privacy Auditing


With the growth of Cloud applications, the problems of security and privacy are growing as well. Determining whether security is working and privacy is being protected requires the ability to audit successfully. Such audits not only help to determine the level of protection, but also provide data to inform the development of metrics. The research presented here is current as of July 21, 2015.


Denzil Ferreira, Vassilis Kostakos, Alastair R. Beresford, Janne Lindqvist, Anind K. Dey. “Securacy: An Empirical Investigation of Android Applications' Network Usage, Privacy and Security.” WiSec '15 Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks, June 2015, Article No. 11. doi:10.1145/2766498.2766506
Abstract: Smartphone users do not fully know what their apps do. For example, an application's network usage and underlying security configuration are invisible to users. In this paper we introduce Securacy, a mobile app that explores users' privacy and security concerns with Android apps. Securacy takes a reactive, personalized approach, highlighting app permission settings that the user has previously stated are concerning, and provides feedback on the use of secure and insecure network communication for each app. We began our design of Securacy by conducting a literature review and in-depth interviews with 30 participants to understand their concerns. We used this knowledge to build Securacy and evaluated its use by another set of 218 anonymous participants who installed the application from the Google Play store. Our results show that access to address book information is by far the biggest privacy concern. Over half (56.4%) of the connections made by apps are insecure, and the destination of the majority of network traffic is North America, regardless of the location of the user. Our app provides unprecedented insight into Android applications' communication behavior globally, indicating that the majority of apps currently use insecure network connections.
Keywords: applications, context, experience sampling, network, privacy (ID#: 15-5942)
URL: http://doi.acm.org/10.1145/2766498.2766506

 

Syed Rizvi, Jungwoo Ryoo, John Kissell, Bill Aiken. “A Stakeholder-Oriented Assessment Index for Cloud Security Auditing.” IMCOM '15 Proceedings of the 9th International Conference on Ubiquitous Information Management and Communication, January 2015, Article No. 55. doi:10.1145/2701126.2701226
Abstract: Cloud computing is an emerging computing model that provides numerous advantages to organizations (both service providers and customers) in terms of massive scalability, lower cost, and flexibility, to name a few. Despite these technical and economical advantages of cloud computing, many potential cloud consumers are still hesitant to adopt cloud computing due to security and privacy concerns. This paper describes some of the unique cloud computing security factors and subfactors that play a critical role in addressing cloud security and privacy concerns. To mitigate these concerns, we develop a security metric tool to provide information to cloud users about the security status of a given cloud vendor. The primary objective of the proposed metric is to produce a security index that describes the security level accomplished by an evaluated cloud computing vendor. The resultant security index will give confidence to different cloud stakeholders and is likely to help them in decision making, increase the predictability of the quality of service, and allow appropriate proactive planning if needed before migrating to the cloud. To show the practicality of the proposed metric, we provide two case studies based on the available security information about two well-known cloud service providers (CSP). The results of these case studies demonstrated the effectiveness of the security index in determining the overall security level of a CSP with respect to the security preferences of cloud users.
Keywords: cloud auditing, cloud security, data privacy, security metrics (ID#: 15-5943)
URL: http://doi.acm.org/10.1145/2701126.2701226
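A weighted security index of the kind the paper proposes can be reduced to a one-line computation. The factor names, weights, and per-factor scores below are hypothetical stand-ins, not the paper's actual subfactors or case-study values; they only show how such an index would rank two providers.

def security_index(scores, weights):
    """Weighted average of per-factor scores in [0, 1]; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[f] * scores[f] for f in weights)

weights = {"data_protection": 0.4, "compliance": 0.3, "availability": 0.3}
csp_a = {"data_protection": 0.8, "compliance": 0.9, "availability": 0.7}
csp_b = {"data_protection": 0.6, "compliance": 0.7, "availability": 0.9}
print(security_index(csp_a, weights))  # about 0.80
print(security_index(csp_b, weights))  # about 0.72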

 

V. Padmapriya, J. Amudhavel, M. Thamizhselvi, K. Bakkiya, B. Sujitha, K. Prem Kumar. “A Scalable Service Oriented Consistency Model for Cloud Environment (SSOCM).” ICARCSET '15 Proceedings of the 2015 International Conference on Advanced Research in Computer Science Engineering & Technology (ICARCSET 2015), March 2015, Article No. 24. doi:10.1145/2743065.2743089
Abstract: The cloud computing paradigm spans the world and is used not only to gather users' information but also to let users share information among themselves. Existing systems have discussed trace-based verification and auditing of consistency models at worldwide scale, where achieving strong consistency is very expensive. Most consistency is achieved during security operations in the cloud domain, with violations. Consistency is easy to integrate across multiple servers and to maintain under replication. In our proposed system, users can easily assess the quality of a cloud service and choose a precise consistency service provider (CSP) among various applicants. A thorough theoretical study of consistency models in cloud computing is conducted. Finally, we devise a Heuristic Auditing Strategy (HAS), grounded in the Consistency, Availability and Partition tolerance (CAP) theorem, with which users can assess the quality of a cloud service and choose the right consistency service provider (CSP) among various candidates.
Keywords: consistency, auditing consistency, consistency service provider (CSP), heuristic auditing strategy (HAS), consistency availability and partition tolerance (CAP) (ID#: 15-5944)
URL:  http://doi.acm.org/10.1145/2743065.2743089

 

Shanhe Yi, Cheng Li, Qun Li. “A Survey of Fog Computing: Concepts, Applications and Issues.” Mobidata '15 Proceedings of the 2015 Workshop on Mobile Big Data, June 2015, Pages 37-42. doi:10.1145/2757384.2757397
Abstract: Despite the increasing usage of cloud computing, there are still issues unsolved due to inherent problems of cloud computing such as unreliable latency, lack of mobility support and location-awareness. Fog computing can address those problems by providing elastic resources and services to end users at the edge of the network, while cloud computing is more about providing resources distributed in the core network. This survey discusses the definition of fog computing and similar concepts, introduces representative application scenarios, and identifies various aspects of issues we may encounter when designing and implementing fog computing systems. It also highlights some opportunities and challenges, as directions for potential future work, in related techniques that need to be considered in the context of fog computing.
Keywords: cloud computing, edge computing, fog computing, mobile cloud computing, mobile edge computing, review (ID#: 15-5945)
URL:  http://doi.acm.org/10.1145/2757384.2757397

 

Tianwei Zhang, Ruby B. Lee. “CloudMonatt: An Architecture for Security Health Monitoring and Attestation of Virtual Machines in Cloud Computing.” ISCA '15 Proceedings of the 42nd Annual International Symposium on Computer Architecture, June 2015, Pages 362-374. doi:10.1145/2749469.2750422
Abstract: Cloud customers need guarantees regarding the security of their virtual machines (VMs) operating within an Infrastructure as a Service (IaaS) cloud system. This is complicated by the customer not knowing where his VM is executing, and by the semantic gap between what the customer wants to know and what can be measured in the cloud. We present an architecture for monitoring a VM's security health, with the ability to attest this to the customer in an unforgeable manner. We show a concrete implementation of property-based attestation and a full prototype based on the OpenStack open source cloud software.
Keywords: (not provided) (ID#: 15-5946)
URL:  http://doi.acm.org/10.1145/2749469.2750422
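CloudMonatt's attestation is property-based and rooted in trusted hardware; as a loose operational illustration of what an unforgeable health report means, the sketch below MACs a set of measured properties under a key known only to the monitoring service and the verifier. The key, property names, and report format are invented for illustration and are not the paper's protocol.

import hashlib
import hmac
import json

ATTESTATION_KEY = b"provisioned-shared-secret"  # hypothetical monitor/verifier key

def attest(vm_id, properties):
    """Produce a report that binds measured properties to a VM, with a MAC tag."""
    payload = json.dumps({"vm": vm_id, "props": properties}, sort_keys=True)
    tag = hmac.new(ATTESTATION_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify(report):
    """A customer-side check that the report was not forged or altered."""
    expected = hmac.new(ATTESTATION_KEY, report["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, report["tag"])

report = attest("vm-42", {"memory_integrity": True, "patch_level_ok": True})
print(verify(report))  # True
report["payload"] = report["payload"].replace("true", "false")
print(verify(report))  # False: a tampered report no longer verifies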

 

Yubin Xia, Yutao Liu, Cheng Tan, Mingyang Ma, Haibing Guan, Binyu Zang, Haibo Chen. “TinMan: Eliminating Confidential Mobile Data Exposure with Security Oriented Offloading.” EuroSys '15 Proceedings of the Tenth European Conference on Computer Systems, April 2015, Article No. 27. doi:10.1145/2741948.2741977
Abstract: The wide adoption of smart devices has stimulated a fast shift of security-critical data from desktop to mobile devices. However, recurrent device theft and loss expose mobile devices to various security threats and even physical attacks. This paper presents TinMan, a system that protects confidential data such as web site passwords and credit card numbers (we use the term cor, short for Confidential Record, to represent these data) from being leaked or abused even under device theft. TinMan separates accesses of cor from the rest of the functionality of an app by introducing a trusted node to store cor and offloading any code on a mobile device that accesses cor to the trusted node. This completely eliminates the exposure of cor on the mobile device. The key challenges for TinMan include deciding when and how to efficiently and transparently offload execution; TinMan addresses these challenges with security-oriented offloading: a low-overhead tainting scheme called asymmetric tainting tracks accesses to cor and triggers offloading, while transparent SSL session injection and TCP payload replacement carry out the offloaded accesses to cor. We have implemented a prototype of TinMan based on Android and demonstrated how TinMan protects a user's bank account information and credit card number without modifying the apps. Evaluation results also show that TinMan incurs only a small amount of performance and power overhead.
Keywords: (not provided) (ID#: 15-5947)
URL: http://doi.acm.org/10.1145/2741948.2741977
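The taint-triggered offloading idea can be caricatured in a few lines: any value derived from a confidential record (cor) carries a taint label, and an operation touching a tainted operand is redirected to the trusted node instead of running locally. Everything below, the class, the fake RPC, and the example operations, is invented to illustrate the dispatch pattern, not TinMan's SSL/TCP mechanism.

class Tainted:
    """Label standing in for a cor value that only the trusted node holds."""
    def __init__(self, label):
        self.label = label
    def __repr__(self):
        return f"<cor:{self.label}>"

def offload(op, args):
    """Stand-in for an RPC; the trusted node resolves labels to real values."""
    print(f"offloading {op.__name__}{args} to trusted node")
    return Tainted(f"result-of-{op.__name__}")

def run(op, *args):
    """Asymmetric-tainting dispatch: any tainted operand forces remote execution."""
    if any(isinstance(a, Tainted) for a in args):
        return offload(op, args)
    return op(*args)  # untainted work stays on the device

password = Tainted("bank-password")        # never materialized on the device
print(run(str.upper, "hello"))             # runs locally: HELLO
print(run(str.__add__, "pw=", password))   # touches cor, so it is offloaded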

 

Marshini Chetty, Hyojoon Kim, Srikanth Sundaresan, Sam Burnett, Nick Feamster, W. Keith Edwards. “uCap: An Internet Data Management Tool for the Home.” CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 3093-3102. doi:10.1145/2702123.2702218
Abstract: Internet Service Providers (ISPs) have introduced "data caps", or quotas on the amount of data that a customer can download during a billing cycle. Under this model, Internet users who reach a data cap can be subject to degraded performance, extra fees, or even temporary interruption of Internet service. For this reason, users need better visibility into and control over their Internet usage to help them understand what uses up data and control how these quotas are reached. In this paper, we present the design and implementation of a tool, called uCap, to help home users manage Internet data. We conducted a field trial of uCap in 21 home networks in three countries and performed an in-depth qualitative study of ten of these homes. We present the results of the evaluation and implications for the design of future Internet data management tools.
Keywords: bandwidth caps, data caps, home networking tools (ID#: 15-5948)
URL: http://doi.acm.org/10.1145/2702123.2702218
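The bookkeeping at the core of a tool like uCap is simple quota accounting: attribute bytes to devices, then report each device's share and progress toward the cap. The sketch below is a minimal stand-in; the cap value, device names, and byte counts are hypothetical, and a real deployment would pull counters from the home router.

from collections import defaultdict

CAP_BYTES = 100 * 10**9          # hypothetical 100 GB monthly data cap
usage = defaultdict(int)         # device -> bytes used this billing cycle

def record(device, nbytes):
    """Attribute observed traffic to the device that generated it."""
    usage[device] += nbytes

def report():
    """Show per-device shares and progress toward the cap."""
    total = sum(usage.values())
    for device, used in sorted(usage.items(), key=lambda kv: -kv[1]):
        print(f"{device:10s} {used / 1e9:6.2f} GB ({100 * used / total:4.1f}%)")
    print(f"cycle total: {total / 1e9:.2f} GB of {CAP_BYTES / 1e9:.0f} GB cap")

record("laptop", 42 * 10**9)
record("phone", 13 * 10**9)
report()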

 

Robert Cowles, Craig Jackson, Von Welch. “Facilitating Scientific Collaborations by Delegating Identity Management: Reducing Barriers & Roadmap for Incremental Implementation.” CLHS '15 Proceedings of the 2015 Workshop on Changing Landscapes in HPC Security, April 2015, Pages 15-19. doi:10.1145/2752499.2752501
Abstract: DOE Labs are often presented with conflicting requirements for providing services to scientific collaboratories. An identity management model involving transitive trust is increasingly common. We show how existing policies allow for increased delegation of identity management within an acceptable risk management framework. Specific topics addressed include deemed exports, DOE orders, Inertia and Risk, Traceability, and Technology Limitations. Real life examples of an incremental approach to implementing transitive trust are presented.
Keywords: access control, cyber security, delegation, identity, identity management, risk management, transitive trust (ID#: 15-5949)
URL: http://doi.acm.org/10.1145/2752499.2752501

 

Qiang Liu, Edith C.-H. Ngai, Xiping Hu, Zhengguo Sheng, Victor C.M. Leung, Jianping Yin. “SH-CRAN: Hierarchical Framework to Support Mobile Big Data Computing in a Secure Manner.” Mobidata '15 Proceedings of the 2015 Workshop on Mobile Big Data, June 2015, Pages 19-24.  doi:10.1145/2757384.2757388
Abstract: The heterogeneous cloud radio access network (H-CRAN) has been emerging as a cost-effective solution supporting huge volumes of mobile traffic in the big data era. This paper investigates potential security challenges for H-CRAN and analyzes their likelihoods and difficulty levels. Typically, the security threats in H-CRAN can be categorized into three groups, i.e., security threats towards remote radio heads (RRHs), those towards the radio cloud infrastructure, and those towards backhaul networks. To overcome the challenges posed by these security threats, we propose a hierarchical security framework called Secure H-CRAN (SH-CRAN) to protect the H-CRAN system against the potential threats. Specifically, the architecture of SH-CRAN contains three logically independent secure domains (SDs), which are the SDs of the radio cloud infrastructure, the RRHs, and the backhauls. The notable merits of SH-CRAN include two aspects: (i) the proposed framework is able to provide security assurance for the evolving H-CRAN system, and (ii) the impacts of any failure are limited to one specific component of H-CRAN. The proposed SH-CRAN can be regarded as the basis of future security mechanisms for mobile big data computing.
Keywords: heterogeneous cloud radio access network, hierarchical security framework, mobile big data computing (ID#: 15-5950)
URL:   http://doi.acm.org/10.1145/2757384.2757388

 

Jun Wang, Zhiyun Qian, Zhichun Li, Zhenyu Wu, Junghwan Rhee, Xia Ning, Peng Liu, Guofei Jiang. “Discover and Tame Long-running Idling Processes in Enterprise Systems.” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 543-554. doi:10.1145/2714576.2714613
Abstract: Reducing attack surface is an effective preventive measure to strengthen security in large systems. However, it is challenging to apply this idea in an enterprise environment where systems are complex and evolve over time. In this paper, we empirically analyze and measure a real enterprise to identify unused services that expose attack surface. Interestingly, such unused services are known to exist and are summarized in security best practices, yet such solutions require significant manual effort. We propose an automated approach to accurately detect idling (most likely unused) services that are in either blocked or bookkeeping states. The idea is to identify repeating events with perfect time alignment, which is the signature of idling. We implement this idea by developing a novel statistical algorithm based on autocorrelation with time information incorporated. From our measurement results, we find that 88.5% of the detected idling services can be constrained with a simple syscall-based policy, which confines the process behaviors to their bookkeeping states. In addition, working with two IT departments (one of which served as a cross validation), we received positive feedback showing that about 30.6% of such services can be safely disabled or uninstalled directly. In the future, the IT departments plan to incorporate the results to build a "smaller" OS installation image. Finally, we believe our measurement results raise awareness of the potential security risks of idling services.
Keywords: attack surface reduction, autocorrelation, enterprise systems, idling service detection (ID#:15-5951)
URL:   http://doi.acm.org/10.1145/2714576.2714613
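The paper's detection signal, repeating events with perfect time alignment, is essentially a strong peak in the autocorrelation of the event time series. The sketch below bins event timestamps and scans lags for such a peak; the bin size, threshold, and example trace are hypothetical, and the authors' algorithm incorporates time information in ways not modeled here.

def autocorrelation(x, lag):
    """Sample autocorrelation of a series at a given lag."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    den = sum((v - mean) ** 2 for v in x)
    return num / den if den else 0.0

def looks_idle(event_times, bin_size=1.0, min_corr=0.9):
    """Bin event timestamps, then look for a lag with near-perfect alignment."""
    nbins = int(max(event_times) / bin_size) + 1
    series = [0] * nbins
    for t in event_times:
        series[int(t / bin_size)] += 1
    best = max(autocorrelation(series, lag) for lag in range(1, nbins // 2))
    return best >= min_corr

# Hypothetical trace: a daemon that wakes up exactly every 60 seconds.
print(looks_idle([60.0 * i for i in range(30)]))  # True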

 

Patrick Colp, Jiawen Zhang, James Gleeson, Sahil Suneja, Eyal de Lara, Himanshu Raj, Stefan Saroiu, Alec Wolman. “Protecting Data on Smartphones and Tablets from Memory Attacks.” ASPLOS '15 Proceedings of the Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems, March 2015, Pages 177-189. doi:10.1145/2694344.2694380
Abstract: Smartphones and tablets are easily lost or stolen. This makes them susceptible to an inexpensive class of memory attacks, such as cold-boot attacks, bus monitoring to observe the memory bus, and DMA attacks. This paper describes Sentry, a system that allows applications and OS components to store their code and data on the System-on-Chip (SoC) rather than in DRAM. We use ARM-specific mechanisms originally designed for embedded systems, but still present in today's mobile devices, to protect applications and OS subsystems from memory attacks.
Keywords: AES, DMA attack, android, arm, bus monitoring, cache, cold boot, encrypted RAM, encrypted memory, iRAM, nexus, tegra (ID#: 15-5952)
URL:   http://doi.acm.org/10.1145/2694344.2694380

 

Anjo Vahldiek-Oberwagner, Eslam Elnikety, Aastha Mehta, Deepak Garg, Peter Druschel, Rodrigo Rodrigues, Johannes Gehrke, Ansley Post. “Guardat: Enforcing Data Policies at the Storage Layer.” EuroSys '15 Proceedings of the Tenth European Conference on Computer Systems, April 2015, Article No. 13. doi:10.1145/2741948.2741958
Abstract: In today's data processing systems, both the policies protecting stored data and the mechanisms for their enforcement are spread over many software components and configuration files, increasing the risk of policy violation due to bugs, vulnerabilities and misconfigurations. Guardat addresses this problem. Users, developers and administrators specify file protection policies declaratively, concisely and separate from code, and Guardat enforces these policies by mediating I/O in the storage layer. Policy enforcement relies only on the integrity of the Guardat controller and any external policy dependencies. The semantic gap between the storage layer enforcement and per-file policies is bridged using cryptographic attestations from Guardat. We present the design and prototype implementation of Guardat, enforce example policies in a Web server, and show experimentally that its overhead is low.
Keywords: (not provided) (ID#: 15-5953)
URL:  http://doi.acm.org/10.1145/2741948.2741958
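Declarative per-file policies enforced by mediating I/O at the storage layer can be caricatured as a lookup table consulted on every request. The policy language, principals, and default-allow choice below are invented for illustration and are far simpler than Guardat's policy language and attestation machinery.

# Hypothetical declarative policies: path -> allowed principals per operation.
POLICIES = {
    "/var/www/site.conf": {"read": {"webserver"}, "write": {"admin"}},
    "/var/log/audit.log": {"read": {"admin"}, "write": {"logger"},
                           "append_only": True},
}

def mediate(principal, op, path):
    """Allow an I/O request only if the file's policy grants it."""
    policy = POLICIES.get(path)
    if policy is None:
        return True  # no policy attached: default-allow (a design choice)
    if op == "truncate" and policy.get("append_only"):
        return False  # append-only files may never be truncated
    return principal in policy.get(op, set())

print(mediate("webserver", "read", "/var/www/site.conf"))    # True
print(mediate("webserver", "write", "/var/www/site.conf"))   # False
print(mediate("logger", "truncate", "/var/log/audit.log"))   # False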
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.