International Security-Related Conferences

 

 

Conferences

 

The following pages provide highlights of Science of Security related research presented at these international conferences:

(ID#: 15-5614)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

International Conferences: Software Analysis, Evolution and Reengineering (SANER) Quebec, Canada

 


 

The 2015 IEEE 22nd International Conference on Software Analysis, Evolution and Reengineering (SANER) was held 2-6 March 2015 at École Polytechnique de Montréal in Montréal, Québec, Canada. SANER is a research conference on the theory and practice of recovering information from existing software and systems. It explores innovative methods of extracting the many kinds of information that can be recovered from software, software engineering documents, and systems artifacts, and examines innovative ways of using this information in system renovation and program understanding. Details about the conference can be found on its web page at: http://saner.soccerlab.polymtl.ca/doku.php?id=en:start  The presentations cited here relate specifically to the Science of Security.


 

Saied, M.A.; Benomar, O.; Abdeen, H.; Sahraoui, H., "Mining Multi-level API Usage Patterns," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 23-32, 2-6 March 2015. doi: 10.1109/SANER.2015.7081812
Abstract: Software developers need to cope with complexity of Application Programming Interfaces (APIs) of external libraries or frameworks. However, typical APIs provide several thousands of methods to their client programs, and such large APIs are difficult to learn and use. An API method is generally used within client programs along with other methods of the API of interest. Despite this, co-usage relationships between API methods are often not documented. We propose a technique for mining Multi-Level API Usage Patterns (MLUP) to exhibit the co-usage relationships between methods of the API of interest across interfering usage scenarios. We detect multi-level usage patterns as distinct groups of API methods, where each group is uniformly used across variable client programs, independently of usage contexts. We evaluated our technique through the usage of four APIs having up to 22 client programs per API. For all the studied APIs, our technique was able to detect usage patterns that are, almost all, highly consistent and highly cohesive across a considerable variability of client programs.
Keywords: application program interfaces; data mining; software libraries; MLUP; application programming interface; multilevel API usage pattern mining; Clustering algorithms; Context; Documentation; Graphical user interfaces; Java; Layout; Security; API Documentation; API Usage; Software Clustering; Usage Pattern (ID#: 15-5411)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081812&isnumber=7081802
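The co-usage grouping at the heart of such pattern mining can be illustrated with a small sketch. This is not the paper's clustering algorithm; it simply groups API methods whose client-usage profiles coincide, which conveys the idea of usage patterns that hold across client programs (all names below are invented):

```python
from collections import defaultdict

def mine_usage_patterns(client_usages):
    """Group API methods that are co-used by the same client programs.

    client_usages: dict mapping client name -> set of API methods it calls.
    Returns groups of methods sharing an identical usage profile, a crude
    stand-in for the clustering step of multi-level pattern mining.
    """
    # Each method's usage profile is the set of clients that call it.
    profiles = defaultdict(set)
    for client, methods in client_usages.items():
        for method in methods:
            profiles[method].add(client)

    # Methods with the same profile form one candidate usage pattern.
    patterns = defaultdict(list)
    for method, clients in profiles.items():
        patterns[frozenset(clients)].append(method)
    return [sorted(group) for group in patterns.values() if len(group) > 1]

usages = {
    "app1": {"open", "read", "close"},
    "app2": {"open", "read", "close", "seek"},
    "app3": {"open", "close"},
}
print(mine_usage_patterns(usages))  # [['close', 'open']]
```

Methods used by exactly the same clients end up in one candidate pattern; a real miner would cluster similar, not just identical, profiles.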

 

Ladanyi, G.; Toth, Z.; Ferenc, R.; Keresztesi, T., "A Software Quality Model for RPG," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 91-100, 2-6 March 2015. doi: 10.1109/SANER.2015.7081819
Abstract: The IBM i mainframe was designed to manage business applications for which reliability and quality are a matter of national security. The RPG programming language is the most frequently used one on this platform. The maintainability of the source code has a big influence on development costs, which is probably the reason why it is one of the most attractive, observed and evaluated quality characteristics of all. For improving, or at least preserving, the maintainability level of software it is necessary to evaluate it regularly. In this study we present a quality model based on the ISO/IEC 25010 international standard for evaluating the maintainability of software systems written in RPG. As an evaluation step of the quality model we show a case study in which we explain how we integrated the quality model as a continuous quality monitoring tool into the business processes of a mid-size software company which has more than twenty years of experience in developing RPG applications.
Keywords: DP industry; IBM computers; IEC standards; ISO standards; automatic programming; report generators; software maintenance; software quality; software reliability; software standards; source code (software); IBM i mainframe; ISO/IEC 25010 international standard; RPG programming language; business applications management; business processes; continuous quality monitoring tool; development costs; mid-size software company; national security; quality characteristic; reliability; reporting program generator; software maintainability level; software quality model; source code maintainability; Algorithms; Cloning; Complexity theory; Measurement; Object oriented modeling; Software; Standards; IBM i mainframe; ISO/IEC 25010; RPG quality model; Software maintainability; case study (ID#: 15-5412)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081819&isnumber=7081802

 

Xiaoli Lian; Li Zhang, "Optimized Feature Selection Towards Functional And Non-Functional Requirements In Software Product Lines," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 191-200, 2-6 March 2015. doi: 10.1109/SANER.2015.7081829
Abstract: As an important research issue in software product line, feature selection is extensively studied. Besides the basic functional requirements (FRs), the non-functional requirements (NFRs) are also critical during feature selection. Some NFRs have numerical constraints, while some have not. Without clear criteria, the latter are always expected to be the best possible. However, most existing selection methods ignore the combination of constrained and unconstrained NFRs and FRs. Meanwhile, the complex constraints and dependencies among features are perpetual challenges for feature selection. To this end, this paper proposes a multi-objective optimization algorithm IVEA to optimize the selection of features with NFRs and FRs by considering the relations among these features. Particularly, we first propose a two-dimensional fitness function. One dimension is to optimize the NFRs without quantitative constraints. The other one is to assure the selected features satisfy the FRs, and conform to the relations among features. Second, we propose a violation-dominance principle, which guides the optimization under FRs and the relations among features. We conducted comprehensive experiments on two feature models with different sizes to evaluate IVEA with state-of-the-art multi-objective optimization algorithms, including IBEAHD, IBEAε+, NSGA-II and SPEA2. The results showed that the IVEA significantly outperforms the above baselines in the NFRs optimization. Meanwhile, our algorithm needs less time to generate a solution that meets the FRs and the constraints on NFRs and fully conforms to the feature model.
Keywords: feature selection; genetic algorithms; software product lines; IBEAε+; IBEAHD; IVEA; NFR optimization; NSGA-II; SPEA2; multiobjective optimization algorithm; nonfunctional requirements; numerical constraint; optimized feature selection; selection method; software product line; two-dimensional fitness function; violation-dominance principle; Evolutionary computation; Optimization; Portals; Security; Sociology; Software; Statistics; Feature Models; Feature Selection; Multi-objective Optimization; Non-functional requirements optimization; Software Product Line (ID#: 15-5413)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081829&isnumber=7081802
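The two-dimensional fitness function described in the abstract can be sketched as follows; the constraint and scoring scheme here is an illustrative assumption, not the actual IVEA fitness:

```python
def fitness(selection, mandatory, excludes, nfr_score):
    """Two-dimensional fitness in the spirit of the paper: one dimension
    counts constraint violations (to be minimized), the other aggregates
    the quality of unconstrained NFRs (to be maximized)."""
    # Dimension 1: missing mandatory features and violated exclusions.
    violations = sum(1 for feature in mandatory if feature not in selection)
    violations += sum(1 for a, b in excludes if a in selection and b in selection)
    # Dimension 2: aggregate NFR quality of the selected features.
    quality = sum(nfr_score.get(feature, 0.0) for feature in selection)
    return violations, quality

nfr = {"core": 0.5, "gui": 0.3, "cli": 0.2}
print(fitness({"core", "gui"}, mandatory={"core"},
              excludes=[("gui", "cli")], nfr_score=nfr))
print(fitness({"gui", "cli"}, mandatory={"core"},
              excludes=[("gui", "cli")], nfr_score=nfr))
```

A multi-objective optimizer would then prefer zero-violation selections first (the violation-dominance idea) and trade off NFR quality among them.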

 

Zekan, B.; Shtern, M.; Tzerpos, V., "Protecting Web Applications Via Unicode Extension," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 419-428, 2-6 March 2015. doi: 10.1109/SANER.2015.7081852
Abstract: Protecting web applications against security attacks, such as command injection, is an issue that has been attracting increasing attention as such attacks are becoming more prevalent. Taint tracking is an approach that achieves protection while offering significant maintenance benefits when implemented at the language library level. This allows the transparent re-engineering of legacy web applications without the need to modify their source code. Such an approach can be implemented at either the string or the character level.
Keywords: program debugging; security of data; software maintenance; command injection; language library level; legacy Web application; maintenance benefit; security attack; taint tracking; unicode extension; Databases; Java; Operating systems; Prototypes; Security; Servers (ID#: 15-5414)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081852&isnumber=7081802
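The string-level taint tracking the abstract refers to can be sketched with a toy wrapper; the paper's Unicode-extension approach works at the library level rather than through an explicit wrapper class, so everything below is illustrative:

```python
class TaintedStr:
    """Minimal character-level taint-tracking string (illustrative only).

    Each character carries a taint bit; concatenation preserves taint
    per character, so untrusted fragments stay marked inside a larger
    string built from trusted and untrusted parts.
    """
    def __init__(self, text, tainted=False):
        self.text = text
        self.taint = [tainted] * len(text)

    def __add__(self, other):
        joined = TaintedStr(self.text + other.text)
        joined.taint = self.taint + other.taint
        return joined

def safe_query(fragment):
    """Reject query strings whose SQL metacharacters are tainted."""
    for ch, tainted in zip(fragment.text, fragment.taint):
        if tainted and ch in "';--":
            return False
    return True

trusted = TaintedStr("SELECT * FROM users WHERE name = '")
user_input = TaintedStr("x' OR '1'='1", tainted=True)
query = trusted + user_input + TaintedStr("'")
print(safe_query(query))  # False: tainted quote characters reach the query
```

Because the check runs inside the (hypothetical) string library, legacy application code would not need modification, which is the maintenance benefit the abstract highlights.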

 

Cadariu, M.; Bouwers, E.; Visser, J.; van Deursen, A., "Tracking Known Security Vulnerabilities In Proprietary Software Systems," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 516-519, 2-6 March 2015. doi: 10.1109/SANER.2015.7081868
Abstract: Known security vulnerabilities can be introduced in software systems as a result of being dependent upon third-party components. These documented software weaknesses are “hiding in plain sight” and represent low hanging fruit for attackers. In this paper we present the Vulnerability Alert Service (VAS), a tool-based process to track known vulnerabilities in software systems throughout their life cycle. We studied its usefulness in the context of external software product quality monitoring provided by the Software Improvement Group, a software advisory company based in Amsterdam, the Netherlands. Besides empirically assessing the usefulness of the VAS, we have also leveraged it to gain insight and report on the prevalence of third-party components with known security vulnerabilities in proprietary applications.
Keywords: outsourcing; safety-critical software; software houses; software quality; Amsterdam; Netherlands; VAS usefulness assessment; documented software weaknesses; empirical analysis; external software product quality monitoring; known security vulnerability tracking; proprietary applications; proprietary software systems; software advisory company; software improvement group; software life cycle; software systems; third-party components; tool-based process; vulnerability alert service; Companies; Context; Java; Monitoring; Security; Software systems (ID#: 15-5415)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081868&isnumber=7081802
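Conceptually, a tool like the VAS reduces to cross-referencing a system's third-party components against a feed of known-vulnerability advisories. A minimal sketch (the data shapes and names are assumptions, not the VAS implementation):

```python
def vulnerable_dependencies(dependencies, advisories):
    """Match (name, version) dependency pairs against an advisory feed.

    dependencies: list of (name, version) tuples for a system.
    advisories: dict mapping (name, version) -> list of advisory IDs.
    Returns the alerts a tracking service would raise for this system.
    """
    alerts = []
    for name, version in dependencies:
        for advisory_id in advisories.get((name, version), []):
            alerts.append((name, version, advisory_id))
    return alerts

deps = [("libfoo", "1.2"), ("libbar", "0.9")]
feed = {("libbar", "0.9"): ["CVE-2014-0001"]}
print(vulnerable_dependencies(deps, feed))  # [('libbar', '0.9', 'CVE-2014-0001')]
```

Running such a check on every build, across a system's whole life cycle, is what turns a one-off audit into the continuous tracking process the paper studies.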

 

Kula, R.G.; German, D.M.; Ishio, T.; Inoue, K., "Trusting A Library: A Study Of The Latency To Adopt The Latest Maven Release," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 520-524, 2-6 March 2015. doi: 10.1109/SANER.2015.7081869
Abstract: With the popularity of open source library (re)use in both industrial and open source settings, 'trust' plays a vital role in third-party library adoption. Trust involves the assumption of both functional and non-functional correctness. Even with the aid of dependency management build tools such as Maven and Gradle, research has still found a latency to trust the latest release of a library. In this paper, we investigate the trust of OSS libraries. Our study of 6,374 systems in the Maven Super Repository suggests that 82% of systems are more trusting of adopting the latest library release into existing systems. We uncover the impact of Maven on latent and trusted library adoptions.
Keywords: public domain software; security of data; software libraries; trusted computing; Maven superrepository; OSS library; open source software library; trusted library adoption; Classification algorithms; Data mining; Java; Libraries; Market research; Software systems (ID#: 15-5416)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081869&isnumber=7081802

 

Laverdiere, M.-A.; Berger, B.J.; Merloz, E., "Taint Analysis of Manual Service Compositions Using Cross-Application Call Graphs," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 585-589, 2-6 March 2015. doi: 10.1109/SANER.2015.7081882
Abstract: We propose an extension over the traditional call graph to incorporate edges representing control flow between web services, named the Cross-Application Call Graph (CACG). We introduce a construction algorithm for applications built on the Jax-WS standard and validate its effectiveness on sample applications from Apache CXF and JBossWS. Then, we demonstrate its applicability for taint analysis over a sample application of our making. Our CACG construction algorithm accurately identifies service call targets 81.07% of the time on average. Our taint analysis obtains a F-Measure of 95.60% over a benchmark. The use of a CACG, compared to a naive approach, improves the F-Measure of a taint analysis from 66.67% to 100.00% for our sample application.
Keywords: Web services; data flow analysis; flow graphs; Apache CXF; CACG construction algorithm; F-measure; JBossWS; Jax-WS standard; Web services; control flow; cross-application call graph; manual service compositions; service call targets; taint analysis; Algorithm design and analysis; Androids; Benchmark testing; Java; Manuals; Security; Web services (ID#: 15-5417)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081882&isnumber=7081802
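For reference, the F-Measure quoted in the abstract is the harmonic mean of precision and recall; a minimal computation (the counts are made-up examples):

```python
def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall, the metric used to
    score the taint analysis in the abstract."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Balanced errors drag the score to 2/3; no errors yield a perfect 1.0,
# mirroring the 66.67% vs. 100.00% comparison in the abstract.
print(round(f_measure(10, 5, 5), 4))  # 0.6667
print(f_measure(10, 0, 0))            # 1.0
```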

 

Qingtao Jiang; Xin Peng; Hai Wang; Zhenchang Xing; Wenyun Zhao, "Summarizing Evolutionary Trajectory by Grouping and Aggregating Relevant Code Changes," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 361-370, 2-6 March 2015. doi: 10.1109/SANER.2015.7081846
Abstract: The lifecycle of a large-scale software system can undergo many releases. Each release often involves hundreds or thousands of revisions committed by many developers over time. Many code changes are made in a systematic and collaborative way. However, such systematic and collaborative code changes are often undocumented and hidden in the evolution history of a software system. It is desirable to recover commonalities and associations among dispersed code changes in the evolutionary trajectory of a software system. In this paper, we present SETGA (Summarizing Evolutionary Trajectory by Grouping and Aggregation), an approach to summarizing historical commit records as trajectory patterns by grouping and aggregating relevant code changes committed over time. SETGA extracts change operations from a series of commit records from version control systems. It then groups extracted change operations by their common properties from different dimensions such as change operation types, developers and change locations. After that, SETGA aggregates relevant change operation groups by mining various associations among them. The proposed approach has been implemented and applied to three open-source systems. The results show that SETGA can identify various types of trajectory patterns that are useful for software evolution management and quality assurance.
Keywords: public domain software; software maintenance; software quality; SETGA; evolution history; historical commit records; large-scale software system; open-source systems; relevant code changes; software evolution management; software quality assurance; summarizing evolutionary trajectory by grouping and aggregation; trajectory patterns; Data mining; History; Software systems; Systematics; Trajectory; Code Change; Evolution; Mining; Pattern; Version Control System (ID#: 15-5418)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081846&isnumber=7081802
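SETGA's grouping step, collecting change operations that share common properties along chosen dimensions, can be sketched like this (the operation fields and values are hypothetical):

```python
from collections import defaultdict

def group_change_operations(ops, dimensions):
    """Group extracted change operations by shared properties.

    ops: list of dicts describing change operations mined from commits.
    dimensions: property names to group on, e.g. ("type", "developer").
    Returns a dict mapping each property combination to operation IDs;
    aggregation across groups would then mine associations among them.
    """
    groups = defaultdict(list)
    for op in ops:
        key = tuple(op[d] for d in dimensions)
        groups[key].append(op["id"])
    return dict(groups)

ops = [
    {"id": 1, "type": "rename", "developer": "alice"},
    {"id": 2, "type": "rename", "developer": "alice"},
    {"id": 3, "type": "move", "developer": "bob"},
]
print(group_change_operations(ops, ("type", "developer")))
```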



International Conferences: CODASPY 15, San Antonio, Texas

 


 

The Fifth ACM Conference on Data and Application Security and Privacy (CODASPY '15) was held in San Antonio, Texas on March 2-5, 2015. The conference aims to provide a dedicated venue for high-quality research in the data and applications arena and seeks to foster a community with a focus on cyber security. The CODASPY web page is available at: http://codaspy.org/


 

Jonathan Dautrich, Chinya Ravishankar; “Tunably-Oblivious Memory: Generalizing ORAM to Enable Privacy-Efficiency Tradeoffs;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 313-324. doi: 10.1145/2699026.2699097
Abstract: We consider the challenge of providing privacy-preserving access to data outsourced to an untrusted cloud provider. Even if data blocks are encrypted, access patterns may leak valuable information. Oblivious RAM (ORAM) protocols guarantee full access pattern privacy, but even the most efficient ORAMs to date require roughly L log2 N block transfers to satisfy an L-block query, for block store capacity N. We propose a generalized form of ORAM called Tunably-Oblivious Memory (λ-TOM) that allows a query's public access pattern to assume any of λ possible lengths. Increasing λ yields improved efficiency at the cost of weaker privacy guarantees. 1-TOM protocols are as secure as ORAM. We also propose a novel, special-purpose TOM protocol called Staggered-Bin TOM (SBT), which efficiently handles large queries that are not cache-friendly. We also propose a read-only SBT variant called Multi-SBT that can satisfy such queries with only O(L + log N) block transfers in the best case, and only O(L log N) transfers in the worst case, while leaking only O(log log log N) bits of information per query. Our experiments show that for N = 2^24 blocks, Multi-SBT achieves practical bandwidth costs as low as 6X those of an unprotected protocol for large queries, while leaking at most 3 bits of information per query.
Keywords: data privacy, oblivious ram, privacy trade off (ID#: 15-5533)
URL: http://doi.acm.org/10.1145/2699026.2699097
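The privacy-efficiency dial that λ-TOM exposes can be illustrated with length padding: the server only ever observes one of λ public query lengths, so a larger λ means less padding overhead but more leaked length information. The geometric spacing below is an assumption for illustration, not the paper's construction:

```python
import math

def allowed_lengths(n_blocks, lam):
    """Pick lam public query lengths, geometrically spaced from 1 to
    the block-store capacity (illustrative spacing only)."""
    if lam == 1:
        return [n_blocks]  # one public length: every query looks identical
    ratio = n_blocks ** (1 / (lam - 1))
    return sorted({max(1, math.ceil(ratio ** i)) for i in range(lam)})

def padded_length(query_len, lengths):
    """The server observes the padded length, never the true one."""
    for length in lengths:
        if length >= query_len:
            return length
    return lengths[-1]

lengths = allowed_lengths(1024, 3)
print(lengths)                     # [1, 32, 1024]
print(padded_length(20, lengths))  # a 20-block query is padded to 32
```

With λ = 1 every query is padded to full capacity (ORAM-like privacy, maximal cost); growing λ shrinks padding but reveals which of the λ length buckets each query falls into.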

 

Matthias Neugschwandtner, Paolo Milani Comparetti, Istvan Haller, Herbert Bos; “The BORG: Nanoprobing Binaries for Buffer Overreads;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 87-97. doi: 10.1145/2699026.2699098
Abstract: Automated program testing tools typically try to explore, and cover, as much of a tested program as possible, while attempting to trigger and detect bugs. An alternative and complementary approach can be to first select a specific part of a program that may be subject to a specific class of bug, and then narrowly focus exploration towards program paths that could trigger such a bug.  In this work, we introduce the BORG (Buffer Over-Read Guard), a testing tool that uses static and dynamic program analysis, taint propagation and symbolic execution to detect buffer overread bugs in real-world programs. BORG works by first selecting buffer accesses that could lead to an overread and then guiding symbolic execution towards those accesses along program paths that could actually lead to an overread. BORG operates on binaries and does not require source code. To demonstrate BORG's effectiveness, we use it to detect overreads in six complex server applications and libraries, including lighttpd, FFmpeg and ClamAV.
Keywords: buffer overread, dynamic symbolic execution, out-of-bounds access, symbolic execution guidance, targeted testing (ID#: 15-5534)
URL: http://doi.acm.org/10.1145/2699026.2699098

 

Sebastian Banescu, Alexander Pretschner, Dominic Battre, Stefano Cazzulani, Robert Shield, Greg Thompson; “Software-Based Protection against ‘Changeware’;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 231-242. doi: 10.1145/2699026.2699099
Abstract: We call changeware software that surreptitiously modifies resources of software applications, e.g., configuration files. Changeware is developed by malicious entities which gain profit if their changeware is executed by large numbers of end-users of the targeted software. Browser hijacking malware is one popular example that aims at changing web-browser settings such as the default search engine or the home page. Changeware tends to provoke end-user dissatisfaction with the target application, e.g. due to repeated failure of persisting the desired configuration. We describe a solution to counter changeware, to be employed by vendors of software targeted by changeware. It combines several protection mechanisms: white-box cryptography to hide a cryptographic key, software diversity to counter automated key retrieval attacks, and run-time process memory integrity checking to avoid illegitimate calls of the developed API.
Keywords: integrity protection, malware defense, obfuscation, software diversity, software protection, white-box cryptography (ID#: 15-5535)
URL: http://doi.acm.org/10.1145/2699026.2699099

 

Jan Henrik Ziegeldorf, Fred Grossmann, Martin Henze, Nicolas Inden, Klaus Wehrle; “CoinParty: Secure Multi-Party Mixing of Bitcoins;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 75-86. doi: 10.1145/2699026.2699100
Abstract: Bitcoin is a digital currency that uses anonymous cryptographic identities to achieve financial privacy. However, Bitcoin's promise of anonymity is broken, as recent work shows how Bitcoin's blockchain exposes users to reidentification and linking attacks. In consequence, different mixing services have emerged which promise to randomly mix a user's Bitcoins with other users' coins to provide anonymity based on the unlinkability of the mixing. However, proposed approaches suffer either from weak security guarantees and single points of failure, or small anonymity sets and missing deniability. In this paper, we propose CoinParty, a novel, decentralized mixing service for Bitcoin based on a combination of decryption mixnets with threshold signatures. CoinParty is secure against malicious adversaries, and the evaluation of our prototype shows that it scales easily to a large number of participants in real-world network settings. By the application of threshold signatures to Bitcoin mixing, CoinParty achieves anonymity orders of magnitude higher than related work, as we quantify by analyzing transactions in the actual Bitcoin blockchain, and is first among related approaches to provide plausible deniability.
Keywords: anonymity, bitcoin, secure multi-party computation (ID#: 15-5536)
URL: http://doi.acm.org/10.1145/2699026.2699100

 

Muhammad Ihsanulhaq Sarfraz, Mohamed Nabeel, Jianneng Cao, Elisa Bertino; “DBMask: Fine-Grained Access Control on Encrypted Relational Databases;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 1-11. doi: 10.1145/2699026.2699101
Abstract: For efficient data management and economic benefits, organizations are increasingly moving towards the paradigm of "database as a service" by which their data are managed by a database management system (DBMS) hosted in a public cloud. However, data are the most valuable asset in an organization, and inappropriate data disclosure puts the organization's business at risk. Therefore, data are usually encrypted in order to preserve their confidentiality. Past research has extensively investigated query processing on encrypted data. However, a naive encryption scheme negates the benefits provided by the use of a DBMS. In particular, past research efforts have not adequately addressed flexible cryptographically enforced access control on encrypted data at different granularity levels which is critical for data sharing among different users and applications. In this paper, we propose DBMask, a novel solution that supports fine-grained cryptographically enforced access control, including column, row and cell level access control, when evaluating SQL queries on encrypted data. Our solution does not require modifications to the database engine, and thus maximizes the reuse of the existing DBMS infrastructures. Our experiments evaluate the performance and the functionality of an encrypted database and results show that our solution is efficient and scalable to large datasets.
Keywords: attribute-based group key management, database-as-a-service, encrypted query processing (ID#: 15-5537)
URL: http://doi.acm.org/10.1145/2699026.2699101

 

Mihai Maruseac, Gabriel Ghinita; “Differentially-Private Mining of Moderately-Frequent High-Confidence Association Rules;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 13-24. doi: 10.1145/2699026.2699102
Abstract: Association rule mining allows the discovery of patterns in large data repositories, and benefits diverse application domains such as healthcare, marketing, social studies, etc. However, mining datasets that contain data about individuals may cause significant privacy breaches, and disclose sensitive information about one's health status, political orientation or alternative lifestyle. Recent research addressed the privacy threats that arise when mining sensitive data, and several techniques allow data mining with differential privacy guarantees. However, existing methods only discover rules that have very large support, i.e., occur in a large fraction of the dataset transactions (typically, more than 50%). This is a serious limitation, as numerous high-quality rules do not reach such high frequencies (e.g., rules about rare diseases, or luxury merchandise). In this paper, we propose a method that focuses on mining high-quality association rules with moderate and low frequencies. We employ a novel technique for rule extraction that combines the exponential mechanism of differential privacy with reservoir sampling. The proposed algorithm allows us to directly mine association rules, without the need to compute noisy supports for large numbers of itemsets. We provide a privacy analysis of the proposed method, and we perform an extensive experimental evaluation which shows that our technique is able to sample low- and moderate-support rules with high precision.
Keywords: association rule mining, differential privacy (ID#: 15-5538)
URL: http://doi.acm.org/10.1145/2699026.2699102
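The exponential mechanism that the paper combines with reservoir sampling selects a candidate with probability proportional to exp(ε·q/(2·Δ)), where q is the candidate's quality score and Δ its sensitivity. A self-contained sketch (the candidate rules and scores are invented, and the reservoir-sampling part is omitted):

```python
import math
import random

def exponential_mechanism(candidates, quality, epsilon, sensitivity=1.0, rng=random):
    """Sample one candidate with probability proportional to
    exp(epsilon * quality / (2 * sensitivity))."""
    weights = [math.exp(epsilon * quality(c) / (2 * sensitivity))
               for c in candidates]
    threshold = rng.random() * sum(weights)
    running = 0.0
    for candidate, weight in zip(candidates, weights):
        running += weight
        if threshold <= running:
            return candidate
    return candidates[-1]

# Hypothetical association rules, scored by confidence.
rules = [("bread->butter", 0.9), ("milk->eggs", 0.6), ("rare->rule", 0.3)]
picked = exponential_mechanism(rules, quality=lambda r: r[1], epsilon=2.0,
                               rng=random.Random(42))
print(picked)
```

Higher-quality rules are more likely to be drawn, but every rule retains some probability, which is what gives the mechanism its differential-privacy guarantee.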

 

Zhongwen Zhang, Peng Liu, Ji Xiang, Jiwu Jing, Lingguang Lei; “How Your Phone Camera Can Be Used to Stealthily Spy on You: Transplantation Attacks against Android Camera Service;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 99-110. doi: 10.1145/2699026.2699103
Abstract: Based on the observation that spy-on-user attacks made by calling Android APIs will be detected by Android API auditing, we studied the possibility of a "transplantation attack", through which a malicious app can take privacy-harming pictures to spy on users without the Android API auditing being aware of it. Usually, to take a picture, apps need to call APIs of the Android Camera Service, which runs in the mediaserver process. A transplantation attack transplants the picture-taking code from the mediaserver process to a malicious app process, and the malicious app can call this code to take a picture in its own address space without any IPC. As a result, the API auditing can be evaded. Our experiments confirm that the transplantation attack indeed exists. Also, the transplantation attack makes the spy-on-user attack much more stealthy. The evaluation result shows that nearly half of the 69 smartphones (manufactured by 8 vendors) tested let the transplantation attack we discovered succeed. Moreover, the attack can evade 7 antivirus detectors, as well as Android Device Administration, a set of APIs that can be used to carry out mobile device management in enterprise environments. The transplantation attack inspires us to uncover a subtle design/implementation deficiency of Android security.
Keywords: android, android camera service, spy on users, transplantation attack (ID#: 15-5539)
URL: http://doi.acm.org/10.1145/2699026.2699103

 

Irfan Ahmed, Vassil Roussev, Aisha Ali Gombe; “Robust Fingerprinting for Relocatable Code;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 219-229. doi: 10.1145/2699026.2699104
Abstract: Robust fingerprinting of executable code contained in a memory image is a prerequisite for a large number of security and forensic applications, especially in a cloud environment. Prior state of the art has focused specifically on identifying kernel versions by means of complex differential analysis of several aspects of the kernel code implementation.  In this work, we present a novel technique that can identify any relocatable code, including the kernel, based on inherent patterns present in relocation tables. We show that such patterns are very distinct and can be used to accurately and efficiently identify known executables in a memory snapshot, including remnants of prior executions. We develop a research prototype, codeid, and evaluate its efficacy on more than 50,000 sample executables containing kernels, kernel modules, applications, dynamic link libraries, and malware. The empirical results show that our method achieves almost 100% accuracy with zero false negatives.
Keywords: cloud security, code fingerprinting, codeid, malware detection, memory analysis, virtual machine introspection (ID#: 15-5540)
URL: http://doi.acm.org/10.1145/2699026.2699104
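The paper's key observation, that relocation entries form a distinctive pattern independent of load address, can be reduced to hashing only the gaps between relocation offsets. This toy sketch is not codeid itself:

```python
import hashlib

def relocation_fingerprint(reloc_offsets):
    """Hash the gaps between successive relocation entries; the gap
    pattern is unchanged when the module is rebased. (Gaps over 255
    bytes would wrap here, a simplification of the real technique.)"""
    offsets = sorted(reloc_offsets)
    deltas = bytes((b - a) % 256 for a, b in zip(offsets, offsets[1:]))
    return hashlib.sha256(deltas).hexdigest()

# The same module loaded at two base addresses fingerprints identically.
base0 = [0x1000, 0x1008, 0x1020, 0x1028]
base1 = [offset + 0x7F000 for offset in base0]
print(relocation_fingerprint(base0) == relocation_fingerprint(base1))  # True
```

Matching such fingerprints against a database of known executables is what lets a memory-forensics tool identify code in a snapshot regardless of where it was loaded.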

 

Yury Zhauniarovich, Maqsood Ahmad, Olga Gadyatskaya, Bruno Crispo, Fabio Massacci; “StaDynA: Addressing the Problem of Dynamic Code Updates in the Security Analysis of Android Applications;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 37-48. doi: 10.1145/2699026.2699105
Abstract: Static analysis of Android applications can be hindered by the presence of the popular dynamic code update techniques: dynamic class loading and reflection. Recent Android malware samples do actually use these mechanisms to conceal their malicious behavior from static analyzers. These techniques defuse even the most recent static analyzers that usually operate under the "closed world" assumption (the targets of reflective calls can be resolved at analysis time; only classes reachable from the class path at analysis time are used at runtime). Our proposed solution allows existing static analyzers to remove this assumption. This is achieved by combining static and dynamic analysis of applications in order to reveal the hidden/updated behavior and extend static analysis results with this information. This paper presents design, implementation and preliminary evaluation results of our solution called StaDynA.
Keywords: android, dynamic code updates, security analysis (ID#: 15-5541)
URL: http://doi.acm.org/10.1145/2699026.2699105

 

Fang Liu, Xiaokui Shu, Danfeng Yao, Ali R. Butt; “Privacy-Preserving Scanning of Big Content for Sensitive Data Exposure with MapReduce;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 195-206. doi: 10.1145/2699026.2699106
Abstract: The exposure of sensitive data in storage and transmission poses a serious threat to organizational and personal security. Data leak detection aims at scanning content (in storage or transmission) for exposed sensitive data. Because of the large content and data volume, such a screening algorithm needs to be scalable for a timely detection. Our solution uses the MapReduce framework for detecting exposed sensitive content, because it has the ability to arbitrarily scale and utilize public resources for the task, such as Amazon EC2. We design new MapReduce algorithms for computing collection intersection for data leak detection. Our prototype implemented with the Hadoop system achieves 225 Mbps analysis throughput with 24 nodes. Our algorithms support a useful privacy-preserving data transformation. This transformation enables the privacy-preserving technique to minimize the exposure of sensitive data during the detection. This transformation supports the secure outsourcing of the data leak detection to untrusted MapReduce and cloud providers.
Keywords: collection intersection, data leak detection, mapreduce, scalability (ID#: 15-5542)
URL: http://doi.acm.org/10.1145/2699026.2699106
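The collection-intersection step the abstract describes can be mimicked in miniature with plain Python map and reduce phases (the record format and source tags are assumptions; the paper's algorithms also handle the privacy-preserving transformation, which this sketch omits):

```python
from collections import defaultdict

def map_phase(records):
    # Each record is (source, ngram); emit the n-gram as the grouping key.
    for source, ngram in records:
        yield ngram, source

def reduce_phase(mapped):
    # An n-gram is a potential leak when both the sensitive collection
    # and the scanned content emitted it.
    groups = defaultdict(set)
    for ngram, source in mapped:
        groups[ngram].add(source)
    return {g for g, sources in groups.items()
            if {"sensitive", "content"} <= sources}

records = [("sensitive", "ssn:123"), ("sensitive", "key:9f"),
           ("content", "ssn:123"), ("content", "hello")]
leaks = reduce_phase(map_phase(records))
```

In a real deployment the two phases would run as Hadoop map and reduce tasks over sharded inputs rather than in one process.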

 

Jason Gionta, William Enck, Peng Ning; “HideM: Protecting the Contents of Userspace Memory in the Face of Disclosure Vulnerabilities;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 325-336. Doi: 10.1145/2699026.2699107
Abstract: Memory disclosure vulnerabilities have become a common component for enabling reliable exploitation of systems by leaking the contents of executable data. Previous research towards protecting executable data from disclosure has failed to gain popularity due to large performance penalties and required architectural changes. Other research has focused on protecting application data but fails to consider a vulnerable application that leaks its own executable data.  In this paper we present HideM, a practical system for protecting against memory disclosures in contemporary commodity systems. HideM addresses limitations in existing advanced security protections (e.g., fine-grained ASLR, CFI) wherein an adversary discloses executable data from memory, reasons about protection weaknesses, and builds corresponding exploits. HideM uses the split-TLB architecture, commonly found in CPUs, to enable fine-grained execute and read permission on memory. HideM enforces fine-grained permission based on policy generated from binary structure thus enabling protection of Commercial-Off-The-Shelf (COTS) binaries. In our evaluation of HideM, we find application overhead ranges from a 6.5% increase to a 2% reduction in runtime and observe runtime memory overhead ranging from 0.04% to 25%. HideM requires adversaries to guess ROP gadget locations making exploitation unreliable. We find adversaries have less than a 16% chance of correctly guessing a single gadget across all 28 evaluated applications. Thus, HideM is a practical system for protecting vulnerable applications which leak executable data.
Keywords: code reuse attacks, information leaks, memory disclosure exploits, memory protection, return-oriented programming (ID#: 15-5543)
URL: http://doi.acm.org/10.1145/2699026.2699107

 

Christopher S. Gates, Jing Chen, Zach Jorgensen, Ninghui Li, Robert W. Proctor, Ting Yu; “Understanding and Communicating Risk for Mobile Applications;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 49-60. Doi: 10.1145/2699026.2699108
Abstract: Mobile platforms, such as Android, warn users about the permissions an app requests and trust that the user will make the correct decision about whether or not to install the app. Unfortunately, many users either ignore the warning or fail to understand the permissions and the risks they imply. As a step toward developing an indicator of risk that decomposes risk into several categories, or dimensions, we conducted two studies designed to assess the dimensions of risk deemed most important by experts and novices. In Study 1, semi-structured interviews were conducted with 19 security experts, who also performed a card sorting task in which they categorized permissions. The experts identified three major risk dimensions in the interviews (personal information privacy, monetary risk, and device availability/stability), and a fourth dimension (data integrity) in the card sorting task. In Study 2, 350 typical Android users, recruited via Amazon Mechanical Turk, filled out a questionnaire in which they (a) answered questions concerning their mobile device usage, (b) rated how often they considered each of several types of information when installing apps, (c) indicated what they considered to be the biggest risk associated with installing an app on their mobile device, and (d) rated their concerns with regard to specific risk types and about apps having access to specific types of information. In general, the typical users' concerns were similar to those of the security experts. The results of the studies suggest that risk information should be organized into several risk types that can be better understood by users and that a mid-level risk summary should incorporate the dimensions of personal information privacy, monetary risk, device availability/stability risk and data integrity risk.
Keywords: android, mobile security, risk, smartphones (ID#: 15-5544)
URL: http://doi.acm.org/10.1145/2699026.2699108

 

Jing Qiu, Babak Yadegari, Brian Johannesmeyer, Saumya Debray, Xiaohong Su; “Identifying and Understanding Self-Checksumming Defenses in Software;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 207-218. Doi: 10.1145/2699026.2699109
Abstract: Software self-checksumming is widely used as an anti-tampering mechanism for protecting intellectual property and deterring piracy. This makes it important to understand the strengths and weaknesses of various approaches to self-checksumming. This paper describes a dynamic information-flow-based attack that aims to identify and understand self-checksumming behavior in software. Our approach is applicable to a wide class of self-checksumming defenses and the information obtained can be used to determine how the checksumming defenses may be bypassed. Experiments using a prototype implementation of our ideas indicate that our approach can successfully identify self-checksumming behavior in (our implementations of) proposals from the research literature.
Keywords: checksum, dynamic taint analysis, tamperproofing (ID#: 15-5545)
URL: http://doi.acm.org/10.1145/2699026.2699109

 

Erman Pattuk, Murat Kantarcioglu, Huseyin Ulusoy; “BigGate: Access Control Framework for Outsourced Key-Value Stores;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 171-182. Doi: 10.1145/2699026.2699110
Abstract: Due to its scalable design, key-value stores have become the backbone of many large-scale Internet companies that need to cope with millions of transactions every day. It is also an attractive cloud outsourcing technology: driven by economical benefits, many major companies like Amazon, Google, and Microsoft provide key-value storage services to their customers. However, customers are reluctant to utilize such services due to security and privacy concerns. Outsourced sensitive key-value data (e.g., social security numbers as keys, and health reports as value) may be stolen by third-party adversaries and/or malicious insiders. Furthermore, an institution, who is utilizing key-value storage services, may naturally desire to have access control mechanisms among its departments or users, while leaking as little information as possible to the cloud provider to preserve data privacy. We believe that addressing these security and privacy concerns is crucial in further adoption of key-value storage services. In this paper, we present a novel system, BigGate, that provides secure outsourcing and efficient processing of encrypted key-value data, and enforces access control policies. We formally prove the security of our system, and by carefully implemented empirical analysis, show that the overhead induced by BigGate can be as low as 2%.
Keywords: access control, cloud computing, key-value stores, outsourcing, searchable encryption, security and privacy (ID#: 15-5546)
URL: http://doi.acm.org/10.1145/2699026.2699110

 

Syed Hussain, Asmaa Sallam, Elisa Bertino; “DetAnom: Detecting Anomalous Database Transactions by Insiders;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 25-35. Doi: 10.1145/2699026.2699111
Abstract: Database Management Systems (DBMSs) provide access control mechanisms that allow database administrators (DBAs) to grant application programs access privileges to databases. However, securing the database alone is not enough, as attackers aiming at stealing data can take advantage of vulnerabilities in the privileged applications and cause applications to issue malicious database queries. Therefore, even though the access control mechanism can prevent application programs from accessing the data to which the programs are not authorized, it is unable to prevent misuse of the data to which application programs are authorized for access. Hence, we need a mechanism able to detect malicious behavior resulting from previously authorized applications. In this paper, we design and implement an anomaly detection mechanism, DetAnom, that creates a profile of the application program which can succinctly represent the application's normal behavior in terms of its interaction (i.e., submission of SQL queries) with the database. For each query, the profile keeps a signature and also the corresponding constraints that the application program must satisfy to submit that query. Later, in the detection phase, whenever the application issues a query, the corresponding signature and constraints are checked against the current context of the application. If there is a mismatch, the query is marked as anomalous. The main advantage of our anomaly detection mechanism is that we need neither any previous knowledge of application vulnerabilities nor any example of possible attacks to build the application profiles. As a result, our DetAnom mechanism is able to protect the data from attacks tailored to database applications such as code modification attacks and SQL injections, as well as from other data-centric attacks. We have implemented our mechanism with a software testing technique called concolic testing and the PostgreSQL DBMS. Experimental results show that our profiling technique is close to accurate and requires an acceptable amount of time, and that the detection mechanism incurs low run-time overhead.
Keywords: anomaly detection, application profile, database, insider attacks, sql injection (ID#: 15-5547)
URL: http://doi.acm.org/10.1145/2699026.2699111
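The signature-plus-constraint profile described in the abstract might look like the following sketch (the program point name, signature format, and constraint predicate are hypothetical, not DetAnom's actual representation):

```python
# Profile built ahead of time (e.g., via concolic testing): each program
# point maps to the query signature expected there and a constraint over
# the application's runtime context.
profile = {
    "lookup_user": {
        "signature": "SELECT name FROM users WHERE id = ?",
        "constraint": lambda ctx: ctx.get("authenticated") is True,
    },
}

def is_anomalous(point, query_signature, ctx):
    entry = profile.get(point)
    if entry is None:
        return True                       # query from an unknown program point
    if query_signature != entry["signature"]:
        return True                       # e.g., an injected or modified query
    return not entry["constraint"](ctx)   # context violates the constraint
```

The detection phase intercepts each outgoing query and rejects or flags those for which `is_anomalous` returns true.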

 

Khalid Bijon, Ram Krishnan, Ravi Sandhu; “Virtual Resource Orchestration Constraints in Cloud Infrastructure as a Service;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 183-194. Doi: 10.1145/2699026.2699112
Abstract: In an infrastructure as a service (IaaS) cloud, virtualized IT resources such as compute, storage and network are offered on demand by a cloud service provider (CSP) to its tenants (customers). A major problem for enterprise-scale tenants that typically obtain significant amount of resources from a CSP concerns orchestrating those resources in a secure manner. For instance, unlike configuring physical hardware, virtual resources in IaaS are configured using software, and hence prone to misconfigurations that can lead to critical security violations. Examples of such resource orchestration operations include creating virtual machines with appropriate operating system and software images depending on their purpose, creating networks, connecting virtual machines to networks, attaching a storage volume to a particular virtual machine, etc. In this paper, we propose attribute-based constraints specification and enforcement as a means to mitigate this issue. High-level constraints specified using attributes of virtual resources prevent resource orchestration operations that can lead to critical misconfigurations. Our model allows tenants to customize the attributes of their resources and specify fine-grained constraints. We further propose a constraint mining approach to automatically generate constraints once the tenants specify the attributes for virtual resources. We present our model, enforcement challenges, and its demonstration in OpenStack, the de facto open-source cloud IaaS software.
Keywords: cloud iaas, configuration policy, constraints, security policy mining (ID#: 15-5548)
URL: http://doi.acm.org/10.1145/2699026.2699112
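A single attribute-based constraint of the kind the paper proposes could be expressed as below (the `sensitivity` attribute, its levels, and the attach rule are invented for illustration):

```python
# Constraint: a storage volume may only be attached to a virtual machine
# whose sensitivity level dominates the volume's, preventing a common
# misconfiguration (secret data mounted on a public-facing VM).
LEVELS = {"public": 0, "internal": 1, "secret": 2}

def may_attach(vm_attrs, volume_attrs):
    return LEVELS[vm_attrs["sensitivity"]] >= LEVELS[volume_attrs["sensitivity"]]
```

An orchestration layer would evaluate such predicates before executing each resource operation, rejecting those that fail.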

 

Kadhim Hayawi, Alireza Mortezaei, Mahesh Tripunitara; “The Limits of the Trade-Off Between Query-Anonymity and Communication-Cost in Wireless Sensor Networks;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 337-348. Doi: 10.1145/2699026.2699113
Abstract: We address query-anonymity, the property that the destination of a client's query is indistinguishable from other potential destinations, in the context of wireless sensor networks. Prior work has established that this is an important issue, and has also pointed out that there appears to be a natural trade-off between query-anonymity and communication-cost. We explore what we call the limits of this trade-off: what is the communication-cost that is sufficient to achieve a certain query-anonymity, and what is the communication-cost that we must necessarily incur to achieve a certain query-anonymity? Towards this, we point out that two notions of query-anonymity that prior work in this context proposes are not meaningful. We propose an unconditional notion of query-anonymity that we argue has intuitive appeal. We then establish the limits of the trade-off. In particular, we show that in wireless sensor networks whose topology is a square grid and are source-routed, the necessary and sufficient communication-cost for query-anonymity asymptotically smaller than n, where n is the number of nodes in the network, is dependent on n only, and the necessary and sufficient communication-cost for query-anonymity larger than n is dependent on the desired query-anonymity only. We then generalize to topologies that are arbitrary connected undirected graphs, an exercise that involves a novel approach based on a spanning tree for the graph. We show that the diameter of the graph is the inflection point in the trade-off. We discuss extensions of our results to other settings, such as those in which routes are not necessarily shortest-paths. We also validate our analytical insights empirically, via simulations in Tossim, a de facto standard approach for wireless sensor networks. In summary, our work establishes sound and interesting theoretical results for query-anonymity in wireless sensor networks, and validates them empirically.
Keywords: query-anonymity, wireless sensor networks (ID#: 15-5549)
URL: http://doi.acm.org/10.1145/2699026.2699113

 

Zhi Xu, Sencun Zhu; “SemaDroid: A Privacy-Aware Sensor Management Framework for Smartphones;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 61-72. Doi: 10.1145/2699026.2699114
Abstract: While mobile sensing applications are booming, the sensor management mechanisms in current smartphone operating systems are left behind -- they are incomprehensive and coarse-grained, exposing a huge attack surface for malicious or aggressive third party apps to steal user's private information through mobile sensors.  In this paper, we propose a privacy-aware sensor management framework, called SemaDroid, which extends the existing sensor management framework on Android to provide comprehensive and fine-grained access control over onboard sensors. SemaDroid allows the user to monitor the sensor usage of installed apps, and to control the disclosure of sensing information while not affecting the app's usability. Furthermore, SemaDroid supports context-aware and quality-of-sensing based access control policies. The enforcement and update of the policies are in real-time. Detailed design and implementation of SemaDroid on Android are presented to show that SemaDroid works compatible with the existing Android security framework. Demonstrations are also given to show the capability of SemaDroid on sensor management and on defeating emerging sensor-based attacks. Finally, we show the high efficiency and security of SemaDroid.
Keywords: android, phone sensing, privacy-aware, sensor management, smartphone (ID#: 15-5550)
URL: http://doi.acm.org/10.1145/2699026.2699114
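Quality-of-sensing based access control, as the abstract sketches it, degrades rather than denies sensor readings; a toy per-app location policy (the app names and granularity units are assumptions, not SemaDroid's policy language) might be:

```python
# Policy: number of decimal places of GPS precision each app may receive.
# Disclosure is degraded to the permitted quality instead of blocked,
# so the app keeps working.
policy = {"weather_app": 1, "navigation_app": 5}

def read_location(app, lat, lon):
    digits = policy.get(app, 0)   # unknown apps get the coarsest reading
    return round(lat, digits), round(lon, digits)
```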

 

Keith Dyer, Rakesh Verma; “On the Character of Phishing URLs: Accurate and Robust Statistical Learning Classifiers;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 111-122. Doi: 10.1145/2699026.2699115
Abstract: Phishing attacks resulted in an estimated $3.2 billion worth of stolen property in 2007, and the success rate for phishing attacks is increasing each year [17]. Phishing attacks are becoming harder to detect and more elusive by using short time windows to launch attacks. In order to combat the increasing effectiveness of phishing attacks, we propose that combining statistical analysis of website URLs with machine learning techniques will give a more accurate classification of phishing URLs. Using a two-sample Kolmogorov-Smirnov test along with other features we were able to accurately classify 99.3% of our dataset, with a false positive rate of less than 0.4%. Thus, accuracy of phishing URL classification can be greatly increased through the use of these statistical measures.
Keywords: character distributions, kolmogorov-smirnov distance, kullback-leibler divergence, phishing url classification (ID#: 15-5551)
URL: http://doi.acm.org/10.1145/2699026.2699115
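The two-sample Kolmogorov-Smirnov feature can be computed over a URL's character codes as follows (a minimal sketch; the paper combines this with further features and a learned classifier, and the sample URLs below are made up):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    # Two-sample KS statistic: the maximum gap between the two
    # empirical CDFs, evaluated at every observed value.
    a, b = sorted(sample_a), sorted(sample_b)
    def cdf(sorted_sample, x):
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in sorted(set(a) | set(b)))

benign = [ord(c) for c in "www.example.com/index.html"]
phish = [ord(c) for c in "x0-3231.tk/~a%3Dzz%2F9"]
```

A large statistic against a reference distribution of legitimate URLs is one signal that a URL is machine-generated.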

 

Mehmet Kuzu, Mohammad Saiful Islam, Murat Kantarcioglu; “Distributed Search over Encrypted Big Data;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 271-278. Doi: 10.1145/2699026.2699116
Abstract: Nowadays, huge amount of documents are increasingly transferred to the remote servers due to the appealing features of cloud computing. On the other hand, privacy and security of the sensitive information in untrusted cloud environment is a big concern. To alleviate such concerns, encryption of sensitive data before its transfer to the cloud has become an important risk mitigation option. Encrypted storage provides protection at the expense of a significant increase in the data management complexity. For effective management, it is critical to provide efficient selective document retrieval capability on the encrypted collection. In fact, considerable amount of searchable symmetric encryption schemes have been designed in the literature to achieve this task. However, with the emergence of big data everywhere, available approaches are insufficient to address some crucial real-world problems such as scalability. In this study, we focus on practical aspects of a secure keyword search mechanism over encrypted data. First, we propose a provably secure distributed index along with a parallelizable retrieval technique that can easily scale to big data. Second, we integrate authorization into the search scheme to limit the information leakage in multi-user setting where users are allowed to access only particular documents. Third, we offer efficient updates on the distributed secure index. In addition, we conduct extensive empirical analysis on a real dataset to illustrate the efficiency of the proposed practical techniques.
Keywords: privacy, searchable encryption, security (ID#: 15-5552)
URL: http://doi.acm.org/10.1145/2699026.2699116
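The core of a searchable symmetric encryption index, stripped of the paper's distribution, authorization, and update machinery, is a keyed trapdoor lookup. A minimal sketch follows (the key, token scheme, and document IDs are invented; real SSE schemes also hide access and frequency patterns):

```python
import hashlib
import hmac

KEY = b"client-secret-key"   # held by the client, never by the server

def trapdoor(keyword):
    # The server stores and matches only HMAC tokens, never plaintext
    # keywords; only a key holder can derive the search token.
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

def build_index(docs):        # docs: {doc_id: [keywords]}
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(trapdoor(w), set()).add(doc_id)
    return index

def search(index, keyword):
    return index.get(trapdoor(keyword), set())

index = build_index({"d1": ["cloud", "privacy"], "d2": ["cloud"]})
```

Sharding `index` across servers by token prefix is one way such a structure scales out, in the spirit of the paper's distributed design.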

 

Jonathan Dautrich, Chinya Ravishankar; “Combining ORAM with PIR to Minimize Bandwidth Costs;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 289-296. Doi: 10.1145/2699026.2699117
Abstract: Cloud computing allows customers to outsource the burden of data management and benefit from economy of scale, but privacy concerns limit its reach. Even if the stored data are encrypted, access patterns may leak valuable information. Oblivious RAM (ORAM) protocols guarantee full access pattern privacy, but even the most efficient ORAMs proposed to date incur large bandwidth costs.  We combine Private Information Retrieval (PIR) techniques with the most bandwidth-efficient existing ORAM scheme known to date (ObliviStore), to create OS+PIR, a new ORAM with bandwidth costs only half those of ObliviStore. For data block counts ranging from 2^20 to 2^30, OS+PIR achieves a total bandwidth cost of only 11X-13X blocks transferred per client block read+write, down from ObliviStore's 18X-26X. OS+PIR introduces several enhancements in addition to PIR in order to achieve its lower costs, including mechanisms for eliminating unused dummy blocks.
Keywords: data privacy, oblivious ram, private information retrieval (ID#: 15-5553)
URL: http://doi.acm.org/10.1145/2699026.2699117

 

Steven Van Acker, Daniel Hausknecht, Andrei Sabelfeld; “Password Meters and Generators on the Web: From Large-Scale Empirical Study to Getting It Right;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 253-262. Doi: 10.1145/2699026.2699118
Abstract: Web services heavily rely on passwords for user authentication. To help users choose stronger passwords, password meter and password generator facilities are becoming increasingly popular. Password meters estimate the strength of passwords provided by users. Password generators help users with generating stronger passwords. This paper turns the spotlight on the state of the art of password meters and generators on the web. Orthogonal to the large body of work on password metrics, we focus on getting password meters and generators right in the web setting. We report on the state of affairs via a large-scale empirical study of web password meters and generators. Our findings reveal pervasive trust in third-party code to have access to the passwords. We uncover three cases when this trust is abused to leak the passwords to third parties. Furthermore, we discover that often the passwords are sent out to the network, invisibly to users, and sometimes in the clear. To improve the state of the art, we propose SandPass, a general web framework that allows secure and modular porting of password meter and generation modules. We demonstrate the usefulness of the framework by a reference implementation and a case study with a password meter by the Swedish Post and Telecommunication Agency.
Keywords: passwords, sandboxing, web security (ID#: 15-5554)
URL: http://doi.acm.org/10.1145/2699026.2699118

 

Mauro Conti, Luigi Mancini, Riccardo Spolaor, Nino Vincenzo Verde; “Can’t You Hear Me Knocking: Identification of User Actions On Android Apps Via Traffic Analysis;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 297-304. Doi: 10.1145/2699026.2699119
Abstract: While smartphone usage becomes more and more pervasive, people also start asking to which extent such devices can be maliciously exploited as "tracking devices". The concern is not only related to an adversary taking physical or remote control of the device, but also to what a passive adversary without the above capabilities can observe from the device communications. Work in this latter direction aimed, for example, at inferring the apps a user has installed on his device, or identifying the presence of a specific user within a network. In this paper, we move a step forward: we investigate to which extent it is feasible to identify the specific actions that a user is doing on mobile apps, by eavesdropping their encrypted network traffic. We design a system that achieves this goal by using advanced machine learning techniques. We did a complete implementation of this system and ran a thorough set of experiments, which show that it can achieve accuracy and precision higher than 95% for most of the considered actions.
Keywords: machine learning, mobile security, network traffic analysis, privacy (ID#: 15-5555)
URL: http://doi.acm.org/10.1145/2699026.2699119   
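The underlying intuition, that each user action leaves a characteristic pattern of encrypted packet sizes, can be shown with a toy nearest-centroid classifier (the features, traffic traces, and action labels below are fabricated; the paper's system uses far richer features and stronger learners):

```python
def features(packet_sizes):
    # Two crude features of a traffic burst: mean packet size and count.
    n = len(packet_sizes)
    return (sum(packet_sizes) / n, n)

def centroid(samples):
    feats = [features(s) for s in samples]
    return tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))

def classify(centroids, trace):
    f = features(trace)
    return min(centroids,
               key=lambda a: sum((x - y) ** 2 for x, y in zip(centroids[a], f)))

# Hypothetical labeled training traces for two in-app actions.
centroids = {
    "send_message": centroid([[120, 130, 1400], [110, 140, 1380]]),
    "refresh_feed": centroid([[1400] * 8, [1380] * 7 + [1420]]),
}
```

Even this crude classifier separates the two invented actions, which is what makes encrypted traffic a privacy concern.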

 

Mohammad Islam, Mehmet Kuzu, Murat Kantarcioglu; “A Dynamic Approach to Detect Anomalous Queries on Relational Databases;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 235-246. Doi: 10.1145/2557547.2557561
Abstract: To mitigate security concerns of outsourced databases, quite a few protocols have been proposed that outsource data in encrypted format and allow encrypted query execution on the server side. Among the more practical protocols, the "bucketization" approach facilitates query execution at the cost of reduced efficiency by allowing some false positives in the query results. Precise Query Protocols (PQPs), on the other hand, enable the server to execute queries without incurring any false positives. Even though these protocols do not reveal the underlying data, they reveal query access pattern to an adversary. In this paper, we introduce a general attack on PQPs based on access pattern disclosure in the context of secure range queries. Our empirical analysis on several real world datasets shows that the proposed attack is able to disclose significant amount of sensitive data with high accuracy provided that the attacker has reasonable amount of background knowledge. We further demonstrate that a slight variation of such an attack can also be used on imprecise protocols (e.g., bucketization) to disclose significant amount of sensitive information.
Keywords: database-as-a-service, encrypted range query, inference attack (ID#: 15-5556)
URL: http://doi.acm.org/10.1145/2557547.2557561

 

Xiaofeng Xu, Li Xiong, Jinfei Liu; “Database Fragmentation with Confidentiality Constraints: A Graph Search Approach;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 263-270. Doi: 10.1145/2699026.2699121
Abstract: Database fragmentation is a promising approach that can be used in combination with encryption to achieve secure data outsourcing which allows clients to securely outsource their data to remote untrusted server(s) while enabling query support using the outsourced data. Given a set of confidentiality constraints, it vertically partitions the database into fragments such that the set of attributes in each constraint do not appear together in any one fragment. The optimal fragmentation problem is to find a fragmentation with minimum cost for query support. In this paper, we propose an efficient graph search based approach which obtains near optimal fragmentation. We model the fragmentation search space as a graph and propose efficient search algorithms on the graph. We present static and dynamic search strategies as well as a novel level-wise graph expansion technique which dramatically reduces the search time. Extensive experiments showed that our method significantly outperforms other state-of-the-art methods.
Keywords: confidentiality constraints, fragmentation, graph search, secure data outsourcing (ID#: 15-5557)
URL: http://doi.acm.org/10.1145/2699026.2699121
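The correctness condition for a fragmentation, that no fragment contains all attributes of any confidentiality constraint, can be demonstrated with a simple greedy pass (the attributes and constraints are invented; the paper's contribution is the graph search that makes the fragmentation near-optimal for query cost, which this sketch does not attempt):

```python
def violates(fragment, attr, constraints):
    # Adding attr must not complete any confidentiality constraint.
    return any(c <= (fragment | {attr}) for c in constraints)

def fragment(attributes, constraints):
    fragments = []
    for attr in attributes:
        for frag in fragments:
            if not violates(frag, attr, constraints):
                frag.add(attr)
                break
        else:
            fragments.append({attr})   # no safe fragment: open a new one
    return fragments

constraints = [{"name", "ssn"}, {"name", "illness"}]
frags = fragment(["name", "ssn", "illness", "zip"], constraints)
```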

 

Bo Chen, Anil Kumar Ammula, Reza Curtmola; “Towards Server-side Repair for Erasure Coding-based Distributed Storage Systems;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 281-288. Doi: 10.1145/2699026.2699122
Abstract: Erasure coding is one of the main mechanisms to add redundancy in a distributed storage system, by which a file with k data segments is encoded into a file with n coded segments such that any k coded segments can be used to recover the original k data segments. Each coded segment is stored at a storage server. Under an adversarial setting in which the storage servers can exhibit Byzantine behavior, remote data checking (RDC) can be used to ensure that the stored data remains retrievable over time. The main previous RDC scheme to offer such strong security guarantees, HAIL, has an inefficient repair procedure, which puts a high load on the data owner when repairing even one corrupt data segment. In this work, we propose RDC-EC, a novel RDC scheme for erasure code-based distributed storage systems that can function under an adversarial setting. With RDC-EC we offer a solution to an open problem posed in previous work and build the first such system that has an efficient repair phase. The main insight is that RDC-EC is able to reduce the load on the data owner during the repair phase (i.e., lower bandwidth and computation) by shifting most of the burden from the data owner to the storage servers during repair. RDC-EC is able to maintain the advantages of systematic erasure coding: optimal storage for a certain reliability level and sub-file access. We build a prototype for RDC-EC and show experimentally that RDC-EC can handle efficiently large amounts of data.
Keywords: cloud storage, erasure coding, remote data integrity checking, server-side repair (ID#: 15-5558)
URL: http://doi.acm.org/10.1145/2699026.2699122
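The erasure-coding contract the abstract relies on, that any k of n coded segments recover the original k data segments, is easiest to see in the smallest case, k = 2 with one XOR parity segment (n = 3); production systems use general codes such as Reed-Solomon, but the recovery contract is the same:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(d0, d1):
    # Systematic code: the data segments are stored as-is, plus a parity.
    return [d0, d1, xor_bytes(d0, d1)]     # segment indices 0, 1, 2

def recover(segments):                      # segments: {index: bytes}
    # Any 2 of the 3 segments suffice to rebuild both data segments.
    if 0 in segments and 1 in segments:
        return segments[0], segments[1]
    if 0 in segments:
        return segments[0], xor_bytes(segments[0], segments[2])
    return xor_bytes(segments[1], segments[2]), segments[1]

coded = encode(b"AAAA", b"BBBB")
```

Repair of one lost segment is the same computation; RDC-EC's contribution is making the servers, not the data owner, do it securely.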

 

Dave Tian, Kevin Butler, Patrick McDaniel, Padma Krishnaswamy; “Securing ARP from the Ground Up;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 305-312. Doi: 10.1145/2699026.2699123
Abstract: The basis for all IPv4 network communication is the Address Resolution Protocol (ARP), which maps an IP address to a device's Media Access Control (MAC) identifier. ARP has long been recognized as vulnerable to spoofing and other attacks, and past proposals to secure the protocol have often involved modifying the basic protocol. This paper introduces arpsec, a secure ARP/RARP protocol suite which a) does not require protocol modification, b) enables continual verification of the identity of the target (respondent) machine by introducing an address binding repository derived using a formal logic that bases additions to a host's ARP cache on a set of operational rules and properties, c) utilizes the TPM, a commodity component now present in the vast majority of modern computers, to augment the logic-prover-derived assurance when needed, with TPM-facilitated attestations of system state achieved at viably low processing cost. Using commodity TPMs as our attestation base, we show that arpsec incurs an overhead ranging from 7% to 15.4% over the standard Linux ARP implementation and provides a first step towards a formally secure and trustworthy networking stack.
Keywords: arp, logic, spoofing, trusted computing, trusted protocols (ID#: 15-5559)
URL: http://doi.acm.org/10.1145/2699026.2699123
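A sketch of the rule-based cache admission idea (the rule set and the role of attestation here are simplified assumptions; arpsec's actual logic rules and TPM protocol are richer):

```python
def accept_binding(cache, ip, mac, attested=False):
    # Rule 1: a new or consistent IP-to-MAC binding is accepted.
    known = cache.get(ip)
    if known is None or known == mac:
        cache[ip] = mac
        return True
    # Rule 2: a conflicting rebinding (the classic spoofing pattern) is
    # accepted only when the claiming host's state has been attested.
    if attested:
        cache[ip] = mac
        return True
    return False    # otherwise reject the likely spoof

cache = {}
```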



International Conferences: Cloud Engineering (IC2E), 2015 Arizona

 

 

The 2015 IEEE International Conference on Cloud Engineering (IC2E) was held 9-13 March 2015 in Tempe, Arizona. The conference addresses cloud computing as

“a new paradigm for the use and delivery of information technology (IT), including on-demand access, economies of scale, and dynamic sourcing options. In the cloud context, a wide range of IT resources and capabilities, including servers, networking, storage, middleware, data, security, applications, and business processes, are available as services enabled for rapid provisioning, flexible pricing, elastic scaling, and resilience. These new forms of IT services are challenging conventional wisdom and practices. Fully reaping the benefits of cloud computing calls for holistic treatment of key technical and business issues, as well as for engineering methodology that draws upon innovations from diverse areas of computer science and business informatics.”

The conference home page is available at: http://conferences.computer.org/IC2E/2015/  Articles cited here are deemed of interest to the Cyber Physical Systems Science of Security community.


 

Youngchoon Park, "Connected Smart Buildings, a New Way to Interact with Buildings," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 5, 5, 9-13 March 2015. doi: 10.1109/IC2E.2015.57
Abstract: Summary form only given. Devices, people, information and software applications rarely live in isolation in modern building management. For example, networked sensors that monitor the performance of a chiller are common, and collected data are delivered to building automation systems to optimize energy use. Detected possible failures are also handed to facility management staff for repair. Physical and cyber security services have to be incorporated to prevent improper access to not only HVAC (Heating, Ventilation, Air Conditioning) equipment but also control devices. Harmonizing these connected sensors, control devices, equipment and people is key to providing more comfortable, safe and sustainable buildings. Nowadays, devices with embedded intelligence and communication capabilities can interact with people directly. Traditionally, a select few (e.g., facility managers in the building industry) could access and program a device with a fixed operating schedule, while the device had very limited connectivity to its operating environment and context. Modern connected devices will learn from and interact with users and other connected things. This is a fundamental shift from unidirectional to bi-directional communication. A manufacturer will learn how its products and features are being accessed and utilized. An end user, or a device acting on a user's behalf, can interact and communicate with a service provider or manufacturer without going through a distributor, on an almost real-time basis. This will require different business strategies and product development behaviors to serve connected customers' demands. Connected things produce enormous amounts of data that raise many questions and technical challenges in data management, analysis and associated services. In this talk, we brief some of the challenges we have encountered in developing connected building solutions and services. More specifically, we present (1) semantic interoperability requirements among smart sensors, actuators, lighting, security and control, and business applications, (2) engineering challenges in managing massively large, time-sensitive multimedia data in a cloud at global scale, and (3) security and privacy concerns.
Keywords: HVAC; building management systems; intelligent sensors; HVAC; actuators; building automation systems; building management; business strategy; chiller performance; connected smart buildings; control devices; cyber security services; data management; facility management staffs; heating-ventilation-air conditioning equipment; lighting; networked sensors; product development behaviors; service provider; smart sensors; time sensitive multimedia data; Building automation; Business; Conferences; Intelligent sensors; Security; Building Management; Cloud; Internet of Things (ID#: 15-5429)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092892&isnumber=7092808

 

Singh, J.; Pasquier, T.F.J.-M.; Bacon, J.; Eyers, D., "Integrating Messaging Middleware and Information Flow Control," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 54, 59, 9-13 March 2015. doi: 10.1109/IC2E.2015.13
Abstract: Security is an ongoing challenge in cloud computing. Currently, cloud consumers have few mechanisms for managing their data within the cloud provider's infrastructure. Information Flow Control (IFC) involves attaching labels to data, to govern its flow throughout a system. We have worked on kernel-level IFC enforcement to protect data flows within a virtual machine (VM). This paper makes the case for, and demonstrates the feasibility of an IFC-enabled messaging middleware, to enforce IFC within and across applications, containers, VMs, and hosts. We detail how such middleware can integrate with local (kernel) enforcement mechanisms, and highlight the benefits of separating data management policy from application/service-logic.
Keywords: cloud computing; data protection; middleware; security of data; virtual machines; VM; application logic; cloud computing; cloud consumers; cloud provider infrastructure; data flow protection; data management policy; information flow control; kernel enforcement mechanisms; kernel-level IFC enforcement; local enforcement mechanisms; messaging middleware integration; service-logic; virtual machine; Cloud computing; Context; Kernel; Runtime; Security; Servers; Information Flow Control; cloud computing; distributed systems; middleware; policy; security (ID#: 15-5430)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092899&isnumber=7092808
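The core IFC idea the paper builds on — labels attached to data govern where it may flow — can be sketched in a few lines (the label names and function are illustrative, not the authors' middleware API):

```python
# Hedged sketch of label-based Information Flow Control: a message may flow
# to a destination only if the destination's secrecy clearance covers every
# secrecy tag on the message, so tagged data can never leak to an
# under-privileged component.

def can_flow(msg_secrecy: set, dest_secrecy: set) -> bool:
    """True when every secrecy tag on the message is held by the destination."""
    return msg_secrecy <= dest_secrecy

# A clinical record tagged {"medical"} may reach a research service cleared
# for medical data, but a patient-specific tag blocks the broader flow.
assert can_flow({"medical"}, {"medical", "research"})
assert not can_flow({"medical", "patient-7"}, {"medical"})
```

An IFC-enabled messaging middleware would evaluate a check like this on every publish/deliver step, in concert with the kernel-level enforcement the authors describe within a VM.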

 

Routray, R., "Cloud Storage Infrastructure Optimization Analytics," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 92, 92, 9-13 March 2015. doi: 10.1109/IC2E.2015.83
Abstract: Summary form only given. Emergence and adoption of cloud computing have become widely prevalent given the value proposition it brings to an enterprise in terms of agility and cost effectiveness. Big data analytical capabilities (specifically, treating storage/system management as a big data problem for a service provider) using cloud delivery models are defined as Analytics as a Service or Software as a Service. This service simplifies obtaining useful insights from an operational enterprise data center, leading to cost and performance optimizations. Software defined environments decouple the control planes from the data planes that were often vertically integrated in traditional networking or storage systems. The decoupling between the control planes and the data planes enables opportunities for improved security, resiliency and IT optimization in general. This talk describes our novel approach of hosting the systems management platform (a.k.a. control plane) in the cloud, offered to enterprises in the Software as a Service (SaaS) model. Specifically, this presentation focuses on the analytics layer, with the SaaS paradigm enabling data centers to visualize, optimize and forecast infrastructure via a simple capture, analyze and govern framework. At the core, it uses big data analytics to extract actionable insights from system management metrics data. Our system is developed in research and deployed across customers, where the core focus is on agility, elasticity and scalability of the analytics framework. We demonstrate a few system/storage management analytics case studies to show cost and performance optimization for both the cloud consumer and the service provider. Actionable insights generated from the analytics platform are implemented in an automated fashion via an OpenStack based platform.
Keywords: cloud computing; data analysis; optimisation; Analytics as a Service; OpenStack based platform; SaaS model; Software as a Service; cloud computing; cloud delivery models; cloud storage infrastructure optimization analytics; data analytical capabilities; data analytics; data planes; management metric data system; management platform system; operational enterprise data center; performance optimizations; software defined environments; value proposition; Big data; Cloud computing; Computer science; Conferences; Optimization; Software as a service; Storage management (ID#: 15-5431)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092904&isnumber=7092808

 

Strizhov, M.; Ray, I., "Substring Position Search over Encrypted Cloud Data Using Tree-Based Index," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 165, 174, 9-13 March 2015. doi: 10.1109/IC2E.2015.33
Abstract: Existing Searchable Encryption (SE) solutions are able to handle simple boolean search queries, such as single or multi-keyword queries, but cannot handle substring search queries over encrypted data that also involve identifying the position of the substring within the document. These types of queries are relevant in areas such as searching DNA data. In this paper, we propose a tree-based Substring Position Searchable Symmetric Encryption (SSP-SSE) to overcome the existing gap. Our solution efficiently finds occurrences of a substring over encrypted cloud data. We formally define the leakage functions and security properties of SSP-SSE. Then, we prove that the proposed scheme is secure against chosen-keyword attacks that involve an adaptive adversary. Our analysis demonstrates that SSP-SSE introduces very low overhead on computation and storage.
Keywords: cloud computing; cryptography; query processing; trees (mathematics); DNA data; SSP-SSE; adaptive adversary; boolean search queries; chosen-keyword attacks; cloud data; leakage functions; multikeyword queries; security properties; single keyword queries; substring position search; substring position searchable symmetric encryption; tree-based index; Cloud computing; Encryption; Indexes; Keyword search; Probabilistic logic; cloud computing; position heap tree; searchable symmetric encryption; substring position search (ID#: 15-5432)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092914&isnumber=7092808
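The functionality target of SSP-SSE, stripped of the encryption and the position heap index, is ordinary substring-position search. A plaintext sketch (ours, for orientation only) shows what the scheme computes; the paper's contribution is doing this over encrypted data without revealing the document or the query:

```python
# Plaintext reference for the query SSP-SSE answers: every start offset of a
# pattern in a document, including overlapping occurrences. This is useful
# as a correctness oracle; it is NOT the paper's encrypted index.

def substring_positions(doc: str, pattern: str):
    """Return all start offsets of pattern in doc, in increasing order."""
    positions, start = [], doc.find(pattern)
    while start != -1:
        positions.append(start)
        start = doc.find(pattern, start + 1)  # advance by 1 to catch overlaps
    return positions

# A DNA-style example, matching the abstract's motivating use case.
assert substring_positions("GATTACAGATT", "GATT") == [0, 7]
```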

 

Qingji Zheng; Shouhuai Xu, "Verifiable Delegated Set Intersection Operations on Outsourced Encrypted Data," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 175, 184, 9-13 March 2015. doi: 10.1109/IC2E.2015.38
Abstract: We initiate the study of the following problem: Suppose Alice and Bob would like to outsource their encrypted private data sets to the cloud, and they also want to conduct the set intersection operation on their plaintext data sets. The straightforward solution for them is to download their outsourced cipher texts, decrypt the cipher texts locally, and then execute a commodity two-party set intersection protocol. Unfortunately, this solution is not practical. We therefore motivate and introduce the novel notion of Verifiable Delegated Set Intersection on outsourced encrypted data (VDSI). The basic idea is to delegate the set intersection operation to the cloud, while (i) not giving the decryption capability to the cloud, and (ii) being able to hold the misbehaving cloud accountable. We formalize security properties of VDSI and present a construction. In our solution, the computational and communication costs on the users are linear to the size of the intersection set, meaning that the efficiency is optimal up to a constant factor.
Keywords: cryptographic protocols; set theory; VDSI; encrypted private data sets; intersection protocol; outsourced cipher texts; outsourced encrypted data; plaintext data sets; set intersection operation; verifiable delegated set intersection operations; Cloud computing; Encryption; Gold; Polynomials; Protocols; outsourced encrypted data; verifiable outsourced computing; verifiable set intersection (ID#: 15-5433)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092915&isnumber=7092808
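The "straightforward solution" the abstract dismisses can be made concrete with a toy sketch (the XOR "cipher" is a placeholder, not real encryption): both parties download and decrypt everything, then intersect locally, so costs scale with the full set sizes rather than with the intersection size as in VDSI.

```python
# The impractical baseline that motivates VDSI: download both ciphertext
# sets, decrypt locally, intersect. Communication and computation are
# linear in |A| + |B|, whereas VDSI's user-side costs are linear in the
# intersection only, with the cloud held accountable for the work.

def naive_intersection(decrypt, alice_ct, bob_ct):
    return {decrypt(c) for c in alice_ct} & {decrypt(c) for c in bob_ct}

decrypt = lambda c: c ^ 0x2A          # toy stand-in for real decryption
alice = [x ^ 0x2A for x in (1, 2, 3)]  # Alice's outsourced "ciphertexts"
bob   = [x ^ 0x2A for x in (2, 3, 4)]  # Bob's outsourced "ciphertexts"
assert naive_intersection(decrypt, alice, bob) == {2, 3}
```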

 

Berger, S.; Goldman, K.; Pendarakis, D.; Safford, D.; Valdez, E.; Zohar, M., "Scalable Attestation: A Step Toward Secure and Trusted Clouds," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 185, 194, 9-13 March 2015. doi: 10.1109/IC2E.2015.32
Abstract: In this work we present Scalable Attestation, a method which combines both secure boot and trusted boot technologies, and extends them up into the host, its programs, and up into the guest's operating system and workloads, to both detect and prevent integrity attacks. Anchored in hardware, this integrity appraisal and attestation protects persistent data (files) from remote attack, even if the attack is root privileged. As an added benefit of a hardware rooted attestation, we gain a simple hardware based geolocation attestation to help enforce regulatory requirements. This design is implemented in multiple cloud test beds based on the QEMU/KVM hypervisor, Open Stack, and Open Attestation, and is shown to provide significant additional integrity protection at negligible cost.
Keywords: cloud computing; operating systems (computers); security of data; trusted computing; Open Attestation; Open Stack; QEMU/KVM hypervisor; cloud test beds; guest operating system; hardware based geolocation attestation; hardware rooted attestation; integrity attack detection; integrity attack prevention; integrity protection; regulatory requirements; scalable attestation; secure boot; secure clouds; trusted boot technologies; trusted clouds; Appraisal; Hardware; Kernel; Linux; Public key; Semiconductor device measurement; Attestation; Integrity; Security (ID#: 15-5434)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092916&isnumber=7092808

 

Kanstren, T.; Lehtonen, S.; Savola, R.; Kukkohovi, H.; Hatonen, K., "Architecture for High Confidence Cloud Security Monitoring," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 195, 200, 9-13 March 2015. doi: 10.1109/IC2E.2015.21
Abstract: Operational security assurance of a networked system requires providing constant and up-to-date evidence of its operational state. In a cloud-based environment we deploy our services as virtual guests running on external hosts. As this environment is not under our full control, we have to find ways to provide assurance that the security information provided from this environment is accurate, and our software is running in the expected environment. In this paper, we present an architecture for providing increased confidence in measurements of such cloud-based deployments. The architecture is based on a set of deployed measurement probes and trusted platform modules (TPM) across both the host infrastructure and guest virtual machines. The TPM are used to verify the integrity of the probes and measurements they provide. This allows us to ensure that the system is running in the expected environment, the monitoring probes have not been tampered with, and the integrity of measurement data provided is maintained. Overall this gives us a basis for increased confidence in the security of running parts of our system in an external cloud-based environment.
Keywords: cloud computing; security of data; virtual machines; TPM; external cloud-based environment; external hosts; guest virtual machines; high confidence cloud security monitoring; host infrastructure; measurement probes; networked system; operational security assurance; operational state; trusted platform modules; Computer architecture; Cryptography; Monitoring; Probes; Servers; Virtual machining; TPM; cloud; monitoring; secure element; security assurance (ID#: 15-5435)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092917&isnumber=7092808
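A toy analogue of the probe-integrity idea (ours; real TPM attestation uses PCR extension and signed quotes, not a bare hash comparison): measurements are trusted only if the probe's binary matches a known-good digest.

```python
import hashlib

# Simplified stand-in for TPM-backed probe verification: before accepting a
# monitoring probe's measurements, check a hash of the probe against a
# reference digest recorded when the probe was deployed.

def probe_trusted(probe_binary: bytes, reference_digest: str) -> bool:
    """True when the probe's current hash matches its deployment-time digest."""
    return hashlib.sha256(probe_binary).hexdigest() == reference_digest

good = b"probe-v1.0"
ref = hashlib.sha256(good).hexdigest()   # recorded at deployment time
assert probe_trusted(good, ref)                      # untampered probe
assert not probe_trusted(b"probe-v1.0-tampered", ref)  # modified probe rejected
```

In the paper's architecture the reference values and the comparison are anchored in TPMs on both the host infrastructure and the guest VMs, so a compromised host cannot forge the result.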

 

Calyam, P.; Seetharam, S.; Homchaudhuri, B.; Kumar, M., "Resource Defragmentation Using Market-Driven Allocation in Virtual Desktop Clouds," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 246, 255, 9-13 March 2015. doi: 10.1109/IC2E.2015.37
Abstract: Similar to memory or disk fragmentation in personal computers, emerging "virtual desktop cloud" (VDC) services experience the problem of data center resource fragmentation which occurs due to on-the-fly provisioning of virtual desktop (VD) resources. Irregular resource holes due to fragmentation lead to sub-optimal VD resource allocations, and cause: (a) decreased user quality of experience (QoE), and (b) increased operational costs for VDC service providers. In this paper, we address this problem by developing a novel, optimal "Market-Driven Provisioning and Placement" (MDPP) scheme that is based upon distributed optimization principles. The MDPP scheme channelizes the inherent distributed nature of the resource allocation problem by capturing VD resource bids via a virtual market to explore soft spots in the problem space, and consequently defragments a VDC through cost-aware utility-maximal VD re-allocations or migrations. Through extensive simulations of VD request allocations to multiple data centers for diverse VD application and user QoE profiles, we demonstrate that our MDPP scheme outperforms existing schemes that are largely based on centralized optimization principles. Moreover, the MDPP scheme can achieve high VDC performance and scalability, measurable in terms of a 'Net Utility' metric, even when VD resource location constraints are imposed to meet orthogonal security objectives.
Keywords: cloud computing; computer centres; microcomputers; quality of experience; resource allocation; MDPP scheme; VD request allocation simulations; VD resource on-the-fly provisioning; VDC service providers; centralized optimization principles; cost-aware utility-maximal VD re-allocations; data center resource fragmentation; disk fragmentation; distributed optimization principles; irregular resource holes; market-driven allocation; market-driven provisioning and placement scheme; memory fragmentation; multiple data centers; net utility metric; operational costs; orthogonal security; personal computers; sub-optimal VD resource allocation; user QoE profiles; user quality of experience; virtual desktop clouds services; Bandwidth; Joints; Measurement; Optimization; Resource management; Scalability; Virtual machining (ID#: 15-5436)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092926&isnumber=7092808

 

Pasquier, T.F.J.-M.; Singh, J.; Bacon, J., "Information Flow Control for Strong Protection with Flexible Sharing in PaaS," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 279, 282, 9-13 March 2015. doi: 10.1109/IC2E.2015.64
Abstract: The need to share data across applications is becoming increasingly evident. Current cloud isolation mechanisms focus solely on protection, such as containers that isolate at the OS-level, and virtual machines that isolate through the hypervisor. However, by focusing rigidly on protection, these approaches do not provide for controlled sharing. This paper presents how Information Flow Control (IFC) offers a flexible alternative. As a data-centric mechanism it enables strong isolation when required, while providing continuous, fine grained control of the data being shared. An IFC-enabled cloud platform would ensure that policies are enforced as data flows across all applications, without requiring any special sharing mechanisms.
Keywords: cloud computing; data protection; operating systems (computers); virtual machines; IFC-enabled cloud platform; OS-level; PaaS; cloud isolation mechanisms; data-centric mechanism; fine grained data control; flexible data sharing mechanism; hypervisor; information flow control; virtual machines; Cloud computing; Computers; Containers; Context; Kernel; Security (ID#: 15-5437)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092930&isnumber=7092808

 

Tawalbeh, L.; Haddad, Y.; Khamis, O.; Aldosari, F.; Benkhelifa, E., "Efficient Software-Based Mobile Cloud Computing Framework," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 317, 322, 9-13 March 2015. doi: 10.1109/IC2E.2015.48
Abstract: This paper proposes an efficient software based data possession mobile cloud computing framework. The proposed design utilizes the characteristics of two frameworks. The first one is the provable data possession design built for resource-constrained mobile devices and it uses the advantage of trusted computing technology, and the second framework is a lightweight resilient storage outsourcing design for mobile cloud computing systems. Our software based framework utilizes the strength aspects in both mentioned frameworks to gain better performance and security. The evaluation and comparison results showed that our design has better flexibility and efficiency than other related frameworks.
Keywords: cloud computing; data handling; mobile computing; outsourcing; resource constrained mobile devices; software based data possession mobile cloud computing framework; software based framework; storage outsourcing design; trusted computing technology; Cloud computing; Computational modeling; Encryption; Mobile communication; Mobile handsets; Servers; Mobile Cloud Computing; Security; Software Defined Storage; Software Defined Systems; Trusted Cloud Computing (ID#: 15-5438)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092937&isnumber=7092808

 

Slominski, A.; Muthusamy, V.; Khalaf, R., "Building a Multi-tenant Cloud Service from Legacy Code with Docker Containers," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 394, 396, 9-13 March 2015. doi: 10.1109/IC2E.2015.66
Abstract: In this paper we address the problem of migrating a legacy Web application to a cloud service. We develop a reusable architectural pattern to do so and validate it with a case study of the Beta release of the IBM Bluemix Workflow Service [1] (herein referred to as the Beta Workflow service). It uses Docker [2] containers and a Cloudant [3] persistence layer to deliver a multi-tenant cloud service by re-using a legacy codebase. We are not aware of any literature that addresses this problem by using containers. The Beta Workflow service provides a scalable, stateful, highly available engine to compose services with REST APIs. The composition is modeled as a graph but authored in a Javascript-based domain specific language that specifies a set of activities and control flow links among these activities. The primitive activities in the language can be used to respond to HTTP REST requests, invoke services with REST APIs, and execute Javascript code to, among other uses, extract and construct the data inputs and outputs to external services, and make calls to these services. Examples of workflows that have been built using the service include distributing surveys and coupons to customers of a retail store [1], the management of sales requests between a salesperson and their regional managers, managing the staged deployment of different versions of an application, and the coordinated transfer of jobs among case workers.
Keywords: Java; application program interfaces; cloud computing; specification languages; Beta Workflow service; Cloudant persistence layer; HTTP REST requests; IBM Bluemix Workflow Service; Javascript code; Javascript-based domain specific language; REST API; docker containers; legacy Web application; legacy codebase; multitenant cloud service; reusable architectural pattern; Browsers; Cloud computing; Containers; Engines; Memory management; Organizations; Security (ID#: 15-5439)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092950&isnumber=7092808

 

Paul, M.; Collberg, C.; Bambauer, D., "A Possible Solution for Privacy Preserving Cloud Data Storage," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 397, 403, 9-13 March 2015. doi: 10.1109/IC2E.2015.103
Abstract: Despite the economic advantages of cloud data storage, many corporations have not yet migrated to this technology. While corporations in the financial sector cite data security as a reason, corporations in other sectors cite privacy concerns for this reluctance. In this paper, we propose a possible solution for this problem inspired by the HIPAA safe harbor methodology for data anonymization. The proposed technique involves using a hash function that uniquely identifies the data and then splitting data across multiple cloud providers. We propose that such a "Good Enough" approach to privacy-preserving cloud data storage is both technologically feasible and financially advantageous. Following this approach addresses concerns about privacy harms resulting from accidental or deliberate data spills from cloud providers. The "Good Enough" method will enable firms to move their data into the cloud without incurring privacy risks, enabling them to realize the economic advantages provided by the pay-per-use model of cloud data storage.
Keywords: cloud computing; data privacy; security of data; HIPAA safe harbor methodology; data anonymization; data security; data splitting; financial sector; good enough approach; multiple cloud providers; pay-per-use model; privacy concerns; privacy preserving cloud data storage; Cloud computing; Data privacy; Indexes; Memory; Privacy; Security; Data Privacy; Cloud; Obfuscation (ID#: 15-5440)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092951&isnumber=7092808
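A minimal sketch of the splitting idea as we read the abstract (the function names and the even-chunking scheme are our assumptions, not the authors' design): a content hash identifies the record, and the payload is divided across providers so no single provider holds the whole plaintext.

```python
import hashlib

# Illustrative "Good Enough" storage sketch: hash the data to obtain a
# unique identifier, then store disjoint fragments with different cloud
# providers (modeled here as plain dicts). A spill at any one provider
# exposes only a fragment, not the full record.

def store_split(data: bytes, providers: list) -> str:
    key = hashlib.sha256(data).hexdigest()        # unique identifier for the data
    n = len(providers)
    chunk = (len(data) + n - 1) // n              # ceil-divide into n fragments
    for i, p in enumerate(providers):
        p[key] = data[i * chunk:(i + 1) * chunk]  # each provider sees one fragment
    return key

def retrieve(key: str, providers: list) -> bytes:
    return b"".join(p[key] for p in providers)    # reassemble in provider order

cloud_a, cloud_b = {}, {}
k = store_split(b"customer-record-0042", [cloud_a, cloud_b])
assert retrieve(k, [cloud_a, cloud_b]) == b"customer-record-0042"
assert cloud_a[k] != b"customer-record-0042"      # no provider holds it all
```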

 

Mutkoski, S., "National Cloud Computing Principles: Guidance for Public Sector Authorities Moving to the Cloud," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 404, 409, 9-13 March 2015. doi: 10.1109/IC2E.2015.104
Abstract: Governments around the world are actively seeking to leverage the many benefits of cloud computing while also ensuring that they manage risks that deployment of the new technologies can raise. While laws and regulations related to the privacy and security of government data may already exist, many were drafted in the "pre-cloud" era and could therefore benefit from an update and revision. This paper explores some of the concepts that should be incorporated into new or amended laws that seek to guide public sector entities as they move their data and workloads to the cloud.
Keywords: cloud computing; legislation; government data; national cloud computing legislation principles; precloud era; public sector authorities; Certification; Cloud computing; Computational modeling; Data privacy; Government; Legislation; Security; Cloud Computing; Public Sector; Regulation and Legislation; Risk Management; Security (ID#: 15-5441)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092952&isnumber=7092808

 

Pasquier, T.F.J.-M.; Powles, J.E., "Expressing and Enforcing Location Requirements in the Cloud Using Information Flow Control," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 410, 415, 9-13 March 2015. doi: 10.1109/IC2E.2015.71
Abstract: The adoption of cloud computing is increasing and its use is becoming widespread in many sectors. As cloud service provision increases, legal and regulatory issues become more significant. In particular, the international nature of cloud provision raises concerns over the location of data and the laws to which they are subject. In this paper we investigate Information Flow Control (IFC) as a possible technical solution to expressing, enforcing and demonstrating compliance of cloud computing systems with policy requirements inspired by data protection and other laws. We focus on geographic location of data, since this is the paradigmatic concern of legal/regulatory requirements on cloud computing and, to date, has not been met with robust technical solutions and verifiable data flow audit trails.
Keywords: cloud computing; data protection; geography; law; IFC; cloud computing; cloud service provision; data protection; geographic data location; information flow control; legal issues; legal/regulatory requirements; location requirement enforcement; location requirement expression; policy requirements; regulatory issues; verifiable data flow audit trails; Cloud computing; Companies; Context; Europe; Law; Security (ID#: 15-5442)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092953&isnumber=7092808

 

D'Errico, M.; Pearson, S., "Towards a Formalised Representation for the Technical Enforcement of Privacy Level Agreements," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 422, 427, 9-13 March 2015. doi: 10.1109/IC2E.2015.72
Abstract: Privacy Level Agreements (PLAs) are likely to be increasingly adopted as a standardized way for cloud providers to describe their data protection practices. In this paper we propose an ontology-based model that represents the information disclosed in the agreement, turning it into a form that software tools can use and further process for different purposes, including automated service offering discovery and comparison. A specific usage of the PLA ontology is presented, showing how to link high-level policies to the operational policies that are then enforced and monitored. Through this established link, cloud users gain greater assurance that what is expressed in such agreements is actually being met, and can take this information into account when choosing cloud service providers. Furthermore, the created link can be used to enable policy enforcement tools to add semantics to the evidence they produce; this mainly takes the form of logs associated with the specific policy whose execution they evidence. Finally, the ontology model provides a means of enabling interoperability among the tools in charge of enforcing the agreement and monitoring possible violations of its terms.
Keywords: data protection; ontologies (artificial intelligence); open systems; software tools; PLA ontology; cloud providers; data protection practices; formalised representation; high level policies; interoperability; ontology-based model; operational policies; policy enforcement tools; privacy level agreements; software tools; technical enforcement; Data models; Data privacy; Engines; Monitoring; Ontologies; Privacy; Programmable logic arrays; privacy policy; assurance; policy enforcement; Privacy Level Agreement (ID#: 15-5443)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092955&isnumber=7092808

 

Adelyar, S.H., "Towards Secure Agile Agent-Oriented System Design," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 499, 501, 9-13 March 2015. doi: 10.1109/IC2E.2015.95
Abstract: Agile methods are criticized as inadequate for developing secure digital services. Currently, the software research community studies security for agile practices only partially. Our more holistic approach identifies the security challenges and benefits of agile practices that relate to the core "embrace-changes" principle. For this case-study based research, we consider eXtreme Programming (XP) as the vehicle for a holistic integration of security into agile practices.
Keywords: object-oriented programming; security of data; software agents; software prototyping; XP; embrace-change principle; extreme programming; holistic security integration; secure agile agent-oriented system design; secure digital services; software research community; Agile software development; Cloud computing; Context; Planning; Programming; Security; Agile; Embrace-changes; Security; Challenges; Benefits (ID#: 15-5444)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092968&isnumber=7092808



International Conferences: Cryptography and Security in Computing Systems, 2015, Amsterdam

 

 

The Second Workshop on Cryptography and Security in Computing Systems (CS2) was held in Amsterdam 19 January 2015. The workshop describes itself as “a venue for security and cryptography experts to interact with the computer architecture and compilers community, aiming at cross-fertilization and multi-disciplinary approaches to security in computing systems.”  Conference details are available on its web page at: http://www.cs2.deib.polimi.it/   


 

Apostolos P. Fournaris, Nicolaos Klaoudatos, Nicolas Sklavos, Christos Koulamas; “Fault and Power Analysis Attack Resistant RNS based Edwards Curve Point Multiplication;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 43. doi: 10.1145/2694805.2694814
Abstract: In this paper, a road-map toward Fault Attack (FA) and Power Analysis Attack (PA) resistance is proposed. It combines the Edwards curves' innate PA resistance and a base point randomization Montgomery Power Ladder point multiplication (PM) algorithm, capable of providing broad FA and PA resistance, with the Residue Number System (RNS) representation for all GF(p) operations, in an effort to enhance the FA-PA resistance of point multiplication algorithms and additionally provide performance efficiency in terms of speed and hardware resources. The security of the proposed methodology is analyzed, and its efficiency is verified by designing a PM hardware architecture and FPGA implementation.
Keywords:  (not provided) (ID#: 15-5445)
URL: http://doi.acm.org/10.1145/2694805.2694814
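The Montgomery Power Ladder referenced in the abstract owes much of its side-channel resistance to a fixed, branch-balanced operation schedule: every scalar bit triggers the same pair of group operations regardless of its value. As an illustrative sketch only (shown here for modular exponentiation rather than the paper's RNS-based Edwards curve point multiplication), that regular structure looks like:

```python
def montgomery_ladder_pow(base, exponent, modulus):
    # Montgomery ladder: scan exponent bits MSB-first; each step performs
    # one multiply and one square whichever the bit value, so the
    # operation sequence does not depend on the secret bits.
    r0, r1 = 1, base % modulus
    for bit in bin(exponent)[2:]:
        if bit == '0':
            r1 = (r0 * r1) % modulus
            r0 = (r0 * r0) % modulus
        else:
            r0 = (r0 * r1) % modulus
            r1 = (r1 * r1) % modulus
    return r0
```

In the paper the same ladder structure drives Edwards curve point addition and doubling over GF(p) in RNS representation, with base point randomization on top; the sketch above conveys only the bit-by-bit regularity.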

 

Mathieu Carbone, Yannick Teglia, Philippe Maurine, Gilles R. Ducharme; “Interest of MIA in Frequency Domain?;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 35. doi: 10.1145/2694805.2694812
Abstract: Mutual Information Analysis (MIA) has a main advantage over Pearson's correlation analysis (CPA): its ability to detect any kind of leakage within traces. However, it remains rarely used and less popular than CPA, probably for two reasons. The first is the appropriate choice of the hyperparameters involved in MIA, a choice that determines its efficiency and genericity. The second is surely the high computational burden associated with MIA. The interest of applying MIA in the frequency domain rather than in the time domain is discussed. It is shown that MIA running in the frequency domain is really effective and fast when combined with the use of an accurate frequency leakage model.
Keywords: (not provided) (ID#: 15-5446)
URL: http://doi.acm.org/10.1145/2694805.2694812
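MIA scores a key hypothesis by the mutual information between measured leakage and the predicted intermediate value, rather than by a linear correlation coefficient. A minimal plug-in (histogram) estimator over discretized samples, given here purely as an illustration of the quantity involved and not as the authors' implementation, is:

```python
import math
from collections import Counter

def mutual_information_bits(xs, ys):
    # Plug-in estimate of I(X;Y) in bits from paired discrete samples:
    # I(X;Y) = sum over (x, y) of p(x,y) * log2( p(x,y) / (p(x) * p(y)) ).
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi
```

For perfectly dependent samples the estimate equals the entropy of the data (1 bit for a balanced binary variable) and drops to 0 under independence; MIA retains the key guess maximising this score. The frequency-domain variant discussed in the paper applies the same idea to spectral representations of the traces.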

 

Alexander Herrmann, Marc Stöttinger; “Evaluation Tools for Multivariate Side-Channel Analysis;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 1. doi: 10.1145/2694805.2694806
Abstract: The goal of side-channel evaluation is to estimate the vulnerability of an implementation against the most powerful attacks. In this paper, we present a closed-form equation for the success rate computation in a profiling-based side-channel analysis scenario. From this equation, we derive a metric that can be used for optimizing the attack scenario by finding the best set of considered points in time. Practical experiments demonstrate the advantages of this new method over previously used feature selection algorithms.
Keywords: Feature Selection, Multivariate Side-Channel Analysis (ID#: 15-5447)
URL: http://doi.acm.org/10.1145/2694805.2694806

 

Harris E. Michail, Lenos Ioannou, Artemios G. Voyiatzis; “Pipelined SHA-3 Implementations on FPGA: Architecture and Performance Analysis;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 13. doi: 10.1145/2694805.2694808
Abstract: Efficient and high-throughput designs of hash functions will be in great demand in the next few years, given that every IPv6 data packet is expected to be handled with some kind of security features. In this paper, pipelined implementations of the new SHA-3 hash standard on FPGAs are presented and compared, aiming to map the design space and inform the choice of the number of pipeline stages. The proposed designs support all four SHA-3 modes of operation. They also support processing of multiple messages, each comprising multiple blocks. Designs for up to a four-stage pipeline are presented for three generations of FPGAs, and the performance of the implementations is analyzed and compared in terms of the throughput/area metric. Several pipeline designs are explored in order to determine the one that achieves the best throughput/area performance. The results indicate that the FPGA technology characteristics must also be considered when choosing an efficient pipeline depth. Our designs perform better compared to the existing literature due to the extended optimization effort on the synthesis tool and the efficient design of multi-block message processing.
Keywords: Cryptography, FPGA, Hash function, Pipeline, Security (ID#: 15-5448)
URL: http://doi.acm.org/10.1145/2694805.2694808
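The throughput/area metric used to rank the candidate designs is straightforward to compute once block size, clock frequency, cycles per block and resource usage are known for each pipeline depth. A small helper, with entirely hypothetical design numbers (not figures from the paper), might look like:

```python
def throughput_mbps(block_bits, f_mhz, cycles_per_block):
    # Steady-state throughput of a full pipeline: one block of
    # block_bits finishes every cycles_per_block clock cycles.
    return block_bits * f_mhz / cycles_per_block

def best_by_throughput_per_area(designs):
    # designs: iterable of (name, block_bits, f_mhz, cycles_per_block, slices).
    # Select the candidate maximising throughput per unit area.
    return max(designs,
               key=lambda d: throughput_mbps(d[1], d[2], d[3]) / d[4])
```

For example, `best_by_throughput_per_area([("2-stage", 1088, 200, 12, 1500), ("4-stage", 1088, 310, 12, 2600)])` compares two hypothetical candidates; as the paper notes, the winner depends on how frequency and area actually scale on the target FPGA generation.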

 

Wei He, Alexander Herrmann; “Placement Security Analysis for Side-Channel Resistant Dual-Rail Scheme in FPGA;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 39. doi: 10.1145/2694805.2694813
Abstract: Physical implementations have significant impacts on the security level of hardware cryptography, mainly because the bottom-layer logic fundamentals typically act as the exploitable SCA leakage sources. As a widely studied countermeasure category, dual-rail precharged logic theoretically withstands side-channel analysis by compensating the data-dependent variations between the two rails. In this paper, different placement schemes for a dual-rail framework in Xilinx FPGAs are investigated with respect to silicon process variations. The presented work is based on the practical implementation of a light-weight crypto coprocessor. Stochastic Approach [9] based SNR estimation is used as a metric to quantify the measurable leakage over a series of EM traces acquired by surface scanning of a decapsulated Virtex-5 device. Experimental results show that by employing a highly interleaved and identical dual-rail style in the diagonal direction, the routing symmetry can be further optimized. This improvement results in less influence from process variation between the dual rails, which in turn yields a higher security grade in terms of signal-to-noise ratio.
Keywords: Dual-rail Precharge Logic, EM Surface Scan, FPGA, Side-Channel Analysis, Signal-to-Noise Ratio (SNR), Stochastic Approach (ID#: 15-5449)
URL: http://doi.acm.org/10.1145/2694805.2694813

 

Mohsen Toorani; “On Continuous After-the-Fact Leakage-Resilient Key Exchange;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 31. doi: 10.1145/2694805.2694811
Abstract: Recently, the Continuous After-the-Fact Leakage (CAFL) security model has been introduced for two-party authenticated key exchange (AKE) protocols. In the CAFL model, an adversary can adaptively request arbitrary leakage of long-term secrets even after the test session is activated. It supports continuous leakage even when the adversary learns certain ephemeral secrets or session keys. The amount of leakage is limited per query, but there is no bound on the total leakage. A generic leakage-resilient key exchange protocol π has also been introduced that is formally proved to be secure in the CAFL model. In this paper, we comment on the CAFL model and show that it does not capture its claimed security. We also present an attack and counterproofs for the security of protocol π, which invalidate the formal security proofs of protocol π in the CAFL model.
Keywords: Cryptographic protocols, Key exchange, Leakage-resilient cryptography, Security models (ID#: 15-5450)
URL: http://doi.acm.org/10.1145/2694805.2694811

 

Rainer Plaga, Dominik Merli;  “A New Definition and Classification of Physical Unclonable Functions;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 7.  doi: 10.1145/2694805.2694807
Abstract: A new definition of "Physical Unclonable Functions" (PUFs), the first to fully capture the intuitive idea experts hold of them, is presented. A PUF is an information-storage system with a security mechanism that is 1. meant to impede the duplication of a precisely described storage-functionality in another, separate system and 2. remains effective against an attacker with temporary access to the whole original system. A novel classification scheme of the security objectives and mechanisms of PUFs is proposed, and its usefulness in aiding future research and security evaluation is demonstrated. One class of PUF security mechanisms, which prevents an attacker from applying all addresses at which secrets are stored in the information-storage system, is shown to be closely analogous to cryptographic encryption. Its development marks the dawn of a new fundamental primitive of hardware-security engineering: cryptostorage. These results firmly establish PUFs as a fundamental concept of hardware security.
Keywords: Physical Unclonable Functions (ID#: 15-5451)
URL: http://doi.acm.org/10.1145/2694805.2694807

 

Loïc Zussa, Ingrid Exurville, Jean-Max Dutertre, Jean-Baptiste Rigaud, Bruno Robisson, Assia Tria, Jessy Clédière; “Evidence of an Information Leakage Between Logically Independent Blocks;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 25.  doi: 10.1145/2694805.2694810
Abstract: In this paper we study the information leakage that may exist, due to electrical coupling, between logically independent blocks of a secure circuit, as a new attack path to retrieve secret information. First, an AES-128 was implemented on an FPGA board. Then, this AES implementation was secured with a delay-based countermeasure against fault injection related to timing constraint violations. The countermeasure's detection threshold was supposed to be logically independent of the data handled by the cryptographic algorithm; thus, it theoretically does not leak any information related to sensitive values. However, experiments point out an existing correlation between the fault detection threshold of the countermeasure and the AES's calculations. As a result, we were able to retrieve the secret key of the AES using this correlation. Finally, different strategies were tested in order to minimize the number of triggered alarms needed to retrieve the secret key.
Keywords: 'DPA-like' analysis, Delay-based countermeasure, information leakage, side effects (ID#: 15-5452)
URL: http://doi.acm.org/10.1145/2694805.2694810

 

Paulo Martins, Leonel Sousa;  “Stretching the Limits of Programmable Embedded Devices for Public-key Cryptography;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 19.  doi: 10.1145/2694805.2694809
Abstract: In this work, the efficiency of embedded devices when operating as cryptographic accelerators is assessed, exploiting both multithreading and Single Instruction Multiple Data (SIMD) parallelism. The latency of a single modular multiplication is reduced by splitting computation across multiple cores, and the technique is applied to the Rivest-Shamir-Adleman (RSA) cryptosystem, reducing its central operation execution time by up to 2.2 times on an ARM A15 4-core processor. Also, algorithms are proposed to simultaneously perform multiple modular multiplications. The parallel algorithms are used to enhance the RSA and Elliptic Curve (EC) cryptosystems, obtaining speedups of up to 7.2 and 3.9 on the ARM processor, respectively. Whereas the first approach is most beneficial when a single RSA exponentiation is required, the latter provides better performance when multiple RSA exponentiations have to be computed.
Keywords: Embedded Systems, Parallel Algorithms, Public-key Cryptography, Single Instruction Multiple Data (ID#: 15-5453)
URL: http://doi.acm.org/10.1145/2694805.2694809
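The second technique in the abstract, performing many independent modular multiplications simultaneously, maps naturally onto data parallelism. A toy sketch of the batching idea (using a Python thread pool rather than the paper's SIMD/multicore ARM code, and purely illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def modmul(triple):
    # One independent modular multiplication.
    a, b, m = triple
    return (a * b) % m

def batch_modmul(triples, workers=4):
    # Dispatch independent multiplications across workers; in the paper
    # the same batching is realised with SIMD lanes and CPU cores.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(modmul, triples))
```

This batched form corresponds to the multi-exponentiation scenario the authors find most profitable, whereas splitting a single multiplication across cores (their first approach) helps when only one RSA exponentiation is pending.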



International Conferences: Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 Singapore

 

 

The Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP) was held on 7-9 April 2015 in Singapore. ISSNIP is a network of researchers created in 2004 to address fundamental cross-disciplinary issues of sensor networks and information processing in large, complex, distributed interacting systems with direct applications in health, environment and security. It brings together distinguished Australian and international researchers from mathematics, statistics, computing, biology, electrical engineering and mechanical engineering. The program seeks to advance knowledge; deliver generic models, algorithms and implementations; develop directly end-product deployable intellectual property; and create human resources for future research and employment in multiple domains. It is an Australian Research Council initiative. The conference home page is available at: http://www.issnip.org/  Articles cited here are deemed of particular interest to the Cyber-Physical Systems Science of Security virtual organization.


 

Nigussie, Ethiopia; Xu, Teng; Potkonjak, Miodrag, "Securing Wireless Body Sensor Networks Using Bijective Function-Based Hardware Primitive," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106907
Abstract: We present a novel lightweight hardware security primitive for wireless body sensor networks (WBSNs). Security of WBSNs is crucial, and the security solution must be lightweight due to resource constraints in the body sensor nodes. The presented security primitive is based on a digital implementation of a bidirectional bijective function. The one-to-one input-output mapping of the function is realized using a network of lookup tables (LUTs). The bidirectionality of the function enables implementation of security protocols with lower overheads. The configuration of the interstage interconnection between the LUTs serves as the shared secret key. Authentication, encryption/decryption and message integrity protocols are formulated using the proposed security primitive. The NIST randomness benchmark suite is applied to this security primitive, and it passes all the tests. It also achieves higher throughput and requires less area than AES-CCM.
Keywords: Authentication; Encryption; Protocols; Radiation detectors; Receivers; Table lookup (ID#: 15-5419)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106907&isnumber=7106892
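The primitive rests on a keyed bijection: a one-to-one input-output mapping, so every output value decodes to exactly one input. A minimal software analogue, offered only as an illustration (a seeded random byte permutation standing in for the paper's LUT network, with the seed playing the role of the shared secret interconnection configuration), is:

```python
import random

def keyed_bijection(seed, size=256):
    # Build a bijective byte mapping and its inverse from a shared secret.
    rng = random.Random(seed)
    forward = list(range(size))
    rng.shuffle(forward)
    inverse = [0] * size
    for i, v in enumerate(forward):
        inverse[v] = i
    return forward, inverse

forward, inverse = keyed_bijection(seed=2015)
encoded = [forward[b] for b in b"sensor"]
decoded = bytes(inverse[c] for c in encoded)  # round-trips to the input
```

The bidirectionality the abstract highlights is visible here: the same shared secret yields both the forward mapping and its inverse, so encryption and decryption (and challenge-response authentication) reuse one structure.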

 

Hoang Giang Do; Wee Keong Ng, "Privacy-Preserving Approach For Sharing And Processing Intrusion Alert Data," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106911
Abstract: Amplified and disruptive cyber-attacks might lead to severe security incidents with drastic consequences such as large property damage, sensitive information breach, or even disruption of the national economy. While a traditional intrusion detection and prevention system might successfully detect low or moderate levels of attack, cooperation among different organizations is necessary to defend against multi-stage and large-scale cyber-attacks. Correlating intrusion alerts from a shared database of multiple sources provides security analysts with succinct and high-level patterns of cyber-attacks - a powerful tool to combat sophisticated attacks. However, sharing intrusion alert data raises a significant privacy concern among data holders, since publishing this information risks exposing other sensitive information such as intranet topology, network services, and the security infrastructure. This paper discusses possible cryptographic approaches to tackle this issue. Organizations can encrypt their intrusion alert data to protect data confidentiality and outsource it to a shared server to reduce the cost of storage and maintenance while, at the same time, benefiting from a larger source of information for the alert correlation process. Two privacy-preserving alert correlation techniques are proposed under a semi-honest model. These methods are based on attribute similarity and prerequisite/consequence conditions of cyber-attacks.
Keywords: Encryption; Sensors (ID#: 15-5420)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106911&isnumber=7106892

 

Silva, Ricardo; Sa Silva, Jorge; Boavida, Fernando, "A Symbiotic Resources Sharing IoT Platform In The Smart Cities Context," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106922
Abstract: Large urban areas are nowadays covered by millions of wireless devices, including not only cellular equipment carried by their inhabitants, but also several ubiquitous and pervasive platforms used to monitor and/or actuate on a variety of phenomena in the city area. Whereas the former are increasingly powerful devices equipped with advanced processors, large memory capacity, high bandwidth, and several wireless interfaces, the latter are typically resource constrained systems. Despite their differences, both kinds of systems share the same ecosystem, and therefore, it is possible to build symbiotic relationships between them. Our research aims at creating a resource-sharing platform to support such relationships, in the perspective that resource unconstrained devices can assist constrained ones, while the latter can extend the features of the former. Resource sharing between heterogeneous networks in an urban area poses several challenges, not only from a technical point of view, but also from a social perspective. In this paper we present our symbiotic resource-sharing proposal while discussing its impact on networks and citizens.
Keywords: Cities and towns; Mobile communication; Mobile handsets; Security; Symbiosis; Wireless communication; Wireless sensor networks (ID#: 15-5421)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106922&isnumber=7106892

 

Alohali, Bashar Ahmed; Vassilakis, Vassilios G., "Secure And Energy-Efficient Multicast Routing In Smart Grids," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106929
Abstract: A smart grid is a power system that uses information and communication technology to operate, monitor, and control data flows between the power generating source and the end user. It aims at high efficiency, reliability, and sustainability of the electricity supply process that is provided by the utility centre and is distributed from generation stations to clients. To this end, energy-efficient multicast communication is an important requirement to serve a group of residents in a neighbourhood. However, the multicast routing introduces new challenges in terms of secure operation of the smart grid and user privacy. In this paper, after having analysed the security threats for multicast-enabled smart grids, we propose a novel multicast routing protocol that is both sufficiently secure and energy efficient. We also evaluate the performance of the proposed protocol by means of computer simulations, in terms of its energy-efficient operation.
Keywords: Authentication; Protocols; Public key; Routing; Smart meters; Multicast; Secure Routing; Smart Grid (ID#: 15-5422)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106929&isnumber=7106892

 

Saleh, Mohamed; El-Meniawy, Nagwa; Sourour, Essam, "Routing-guided authentication in Wireless Sensor Networks," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106939
Abstract: Entity authentication is a crucial security objective since it enables network nodes to verify the identity of each other. Wireless Sensor Networks (WSNs) are composed of a large number of possibly mobile nodes, which are limited in computational, storage and energy resources. These characteristics pose a challenge to entity authentication protocols and security in general. We propose an authentication protocol whose execution is integrated within routing. This is in contrast to currently proposed protocols, in which a node tries to authenticate itself to other nodes without an explicit tie to the underlying routing protocol. In our protocol, nodes discover shared keys, authenticate themselves to each other and build routing paths all in a synergistic way.
Keywords: Ad hoc networks; Cryptography; Media Access Protocol; Mobile computing; Wireless sensor networks (ID#: 15-5423)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106939&isnumber=7106892

 

Bose, Tulika; Bandyopadhyay, Soma; Ukil, Arijit; Bhattacharyya, Abhijan; Pal, Arpan, "Why Not Keep Your Personal Data Secure Yet Private In IoT?: Our Lightweight Approach," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106942
Abstract: IoT (Internet of Things) systems are resource-constrained and primarily depend on sensors for contextual, physiological and behavioral information. The sensitive nature of sensor data carries a high risk of privacy breach through intended or malicious disclosure. Uncertainty about the privacy cost of sharing sensitive sensor data over the Internet mostly results in overprovisioning of security mechanisms, which is detrimental to IoT scalability. In this paper, we propose a novel method of optimizing the need for IoT security enablement, based on the estimated privacy risk of shareable sensor data. Our scheme serves two objectives: privacy risk assessment, and optimizing the secure transmission based on that assessment. The challenges are, firstly, to determine the degree of privacy and evaluate a privacy score from the fine-grained sensor data and, secondly, to preserve the privacy content through secure transfer of the data, adapted to the measured privacy score. We meet this objective by introducing and adapting a lightweight scheme for secure channel establishment between the sensing device and the data collection unit/backend application, embedded within CoAP (Constrained Application Protocol), a candidate IoT application protocol, using UDP as a transport. We consider smart energy management, a killer IoT application, as the use case, where smart energy meter data contains private information about the residents. Our results with real household smart meter data demonstrate the efficacy of our scheme.
Keywords: Encryption; IP networks; Optimization; Physiology; Privacy; Sensitivity; CoAP; IoT; Lightweight; Privacy; Security; Smart meter (ID#: 15-5424)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106942&isnumber=7106892

 

Unger, Sebastian; Timmermann, Dirk, "Dpwsec: Devices Profile For Web Services Security," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106961
Abstract: As cyber-physical systems (CPS) build a foundation for visions such as the Internet of Things (IoT) or Ambient Assisted Living (AAL), their communication security is crucial so they cannot be abused for invading our privacy and endangering our safety. In the past years many communication technologies have been introduced for critically resource-constrained devices such as simple sensors and actuators as found in CPS. However, many do not consider security at all or in a way that is not suitable for CPS. Also, the proposed solutions are not interoperable although this is considered a key factor for market acceptance. Instead of proposing yet another security scheme, we looked for an existing, time-proven solution that is widely accepted in a closely related domain as an interoperable security framework for resource-constrained devices. The candidate of our choice is the Web Services Security specification suite. We analysed its core concepts and isolated the parts suitable and necessary for embedded systems. In this paper we describe the methodology we developed and applied to derive the Devices Profile for Web Services Security (DPWSec). We discuss our findings by presenting the resulting architecture for message level security, authentication and authorization and the profile we developed as a subset of the original specifications. We demonstrate the feasibility of our results by discussing the proof-of-concept implementation of the developed profile and the security architecture.
Keywords: Authentication; Authorization; Cryptography; Interoperability; Web services; Applied Cryptography; Authentication; Cyber-Physical Systems (CPS); DPWS; Intelligent Environments; Internet of Things (IoT); Usability (ID#: 15-5425)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106961&isnumber=7106892

 

Van den Abeele, Floris; Vandewinckele, Tom; Hoebeke, Jeroen; Moerman, Ingrid; Demeester, Piet, "Secure Communication In IP-Based Wireless Sensor Networks Via A Trusted Gateway," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106963
Abstract: As the IP-integration of wireless sensor networks enables end-to-end interactions, solutions to appropriately secure these interactions with hosts on the Internet are necessary. At the same time, burdening wireless sensors with heavy security protocols should be avoided. While Datagram TLS (DTLS) strikes a good balance between these requirements, it entails a high cost for setting up communication sessions. Furthermore, not all types of communication have the same security requirements: e.g. some interactions might only require authorization and do not need confidentiality. In this paper we propose and evaluate an approach that relies on a trusted gateway to mitigate the high cost of the DTLS handshake in the WSN and to provide the flexibility necessary to support a variety of security requirements. The evaluation shows that our approach leads to considerable energy savings and latency reduction when compared to a standard DTLS use case, while requiring no changes to the end hosts themselves.
Keywords: Bismuth; Cryptography; Logic gates; Random access memory; Read only memory; Servers; Wireless sensor networks; 6LoWPAN; CoAP; DTLS; Gateway; IP; IoT; Wireless sensor networks (ID#: 15-5426)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106963&isnumber=7106892

 

Kurniawan, Agus; Kyas, Marcel, "A Trust Model-Based Bayesian Decision Theory In Large Scale Internet Of Things," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 5, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106964
Abstract: In addressing the growing problem of security in the Internet of Things, we present, from a statistical decision point of view, a novel approach for trust-based access control using Bayesian decision theory. We build a trust model, TrustBayes, which represents a trust level for identity management in IoT. The TrustBayes model is applied to address access control in uncertain environments where identities are not known in advance. The model consists of EX (Experience), KN (Knowledge) and RC (Recommendation) values, which are obtained by measurement while an IoT device requests access to a resource. A decision is taken based on the model parameters and computed using Bayesian decision rules. To evaluate our trust model, we perform a statistical analysis and simulate it using OMNeT++ to investigate battery usage. The simulation results show that the Bayesian decision theory approach for trust-based access control guarantees scalability and is energy efficient as the number of devices increases, without affecting functioning and performance.
Keywords: Batteries; Communication system security; Scalability; Wireless communication; Wireless sensor networks; Access Control; Decision making; Decision theory; Internet of Things; Trust Management (ID#: 15-5427)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106964&isnumber=7106892
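The abstract combines EX (Experience), KN (Knowledge) and RC (Recommendation) evidence into an access decision, but does not give the model parameters; the following is therefore only a generic hedged sketch of such a Bayesian trust decision (a Beta-Bernoulli posterior over past interactions feeding a weighted threshold rule), not the TrustBayes model itself:

```python
def beta_trust(successes, failures, alpha=1.0, beta=1.0):
    # Posterior mean of a Beta-Bernoulli model of past interactions,
    # usable as an Experience (EX) component.
    return (successes + alpha) / (successes + failures + alpha + beta)

def grant_access(ex, kn, rc, weights=(0.5, 0.3, 0.2), threshold=0.6):
    # Weighted combination of EX/KN/RC scores in [0, 1]; grant access
    # when the combined trust clears the decision threshold. The
    # weights and threshold here are illustrative placeholders.
    trust = sum(w * v for w, v in zip(weights, (ex, kn, rc)))
    return trust >= threshold
```

A fuller Bayesian decision rule would compare expected losses of granting versus denying rather than a fixed threshold; the sketch keeps only the evidence-combination step.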

 

Ozvural, Gorkem; Kurt, Gunes Karabulut, "Advanced Approaches For Wireless Sensor Network Applications And Cloud Analytics," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 5, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106979
Abstract: Although wireless sensor network applications are still at early stages of development in the industry, it is clear that they will become pervasive and that billions of embedded microcomputers will come online for the purpose of remote sensing, actuation and information sharing. According to estimates, there will be 50 billion connected sensors or things by the year 2020. As we develop first-to-market wireless sensor-actuator network devices, we have the chance to identify design parameters, define technical infrastructure and make an effort to meet scalable system requirements. Required research and development activities must involve several research directions, such as massive scaling, creating information and big data, robustness, security, privacy and human-in-the-loop operation. In this study, wireless sensor network and Internet of Things concepts are not only investigated theoretically, but the proposed system is also designed and implemented end-to-end. Low-rate wireless personal area network sensor nodes with random network coding capability are used for remote sensing and actuation. A low-throughput embedded IP gateway node is developed, utilizing both random network coding on the low-rate wireless personal area network side and the low-overhead WebSocket protocol on the cloud communications side. A service-oriented design pattern is proposed for wireless sensor network cloud data analytics.
Keywords: IP networks; Logic gates; Network coding; Protocols; Relays; Wireless sensor networks; Zigbee (ID#: 15-5428)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106979&isnumber=7106892



International Conferences: Software Testing, Verification and Validation Workshops (ICSTW), Graz, Austria

 
SoS Logo

International Conferences: Software Testing, Verification and Validation Workshops (ICSTW), Graz, Austria

 

The 2015 IEEE Eighth International Conference on Software Testing, Verification and Validation Workshops (ICSTW) was held April 13-17, 2015 in Graz, Austria.  The conference focused on model-based testing, software quality, test architecture, combinatorial testing, mutation analysis, security testing and research techniques. Conference details are available at:  http://icst2015.ist.tu-graz.ac.at   These bibliographies focus on articles deemed by the editors to be most relevant to the Science of Security.


 

Kieseberg, Peter; Fruhwirt, Peter; Schrittwieser, Sebastian; Weippl, Edgar, "Security Tests For Mobile Applications — Why Using TLS/SSL Is Not Enough," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1, 2, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107416
Abstract: Security testing is a fundamental aspect of many common practices in the field of software testing. Still, the standard security protocols in use are typically neither questioned nor further analyzed in the testing scenarios. In this work we show that, due to this practice, essential potential threats are not detected during the testing phase and the quality assurance process. We focus mainly on two fundamental problems in the area of security: the definition of the correct attacker model, and trusting the client when applying cryptographic algorithms.
Keywords: Security; TLS/SSL; Testing (ID#: 15-5403)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107416&isnumber=7107396

 

Bozic, Josip; Garn, Bernhard; Simos, Dimitris E.; Wotawa, Franz, "Evaluation Of The IPO-Family Algorithms For Test Case Generation In Web Security Testing," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1,10, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107436
Abstract: Security testing of web applications remains a major problem of software engineering. In order to reveal vulnerabilities, testing approaches use different strategies to detect the kinds of inputs that might lead to a security breach. Such approaches depend on the corresponding test case generation techniques that are executed against the system under test. In this work we examine how two of the most popular algorithms for combinatorial test case generation, namely the IPOG and IPOG-F algorithms, perform in web security testing. For generating comprehensive and sophisticated testing inputs we have used input parameter modelling, which also includes constraints between the different parameter values. To handle the test execution, we make use of a recently introduced methodology based on model-based testing. Our evaluation indicates that both algorithms generate test inputs that succeed in revealing security leaks in web applications, with IPOG-F giving slightly better overall results w.r.t. the quality of the generated inputs. In addition, using constraints during the modelling of the attack grammars results in an increase in the number of test inputs that cause security breaches. Last but not least, a detailed analysis of our evaluation results confirms that combinatorial testing is an efficient test case generation method for web security testing, as the security leaks are mainly due to the interaction of a few parameters. This statement is further supported by combinatorial coverage measurement experiments on the successful test inputs.
Keywords: Combinatorial testing; IPO-Family algorithms; attack patterns; constraints; injection attacks; model-based testing; web security testing (ID#: 15-5404)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107436&isnumber=7107396
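The combinatorial coverage measurement the authors mention has a simple core: count how many of all possible 2-way parameter-value combinations a test suite exercises. The following sketch is illustrative (the parameter model is invented, not from the paper):

```python
from itertools import combinations

def pairwise_coverage(tests, domains):
    """Fraction of all 2-way value combinations covered by the test suite."""
    k = len(domains)
    required = {(i, vi, j, vj)
                for i, j in combinations(range(k), 2)
                for vi in domains[i] for vj in domains[j]}
    covered = {(i, t[i], j, t[j])
               for t in tests for i, j in combinations(range(k), 2)}
    return len(covered & required) / len(required)

# Three parameters of a toy web-request model: method, scheme, auth flag.
domains = [["GET", "POST"], ["http", "https"], [0, 1]]
```

A single test covers exactly one value pair per parameter pair, which is why covering arrays can be dramatically smaller than exhaustive suites while still reaching full 2-way coverage.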

 

Henard, Christopher; Papadakis, Mike; Le Traon, Yves, "Flattening Or Not Of The Combinatorial Interaction Testing Models?," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1,4, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107443
Abstract: Combinatorial Interaction Testing (CIT) requires the use of models that represent the interactions between the features of the system under test. In most cases, CIT models involve Boolean or integer options and constraints among them. Thus, applying CIT requires solving the involved constraints, which can be directly performed using Satisfiability Modulo Theory (SMT) solvers. An alternative practice is to flatten the CIT model into a Boolean model and use Satisfiability (SAT) solvers. However, the flattening process artificially increases the size of the employed models, raising the question of whether it is profitable or not in the CIT context. This paper investigates this question and demonstrates that flattened models, despite being much larger, are processed faster with SAT solvers than the smaller original ones with SMT solvers. These results suggest that flattening is worthwhile in the CIT context.
Keywords:  (not provided) (ID#: 15-5405)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107443&isnumber=7107396
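The flattening step the paper studies can be illustrated on a toy CIT model: an integer option is replaced by one Boolean per value (a one-hot encoding) plus an exactly-one constraint, so a pure SAT solver can process it. The model below is invented for illustration; real CIT models are far larger, and a solver would replace the brute-force enumeration.

```python
from itertools import product

# Original CIT model: integer option x in {0,1,2}, Boolean option b,
# constraint: b implies x != 2.
def original_models():
    return [(x, b) for x in (0, 1, 2) for b in (False, True)
            if not (b and x == 2)]

# Flattened model: one Boolean per integer value (one-hot encoding),
# plus an exactly-one constraint replacing the integer domain.
def flattened_models():
    models = []
    for x0, x1, x2, b in product((False, True), repeat=4):
        exactly_one = (x0 + x1 + x2) == 1
        constraint = not (b and x2)
        if exactly_one and constraint:
            models.append((x0, x1, x2, b))
    return models
```

Both models admit the same five solutions; flattening only changes the encoding, trading extra variables and constraints for compatibility with SAT solvers.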

 

Lindstrom, Birgitta; Andler, Sten F.; Offutt, Jeff; Pettersson, Paul; Sundmark, Daniel, "Mutating Aspect-Oriented Models To Test Cross-Cutting Concerns," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1, 10, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107456
Abstract: Aspect-oriented (AO) modeling is used to separate the normal behaviors of software from specific behaviors that affect many parts of the software. These are called “cross-cutting concerns,” and include things such as interrupt events, exception handling, and security protocols. AO modeling allows developers to model the behaviors of cross-cutting concerns independently of the normal behavior. Aspect-oriented models (AOM) are then transformed into code by “weaving” the aspects (modeling the cross-cutting concerns) into all locations in the code where they are needed. Testing at this level is unnecessarily complicated because the concerns are often repeated in many locations and because the concerns are muddled with the normal code. This paper presents a method to design robustness tests at the abstract, or model, level. The models are mutated with novel operators that specifically target the features of AOM, and tests are designed to kill those mutants. The tests are then run at the implementation level to evaluate the behavior of the woven cross-cutting concerns.
Keywords: Mutation analysis; aspect-oriented modeling robustness testing (ID#: 15-5406)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107456&isnumber=7107396

 

Knorr, Konstantin; Aspinall, David, "Security Testing For Android Mhealth Apps," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1, 8, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107459
Abstract: Mobile health (mHealth) apps are an ideal tool for monitoring and tracking long-term health conditions; they are becoming incredibly popular despite posing risks to personal data privacy and security. In this paper, we propose a testing method for Android mHealth apps which is designed using a threat analysis, considering possible attack scenarios and vulnerabilities specific to the domain. To demonstrate the method, we have applied it to apps for managing hypertension and diabetes, discovering a number of serious vulnerabilities in the most popular applications. Here we summarise the results of that case study, and discuss the experience of using a testing method dedicated to the domain, rather than out-of-the-box Android security testing methods. We hope that details presented here will help design further, more automated, mHealth security testing tools and methods.
Keywords:  (not provided) (ID#: 15-5407)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107459&isnumber=7107396

 

Riviere, Lionel; Bringer, Julien; Le, Thanh-Ha; Chabanne, Herve, "A Novel Simulation Approach For Fault Injection Resistance Evaluation On Smart Cards," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1, 8, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107460
Abstract: Physical perturbations are performed against embedded systems that can contain valuable data. Such devices, and in particular smart cards, are targeted because potential attackers hold them. Embedded system security must hold up against intentional hardware failures that can result in software errors. With malicious intent, an attacker could exploit such errors to uncover secret data or disrupt a transaction. Simulation techniques help to point out fault injection vulnerabilities and can be applied at an early stage in the development process. This paper proposes a generic fault injection simulation tool whose particularity is to embed the injection mechanism into the smart card source code. By its embedded nature, the Embedded Fault Simulator (EFS) allows us to perform fault injection simulations and side-channel analyses simultaneously. It makes it possible to achieve combined attacks and multiple fault attacks, and to perform backward analyses. We appraise our approach on real, modern and complex smart card systems under data and control flow fault models. We illustrate the EFS capabilities by performing a practical combined attack on an Advanced Encryption Standard (AES) implementation.
Keywords: Fault injection; Physical attack; combined attack; data modification; embedded systems; fault simulation; instruction skip; side-channel attack; smart card (ID#: 15-5408)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107460&isnumber=7107396
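The embedded-simulator idea (the injection mechanism living inside the code under test) can be sketched with a toy round-based computation whose loop accepts an "instruction skip" fault model: sweep the fault over each round and observe the differential. The cipher below is a made-up stand-in, not AES or the paper's EFS.

```python
# Minimal embedded-fault-simulator sketch: skipped_round models an
# instruction-skip fault injected into the round loop itself.
def toy_cipher(block, key, skipped_round=None, rounds=4):
    state = block
    for r in range(rounds):
        if r == skipped_round:   # injected fault: this round never executes
            continue
        state = ((state ^ key) * 31 + r) & 0xFFFF
    return state

reference = toy_cipher(0x1234, 0xBEEF)
# Sweep the fault over every round and record which injections are observable
# as an output differential against the fault-free reference run.
faulty = {r: toy_cipher(0x1234, 0xBEEF, skipped_round=r) for r in range(4)}
observable = [r for r, out in faulty.items() if out != reference]
```

In a differential fault attack, exactly such output differentials (especially from faults in the final rounds) are what leak key material.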

 

Afzal, Zeeshan; Lindskog, Stefan, "Automated Testing of IDS Rules," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1, 2, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107461
Abstract: As technology becomes ubiquitous, new vulnerabilities are being discovered at a rapid rate. Security experts continuously find ways to detect attempts to exploit those vulnerabilities. The outcome is an extremely large and complex rule set used by Intrusion Detection Systems (IDSs) to detect and prevent the vulnerabilities. The rule sets have become so large that it seems infeasible to verify their precision or identify overlapping rules. This work proposes a methodology consisting of a set of tools that will make rule management easier.
Keywords:  (not provided) (ID#: 15-5409)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107461&isnumber=7107396
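One concrete rule-management check in the direction this abstract describes is flagging rule pairs where a general rule subsumes a more specific one, so the specific rule can never fire on its own. The rule representation below is deliberately simplified (protocol, destination-port set, payload substring) and is not the authors' tooling.

```python
def subsumes(general, specific):
    """True if every packet matched by `specific` is also matched by `general`."""
    return (general["proto"] in ("any", specific["proto"])
            and specific["ports"] <= general["ports"]
            and general["pattern"] in specific["pattern"])

def find_overlaps(rules):
    return [(a["sid"], b["sid"]) for a in rules for b in rules
            if a is not b and subsumes(a, b)]

rules = [
    {"sid": 1, "proto": "tcp", "ports": {80, 8080}, "pattern": "cmd="},
    {"sid": 2, "proto": "tcp", "ports": {80},       "pattern": "cmd=exec"},
    {"sid": 3, "proto": "udp", "ports": {53},       "pattern": "xid"},
]
```

Here rule 2 is subsumed by rule 1: any payload containing "cmd=exec" also contains "cmd=", on a port rule 1 already watches.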

 



International Conferences: Workshop on Security and Privacy Analytics (IWSPA) ’15, San Antonio, Texas

 
SoS Logo

International Conferences: Workshop on Security and Privacy Analytics (IWSPA) ’15, San Antonio, Texas

 

The 2015 ACM International Workshop on Security and Privacy Analytics (IWSPA '15) was held in conjunction with CODASPY in San Antonio, Texas on March 2-4, 2015.  According to the organizers, techniques from data analytics fields are being applied to security challenges, and some interesting questions arise: which techniques from these fields are most appropriate for the security domain, and which of those are essential knowledge for security practitioners and students? Applications of such techniques also have interesting implications for privacy. The mission of the workshop is to create a forum for interaction between data analytics and security experts and to examine the questions above. The conference web page is available at: http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=40911&copyownerid=70160  


 

George Cybenko; “Deep Learning of Behaviors for Security;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 1-1. Doi: 10.1145/2713579.2713592
Abstract: Deep learning has generated much research and commercialization interest recently. In a way, it is the third incarnation of neural networks as pattern classifiers, using insightful algorithms and architectures that act as unsupervised auto-encoders which learn hierarchies of features in a dataset. After a short review of that work, we will discuss computational approaches for deep learning of behaviors as opposed to just static patterns. Our approach is based on structured non-negative matrix factorizations of matrices that encode observation frequencies of behaviors. Example security applications and covert channel detection and coding will be presented.
Keywords: behaviors, machine learning, security (ID#: 15-5560)
URL: http://doi.acm.org/10.1145/2713579.2713592
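The talk's core building block, a non-negative matrix factorization of behavior observation frequencies, can be sketched with the standard multiplicative updates: a frequency matrix V (behaviors × time windows) is factored as V ≈ WH, with columns of W acting as learned behavior motifs. The pure-Python implementation below is a small illustrative sketch, not the structured factorization used in the talk.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=200, seed=0, eps=1e-9):
    """Factor non-negative V (n x m) as W (n x k) @ H (k x m), Lee-Seung updates."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H

def error(V, W, H):
    R = matmul(W, H)
    return sum((V[i][j] - R[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))
```

The multiplicative updates keep every factor entry non-negative, which is what makes the learned motifs interpretable as additive parts of observed behavior.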

 

Nasir Memon; “Photo Forensics: There is More to a Picture Than Meets the Eye;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 35-35. Doi: 10.1145/2713579.2713594
Abstract: Given an image or a video clip, can you tell which camera it was taken from? Can you tell if it was manipulated? Given a camera or even a picture, can you find from the Internet all other pictures taken from the same camera? Forensics professionals all over the world are increasingly encountering such questions. Given the ease with which digital images can be created, altered, and manipulated with no obvious traces, digital image forensics has emerged as a research field with important implications for ensuring digital image credibility. This talk will provide an overview of recent developments in the field, focusing on three problems, and lists challenges that still need to be addressed. First, collecting image evidence and reconstructing images from fragments, with or without missing pieces. This involves sophisticated file carving technology. Second, attributing the image to a source, be it a camera, a scanner, or a graphically generated picture. The process entails associating the image with a class of sources with common characteristics (device model) or matching the image to an individual source device, for example a specific camera. Third, attesting to the integrity of image data. This involves image forgery detection to determine whether an image has undergone modification or processing after being initially captured.
Keywords: digital forensics, image forensics (ID#: 15-5561)
URL: http://doi.acm.org/10.1145/2713579.2713594

 

Hassan Alizadeh, Samaeh Khoshrou, André Zúquete; “Application-Specific Traffic Anomaly Detection Using Universal Background Model;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 11-17. Doi: 10.1145/2713579.2713586
Abstract: This paper presents an application-specific intrusion detection framework in order to address the problem of detecting intrusions in individual applications when their traffic exhibits anomalies. The system is based on the assumption that authorized traffic analyzers have access to a trustworthy binding between network traffic and the source application responsible for it. Given traffic flows generated by individual genuine applications, we exploit the GMM-UBM (Gaussian Mixture Model-Universal Background Model) method to build models for genuine applications, and thereby form our detection system. The system was evaluated on a public dataset collected from a real network. Favorable results indicate the success of the framework.
Keywords: gaussian mixture models, intrusion detection, malware, network anomaly, traffic flows, universal background model, web applications (ID#: 15-5562)
URL: http://doi.acm.org/10.1145/2713579.2713586
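The GMM-UBM recipe can be sketched in its simplest form, with a single Gaussian component over one flow feature: a universal background model is fit on pooled traffic, each application's model is adapted from it toward that application's flows, and new flows are scored by a log-likelihood ratio. All data and parameters below are invented for illustration; the paper uses full Gaussian mixtures over richer flow features.

```python
import math

def fit(xs):
    """Fit a single Gaussian (the degenerate one-component 'UBM')."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, max(v, 1e-6)

def logpdf(x, m, v):
    return -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)

def adapt(ubm, xs, relevance=4.0):
    """MAP-style mean adaptation of the background model toward app data."""
    m_ubm, v_ubm = ubm
    w = len(xs) / (len(xs) + relevance)
    return w * (sum(xs) / len(xs)) + (1 - w) * m_ubm, v_ubm

def llr(x, app, ubm):
    """Score a flow: positive favors the application model, negative the UBM."""
    return logpdf(x, *app) - logpdf(x, *ubm)

background = fit([5, 9, 50, 60, 100, 120])      # pooled flow sizes (KB)
app_model = adapt(background, [8, 10, 9, 11])   # one application's flows
```

A flow whose score falls well below the application's usual range is the kind of anomaly the framework flags.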

 

Shobhit Shakya, Jian Zhang; “Towards Better Semi-Supervised Classification of Malicious Software;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 27-33. Doi: 10.1145/2713579.2713587
Abstract: Due to the large number of malicious software (malware) samples and the large variety among them, automated detection and analysis using machine learning techniques have become more and more important for network and computer security. An often encountered scenario in these security applications is that training examples are scarce but unlabeled data are abundant. Semi-supervised learning, where both labeled and unlabeled data are used to learn a good model quickly, is a natural choice under such conditions. We investigate semi-supervised classification for malware categorization. We observed that malware data have specific characteristics and that they are noisy; off-the-shelf semi-supervised learning may not work well in this case. We propose a semi-supervised approach that addresses the problems with malware data and can provide better classification. We conducted a set of experiments to test and compare our method to others. The experimental results show that semi-supervised classification is a promising direction for malware classification. Our method achieved more than 90% accuracy with only a small number of training examples. The results also indicate that modifications are needed to make semi-supervised learning work with malware data; otherwise, semi-supervised classification may perform worse than classifiers trained on only the labeled data.
Keywords: graph spectral, graph-based semi-supervised learning, machine learning, malware classification (ID#: 15-5563)
URL: http://doi.acm.org/10.1145/2713579.2713587
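Graph-based semi-supervised learning, the family this paper builds on, can be sketched with plain label propagation: malware samples are nodes, edges link similar samples, and the few known labels are iteratively spread over the graph while labeled nodes stay clamped. The graph and labels below are invented; the paper's method adds malware-specific modifications on top of this baseline.

```python
def label_propagation(edges, labels, n, iters=50):
    """Propagate labels (+1 malicious, -1 benign) from a few labeled nodes."""
    scores = [labels.get(i, 0.0) for i in range(n)]   # 0.0 = unknown
    neighbours = {i: [] for i in range(n)}
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    for _ in range(iters):
        nxt = []
        for i in range(n):
            if i in labels:                 # clamp the labeled nodes
                nxt.append(labels[i])
            elif neighbours[i]:             # average over the neighbourhood
                nxt.append(sum(scores[j] for j in neighbours[i]) / len(neighbours[i]))
            else:
                nxt.append(scores[i])
        scores = nxt
    return ["malicious" if s > 0 else "benign" for s in scores]
```

On noisy malware similarity graphs, exactly this kind of smoothing can go wrong, which is the failure mode the paper's modifications target.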

 

Kyle Caudle, Christer Karlsson, Larry D. Pyeatt; “Using Density Estimation to Detect Computer Intrusions;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 43-48. Doi: 10.1145/2713579.2713584
Abstract: Density estimation can be used to make sense of data collected by large scale systems. An estimate of the underlying probability density function can be used to characterize normal network operating conditions. In this paper, we present a recursive method for constructing and updating an estimate of the non-stationary high dimensional probability density function using parallel programming. Once we have characterized standard operating conditions we perform real time checks for changes. We demonstrate the effectiveness of the approach via the use of simulated data as well as data from Internet header packets.
Keywords: data streams, density estimation, parallel programming, wavelets (ID#: 15-5564)
URL: http://doi.acm.org/10.1145/2713579.2713584
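The recursive, non-stationary density estimate can be illustrated in one dimension with a histogram updated by exponential forgetting, plus an L1-distance check against a baseline density to flag a change in operating conditions. The class, bin edges, and decay rate below are illustrative; the paper works with high-dimensional wavelet-based estimates in parallel.

```python
# Streaming histogram density with exponential forgetting.
class StreamingDensity:
    def __init__(self, edges, decay=0.05):
        self.edges = edges                      # bin boundaries
        self.decay = decay
        self.weights = [1.0] * (len(edges) + 1)  # uniform prior mass

    def _bin(self, x):
        for i, e in enumerate(self.edges):
            if x < e:
                return i
        return len(self.edges)

    def update(self, x):
        """Recursive update: old mass decays, the new point adds mass."""
        self.weights = [(1 - self.decay) * w for w in self.weights]
        self.weights[self._bin(x)] += self.decay

    def density(self):
        total = sum(self.weights)
        return [w / total for w in self.weights]

def l1_distance(p, q):
    """Distance between two densities; large values signal a regime change."""
    return sum(abs(a - b) for a, b in zip(p, q))
```

Characterizing "standard operating conditions" amounts to snapshotting the density; the real-time check is then just a distance threshold on the current estimate.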

 

Alaa Darabseh, Akbar Siami Namin; “Keystroke Active Authentications Based on Most Frequently Used Words;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 49-54. Doi: 10.1145/2713579.2713589
Abstract: The aim of this research is to advance user active authentication technology using keystroke dynamics. Through this research, we assess the performance and influence of various keystroke features on keystroke dynamics authentication systems. In particular, we investigate the performance of keystroke features on a subset of the most frequently used English words. The performance of four features, including key duration, flight time latency, diagraph time latency, and word total time duration, is analyzed. Experiments are performed to measure the performance of each feature individually as well as the results from different subsets of these features. The results of the experiments are evaluated using 28 users. The experimental results show that diagraph time offers the best performance among all four keystroke features, followed by flight time. Furthermore, the paper introduces a new feature that can be effectively used in the keystroke dynamics domain.
Keywords: authentication, biometrics, keystroke dynamics, keystroke feature, security (ID#: 15-5565)
URL: http://doi.acm.org/10.1145/2713579.2713589
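The four timing features the paper analyzes can be computed directly from press/release event streams. The sketch below assumes one typed word as a list of (key, press_ms, release_ms) tuples; the event format and numbers are illustrative, not the paper's dataset.

```python
def features(events):
    """Key duration, flight time, digraph time, and word total time."""
    durations = [r - p for _, p, r in events]           # key hold duration
    flights = [events[i + 1][1] - events[i][2]          # release -> next press
               for i in range(len(events) - 1)]
    digraphs = [events[i + 1][1] - events[i][1]         # press -> next press
                for i in range(len(events) - 1)]
    total = events[-1][2] - events[0][1]                # word total time
    return {
        "duration": sum(durations) / len(durations),
        "flight": sum(flights) / len(flights),
        "digraph": sum(digraphs) / len(digraphs),
        "total": total,
    }
```

Verification then reduces to comparing a login sample's feature vector against the enrolled user's profile of means for the same frequent words.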

 

Zhentan Feng, Shuguang Xiong, Deqiang Cao, Xiaolu Deng, Xin Wang, Yang Yang, Xiaobo Zhou, Yan Huang, Guangzhu Wu; “HRS: A Hybrid Framework for Malware Detection;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 19-26. Doi: 10.1145/2713579.2713585
Abstract: Traditional signature-based detection methods fail to detect unknown malware, while data mining detection methods have proved useful against new malware but suffer from high false positive rates. In this paper, we provide a novel hybrid framework called HRS based on the analysis of 50 million malware samples across 20,000 malware classes from our antivirus platform. The distribution of the samples is elaborated, and a hybrid framework, HRS, is proposed, which consists of hash-based, rule-based and SVM-based models trained on different classes of malware according to the distribution. The rule-based model is the core component of the hybrid framework. It is convenient to control false positives by adjusting the factors of a Boolean expression in the rule-based method, while it still retains the ability to detect unknown malware. The SVM-based method is enhanced by examining the critical sections of the malware, which can significantly shorten the scanning and training time. Rigorous experiments have been performed to evaluate the HRS approach on the massive dataset, and the results demonstrate that HRS achieves a true positive rate of 99.84% with an error rate of 0.17%. The HRS method has already been deployed on our security platform.
Keywords: antivirus engine, data mining, machine learning, malware class distribution, malware detection (ID#: 15-5566)
URL: http://doi.acm.org/10.1145/2713579.2713585
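The layering of the hybrid framework can be sketched as a pipeline: an exact hash blacklist catches known samples first, Boolean rules over extracted features catch variants next, and anything left would fall through to a statistical classifier. The hashes, feature names, and rules below are invented illustrations, not HRS internals.

```python
import hashlib

# Layer 1: exact-match hash blacklist of known samples.
KNOWN_HASHES = {hashlib.sha256(b"dropper-v1").hexdigest()}

# Layer 2: Boolean-expression rules over static features; tightening or
# loosening a rule's terms is how false positives get tuned.
RULES = [
    ("packed-no-signer", lambda f: f["packed"] and not f["signed"]),
    ("tiny-imports-autorun", lambda f: f["imports"] < 5 and f["autorun"]),
]

def classify(sample_bytes, feats):
    if hashlib.sha256(sample_bytes).hexdigest() in KNOWN_HASHES:
        return "malware (hash match)"
    for name, rule in RULES:
        if rule(feats):
            return f"malware (rule: {name})"
    return "unknown"   # would fall through to the SVM layer in a full pipeline
```

The ordering matters operationally: the cheap exact layer absorbs the bulk of known traffic, so the more expensive layers only see novel samples.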

 

Hao Zhang, Maoyuan Sun, Danfeng (Daphne) Yao, Chris North; “Visualizing Traffic Causality for Analyzing Network Anomalies;”  IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 37-42. Doi: 10.1145/2713579.2713583
Abstract: Monitoring network traffic and detecting anomalies are essential tasks that are carried out routinely by security analysts. The sheer volume of network requests often makes it difficult to detect attacks and pinpoint their causes. We design and develop a tool to visually represent the causal relations for network requests. The traffic causality information enables one to reason about the legitimacy and normalcy of observed network events. Our tool with a special visual locality property supports different levels of visual-based querying and reasoning required for the sensemaking process on complex network data. Leveraging the domain knowledge, security analysts can use our tool to identify abnormal network activities and patterns due to attacks or stealthy malware. We conduct a user study that confirms our tool can enhance the readability and perceptibility of the dependency for host-based network traffic.
Keywords: anomaly detection, information visualization, network traffic analysis, usable security, visual locality (ID#: 15-5567)
URL: http://doi.acm.org/10.1145/2713579.2713583

 

Yang Liu, Jing Zhang, Armin Sarabi, Mingyan Liu, Manish Karir, Michael Bailey; “Predicting Cyber Security Incidents Using Feature-Based Characterization of Network-Level Malicious Activities;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 3-9. Doi: 10.1145/2713579.2713582
Abstract: This study offers a first step toward understanding the extent to which we may be able to predict cyber security incidents (which can be of one of many types) by applying machine learning techniques and using externally observed malicious activities associated with network entities, including spamming, phishing, and scanning, each of which may or may not have direct bearing on a specific attack mechanism or incident type. Our hypothesis is that when viewed collectively, malicious activities originating from a network are indicative of the general cleanness of a network and how well it is run, and that furthermore, collectively they exhibit fairly stable and thus predictive behavior over time. To test this hypothesis, we utilize two datasets in this study: (1) a collection of commonly used IP address-based/host reputation blacklists (RBLs) collected over more than a year, and (2) a set of security incident reports collected over roughly the same period. Specifically, we first aggregate the RBL data at a prefix level and then introduce a set of features that capture the dynamics of this aggregated temporal process. A comparison between the distribution of these feature values taken from the incident dataset and from the general population of prefixes shows distinct differences, suggesting their value in distinguishing between the two while also highlighting the importance of capturing dynamic behavior (second order statistics) in the malicious activities. These features are then used to train a support vector machine (SVM) for prediction. Our preliminary results show that we can achieve reasonably good prediction performance over a forecasting window of a few months.
Keywords: network reputation, network security, prediction, temporal pattern, time-series data (ID#: 15-5568)
URL: http://doi.acm.org/10.1145/2713579.2713582
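The feature extraction step the study describes, summarizing a prefix's aggregated blacklist time series by both its magnitude and its dynamics (second-order statistics), can be sketched simply. The feature names and toy series below are illustrative; the study feeds richer features of this kind into an SVM.

```python
def prefix_features(daily_counts):
    """Summarize a prefix's daily blacklist counts: level and dynamics."""
    n = len(daily_counts)
    mean = sum(daily_counts) / n
    var = sum((c - mean) ** 2 for c in daily_counts) / n
    # average day-to-day change captures how unstable the footprint is
    churn = sum(abs(daily_counts[i + 1] - daily_counts[i])
                for i in range(n - 1)) / (n - 1)
    return {"mean": mean, "variance": var, "churn": churn}

stable = prefix_features([10, 10, 11, 10, 9, 10])   # steadily listed prefix
bursty = prefix_features([0, 0, 40, 0, 35, 0])      # erratically listed prefix
```

Two prefixes with the same mean listing level can differ sharply in variance and churn, which is exactly the distinction the study finds predictive of incidents.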

 

Wenyaw Chan, George Cybenko, Murat Kantarcioglu, Ernst Leiss, Thamar Solorio, Bhavani Thuraisingham, Rakesh Verma; “Panel: Essential Data Analytics Knowledge for Cyber-security Professionals and Students;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 55-57. Doi: 10.1145/2713579.2713590
Abstract: Increasingly, techniques from data analytics fields of statistics, machine learning, data mining, and natural language processing are being employed for challenges in cyber-security and privacy. This panel examines which techniques from these fields are essential for current and future cyber-security practitioners and what are the related considerations involved in successfully solving security and privacy challenges of the future.
Keywords: curriculum, data analytics, privacy, security (ID#: 15-5569)
URL: http://doi.acm.org/10.1145/2713579.2713590

