Science of Security (SoS) Newsletter (2015 - Issue 8)

 



Each issue of the SoS Newsletter highlights achievements in current research conducted by members of the global Science of Security (SoS) community. All presented materials are openly available and link to the original work or web page for the respective program. The SoS Newsletter aims to showcase the exciting work under way in the security community and to serve as a portal connecting colleagues, research projects, and opportunities.

Please click on any section of the Newsletter to be taken to its corresponding subsection:

Publications of Interest

The Publications of Interest section provides available abstracts and links for suggested academic and industry literature discussing specific topics and research problems in the field of SoS. Please check back regularly for new information, or sign up for the CPSVO-SoS Mailing List.


(ID#:15-7298)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


3rd Annual Best Scientific Cybersecurity Paper Competition

 

 

Here are the citations for the winning papers in the 3rd Annual NSA Best Scientific Cybersecurity Paper Competition. Details about the review team, the authors, and the awards ceremony are available on the CPS-VO web page at: http://cps-vo.org/group/sos/papercompetition#honorable

The Winning Paper:

Alvim, M.S.; Chatzikokolakis, K.; McIver, A.; Morgan, C.; Palamidessi, C.; Smith, G., "Additive and Multiplicative Notions of Leakage, and Their Capacities," Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, pp. 308-322, 19-22 July 2014. doi: 10.1109/CSF.2014.29

Abstract: Protecting sensitive information from improper disclosure is a fundamental security goal. It is complicated, and difficult to achieve, often because of unavoidable or even unpredictable operating conditions that can lead to breaches in planned security defences. An attractive approach is to frame the goal as a quantitative problem, and then to design methods that measure system vulnerabilities in terms of the amount of information they leak. A consequence is that the precise operating conditions, and assumptions about prior knowledge, can play a crucial role in assessing the severity of any measured vulnerability. We develop this theme by concentrating on vulnerability measures that are robust in the sense of allowing general leakage bounds to be placed on a program, bounds that apply whatever its operating conditions and whatever the prior knowledge might be. In particular we propose a theory of channel capacity, generalising the Shannon capacity of information theory, that can apply both to additive- and to multiplicative forms of a recently-proposed measure known as g-leakage. Further, we explore the computational aspects of calculating these (new) capacities: one of these scenarios can be solved efficiently by expressing it as a Kantorovich distance, but another turns out to be NP-complete. We also find capacity bounds for arbitrary correlations with data not directly accessed by the channel, as in the scenario of Dalenius's Desideratum.

Keywords: channel capacity; computational complexity; cryptography; data protection; information theory; Dalenius Desideratum; Kantorovich distance; NP-complete; Shannon capacity; additive forms; additive leakage; capacity bounds; channel capacity; g-leakage; general leakage bounds; information leakage; information theory; multiplicative forms; multiplicative leakage; operating conditions; prior knowledge; quantitative problem; security defences; security goal; sensitive information protection; system vulnerabilities; vulnerability measures; vulnerability severity; Additives; Channel capacity; Databases; Educational institutions; Entropy; Joints; Robustness; Dalenius Desideratum; Quantitative information flow; channel capacity; confidentiality (ID#: 15-6607)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957119&isnumber=6957090
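The multiplicative notion of leakage discussed in the winning paper can be illustrated with a small numeric sketch for Bayes vulnerability, the simplest instance of g-vulnerability (an identity gain function). This is an illustrative sketch only; the two-secret prior and the channel matrix below are invented for the example, not taken from the paper:

```python
# Sketch of multiplicative Bayes leakage (g-leakage with the identity
# gain function). The channel matrix C[x][y] = P(y | x) is invented
# for illustration; it is not taken from the paper.

def prior_vulnerability(prior):
    # Bayes vulnerability: the adversary's best one-try guess
    # before observing any channel output.
    return max(prior)

def posterior_vulnerability(prior, channel):
    # Expected vulnerability after observing output y:
    # sum over y of max_x prior[x] * C[x][y].
    n_outputs = len(channel[0])
    return sum(
        max(prior[x] * channel[x][y] for x in range(len(prior)))
        for y in range(n_outputs)
    )

def multiplicative_leakage(prior, channel):
    # Ratio of posterior to prior vulnerability.
    return posterior_vulnerability(prior, channel) / prior_vulnerability(prior)

prior = [0.5, 0.5]          # uniform prior over two secrets
channel = [[0.9, 0.1],      # P(y | x = 0)
           [0.2, 0.8]]      # P(y | x = 1)
print(multiplicative_leakage(prior, channel))  # 1.7: observation makes guessing 1.7x easier
```

The capacities studied in the paper are then the worst-case values of such leakage over all priors (and gain functions), which is what yields prior-independent bounds.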

 

Honorable Mention:

 

Sauvik Das, Adam D.I. Kramer, Laura A. Dabbish, Jason I. Hong; “Increasing Security Sensitivity with Social Proof: A Large-Scale Experimental Confirmation;” CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 739-749. doi: 10.1145/2660267.2660271

Abstract: One of the largest outstanding problems in computer security is the need for higher awareness and use of available security tools. One promising but largely unexplored approach is to use social proof: by showing people that their friends use security features, they may be more inclined to explore those features, too. To explore the efficacy of this approach, we showed 50,000 people who use Facebook one of 8 security announcements (7 variations of social proof and 1 non-social control) to increase the exploration and adoption of three security features: Login Notifications, Login Approvals, and Trusted Contacts. Our results indicated that simply showing people the number of their friends that used security features was most effective, and drove 37% more viewers to explore the promoted security features compared to the non-social announcement (thus, raising awareness). In turn, as social announcements drove more people to explore security features, more people who saw social announcements adopted those features, too. However, among those who explored the promoted features, there was no difference in the adoption rate of those who viewed a social versus a non-social announcement. In a follow-up survey, we confirmed that the social announcements raised viewers' awareness of available security features.

Keywords: Facebook, persuasion, security, security feature adoption, social cybersecurity, social influence (ID#: 15-6608)

URL: http://doi.acm.org/10.1145/2660267.2660271




In the News


 

This section features topical, current news items of interest to the international security community. These articles and highlights are selected from various popular science and security magazines, newspapers, and online sources.  The articles listed here will be featured in the next publication of the Science of Security newsletter.


US News     

"Russian Attackers Hack Pentagon", InfoSecurity Magazine, 07 August 2015. [Online]. On or around July 25th, the Pentagon was forced to shut down the server for its Joint Chiefs of Staff unclassified email system after an attack by Russian hackers. It is not yet known whether the attack, which resulted in the leak of "large quantities of data", was authorized by the Russian government. (ID#: 15-50423) See http://www.infosecurity-magazine.com/news/russian-attackers-hack-pentagon/

"Chinese Hackers May Have Burrowed Into Airlines", Tech News World, 11 August 2015. [Online]. Travel reservations processor Sabre confirmed that it suffered a breach in systems containing sensitive data on as many as a billion passengers. United Airlines, which shares some network infrastructure with Sabre, is still recovering from an incident last month that was speculated to have been an attack, leading government officials to believe that China-based hackers are targeting travel infrastructure. (ID#: 15-50424) See http://www.technewsworld.com/story/82365.html

"Terracotta VPN, the Chinese VPN Service as Hacking Platform", Cyber Defense Magazine, 06 August 2015. [Online]. RSA Security reports that Chinese virtual private network provider Terracotta VPN uses brute-force attacks and Trojans on vulnerable Windows servers to provide infrastructure for launching cyber attacks. Terracotta uses these compromised servers to offer a service that allows hackers to launch cyber attacks from seemingly legitimate and respected IP addresses. (ID#: 15-50425) See http://www.cyberdefensemagazine.com/terracotta-vpn-the-chinese-vpn-service-as-hacking-platform/

"Planned Parenthood reports second website hack in a week", Reuters, 30 July 2015. [Online]. Following controversy over the alleged sale of illegal fetal tissue, Planned Parenthood announced that its websites were hit with a large DDoS attack that prompted the organization to keep them offline for the day. The day before, Planned Parenthood announced an attack against its information systems, possibly resulting in the compromise of employees' personal information. (ID#: 15-50422) See http://www.reuters.com/article/2015/07/30/us-usa-plannedparenthood-cyberattack-idUSKCN0Q409120150730

"Ashley Madison attack prompts spam link deluge", BBC, 31 July 2015. [Online]. Infidelity website Ashley Madison suffered a breach in which attackers claimed to have personal information from 37 million accounts, threatening to release it if the website was not shut down. The hackers have not yet released the data, prompting spammers to circulate fake links to the nonexistent data. Many of these links, according to a BBC investigation, lead victims to fake data, scam pages, and malware. (ID#: 15-50411) See http://www.bbc.com/news/technology-33731183

"Windows 10 Will Use Virtualization For Extra Security", Information Week, 22 July 2015. [Online]. The highly anticipated Windows 10 operating system has many new features that are being marketed to consumers, but one overlooked advancement is security. Microsoft claims to have taken a fundamentally new approach: new features use virtualization to place critical operating system components in their own containers, making them inaccessible to hackers. (ID#: 15-50417) See http://www.informationweek.com/software/operating-systems/windows-10-will-use-virtualization-for-extra-security/a/d-id/1321415

 


 

International News

"Laser Pointer Hack Easily Dupes Driverless Cars", Tech News World, 08 September 2015. [Online]. A security researcher discovered that the lidar systems in self-driving cars could be compromised using a laser pointer and a basic computer. The hack would not be able to make a car crash, but it could force the car to slow down or come to a complete stop.
See: http://www.technewsworld.com/story/82463.html

 
"Malware Jumps Apple's Garden Wall", Tech News World, 22 September 2015. [Online]. It was discovered that some Chinese developers had unknowingly published malware-infected iOS applications on the App Store. The developers had somehow downloaded an unauthorized version of Apple's IDE Xcode, since dubbed "XcodeGhost", which in turn infected their applications. Apple says that all infected applications have been removed from the store.
See: http://www.technewsworld.com/story/82521.html 

"KeyRaider Malware Busts iPhone Jailbreakers", Tech News World, 03 September 2015. [Online]. Malicious software, now being called KeyRaider, has affected a multitude of jailbroken iPhone users. The malware infiltrated the phones through the third-party app store, Cydia. Reports claim that the malware has stolen up to 225,000 active Apple accounts, certificates, and even receipts.
See: http://www.technewsworld.com/story/82450.html

"Baby Monitors Riddled with Security Holes", Tech News World, 02 September 2015. [Online]. Rapid7 recently released a report detailing its study of several major brands of baby monitors. The report stated that many top brands are littered with vulnerabilities. One top consultant for the group said that many of the security flaws would allow the video and audio from the monitors to be watched from anywhere.
See: http://www.technewsworld.com/story/82449.html

"Cybersecurity bill could 'sweep away' internet users' privacy, agency warns", The Guardian, 3 August 2015. [Online]. A new revision of the Cybersecurity Information Sharing Act will be voted on by the Senate. The bill allows companies holding large amounts of information to share it with the appropriate government agencies, which can then share the information as they see fit. The bill has drawn attention to companies such as Google and Facebook, which possess large amounts of users' data and online habits. (ID#: 15-60044)
See: http://www.theguardian.com/world/2015/aug/03/cisa-homeland-security-privacy-data-internet

"Hacking Victim JPMorgan Chasing Cybersecurity Fixes", Investors, 4 August 2015. [Online]. Last year, JP Morgan Chase suffered a cyber attack that compromised the contact information of roughly 76 million customers. Although no account or social security numbers were taken, the company is planning measures to prevent another major attack. The bank says that its cybersecurity budget will be increased from $250 million to $500 million in order to improve its analytics, testing, and coverage. (ID#: 15-60043)
See: http://news.investors.com/business/080415-764935-jpmorgan-chase-to-double-cybersecurity-spending.htm

“Hackers Remotely Kill a Jeep on the Highway – With Me in it”, Wired, 21 July 2015. [Online]. Charlie Miller and Chris Valasek successfully hacked into a Jeep Cherokee from a remote computer, all while the car was being driven miles away. The two were able to take full control of nearly everything, from the windshield wipers and air conditioning to the steering wheel itself. They plan on releasing some of their findings at Black Hat in Las Vegas in August. (ID#: 15-60042)
See: http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/

“The Dinosaurs of Cybersecurity Are Planes, Power Grids and Hospitals”, Tech Crunch, 10 July 2015. [Online]. One of the most prominent risks in cybersecurity lies in infrastructure such as airplanes, power grids, and hospitals. As these systems are compromised, patches are developed to remedy the problems. However, patches are slow to roll out and take a great deal of time to develop; by the time they are complete, the damage has often already been done. (ID#: 15-60040)
See: http://techcrunch.com/2015/07/10/the-dinosaurs-of-cybersecurity-are-planes-power-grids-and-hospitals/

“Microsoft is Reportedly Planning to Buy an Israeli Cyber Security Firm for $320 Million”, Business Insider, 20 July 2015. [Online]. A new report shows that Microsoft has a deal in place to purchase the Israeli cybersecurity company Adallom. Adallom is expected to become Microsoft’s cybersecurity center for the entirety of Israel. Adallom was founded in 2012 and has since grown to 80 employees. (ID#: 15-60041)
See: http://www.businessinsider.com/r-microsoft-to-buy-israeli-cyber-security-firm-adallom-report-2015-7

(ID#: 15-7299)




 

International Security Related Conferences

 

 

The following pages highlight Science of Security related research presented at these international conferences.

(ID#: 15-7300)


 

International Conferences: CYBCONF 2015, Poland

 

 


The 2015 IEEE 2nd International Conference on Cybernetics (CYBCONF) was held 24-26 June 2015 in Gdynia, Poland. The conference had several main tracks and special sessions, including Control Systems and Robotics, Artificial Intelligence, Knowledge-Based Systems, Machine Learning, Machine Vision, Computational Intelligence, Swarm Intelligence, Cognitive Systems, Neural Networks, Medical and Health Informatics, and Smart Applications.  


Sparrow, R.D.; Adekunle, A.A.; Berry, R.J.; Farnish, R.J., “Balancing Throughput and Latency for an Aerial Robot over a Wireless Secure Communication Link,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp. 184-189, 24-26 June 2015. doi:10.1109/CYBConf.2015.7175929
Abstract: With the requirement for remote control of unmanned aerial vehicles (UAV) becoming more frequent in scenarios where the environment is inaccessible or hazardous to human beings (e.g. disaster recovery); remote functionality of a UAV is generally implemented over wireless networked control systems (WNCS). The nature of the wireless broadcast allows attackers to exploit security vulnerabilities through passive and active attacks; consequently, cryptography is often selected as a countermeasure to the aforementioned attacks. This paper analyses simulation undertaken and proposes a model to balance the relationship between throughput and latency for a secure multi-hop communication link. Results obtained indicate that throughput is more influential up to two hops from the initial transmitting device; conversely, latency is the determining factor after two hops.
Keywords: autonomous aerial vehicles; control engineering computing; cryptography; mobile communication; networked control systems; UAV; WNCS; active attacks; aerial robot; latency balancing; passive attacks; remote control; remote functionality; secure multihop communication link; security vulnerabilities; throughput balancing; unmanned aerial vehicles; wireless broadcast; wireless networked control systems; wireless secure communication link; Communication system security; Correlation; Mathematical model; Predictive models; Security; Throughput; Wireless communication; Latency; Security; Throughput; Unmanned Aerial Vehicles; Wireless (ID#: 15-6457)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175929&isnumber=7175890 

 

Abraham, S.; Nair, S., “Exploitability Analysis Using Predictive Cybersecurity Framework,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp. 317-323, 24-26 June 2015. doi:10.1109/CYBConf.2015.7175953
Abstract: Managing Security is a complex process and existing research in the field of cybersecurity metrics provide limited insight into understanding the impact attacks have on the overall security goals of an enterprise. We need a new generation of metrics that can enable enterprises to react even faster in order to properly protect mission-critical systems in the midst of both undiscovered and disclosed vulnerabilities. In this paper, we propose a practical and predictive security model for exploitability analysis in a networking environment using stochastic modeling. Our model is built upon the trusted CVSS Exploitability framework and we analyze how the atomic attributes namely Access Complexity, Access Vector and Authentication that make up the exploitability score evolve over a specific time period. We formally define a nonhomogeneous Markov model which incorporates time dependent covariates, namely the vulnerability age and the vulnerability discovery rate. The daily transition-probability matrices in our study are estimated using a combination of Frei's model & Alhazmi Malaiya's Logistic model. An exploitability analysis is conducted to show the feasibility and effectiveness of our proposed approach. Our approach enables enterprises to apply analytics using a predictive cyber security model to improve decision making and reduce risk.
Keywords: Markov processes; authorisation; decision making; risk management; access complexity; access vector; authentication; daily transition-probability matrices; decision making; exploitability analysis; nonhomogeneous Markov model; predictive cybersecurity framework; risk reduction; trusted CVSS exploitability framework; vulnerability age; vulnerability discovery rate; Analytical models; Computer security; Markov processes; Measurement; Predictive models; Attack Graph; CVSS; Markov Model; Security Metrics; Vulnerability Discovery Model; Vulnerability Lifecyle Model (ID#: 15-6458)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175953&isnumber=7175890
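The nonhomogeneous Markov idea described in the abstract (transition probabilities that change with vulnerability age) can be sketched as follows. The three states and the decay rates are hypothetical stand-ins for illustration; they are not the paper's actual CVSS-based parameterization, Frei's model, or the Alhazmi–Malaiya logistic model:

```python
# Sketch of a nonhomogeneous Markov chain over hypothetical vulnerability
# states: 0 = no exploit, 1 = exploit available, 2 = patched.
# The transition matrix depends on the day t (vulnerability age), which is
# what makes the chain nonhomogeneous; the rates below are invented.

def transition_matrix(t):
    p_exploit = 0.10 / (1 + 0.05 * t)   # chance an exploit appears on day t
    p_patch = 0.05                       # constant daily patch rate
    return [
        [1 - p_exploit, p_exploit, 0.0],
        [0.0, 1 - p_patch, p_patch],
        [0.0, 0.0, 1.0],                 # patched is absorbing
    ]

def evolve(dist, days):
    # Push the state distribution through the day-by-day matrices,
    # multiplying by a different matrix at each step.
    for t in range(days):
        m = transition_matrix(t)
        dist = [sum(dist[i] * m[i][j] for i in range(3)) for j in range(3)]
    return dist

dist = evolve([1.0, 0.0, 0.0], 30)
print(dist)  # probability mass over the three states after 30 days
```

An exploitability score at time t could then be read off such a distribution, which is the kind of time-dependent risk picture the paper's model is after.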

 

Szpyrka, M.; Szczur, A.; Bazan, J.G.; Dydo, L., “Extracting of Temporal Patterns from Data for Hierarchical Classifiers Construction,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp. 330-335, 24-26 June 2015. doi:10.1109/CYBConf.2015.7175955
Abstract: A method of automatic extracting of temporal patterns from learning data for constructing hierarchical behavioral patterns based classifiers is considered in the paper. The presented approach can be used to complete the knowledge provided by experts or to discover the knowledge automatically if no expert knowledge is accessible. Formal description of temporal patterns is provided and an algorithm for automatic patterns extraction and evaluation is described. A system for packet-based network traffic anomaly detection is used to illustrate the considered ideas.
Keywords: computer network security; data mining; learning (artificial intelligence); pattern classification; temporal logic; automatic pattern extraction; data temporal pattern extraction; hierarchical behavioral pattern; hierarchical classifier construction; knowledge discovery; learning data; packet-based network traffic anomaly detection; Clustering algorithms; Data mining; Decision trees; Entropy; Petri nets; Ports (Computers); Servers; LTL logic; feature extraction; hierarchical classifiers; network anomaly detection; temporal patterns (ID#: 15-6459)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175955&isnumber=7175890

 

Hermanowski, D., “Open Source Security Information Management System Supporting IT Security Audit,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp. 336-341, 24-26 June 2015. doi:10.1109/CYBConf.2015.7175956
Abstract: Nowadays, assuring security of computer systems becomes difficult due to the rapid development of IT technologies, even in household appliances. This article shows exemplary model of the IT security monitoring and management system. Proposed solution is aimed to collect security events, analyse them, assess the risk they bring and inform the administrator about them in order to take appropriate decision to mitigate potential security incident. This system is based on open source code toolset. This toolset was studied, tested and examined in the context of the whole system. These tools were configured and an additional code was developed in order to achieve synergy effect from adopting various techniques aimed at network monitoring and system security.
Keywords: auditing; information management; public domain software; security of data; IT security audit; IT security management system; IT security monitoring; IT technologies; computer systems; household appliances; network monitoring; open source code toolset; open source security information management system; security events; security incident; synergy effect; system security; Correlation; Databases; Malware; Monitoring; Ports (Computers); Servers; IT audit; OSSIM; SIEM; computer security; monitoring; open source (ID#: 15-6460)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175956&isnumber=7175890

 

Goswami, S.; Chakrabarti, A.; Chakraborty, B., “Analysis of Correlation Structure of Data Set for Efficient Pattern Classification,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp. 24-29, 24-26 June 2015. doi:10.1109/CYBConf.2015.7175901
Abstract: Pattern classification or clustering plays important role in a wide variety of applications in different areas like psychology and other social sciences, biology and medical sciences, pattern recognition and data mining. A lot of algorithms for supervised or unsupervised classification have been developed so far in order to achieve high classification accuracy with lower computational cost. However, some methods or algorithms work well for some of the data sets and perform poorly on others. For any particular data set, it is difficult to find out the most suitable algorithm without some random trial and error process. It seems that the characteristics of the data set might have some influence on the algorithm for classification. In this work, the data set characteristics is studied in terms of intra attribute relationship and a measure MVS (multivariate score) has been proposed to quantify and group different data sets on the basis of the correlation structure into strong independent, weak independent, weak correlated and strong correlated data set. The performance of different feature selection algorithms on different groups of data are studied by simulation experiments with 63 publicly available bench mark data sets. It has been verified that univariate methods lead to significant performance gain for strong independent data set compared to multivariate methods while multivariate methods have better performance for strong correlated data sets.
Keywords: data analysis; feature selection; pattern classification; pattern clustering; MVS; correlation structure analysis; data set characteristics; feature selection algorithms; intra attribute relationship; multivariate methods; multivariate score; pattern classification; pattern clustering; strong correlated data set; strong independent data set; univariate methods; weak correlated data set; weak independent data set; Accuracy; Classification algorithms; Clustering algorithms; Correlation; Data models; Histograms; Iris; Pattern classification algorithm; correlation structure. (ID#: 15-6461)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175901&isnumber=7175890
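The idea of grouping data sets by the strength of their intra-attribute correlation can be sketched with a simple score: the mean absolute pairwise Pearson correlation over all attribute pairs. This averaging scheme is an assumption for illustration only and is not the paper's exact MVS definition:

```python
# Sketch: score a data set by the mean absolute pairwise Pearson
# correlation of its attributes (columns). A high score suggests a
# correlated data set, a low score an independent one. This simple
# score is an illustrative stand-in, not the paper's exact MVS.
import math

def pearson(xs, ys):
    # Pearson correlation of two non-constant columns.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def correlation_score(columns):
    # Mean |r| over all attribute pairs; always in [0, 1].
    pairs = [(i, j) for i in range(len(columns))
             for j in range(i + 1, len(columns))]
    return sum(abs(pearson(columns[i], columns[j])) for i, j in pairs) / len(pairs)

a = [1.0, 2.0, 3.0, 4.0]
b = [2.1, 3.9, 6.2, 8.0]   # nearly proportional to a: strongly correlated
c = [5.0, 1.0, 4.0, 2.0]   # unrelated
print(correlation_score([a, b, c]))
```

Thresholding such a score is one plausible way to separate "strong correlated" from "strong independent" data sets before choosing between univariate and multivariate feature selection, as the paper's experiments suggest.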

 

Qiangfu Zhao, “Aware System, Aware Unit and Aware Logic,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp. 42-47, 24-26 June 2015. doi:10.1109/CYBConf.2015.7175904
Abstract: In recent years, various aware systems have been developed in the context of ubiquitous computing to improve the quality of services (QoS). The ultimate goal of awareness computing (AC) is to establish a win-win relation between producers and consumers. On the other hand, the main purpose of computational awareness (CA) is to understand the mechanism of awareness in human or animal brains, so that awareness, consciousness, and even intelligence can be realized step-by-step in computing machines. In this paper, we first provide a formal definition of aware systems, and then consider a way to build interpretable aware systems based on 3-valued logic. Some primary experiments show that it is possible to realize interpretable aware systems via discretizing multilayer feedforward neural network.
Keywords: formal logic; multilayer perceptrons; quality of service; ubiquitous computing; 3-valued logic; QoS; animal brain; aware logic; aware unit; awareness computing; computational awareness; computing machine; formal definition; human brain; interpretable aware system; multilayer feedforward neural network; quality of services; ubiquitous computing; win-win relation; Context; Context modeling; Gold; Inductors; Neurons; Sensors; Training; Computational awareness; aware logic; aware system; (ID#: 15-6462)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175904&isnumber=7175890

 

Tzung-Pei Hong; Ling-I Huang; Wen-Yang Lin; Yu-Yang Liu; Chakraborty, G., “Dynamic Migration in Multiple Ant Colonies,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, pp. 146-150, 24-26 June 2015. doi:10.1109/CYBConf.2015.7175922
Abstract: Multi-population-based bio-inspired computation may use migration among groups to increase the search diversity. Through good solutions exchanged among sub-populations, better solutions may be found with a high probability. In this paper, we propose two algorithms to dynamically adjust the two primary parameters, migration interval and migration rate, for flexibly reflect solution situation for effective migration. The first algorithm only dynamically changes the migration interval, and the second considers both interval and rate. We will examine how the dynamic migration strategies affect the quality of solutions in the experiments.
Keywords: ant colony optimisation; search problems; dynamic migration strategies; migration interval; migration rate; multiple ant colonies; multipopulation-based bioinspired computation; search diversity; solution situation; Ant colony optimization; Computer science; Genetic algorithms; Heuristic algorithms; Particle swarm optimization; Sociology; Statistics; Ant Colony System; Bio-Inspired Computation; Dynamic Migration; Multiple Population (ID#: 15-6463)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175922&isnumber=7175890

 

Anh Duc Dang; Horn, J., “Formation Control of Autonomous Robots Following Desired Formation During Tracking a Moving Target,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp. 160-165, 24-26 June 2015. doi:10.1109/CYBConf.2015.7175925
Abstract: In this paper, we propose a novel method for control the formation of the autonomous robots following to the desired formations during tracking a moving target under the influence of the dynamic environment. The V-shape formation is used to track a moving target when the distance from this formation to the target is longer than the target approaching radius. Furthermore, when the leader moves in the target approaching range, the circling shape formation is used to encircle the target. The motion of the robots to the optimal positions in the desired formations are controlled by the artificial force fields, which consist of local and global potential fields around the virtual nodes in the desired formations. Using the global attractive force field around the target, the formation of robots is always driven towards the target position. Moreover, using the repulsive/rotational vector fields in the obstacle avoiding controller, robots can easily escape the obstacle without collisions. The success of the proposed method is verified in simulations.
Keywords: collision avoidance; mobile robots; motion control; multi-robot systems; optimal control; target tracking; V-shape formation; artificial force fields; autonomous robots; circling shape formation; dynamic environment; formation control; global attractive force field; global potential fields; local potential fields; moving target tracking; obstacle avoiding controller; optimal positions; repulsive vector fields; robots motion; rotational vector fields; swarm intelligence; virtual nodes; Collision avoidance; Dynamics; Force; Robot kinematics; Target tracking; Formation control; artificial vector fields; collision avoidance; swarm intelligence (ID#: 15-6464)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175925&isnumber=7175890
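The abstract's combination of a global attractive field with repulsive/rotational vector fields can be illustrated with a minimal single-robot sketch. All gains, radii, and the toy scenario below are illustrative choices, not the authors' values or implementation:

```python
import math

def attractive_force(pos, goal, k_att=1.0):
    """Global attractive force pulling a robot toward its virtual node."""
    return (k_att * (goal[0] - pos[0]), k_att * (goal[1] - pos[1]))

def repulsive_rotational_force(pos, obstacle, influence=2.0, k_rep=1.0):
    """Repulsive field plus a tangential (rotational) component, active only
    inside the obstacle's influence radius, so the robot slides around the
    obstacle instead of stalling at a local minimum on the straight path."""
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d >= influence or d == 0.0:
        return (0.0, 0.0)
    mag = k_rep * (1.0 / d - 1.0 / influence) / (d * d)
    ux, uy = dx / d, dy / d        # radial unit vector (away from obstacle)
    tx, ty = -uy, ux               # tangential unit vector (rotated 90 deg)
    return (mag * (ux + tx), mag * (uy + ty))

def step(pos, goal, obstacles, speed=0.05):
    """Move a fixed distance along the combined force direction."""
    fx, fy = attractive_force(pos, goal)
    for ob in obstacles:
        rx, ry = repulsive_rotational_force(pos, ob)
        fx, fy = fx + rx, fy + ry
    norm = math.hypot(fx, fy)
    if norm < 1e-9:
        return pos
    return (pos[0] + speed * fx / norm, pos[1] + speed * fy / norm)

# One robot heading to its slot in a formation, with an obstacle on the way.
pos, goal, obstacles = (0.0, 0.0), (5.0, 5.0), [(2.5, 2.5)]
for _ in range(400):
    pos = step(pos, goal, obstacles)
print(pos)
```

The rotational term is what distinguishes this from a plain potential-field method: without it, the symmetric attractive/repulsive forces cancel on the straight line to the goal and the robot gets stuck.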

 

Kempa, W.M., “Study on Time-Dependent Departure Process in a Finite-Buffer Queueing Model with BMAP-Type Input Stream,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp. 245-250, 24-26 June 2015. doi:10.1109/CYBConf.2015.7175940
Abstract: The transient departure process of outgoing packets in a finite-buffer queueing model with a BMAP-type input stream and generally distributed processing times is investigated. Applying the paradigm of the embedded Markov chain and the total probability law, a system of integral equations is obtained for the distribution function of the number of packets successfully processed up to a fixed time t, conditioned on the initial level of buffer saturation and the state of the underlying Markov chain. The solution of the corresponding system, written for the mixed double transforms, is found in compact form by utilizing an approach based on linear and matrix algebra. Remarks on the numerical treatment of the analytical results and a computational example are attached as well.
Keywords: Markov processes; matrix algebra; probability; queueing theory; BMAP-type input stream; buffer saturation; distributed processing times; distribution function; embedded Markov chain; finite-buffer queueing model; linear algebra; matrix algebra; time-dependent departure process; total probability law; Integral equations; Markov processes; Mathematical model; Matrices; Probability distribution; Transforms; Transient analysis; BMAP-type arrival stream; departure process; finite buffer; queueing system; transient analysis (ID#: 15-6465)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175940&isnumber=7175890

 

Hadorn, B.; Courant, M.; Hirsbrunner, B., “Holistic Integration of Enactive Entities into Cyber Physical Systems,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp. 281-286, 24-26 June 2015. doi:10.1109/CYBConf.2015.7175947
Abstract: Cyber physical systems (CPSs) are built of physical components that are integrated into the cyber (virtual) world of computing. While there are many open questions and challenges, such as time modeling and the interaction between cyber and physical components, our research focuses on how humans can be holistically integrated. Our vision is to link human intelligence with CPSs in order to obtain a smart partner for daily human activities. This will bring new system characteristics, enabling the system to cope with self-awareness, cognition, and creativity, as well as the co-evolution of human-machine symbiosis. In this sense, we state that drawing borders between the virtual and the physical, or between users and technical artifacts, is misleading. In contrast, we aim to treat the system as a whole. To achieve this, the paper presents a generic coordination model based on third-order cybernetics. In particular, the holistic integration of humans and other living systems into CPSs is presented, which leads toward human-centered CPSs.
Keywords: human computer interaction; cyber physical systems; enactive entities; generic coordination model; holistic integration; human-centered CPS; living systems; third-order cybernetics; Collaboration; Complexity theory; Cybernetics; Electronic mail; Informatics; Joining processes; Organizations; Coordination model; cybernetics; enactive entities; holistic integration; human-centered cyber physical system (ID#: 15-6466)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175947&isnumber=7175890

 

Suchacka, G.; Sobkow, M., “Detection of Internet Robots Using a Bayesian Approach,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp.365-370, 24-26 June 2015. doi:10.1109/CYBConf.2015.7175961
Abstract: A large part of Web traffic on e-commerce sites is generated not by human users but by Internet robots: search engine crawlers, shopping bots, hacking bots, etc. In practice, not all robots, especially the malicious ones, disclose their identities to a Web server, and thus there is a need to develop methods for their detection and identification. This paper proposes the application of a Bayesian approach to robot detection based on characteristics of user sessions. The method is applied to the Web traffic from a real e-commerce site. Results show that the classification model based on cluster analysis with Ward's method and a weighted Euclidean metric is very effective in robot detection, achieving accuracy above 90%.
Keywords: Bayes methods; Internet; Web sites; electronic commerce; invasive software; pattern classification; pattern clustering; telecommunication traffic; Bayesian approach; Internet robots detection; Internet robots identification; Ward method; Web server; Web traffic; classification model; cluster analysis; e-commerce sites; hacking bots; malicious robots; search engine crawlers; shopping bots; user sessions characteristics; weighted Euclidean metric; Bayes methods; Correlation; Euclidean distance; Internet; Robots; Testing; Bayesian approach; Bayesian statistics; Internet robot; Matlab; Web bot; Web mining; Web robot detection; Web server; Web traffic; cluster analysis; correlation analysis; data mining; e-commerce; log file analysis (ID#: 15-6467)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175961&isnumber=7175890
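The Bayesian session-classification idea can be sketched with a tiny Gaussian naive Bayes classifier over per-session features. The features, toy training data, and equal-prior assumption below are all hypothetical; the paper's actual pipeline also involves Ward's-method clustering, which is not reproduced here:

```python
import math

# Toy training data: per-session features
# [requests_per_second, fraction_of_image_requests, robots_txt_requested (0/1)]
humans = [[0.2, 0.6, 0], [0.4, 0.5, 0], [0.3, 0.7, 0], [0.5, 0.55, 0]]
robots = [[5.0, 0.05, 1], [8.0, 0.0, 1], [6.5, 0.1, 0], [7.2, 0.02, 1]]

def gaussian_params(samples):
    """Per-feature mean and variance for a class (naive independence assumption)."""
    n = len(samples)
    means = [sum(col) / n for col in zip(*samples)]
    var = [max(sum((x - m) ** 2 for x in col) / n, 1e-3)   # variance floor
           for col, m in zip(zip(*samples), means)]
    return means, var

def log_likelihood(x, means, var):
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, means, var))

h_params = gaussian_params(humans)
r_params = gaussian_params(robots)

def classify(session):
    # Equal priors assumed; pick the class with the higher likelihood.
    lh = log_likelihood(session, *h_params)
    lr = log_likelihood(session, *r_params)
    return "robot" if lr > lh else "human"

print(classify([7.0, 0.03, 1]))   # robot
print(classify([0.35, 0.6, 0]))   # human
```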

 

Jianjia Pan; Xianwei Zheng; Lina Yang; Yulong Wang; Haoliang Yuan; Yuan Yan Tang, “A Forecasting Method Based on Extrema Mean Empirical Mode Decomposition and Wavelet Neural Network,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp. 377-381, 24-26 June 2015. doi:10.1109/CYBConf.2015.7175963
Abstract: Time series forecasting is a broad and important research area in signal processing and machine learning. With the development of artificial intelligence (AI), more and more AI technologies are used in time series forecasting, and multi-layer network structures have been widely used for forecasting problems. In this paper, based on a data-driven and adaptive method, extrema mean empirical mode decomposition, we propose a decomposition-forecasting-ensemble approach to time series forecasting. Experimental results show that the predictions of the proposed models are better than those of models based on the original signal and on standard EMD.
Keywords: forecasting theory; learning (artificial intelligence); signal processing; time series; wavelet neural nets; AI technology; EMD based model; adaptive method; artificial intelligence; data-driven; decomposition-forecasting-ensemble approach; extrema mean empirical mode decomposition; forecasting method; forecasting problem; machine learning; multilayer network structure; signal processing; time series forecasting; wavelet neural network; Empirical mode decomposition; Forecasting; Indexes; Market research; Neural networks; Predictive models; Time series analysis; empirical mode decomposition; forecasting; wavelet neural network (ID#: 15-6468)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175963&isnumber=7175890
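The decomposition-forecasting-ensemble pattern described in the abstract can be sketched with a deliberately crude stand-in decomposition: a moving-average "trend" plus a residual, each forecast separately and then summed. This is not the paper's extrema mean EMD or its wavelet neural network; the window size and forecasters are illustrative:

```python
import math

def moving_average(x, w):
    """Centered moving average as a crude 'slow' component."""
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) / len(x[max(0, i - half):i + half + 1])
            for i in range(len(x))]

def decompose(x, w=5):
    trend = moving_average(x, w)
    residual = [xi - ti for xi, ti in zip(x, trend)]
    return trend, residual

def forecast_trend(trend):
    # Linear extrapolation from the last two trend points.
    return trend[-1] + (trend[-1] - trend[-2])

def forecast_residual(residual):
    # Persistence forecast for the fast component.
    return residual[-1]

# Toy series: slow ramp plus oscillation; forecast the next point.
series = [math.sin(0.3 * i) + 0.05 * i for i in range(40)]
trend, resid = decompose(series)
prediction = forecast_trend(trend) + forecast_residual(resid)
actual = math.sin(0.3 * 40) + 0.05 * 40
print(round(prediction, 3), round(actual, 3))
```

Forecasting each component with a model suited to its time scale, then summing, is the "ensemble" step the abstract refers to.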

 

Czarnul, P.; Rosciszewski, P.; Matuszek, M.; Szymanski, J., “Simulation of Parallel Similarity Measure Computations for Large Data Sets,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp. 472-477, 24-26 June 2015. doi:10.1109/CYBConf.2015.7175980
Abstract: The paper presents our approach to implementing a similarity measure for big data analysis in a parallel environment. We describe the algorithm for parallelising the computations. We provide results from a real MPI application for computing similarity measures, as well as results achieved with our simulation software. The simulation environment allows us to model parallel systems of various sizes with various components, such as CPUs, GPUs, and network interconnects, and to model parallel applications in a meta language. The simulations allow us to determine in detail how the computations will be performed on particular hardware. They also allow us to predict the shapes of time curves beyond the region where empirical results can be obtained due to limited computational resources such as memory capacity.
Keywords: Big Data; data analysis; digital simulation; message passing; parallel processing; Big Data analysis; MPI application; parallel similarity measure; parallelisation algorithm; simulation software; Algorithm design and analysis; Big data; Clustering algorithms; Computational modeling; Data models; Hardware; big data analysis; distance based categorisation; simulation of parallelization. (ID#: 15-6469)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175980&isnumber=7175890
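A shared-memory analogue of the parallel similarity computation can be sketched by splitting the list of row pairs into chunks and handing each chunk to a worker, much as each MPI rank would own a block of pairs. The cosine measure, chunking scheme, and toy data are illustrative, not the paper's:

```python
from concurrent.futures import ThreadPoolExecutor
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def similarity_matrix(rows, workers=4):
    """Compute all pairwise similarities, pair chunks processed concurrently."""
    n = len(rows)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    chunks = [pairs[k::workers] for k in range(workers)]   # round-robin split

    def work(chunk):
        return [(i, j, cosine(rows[i], rows[j])) for i, j in chunk]

    sim = [[1.0] * n for _ in range(n)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        for part in ex.map(work, chunks):
            for i, j, s in part:
                sim[i][j] = sim[j][i] = s
    return sim

data = [[1, 0, 1], [1, 0, 1], [0, 1, 0], [1, 1, 0]]
sim = similarity_matrix(data)
print(round(sim[0][1], 3))   # identical rows
print(round(sim[0][2], 3))   # disjoint rows
```

In a real MPI setting the chunks would be distributed across processes and the partial results gathered, which is exactly the communication cost the paper's simulator models.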

 

Kasprzak, W.; Stefanczyk, M.; Wilkowski, A., “Printed Steganography Applied for the Authentication of Identity Photos in Face Verification,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp.512-517, 24-26 June 2015. doi:10.1109/CYBConf.2015.7175987
Abstract: Steganography methods are proposed for the authentication of the holder's photo in an ICAO-consistent (travel) document. The embedded message is heavily influenced by the print-scan process, as the electronic image is first printed to be included in the document (or identity card) and is then scanned to constitute the reference template in an automatic face verification procedure. Two sufficiently robust steganography methods are designed, modifications of the "Fujitsu method" and the "triangle net" method. A third method, a commercial Digimarc tool, is also applied. The methods are tested w.r.t. face-image authentication ability in a face verification procedure, using two commercial biometric SDKs. Test results demonstrate the feasibility of the proposed approach in biometric verification and its high authentication quality.
Keywords: biometrics (access control); face recognition; steganography; Digimarc tool; Fujitsu method; ICAO-consistent travel document; biometric SDK-s; biometric verification; electronic image; face image authentication; face verification; identity photo authentication; print-scan process; printed steganography; triangle net method; Authentication; Biomedical imaging; Correlation; Distortion; Face; Testing; Watermarking; face biometrics; image authentication; printed steganography. (ID#: 15-6470)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175987&isnumber=7175890
 



International Conferences: Chinese Control and Decision Conference (CCDC), Qingdao, China, 2015


The 27th Chinese Control and Decision Conference (CCDC) was held in Qingdao, China on 23-25 May 2015. This is a very large conference focused on trends in control, decision, automation, robotics, and emerging technologies. More than 1200 papers were selected for presentation. The ones cited here are relevant to the Science of Security. They have implications for cyber-physical systems, resilience, and compositionality.


Lin Pan; Voos, H.; Yumei Li; Darouach, M.; Shujun Hu, “Uncertainty Quantification of Exponential Synchronization for a Novel Class of Complex Dynamical Networks with Hybrid TVD Using PIPC,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 125-130, 23-25 May 2015. doi:10.1109/CCDC.2015.7161678
Abstract: This paper investigates the Uncertainty Quantification (UQ) of Exponential Synchronization (ES) problems for a new class of Complex Dynamical Networks (CDNs) with hybrid Time-Varying Delay (TVD) and Non-Time-Varying Delay (NTVD) nodes by using coupled Periodically Intermittent Pinning Control (PIPC), which has three switched intervals in every period. Based on Kronecker product rules, Lyapunov Stability Theory (LST), the Cumulative Distribution Function (CDF), and the PIPC method, the robustness of the control algorithm with respect to the value of the final time is studied. Moreover, we assume a normal distribution for the time and use the Stochastic Collocation (SC) method [1] with different numbers of nodes and collocation points to quantify the sensitivity. For different numbers of nodes, the results show that the ES errors converge to zero with high probability. Finally, to verify the effectiveness of our theoretical results, a Nearest-Neighbor Network (NNN) and a Barabási-Albert Network (BAN) consisting of coupled non-delayed and delayed Chen oscillators are studied to demonstrate that the accuracy of the ES and PIPC is robust to variations of time.
Keywords: Lyapunov methods; complex networks; convergence; delays; large-scale systems; normal distribution; periodic control; robust control; stochastic processes; switching systems (control); synchronisation; BAN; Barabási-Albert Network; CDF; CDN; Kronecker product rule; LST; Lyapunov stability theory; NNN; NTVD node; PIPC method; collocation points; complex dynamical network; control algorithm robustness; cumulative distribution function; delay Chen oscillator; error convergence; exponential synchronization problem; hybrid TVD; hybrid time-varying delay; nearest-neighbor network; nondelayed Chen oscillator; nontime-varying delay; normal distribution; periodically intermittent pinning control; probability; sensitivity quantification; stochastic collocation method; switched interval; time variation; uncertainty quantification; Artificial neural networks; Chaos; Couplings; Delays; Switches; Synchronization; Complex Dynamical Networks (CDNs); Exponential Synchronization (ES); Periodically Intermittent Pinning Control (PIPC);Time-varying Delay (TVD) (ID#: 15-7148)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161678&isnumber=7161655

 

Bin Liu; Feng Liu; Shengwei Mei, “Modeling and Analysis of Stochastic AC-OPF Based on SDP Relaxation Technique,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 5471-5475, 23-25 May 2015. doi:10.1109/CCDC.2015.7161772
Abstract: Optimal power flow (OPF) is the foundation for many power system optimization problems, and its modeling and solution methodology have always been a hot topic in this research area. Recently, convex relaxation techniques for solving AC constrained OPF (AC-OPF) have attracted wide attention for their ability to find global optima with polynomial-time computational complexity. However, existing models in this research area are mostly formulated as deterministic problems, without considering the wind power generation uncertainty that has brought great challenges to power system operation, especially scheduling. Based on the semidefinite programming (SDP) relaxation technique for solving the AC-OPF problem, we build a stochastic AC-OPF model and propose its solution methodology to cope with wind power generation uncertainty. A case study based on a modified IEEE 14-bus system shows the proposed method's rationality and effectiveness in improving the system's security, reliability, and capability to integrate wind power generation.
Keywords: IEEE standards; computational complexity; convex programming; load flow; power generation reliability; power generation scheduling; power system security; stochastic programming; wind power; AC constrained OPF; IEEE 14 bus system security improvement; SDP relaxation technique; convex relaxation technique; optimal power flow; polynomial-time computation complexity; power generation scheduling; power generation uncertainty; power system optimization problem; power system reliability; semidefinite relaxation technique; stochastic AC-OPF analysis; wind power generation uncertainty; Generators; Load flow; Reactive power; Stochastic processes; Uncertainty; Wind power generation; AC constrained; OPF; SDP optimization; stochastic optimization; uncertainty (ID#: 15-7149)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161772&isnumber=7161655

 

Yumei Li; Voos, H.; Lin Pan; Darouach, M.; Changchun Hua, “Stochastic Cyber-Attacks Estimation for Nonlinear Control Systems Based on Robust H∞ Filtering Technique,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 5590-5595, 23-25 May 2015. doi:10.1109/CCDC.2015.7161795
Abstract: Based on a robust H∞ filtering technique, this paper presents the cyber-attack estimation problem for nonlinear control systems under stochastic cyber-attacks and disturbances. A nonlinear H∞ filter that maximizes sensitivity to the cyber-attacks and minimizes the effect of the disturbances is designed. The nonlinear filter is required to be robust to the disturbances, while the residual must retain as much sensitivity to the attacks as possible. Applying linear matrix inequalities (LMIs), sufficient conditions guaranteeing the H∞ filtering performance are obtained. Simulation results demonstrate that the designed nonlinear filter efficiently solves the robust estimation problem for stochastic cyber-attacks.
Keywords: H∞ filters; estimation theory; linear matrix inequalities; nonlinear control systems; nonlinear filters; robust control; security of data; stochastic processes; LMI; linear matrix inequality; nonlinear control system; nonlinear filter design; robust H∞ filtering technique; stochastic cyber-attack estimation; Actuators; Estimation; Noise; Robustness; Sensitivity; Stochastic processes; H∞ filter; stochastic cyber-attacks; stochastic nonlinear system (ID#: 15-7150)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161795&isnumber=7161655

 

Guibin Lei; Shuqing Wang; Wenfang Wang; Canping Li, “Robot Monitoring System of Ocean Remote Sensing Satellite Receiving Station,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 5757-5761, 23-25 May 2015. doi:10.1109/CCDC.2015.7161832
Abstract: Security is a basic requirement of any system and a core technology of remote-controlled systems. The robot monitoring system of an ocean remote sensing satellite receiving station includes a robot, a cloud computing system, and remote terminals. The robot acquires real-time images of the controlled system and operates it; the cloud computing system builds a visual decision subsystem to identify targets, using a wavelet transform algorithm, a neural network algorithm, and a knowledge database of video features of the specific environment; and through a remote terminal, the administrator observes the controlled system via its scene simulator and controls the robot to operate it remotely. By using pseudo-random number passwords, mutual authentication to prevent site cloning, conversion between the image of the controlled system and its status code, and conversion between operation codes and operation instructions, the security strength of the robot monitoring system is greatly improved.
Keywords: cloud computing; computerised monitoring; control engineering computing; geophysical image processing; neural nets; oceanographic techniques; remote sensing; robot vision; security of data; telerobotics; wavelet transforms; cloud computing system; knowledge database; mutual authentication; neural network algorithm; ocean remote sensing satellite receiving station; pseudorandom number password; real-time image; remote controlled system; robot monitoring system; security; visual decision subsystem; wavelet transform algorithm; Control systems; Feature extraction; Monitoring; Remote sensing; Robot sensing systems; Satellites; monitoring system; robot; security strength; visual decision subsystem (ID#: 15-7151)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161832&isnumber=7161655

 

Shao-Ting Ge; Zhimin Liu; Aiying Mao; Lijuan Kang; Chunhua He, “Mathematical Model of Discrete Logic Bomb with Time-Delay in the Computer Networks,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 705-710, 23-25 May 2015. doi:10.1109/CCDC.2015.7162011
Abstract: In order to describe the dynamic characteristics of logic bomb viruses in computer networks, a mathematical model of discrete logic bomb viruses is established. First, the disease-free equilibrium and the endemic equilibrium are derived from the mathematical model. Then the asymptotic stability of the disease-free equilibrium is proved, and asymptotic stability conditions for the endemic equilibrium are given using the disc theorem. The stability conditions are shown to be effective.
Keywords: computer network security; computer viruses; delays; asymptotically stable conditions; computer networks; disc theorem; discrete logic bomb viruses; disease equilibrium; disease-free equilibrium; mathematical model; time-delay; Analytical models; Asymptotic stability; Computational modeling; Computers; Diseases; Mathematical model; Weapons; discrete systems; logic bomb virus; stability (ID#: 15-7152)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7162011&isnumber=7161655
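The two equilibria in the abstract can be illustrated with a generic discrete-time SIS-style epidemic iteration (a standard textbook model, not the paper's exact time-delay formulation; the rates below are illustrative). With infection rate beta and cure rate gamma, the virus dies out when beta/gamma < 1 and settles at an endemic level I* = 1 - gamma/beta when beta/gamma > 1:

```python
def simulate(beta, gamma, s0=0.99, i0=0.01, steps=200):
    """Discrete-time SIS-style iteration over susceptible (s) and
    infected (i) fractions of network hosts, with s + i == 1."""
    s, i = s0, i0
    for _ in range(steps):
        new_inf = beta * s * i   # new infections this step
        cured = gamma * i        # hosts cleaned this step
        s = s - new_inf + cured
        i = i + new_inf - cured
    return s, i

# beta/gamma = 1.5 > 1: converges to the endemic equilibrium i* = 1/3.
s, i = simulate(beta=0.3, gamma=0.2)
print(round(i, 3))

# beta/gamma = 0.5 < 1: converges to the virus-free equilibrium i* = 0.
s2, i2 = simulate(beta=0.1, gamma=0.2)
print(round(i2, 4))
```

Checking which equilibrium the iteration converges to, for rates on either side of the threshold, is the numerical counterpart of the paper's stability conditions.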

 

Jianzhi Liu; Cailian Chen; Shichao Mi; Xinping Guan, “Secure Distributed Estimation of Radio Environment Map in Hierarchical Wireless Cognitive Radio Networks,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 1476-1481, 23-25 May 2015. doi:10.1109/CCDC.2015.7162152
Abstract: A Radio Environment Map (REM) is a map that indicates the radio signal strength (RSS) over a geographical region. With the help of an REM, Cognitive Radio (CR) users can opportunistically access the licensed spectrum. Distributed cooperative REM estimation is vulnerable to malicious sensors that submit false sensing reports. In this paper, we develop a secure distributed scheme to estimate the REM in hierarchical wireless CR networks. We formulate the estimation process as an LS problem with two l1-norm constraints using the basis pursuit approach. Reputation factors are introduced to further improve the estimation accuracy. Our scheme enables joint valid estimation and malicious sensor identification. The performance of the proposed scheme is confirmed by extensive simulation studies.
Keywords: cognitive radio; signal processing; telecommunication security; CR users; REM; RSS; distributed cooperative REM estimation; estimation accuracy; geographical region; hierarchical wireless CR networks; hierarchical wireless cognitive radio networks; joint valid estimation; licensed spectrum; malicious sensor identification; malicious sensors; radio environment map; radio signal strength; reputation factors; secure distributed estimation; Conferences; Basis pursuit; Cognitive radio; Secure distributed estimation (ID#: 15-7153)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7162152&isnumber=7161655
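The interplay of reputation factors and malicious-sensor identification can be sketched with a much simpler robust estimator than the paper's basis-pursuit formulation: iteratively reweight each sensor by how far its report lies from the current consensus, then flag low-reputation sensors. The weighting rule, threshold, and toy reports are all illustrative assumptions:

```python
def robust_rem_estimate(reports, rounds=5, threshold=0.3):
    """Iteratively reweighted estimate of the true RSS at one map point.
    Sensors far from the consensus lose reputation; the estimate is a
    reputation-weighted mean, and low-reputation sensors are flagged."""
    rep = {s: 1.0 for s in reports}
    est = sum(reports.values()) / len(reports)   # start from the plain mean
    for _ in range(rounds):
        for s, r in reports.items():
            rep[s] = 1.0 / (1.0 + abs(r - est))  # penalize large residuals
        total = sum(rep.values())
        est = sum(rep[s] * reports[s] for s in reports) / total
    malicious = [s for s in reports if rep[s] < threshold]
    return est, malicious

# Honest sensors report near -60 dBm; one compromised sensor lies.
reports = {"s1": -60.2, "s2": -59.8, "s3": -60.5, "s4": -30.0}
est, bad = robust_rem_estimate(reports)
print(round(est, 1), bad)
```

The plain mean is pulled to about -52.6 dBm by the liar; the reweighted estimate recovers the honest consensus while identifying the outlier.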

 

Linbo Tao; Jianjing Shen; Peng Hu; Zhenyu Zhou, “Researches on Process Algebra Based Rootkits-Immune Mechanism,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 2730-2735, 23-25 May 2015. doi:10.1109/CCDC.2015.7162393
Abstract: We present a novel mechanism for detecting unknown rootkits and immunizing against known rootkits, for the purpose of protecting the computer from rootkit infection. Inspired by the human immune system, our mechanism adopts the humoral immunity mechanism to detect and defend against tough rootkits. First, the features of processes are analyzed, known rootkit features are extracted, and process algebra is applied to formally represent objects such as self-antigens, pathogens, and antibodies. Then, known rootkits are used in training to generate relevant antibodies that can recognize non-self antigens. Meanwhile, the rejection reaction of humoral immunity is used to detect unknown rootkits and generate specific antibodies. Finally, both known and unknown rootkits can be killed once detected. Based on this mechanism, a prototype system is implemented, and experimental results indicate that the mechanism achieves a higher detection ratio and a lower false-positive ratio.
Keywords: computer viruses; feature extraction; process algebra; antibody; detection ratio; human being; humoral immunity mechanism; lower false ratio; pathogene; process algebra based rootkits-immune mechanism; prototype system; rejection reaction; rootkit feature extraction; self-antigens; tough rootkit; Algebra; Feature extraction; Generators; Immune system; Monitoring; Real-time systems; Viruses (medical); Kernel Security; Process Algebra; Rootkit-immune; Rootkits (ID#: 15-7154)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7162393&isnumber=7161655

 

Yi Lu; Qiang Yang; Wenyuan Xu; Zhiyun Lin; Wenjun Yan, “Cyber Security Assessment in PMU-Based State Estimation of Smart Electric Transmission Networks,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 3302-3307, 23-25 May 2015. doi:10.1109/CCDC.2015.7162490
Abstract: The adoption of massive numbers of synchronized phasor measurement units (PMUs) supporting the wide-area measurement system (WAMS) in current electric transmission networks brings direct benefits in the provision of accurate and timely network measurements, but also exposes a set of outstanding technical challenges in the security aspect. This paper looks into the security problem of state estimation in WAMS in the context of cyber-physical systems (CPSs), which often exhibit complex structural characteristics and dynamic operational phenomena. Typical attacks on PMUs and their adverse impact on network state estimation are explored and studied by carrying out a set of simulation experiments using the IEEE 14-bus transmission network model. The preliminary numerical results quantify the impact of PMU measurement data tampering on state estimation accuracy and confirm that PMU-based state estimation can potentially be significantly affected by various forms of cyber attacks.
Keywords: phasor measurement; power system security; power system state estimation; power transmission; CPS; IEEE 14-bus transmission network; PMU-based state estimation; WAMS; cyber security assessment; cyber-physical system; network state estimation; security problem; smart electric transmission networks; synchronized phasor measurement units; wide area measurement system; Current measurement; Monitoring; Phasor measurement units; State estimation; Transmission line measurements; Voltage measurement; PMU; Smart transmission network; State Estimation; WAMS (ID#: 15-7155)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7162490&isnumber=7161655

 

Fangyuan Hou; Zhonghua Pang; Yuguo Zhou; Dehui Sun, “False Data Injection Attacks for a Class of Output Tracking Control Systems,” in Control and Decision Conference (CCDC), 2015 27th Chinese,  vol., no., pp.3319-3323, 23-25 May 2015. doi:10.1109/CCDC.2015.7162493
Abstract: With the development of cyber-physical systems (CPSs), security becomes an important and challenging problem. Attackers can launch various attacks to destroy control system performance. In this paper, a class of linear discrete-time time-invariant control systems is considered, which is open-loop critically stable and has only one critical eigenvalue. By including the output tracking error as an additional state, a Kalman filter-based augmented state feedback control strategy is designed to solve the output tracking problem. Then a stealthy false data attack is injected into the measurement output, which can completely destroy the output tracking performance without being detected. Simulation results on a numerical example show that the proposed false data injection attack is effective.
Keywords: discrete time systems; linear systems; open loop systems; stability; state feedback; CPS development; Kalman filter-based augmented state feedback control strategy; control system performance; cyber-physical systems; eigenvalue; false data injection attacks; linear discrete-time time-invariant control system; open-loop stability; output tracking control systems; Computer security; Detectors; Kalman filters; Simulation; State feedback; Wireless sensor networks; Critically Stable; False Data Injection Attacks; Output Tracking Control (ID#: 15-7156)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7162493&isnumber=7161655
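The core stealthiness argument, that a false measurement can bias the estimate while the detection residual stays quiet, can be illustrated with a far simpler scalar observer than the paper's Kalman-filter design (the plant, gain, and attack value below are hypothetical):

```python
def run(steps=100, attack=0.0):
    """Critically stable scalar plant x+ = x, observed through a sensor and
    tracked by an observer with gain L. A constant sensor bias is silently
    absorbed: the residual dies out while the state estimate is offset."""
    x, xhat, L = 1.0, 0.0, 0.5
    residual = estimate_error = 0.0
    for _ in range(steps):
        y = x + attack               # (possibly falsified) measurement
        residual = y - xhat          # what a residual-based detector sees
        xhat = xhat + L * residual   # observer update
        estimate_error = xhat - x
    return abs(residual), estimate_error

r_clean, e_clean = run()
r_attacked, e_attacked = run(attack=2.0)
print(round(r_clean, 6), round(e_clean, 6))       # residual ~0, error ~0
print(round(r_attacked, 6), round(e_attacked, 6)) # residual ~0, error ~2
```

Both runs drive the residual to zero, so a detector thresholding the residual sees nothing, yet the attacked run leaves the estimate off by exactly the injected bias.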

 

Xuan Li; Qiaozhu Zhai; Wei Yuan; Jiebing Liu, “Improved Method of Quantitative Steady-State Security Assessment Based on Fast Elimination of Redundant Transmission Capacity Constraints,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 4242-4246, 23-25 May 2015. doi:10.1109/CCDC.2015.7162675
Abstract: Steady-state security analysis is of great importance to power systems. The steady-state security region (SSR) is a region-wise method that can improve the efficiency of steady-state security analysis. Based on the SSR, the steady-state security distance (SSD) was proposed in the literature; it provides a quantitative tool for security assessment at the current operation point (OP) or operational state. However, a large-scale optimization problem with many constraints must be solved when calculating the SSD. In this paper, an improved method for calculating the SSD is presented, based on fast elimination of redundant transmission capacity constraints. The main idea is to use an analytic method, instead of solving an optimization problem, to obtain an overestimate of the maximal power flow on each transmission line, and then compare the result with the line capacity to identify whether the constraint is redundant. By using this method, the problem of calculating the SSD is greatly simplified. Numerical tests are performed and the results are satisfactory.
Keywords: linear programming; power system security; fast elimination; maximal power flow; quantitative steady-state security assessment; redundant transmission capacity constraints; steady-state security distance; Generators; Load flow; Optimization; Power transmission lines; Security; Steady-state; Linear Programming; Redundant Constraints; Steady-State Security Distance; Steady-State Security Region (ID#: 15-7157)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7162675&isnumber=7161655
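The screening idea, bounding each line flow analytically and comparing the bound with the line limit, can be sketched in a DC power flow setting using PTDF sensitivities (the PTDF matrix, injection bounds, and capacities below are made-up toy numbers, and this is a generic illustration rather than the paper's exact bound):

```python
def redundant_line_constraints(ptdf, p_min, p_max, capacity):
    """Screen line-capacity constraints: push each bus injection to whichever
    bound worsens the flow to get an analytic worst-case flow per line.
    If even the worst case cannot exceed the limit, the constraint is redundant."""
    redundant = []
    for l, row in enumerate(ptdf):
        upper = sum(c * (p_max[i] if c > 0 else p_min[i]) for i, c in enumerate(row))
        lower = sum(c * (p_min[i] if c > 0 else p_max[i]) for i, c in enumerate(row))
        bound = max(abs(upper), abs(lower))
        if bound <= capacity[l]:
            redundant.append(l)
    return redundant

# Toy 2-line, 3-bus example: PTDF rows map bus injections to line flows.
ptdf = [[0.6, -0.2, 0.1],
        [0.3, 0.5, -0.4]]
p_min = [-1.0, 0.0, -0.5]
p_max = [2.0, 1.5, 0.5]
capacity = [5.0, 1.0]
print(redundant_line_constraints(ptdf, p_min, p_max, capacity))  # [0]
```

Line 0's worst-case flow (1.25) is far below its 5.0 limit, so its constraint can be dropped from the SSD optimization; line 1's bound (1.55) exceeds its 1.0 limit, so its constraint must be kept.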

 

Xiaoxia Wang; Naxin Cui; Hai Huang; Chenghui Zhang, “Vehicle Active Security Based on Driver Modeling,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 4984-4987, 23-25 May 2015. doi:10.1109/CCDC.2015.7162816
Abstract: Vehicle passive safety technology can only mitigate the consequences of traffic accidents; active safety technology, which can prevent and reduce accidents, has far more wide-reaching applications. In this paper, an Elman neural network is adopted to predict the driver's behavior ahead of time, and a "people-oriented" driver-vehicle-road closed-loop model is set up. The system records the driver's habits and warns in time when the driver's behavior deviates from the forecast trajectory beyond a certain extent. Real-time simulation is carried out, based on a 3D urban road acquired with GPS equipment. The results indicate that the Elman algorithm can be used to establish a warning system for improper driver operation and to provide reliable, valuable information for safe driving.
Keywords: Global Positioning System; alarm systems; computer graphics; driver information systems; neural nets; road accidents; road safety; road vehicles; trajectory control; 3D urban road; Elman algorithm; Elman neural network; GPS equipment; driver behavior prediction; driver modeling; far-reaching applications; forecasted trajectory; people oriented driver-vehicle-road closed loop model; real time simulation; traffic accidents; vehicle active safety technology; vehicle passive safety technology; Accidents; Real-time systems; Roads; Safety; Security; Three-dimensional displays; Vehicles; 3D Urban Road; Elman Network; Vehicle Active Security (ID#: 15-7158)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7162816&isnumber=7161655
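The abstract describes an Elman (simple recurrent) network forecasting driver behaviour, with a warning raised when observed behaviour deviates too far from the forecast. A minimal sketch of both pieces follows; the weights and the deviation threshold are purely illustrative assumptions, not the paper's trained parameters:

```python
import math

def elman_step(x, h_prev, W_in, W_rec, W_out):
    """One forward step of a simple Elman (recurrent) network: the previous
    hidden state feeds back in through a context layer."""
    h = [math.tanh(sum(wi * xi for wi, xi in zip(W_in[j], x)) +
                   sum(wr * hc for wr, hc in zip(W_rec[j], h_prev)))
         for j in range(len(W_in))]
    y = sum(wo * hj for wo, hj in zip(W_out, h))  # predicted next driver input
    return y, h

def deviation_warning(predicted, observed, threshold=0.5):
    """Warn when the driver's observed behaviour deviates from the
    forecast trajectory beyond a tolerance."""
    return abs(predicted - observed) > threshold
```

With trained weights, `elman_step` would be called once per time step over the recorded driving signal, and `deviation_warning` compared each prediction against the next observation.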

 

Wang Junwei; Fang Xiaoyi, “Improved TEEN Based Trust Routing Algorithm in WSNs,” in Control and Decision Conference (CCDC), 2015 27th Chinese, vol., no., pp. 4379-4382, 23-25 May 2015. doi:10.1109/CCDC.2015.7162699
Abstract: Wireless sensor network nodes deployed in harsh environments are easily captured or damaged physically, and their wireless communication pattern leaves the network vulnerable to all kinds of interference and attacks. Routing security is therefore particularly important. Based on an in-depth analysis of the TEEN wireless sensor network protocol, combined with a trustworthy evaluation mechanism, an improved TEEN-based trust routing algorithm for wireless sensor networks is put forward. A dynamic trust management mode is designed to ensure the credibility of nodes. The cluster head selection strategy and routing strategy of TEEN are improved to ensure the energy efficiency of the network, and a periodic data collection mechanism is introduced to determine the survival state of nodes. Simulation and performance evaluation show that the proposed algorithm performs better.
Keywords: radiofrequency interference; routing protocols; telecommunication network management; telecommunication security; wireless sensor networks; TEEN protocol; WSN; attacks; dynamic trust management mode; harsh environment; head selection strategy; improved TEEN; interference; routing security; routing strategy; trust routing algorithm; trustworthy evaluation mechanism; wireless communication pattern; wireless sensor network node; wireless sensor networks protocol; Algorithm design and analysis; Clustering algorithms; Heuristic algorithms; Routing; Routing protocols; Wireless sensor networks; TEEN protocol; dynamic trust management; energy efficiency; trustworthy routing (ID#: 15-7159)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7162699&isnumber=7161655
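The dynamic trust management and trust-aware cluster-head selection described above can be sketched as follows; the exponential-smoothing update and the trust/energy weights are illustrative assumptions, not the formulas from the paper:

```python
def update_trust(trust, success, alpha=0.1):
    """Dynamic trust update: reward observed successful forwarding,
    penalise failures (simple exponential smoothing)."""
    observation = 1.0 if success else 0.0
    return (1 - alpha) * trust + alpha * observation

def select_cluster_head(nodes, w_trust=0.6, w_energy=0.4):
    """Pick the node maximising a weighted combination of trust and
    residual energy, so untrusted or depleted nodes are avoided."""
    return max(nodes, key=lambda n: w_trust * n["trust"] + w_energy * n["energy"])
```

Periodic data collection would then mark a node dead (and exclude it from selection) once it misses several consecutive reporting rounds.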
 




International Conferences: CyberSA 2015, London

 

 


The 2015 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA) was held in London on 8-9 June 2015. Papers presented at the conference focused on the principles, methods, and applications of situational awareness in Cyber Systems, Business Information Systems (BIS), Computer Network Defence (CND), Cyber-Physical Systems (CPS) and the Internet of Things (IoT).


Hall, M.J.; Hansen, D.D.; Jones, K., “Cross-Domain Situational Awareness and Collaborative Working for Cyber Security,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-8, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166110
Abstract: Enhancing situational awareness is a major goal for organisations spanning many sectors, working across many domains. An increased awareness of the state of environments enables improved decision-making. Endsley's model of situational awareness has improved the understanding for the design of decision-support systems. This paper presents and discusses a theoretical model that extends this to cross-domain working, to influence the design of future collaborative systems. A use-case of this model for cross-domain working between an operational domain and a cyber security domain is discussed within a military context.
Keywords: decision making; decision support systems; groupware; security of data; collaborative working; cross-domain situational awareness; cyber security-domain; future collaborative systems; improved decision-making; operational-domain; Aerodynamics; Collaboration; Context; Decision making; Feeds; Malware; Collaboration; Cross Domain; Cyber Security; Situational Awareness (ID#: 15-6471)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166110&isnumber=7166109

 

Neogy, S., “Security Management in Wireless Sensor Networks,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-4, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166112
Abstract: This paper aims to describe the characteristics of Wireless Sensor Networks (WSNs) and the challenges in designing a resource-constrained and vulnerable network, and addresses security management as the main issue. The work begins with a discussion of the attacks on WSNs. As part of protection against the attacks faced by WSNs, key management, the primary requirement of any security practice, is detailed. This paper also deals with the existing security schemes covering various routing protocols, and touches on security issues concerning heterogeneous networks.
Keywords: routing protocols; telecommunication security; wireless sensor networks; WSN; heterogeneous networks; security management schemes; Cryptography; Receivers; Routing; Routing protocols; Wireless sensor networks; attack; cryptography; key management; protocol; routing; security; wireless sensor network (ID#: 15-6472)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166112&isnumber=7166109
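The abstract names key management as the primary requirement of any WSN security practice. One well-known approach in this space (an illustration, not necessarily the scheme this paper surveys) is random key predistribution: each node is loaded with a random key ring from a shared pool before deployment, and neighbours can secure a link if their rings intersect:

```python
import random

def predistribute(pool_size, ring_size, n_nodes, seed=42):
    """Random key predistribution: each node receives a random subset
    (key ring) drawn from a global key pool before deployment."""
    rng = random.Random(seed)
    pool = list(range(pool_size))
    return [set(rng.sample(pool, ring_size)) for _ in range(n_nodes)]

def shared_key(ring_a, ring_b):
    """Two neighbours can establish a secure link iff their rings
    intersect; return one shared key identifier, or None."""
    common = ring_a & ring_b
    return min(common) if common else None
```

Pool and ring sizes trade memory per node against the probability that two neighbours share a key.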

 

Rickus, A.; Pfluegel, E.; Atkins, N., “Chaos-Based Image Encryption Using an AONT Mode of Operation,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-5, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166113
Abstract: Chaos-based cryptography is a promising and emerging field that offers a large variety of techniques particularly suitable for applications such as image encryption. The fundamental characteristics of chaotic systems are closely related to the properties of a strong cryptosystem. Most research on chaos-based encryption does not concentrate on the aspect of encryption modes of operation. This paper introduces a new chaos-based image encryption scheme using an all-or-nothing transform (AONT) mode of operation. This results in a novel non-separable chaos-based mode which we have implemented and evaluated. Our results show that the AONT mode achieves a security gain with little overhead on the overall efficiency of the encryption.
Keywords: chaos; cryptography; image processing; transforms; AONT mode of operation; all-or-nothing transform mode of operation; chaos-based cryptography; chaos-based image encryption; nonseparable chaos-based mode; Chaotic communication; Ciphers; Encryption; Logistics; AONT encryption mode of operation; Baker map; Chaos-based cryptography; Logistic map (ID#: 15-6473)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166113&isnumber=7166109
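A toy sketch of the two ingredients combined in the paper: a logistic-map chaotic keystream and an all-or-nothing packaging step. This simplified transform (keystream seeded by a one-byte key, key hidden behind a digest of the ciphertext) illustrates the AONT idea only; it is not the authors' scheme and is not secure:

```python
import hashlib

def logistic_keystream(key_byte, n):
    """Chaotic keystream: iterate the logistic map x -> r*x*(1-x),
    seeded from a one-byte key, quantising each iterate to a byte."""
    x, r = (key_byte + 1) / 257.0, 3.99
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_bytes(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

def aont_package(data, key_byte):
    """Toy all-or-nothing packaging: encrypt with the chaotic keystream,
    then hide the key behind a digest of the whole ciphertext, so no
    block decrypts until every output byte is available."""
    ct = xor_bytes(data, logistic_keystream(key_byte, len(data)))
    tail = key_byte ^ hashlib.sha256(ct).digest()[0]
    return ct, tail

def aont_unpackage(ct, tail):
    key_byte = tail ^ hashlib.sha256(ct).digest()[0]
    return xor_bytes(ct, logistic_keystream(key_byte, len(ct)))
```

The all-or-nothing property comes from the tail: losing or corrupting any ciphertext byte changes the digest and makes the key, and hence every block, unrecoverable.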

 

Enache, A.-C.; Ionita, M.; Sgarciu, V., “An Immune Intelligent Approach for Security Assurance,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-5, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166116
Abstract: Information Security Assurance implies ensuring the integrity, confidentiality and availability of critical assets for an organization. The large number of events to monitor in a system that is fluid in terms of topology and variety of new hardware or software overwhelms monitoring controls. Furthermore, the multi-faceted nature of today's cyber threats makes them difficult even for security experts to handle and keep up with. Hence, automatic “intelligent” tools are needed to address these issues. In this paper, we describe a ‘work in progress’ contribution on an intelligence-based approach to mitigating security threats. The main contribution of this work is an anomaly-based IDS model with active response that combines artificial immune systems and swarm intelligence with the SVM classifier. Test results for the NSL-KDD dataset show the proposed approach can outperform the standard classifier in terms of attack detection rate and false alarm rate, while reducing the number of features in the dataset.
Keywords: artificial immune systems; pattern classification; security of data; support vector machines; NSL-KDD dataset; SVM classifier; anomaly based IDS model; artificial immune system; asset availability; asset confidentiality; asset integrity; attack detection rate; cyber threats; false alarm rate; immune intelligent approach; information security assurance; intrusion detection system; security threats mitigation; support vector machines; swarm intelligence; Feature extraction; Immune system; Intrusion detection; Particle swarm optimization; Silicon; Support vector machines; Binary Bat Algorithm; Dendritic Cell Algorithm; IDS; SVM (ID#: 15-6474)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166116&isnumber=7166109
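The wrapper-style pipeline in the abstract, a swarm optimiser searching binary feature masks that are scored by a classifier, can be sketched as below. To stay dependency-free, a nearest-centroid classifier stands in for the SVM and random search stands in for the Binary Bat Algorithm; the per-feature fitness penalty is an assumption:

```python
import random

def centroid_classify(train, labels, sample, mask):
    """Nearest-centroid stand-in for the SVM: classify using only the
    features selected by the binary mask."""
    feats = [i for i, m in enumerate(mask) if m]
    cents = {}
    for lab in set(labels):
        rows = [x for x, l in zip(train, labels) if l == lab]
        cents[lab] = [sum(r[i] for r in rows) / len(rows) for i in feats]
    return min(cents, key=lambda lab: sum((sample[i] - c) ** 2
                                          for i, c in zip(feats, cents[lab])))

def fitness(mask, train, labels):
    """Training-set accuracy of the masked classifier, minus a small
    penalty per selected feature (fewer features preferred)."""
    if not any(mask):
        return 0.0
    correct = sum(centroid_classify(train, labels, x, mask) == l
                  for x, l in zip(train, labels))
    return correct / len(train) - 0.01 * sum(mask)

def search_mask(train, labels, n_feats, iters=50, seed=1):
    """Random-search stand-in for the swarm (Binary Bat) optimiser."""
    rng = random.Random(seed)
    best = [1] * n_feats
    best_f = fitness(best, train, labels)
    for _ in range(iters):
        cand = [rng.randint(0, 1) for _ in range(n_feats)]
        f = fitness(cand, train, labels)
        if f > best_f:
            best, best_f = cand, f
    return best, best_f
```

On the toy data below, the search prefers the single discriminative feature over using both, mirroring the paper's observation that feature reduction need not hurt detection.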

 

Wurzenberger, M.; Skopik, F.; Settanni, G.; Fiedler, R., “Beyond Gut Instincts: Understanding, Rating and Comparing Self-Learning IDSs,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-1, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166117
Abstract: Today ICT networks are the economy's vital backbone. While their complexity continuously evolves, sophisticated and targeted cyber attacks such as Advanced Persistent Threats (APTs) become increasingly fatal for organizations. Numerous highly developed Intrusion Detection Systems (IDSs) promise to detect certain characteristics of APTs, but no mechanism that allows them to be rated, compared and evaluated with respect to specific customer infrastructures is currently available. In this paper, we present BAESE, a system which enables vendor-independent and objective rating and comparison of IDSs based on small sets of customer network data.
Keywords: security of data; APT; BAESE system; ICT networks; advanced persistent threats; customer infrastructures; customer network data; cyber attacks; economy vital backbone; intrusion detection systems; self-learning IDS; Analytical models; Complexity theory; Data models; Intrusion detection; Organizations; Safety (ID#: 15-6475)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166117&isnumber=7166109

 

Bode, M.A.; Alese, B.K.; Oluwadare, S.A.; Thompson, A.F.-B., “Risk Analysis in Cyber Situation Awareness Using Bayesian Approach,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-12, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166119
Abstract: Unpredictable cyber attackers and threats have to be detected in order to determine the outcome of risk in a network environment. This work develops a Bayesian network classifier to analyse the network traffic in a cyber situation. It is a tool that aids reasoning under uncertainty to determine certainty. It further analyzes the level of risk using a modified risk matrix criterion. The classifier developed was evaluated on various records extracted from the 490,021-record KDD Cup'99 dataset. The evaluations showed that the Bayesian network classifier is a suitable model: it reached the same performance level as Association Rule Mining for classifying Denial of Service (DoS) attacks, and compared with a Genetic Algorithm it performed better in classifying probe and User to Root (U2R) attacks while classifying DoS equally well. The result of the classification showed that the Bayesian network classifier is a classification model that thrives well in network security. Also, the level of risk analysed from the adapted risk matrix showed that the DoS attack has the most frequent occurrence and falls in the generally unacceptable risk zone.
Keywords: Bayes methods; belief networks; computer network security; data mining; inference mechanisms; pattern classification; risk analysis; Bayesian approach; Bayesian network classifier; DoS attacks; KDD Cup 99 dataset;U2R attacks; association rule mining; classified DoS equally; cyber attackers; cyber situation; cyber situation awareness; cyber threats; denial of service attacks; genetic algorithm; modified risk matrix criteria; network environment; network security; network traffic analysis; risk analysis; user to root attacks; Bayes methods; Intrusion detection; Risk management; Telecommunication traffic; Uncertainty; Bayesian approach; Cyber Situation Awareness; KDD Cup'99; Risk matrix (ID#: 15-6476)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166119&isnumber=7166109
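A minimal sketch of the two components: a categorical naive Bayes classifier (a simple special case of the Bayesian network classifier used in the paper) and a risk-matrix lookup. The add-one smoothing scheme and the matrix entries are illustrative assumptions:

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Categorical naive Bayes: class priors plus per-class,
    per-feature value counts."""
    priors = Counter(labels)
    counts = defaultdict(Counter)   # (class, feature_index) -> value counts
    for row, lab in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(lab, i)][v] += 1
    return priors, counts

def classify_nb(priors, counts, row):
    """Pick the class maximising prior * product of conditional
    likelihoods, with add-one smoothing."""
    best, best_p = None, -1.0
    total = sum(priors.values())
    for lab, n in priors.items():
        p = n / total
        for i, v in enumerate(row):
            p *= (counts[(lab, i)][v] + 1) / (n + len(counts[(lab, i)]))
        if p > best_p:
            best, best_p = lab, p
    return best

RISK_MATRIX = {  # illustrative zones only, not the paper's adapted matrix
    ("frequent", "high"): "generally unacceptable",
    ("frequent", "low"): "tolerable",
    ("rare", "high"): "tolerable",
    ("rare", "low"): "acceptable",
}
```

Classified attack frequencies would feed the likelihood axis of the matrix, reproducing the paper's finding that frequent, high-impact DoS traffic lands in the generally unacceptable zone.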

 

Timonen, J., “Improving Situational Awareness of Cyber Physical Systems Based on Operator's Goals,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-6, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166121
Abstract: This paper focuses on discovering the key areas of Situational Awareness (SA) and Common Operational Picture (COP) in two different environments: the monitoring room and dismounted forces operations in urban areas. The research is based on scientific publications and on two implemented environments. In urban area warfare, the Mobile Urban Area Situational Awareness System is used to evaluate the requirements and usage of dismounted troops. The monitoring room is studied using the Situational Awareness of Critical Infrastructure and Networks System. These empirical environments were implemented during research projects at the Finnish National Defence University. The paper presents a model combining the joint model of laboratories, Endsley's model of SA and the results of goal-driven task analysis for creating a service-based architecture for defining and sharing COP. The main SA model used is Endsley's level model. It has been supplemented with cyber-related perspectives and fits the selected environments well, allowing techniques that can be used to measure the SA level and define the actor's most important goals.
Keywords: military computing; COP; Endsley's level model; SA; common operational picture; critical infrastructure; cyber physical systems; cyber-related perspectives; dismounted forces operations; dismounted troops; goal-driven task analysis; mobile urban area situational awareness system; monitoring room; networks system; requirement evaluation; scientific publications; service-based architecture; urban area warfare; Analytical models; Command and control systems; Computational modeling; Decision making; Monitoring; Stress; Urban areas; Common Operational Picture; Cyber Physical Systems; Situational Awareness; dismounted; operator (ID#: 15-6477)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166121&isnumber=7166109

 

Onwubiko, C., “Cyber Security Operations Centre: Security Monitoring for Protecting Business and Supporting Cyber Defense Strategy,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-10, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166125
Abstract: Cyber security operations centre (CSOC) is an essential business control aimed to protect ICT systems and support an organisation's Cyber Defense Strategy. Its overarching purpose is to ensure that incidents are identified and managed to resolution swiftly, and to maintain safe & secure business operations and services for the organisation. A CSOC framework is proposed comprising Log Collection, Analysis, Incident Response, Reporting, Personnel and Continuous Monitoring. Further, a Cyber Defense Strategy, supported by the CSOC framework, is discussed. Overlaid atop the strategy is the well-known Her Majesty's Government (HMG) Protective Monitoring Controls (PMCs). Finally, the difficulty and benefits of operating a CSOC are explained.
Keywords: government data processing; security of data; CSOC framework; HMG protective monitoring controls; Her Majestys Government; ICT systems; business control; business protection; cyber defense strategy support; cyber security operations centre; information and communications technology; security monitoring; Business; Computer crime; Monitoring; System-on-chip; Timing; Analysis; CSOC; CSOC Benefits & Challenges; CSOC Strategy; Correlation; Cyber Incident Response; Cyber Security Operations Centre; Cyber Situational Awareness; CyberSA; Log Source; Risk Management; SOC (ID#: 15-6478)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166125&isnumber=7166109
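The technical stages of the proposed CSOC framework (log collection, analysis, incident response, reporting) can be sketched as a minimal pipeline; the `detect` and `respond` callables here are placeholders for real SOC tooling, not anything specified in the paper:

```python
def csoc_pipeline(raw_logs, detect, respond):
    """Minimal flow through the framework's technical stages:
    collect -> analyse -> respond -> report."""
    collected = [l.strip() for l in raw_logs if l.strip()]   # log collection
    incidents = [l for l in collected if detect(l)]          # analysis
    actions = [respond(i) for i in incidents]                # incident response
    return {"events": len(collected),                        # reporting
            "incidents": len(incidents),
            "actions": actions}
```

Continuous monitoring would rerun this loop on each log batch, with personnel reviewing the report and escalating as the framework's incident-response stage prescribes.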

 

Skopik, F.; Wurzenberger, M.; Settanni, G.; Fiedler, R., “Establishing National Cyber Situational Awareness Through Incident Information Clustering,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-8, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166126
Abstract: The number and type of threats to modern information and communication networks has increased massively in the recent years. Furthermore, the system complexity and interconnectedness has reached a level which makes it impossible to adequately protect networked systems with standard security solutions. There are simply too many unknown vulnerabilities, potential configuration mistakes and therefore enlarged attack surfaces and channels. A promising approach to better secure today's networked systems is information sharing about threats, vulnerabilities and indicators of compromise across organizations; and, in case something went wrong, to report incidents to national cyber security centers. These measures enable early warning systems, support risk management processes, and increase the overall situational awareness of organizations. Several cyber security directives around the world, such as the EU Network and Information Security Directive and the equivalent NIST Framework, demand specifically national cyber security centers and policies for organizations to report on incidents. However, effective tools to support the operation of such centers are rare. Typically, existing tools have been developed with the single organization as customer in mind. These tools are often not appropriate either for the large amounts of data or for the application use case at all. In this paper, we therefore introduce a novel incident clustering model and a system architecture along with a prototype implementation to establish situational awareness about the security of participating organizations. This is a vital prerequisite to plan further actions towards securing national infrastructure assets.
Keywords: business data processing; national security; organisational aspects; pattern clustering; security of data; software architecture; EU Network and Information Security Directive; NIST framework; attack channels; attack surfaces; cyber security directives; early warning systems; incident information clustering; information and communication networks; information sharing; national cyber security centers; national cyber situational awareness; national infrastructure assets; networked systems protection; organizations; risk management processes; standard security solutions; system architecture; system complexity; system interconnectedness; threats; Clustering algorithms; Computer security; Information management; Market research; Organizations; Standards organizations (ID#: 15-6479)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166126&isnumber=7166109
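The incident clustering idea, grouping reports that share indicators so a national centre sees one campaign instead of many isolated tickets, can be sketched with a greedy single-pass clusterer; the Jaccard similarity measure and the threshold are illustrative assumptions, not the paper's model:

```python
def jaccard(a, b):
    """Similarity of two incident reports, each a set of indicators
    (IPs, file hashes, affected services)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def cluster_incidents(reports, threshold=0.5):
    """Greedy single-pass clustering: attach each report to the first
    cluster whose representative is similar enough, else open a new one."""
    clusters = []
    for rep in reports:
        for cl in clusters:
            if jaccard(rep, cl[0]) >= threshold:
                cl.append(rep)
                break
        else:
            clusters.append([rep])
    return clusters
```

Each resulting cluster approximates one incident campaign across reporting organizations, which is the situational-awareness signal the paper's prototype aims to surface.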

 

Aggarwal, P.; Grover, A.; Singh, S.; Maqbool, Z.; Pammi, V.S.C.; Dutt, V., “Cyber Security: A Game-Theoretic Analysis of Defender and Attacker Strategies in Defacing-Website Games,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-8, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166127
Abstract: The rate at which cyber-attacks are increasing globally portrays a terrifying picture upfront. The main dynamics of such attacks could be studied in terms of the actions of attackers and defenders in a cyber-security game. However, currently little research has taken place to study such interactions. In this paper we use behavioral game theory to investigate the role of certain actions taken by attackers and defenders in a simulated cyber-attack scenario of defacing a website. We choose a Reinforcement Learning (RL) model to represent a simulated attacker and a defender in a 2×4 cyber-security game where each of the 2 players could take up to 4 actions. A pair of model participants was computationally simulated across 1000 simulations, where each pair played at most 30 rounds in the game. The goal of the attacker was to deface the website and the goal of the defender was to prevent the attacker from doing so. Our results show that the actions taken by both attackers and defenders are a function of the attention paid by these roles to their recently obtained outcomes. It was observed that if the attacker pays more attention to recent outcomes, then he is more likely to perform attack actions. We discuss the implication of our results on the evolution of dynamics between attackers and defenders in cyber-security games.
Keywords: Web sites; computer crime; computer games; game theory; learning (artificial intelligence);RL model; attacker strategies; attacks dynamics; behavioral game theory; cyber-attacks; cyber-security game; defacing Website games; defender strategies; game-theoretic analysis; reinforcement learning; Cognitive science; Computational modeling; Computer security; Cost function; Games; Probabilistic logic; attacker; cognitive modeling; cyber security; cyber-attacks; defender; reinforcement-learning model (ID#: 15-6480)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166127&isnumber=7166109
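A sketch of the simulated game: two recency-weighted reinforcement learners, a 4-action attacker and a 4-action defender, playing repeated rounds. The payoff rule, the learning rule, and the parameters here are illustrative assumptions; the paper's RL model and its attention parameterisation are not reproduced:

```python
import random

class RLAgent:
    """Recency-weighted reinforcement learner: Q-values track recent
    outcomes, and alpha (the 'attention to recent outcomes' knob)
    controls how much weight new rounds receive."""
    def __init__(self, n_actions, alpha, rng):
        self.q = [0.0] * n_actions
        self.alpha = alpha
        self.rng = rng

    def act(self, eps=0.1):
        if self.rng.random() < eps:                     # occasional exploration
            return self.rng.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def learn(self, action, reward):
        self.q[action] += self.alpha * (reward - self.q[action])

def play(rounds=30, seed=0):
    """One attacker-defender pairing over at most 30 rounds; returns
    how often the (toy) defacement succeeded."""
    rng = random.Random(seed)
    attacker, defender = RLAgent(4, 0.5, rng), RLAgent(4, 0.5, rng)
    defaced = 0
    for _ in range(rounds):
        a, d = attacker.act(), defender.act()
        hit = 1 if a != d else 0    # toy payoff: attack lands unless matched
        attacker.learn(a, hit)
        defender.learn(d, 1 - hit)
        defaced += hit
    return defaced
```

Running `play` across many seeds would mirror the paper's 1000-pair simulation, with the attacker's alpha varied to study the attention effect.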

 

Bjerkestrand, T.; Tsaptsinos, D.; Pfluegel, E., “An Evaluation of Feature Selection and Reduction Algorithms for Network IDS Data,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-2, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166129
Abstract: Intrusion detection is concerned with monitoring and analysing events occurring in a computer system in order to discover potential malicious activity. Data mining, which is part of the procedure of knowledge discovery in databases, is the process of analysing the collected data to find patterns or correlations. As the amount of data collected, stored and processed increases, so does the significance and importance of intrusion detection and data mining. A dataset that has been particularly exposed to research is the one used for the Third International Knowledge Discovery and Data Mining Tools competition, KDD99. The KDD99 dataset has been used to identify which data mining techniques relate to certain attacks, and has been employed to demonstrate that decision trees are more efficient than the Naïve Bayes model when it comes to detecting new attacks. When it comes to detecting network intrusions, the C4.5 algorithm performs better than SVM. The aim of our research is to evaluate and compare the usage of various feature selection and reduction algorithms against publicly available datasets. In this contribution, the focus is on feature selection and reduction algorithms. Three feature selection algorithms, each consisting of an attribute evaluator and a test method, have been used. Initial results indicate that the performance of the classifier is unaffected by reducing the number of attributes.
Keywords: Bayes methods; data mining; decision trees; feature selection; security of data; C4.5 algorithm; KDD99 dataset; SVM; computer system; data mining technique; decision tree; feature selection; intrusion detection; naive Bayes model; network IDS data; network intrusion; potential malicious activity; reduction algorithm; third international knowledge discovery and data mining tools competition; Algorithm design and analysis; Classification algorithms; Data mining; Databases; Intrusion detection; Knowledge discovery; Training; KDD dataset; feature selection and reduction; intrusion detection; knowledge discovery (ID#: 15-6481)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166129&isnumber=7166109
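A typical attribute evaluator of the kind compared in this study is information gain: the entropy reduction obtained by splitting on a feature. A self-contained sketch (the study's actual evaluators and test methods are not specified here, so this is one representative example):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, feat):
    """Entropy reduction achieved by splitting on one feature: the
    filter criterion behind information-gain attribute evaluators."""
    base, n = entropy(labels), len(labels)
    groups = {}
    for row, lab in zip(rows, labels):
        groups.setdefault(row[feat], []).append(lab)
    return base - sum(len(g) / n * entropy(g) for g in groups.values())

def rank_features(rows, labels):
    """Rank feature indices by descending information gain."""
    scores = [(info_gain(rows, labels, f), f) for f in range(len(rows[0]))]
    return [f for _, f in sorted(scores, reverse=True)]
```

Dropping the lowest-ranked attributes and re-measuring classifier accuracy is exactly the experiment whose initial results the abstract reports.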

 

Evangelopoulou, M.; Johnson, C.W., “Empirical Framework for Situation Awareness Measurement Techniques in Network Defense,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-4, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166132
Abstract: This paper presents an empirical framework for implementing Situation Awareness Measurement Techniques in a Network Defense environment. Bearing in mind the rise of Cyber-crime and the importance of Cyber security, the role of the security analyst (or as this paper will refer to them, defenders) is critical. In this paper the role of Situation Awareness Measurement Techniques will be presented and explained briefly. Input from previous studies will be given and an empirical framework of how to measure Situation Awareness in a computing network environment will be offered in two main parts. The first one will include the networking infrastructure of the system. The second part will be focused on specifying which Situation Awareness Techniques are going to be used and which Situation Awareness critical questions need to be asked to improve future decision making in cyber-security. Finally, a discussion will take place concerning the proposed approach, the chosen methodology and further validation.
Keywords: computer crime; computer network security; decision making; computing network environment; cyber-crime; cybersecurity; decision making; network defense environment; situation awareness measurement techniques; Computer security; Decision making; Human factors; Measurement techniques; Monitoring; Unsolicited electronic mail; Cyber Security; CyberSA; Decision Making; Intrusion Detection; Network Defense; Situation Awareness; Situation Awareness Measurement Techniques (ID#: 15-6482)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166132&isnumber=7166109

 

Shovgenya, Y.; Skopik, F.; Theuerkauf, K., “On Demand for Situational Awareness for Preventing Attacks on the Smart Grid,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-4, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166133
Abstract: Renewable energy sources and widespread small-scale power generators change the structure of the power grid, where actual power consumers also temporarily become suppliers. Smart grids require continuous management of complex operations through utility providers, which leads to increasing interconnections and usage of ICT-enabled industrial control systems. Yet, often insufficiently implemented security mechanisms and the lack of appropriate monitoring solutions will make the smart grid vulnerable to malicious manipulations that may possibly result in severe power outages. Having a thorough understanding about the operational characteristics of smart grids, supported by clearly defined policies and processes, will be essential to establishing situational awareness, and thus, the first step for ensuring security and safety of the power supply.
Keywords: electric generators; electricity supply industry; industrial control; power consumption; power generation control; power generation reliability; power system interconnection; power system management; power system security; renewable energy sources; smart power grids; ICT-enabled industrial control system; actual power consumer; implemented security mechanism; power supply safety; power supply security; renewable energy source; situational awareness; small-scale power generator; smart power grid; Europe; Generators; Power generation; Renewable energy sources; Security; Smart grids; Smart meters; industrial control systems; situational awareness; smart generator; smart grid (ID#: 15-6483)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166133&isnumber=7166109

 

Adenusi, D.; Alese, B.K.; Kuboye, B.M.; Thompson, A.F.-B., “Development of Cyber Situation Awareness Model,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-11, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166135
Abstract: This study designed and simulated a cyber situation awareness model for gaining experience of the cyberspace condition, with a view to timely detection of anomalous activities and proactive decisions to safeguard cyberspace. The situation awareness model was modelled using Artificial Intelligence (AI) techniques. The cyber situation perception sub-model was modelled using Artificial Neural Networks (ANN), while the comprehension and projection sub-models were modelled using Rule-Based Reasoning (RBR) techniques. The cyber situation perception sub-model was simulated in MATLAB 7.0 using the standard KDD'99 intrusion dataset and evaluated for threat detection accuracy using precision, recall and overall accuracy metrics. The simulation results showed that the perception sub-model performed better as the number of training data records increased. The cyber situation model designed was able to meet its overall goal of helping network administrators gain experience of the cyberspace condition: it was capable of sensing the cyberspace condition, performing analysis based on the sensed condition and predicting the near-future condition of cyberspace.
Keywords: artificial intelligence; inference mechanisms; knowledge based systems; mathematics computing; neural nets; security of data; AI technique; ANN; Matlab 7.0; RBR techniques; anomalous activities detection; artificial neural networks; cyber situation awareness model; cyberspace condition; proactive decision safeguard; rule-based reasoning; training data records; Artificial neural networks; Computational modeling; Computer security; Cyberspace; Data models; Intrusion detection; Mathematical model; Artificial Intelligence; Awareness; cyber-situation; cybersecurity; cyberspace (ID#: 15-6484)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166135&isnumber=7166109
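The comprehension and projection sub-models are rule-based; a minimal sketch of rule-based reasoning over perceived events follows. The rules and event fields are invented for illustration and are not taken from the paper's rule base:

```python
def comprehend(perceptions, rules):
    """Rule-based reasoning over perceived events: the first rule whose
    condition matches the perceived state yields the comprehension
    (and an expectation about the near-future condition)."""
    for condition, conclusion in rules:
        if condition(perceptions):
            return conclusion
    return "normal"

RULES = [  # illustrative rules only
    (lambda p: p.get("failed_logins", 0) > 10 and p.get("port_scans", 0) > 0,
     "probable intrusion attempt; expect privilege escalation next"),
    (lambda p: p.get("port_scans", 0) > 5,
     "reconnaissance in progress; expect targeted probes next"),
]
```

In the paper's architecture, the `perceptions` dict would be populated by the ANN perception sub-model rather than raw counters.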

 

Laing, C.; Vickers, P., “Context Informed Intelligent Information Infrastructures for Better Situational Awareness,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-7, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166136
Abstract: In this multi-disciplinary project, we intend to explore the advantages of an information fusion system in which the infrastructure finds new ways to reflect upon its own state and new ways to express this state that provides a good fit to human communication and cognition processes. This interplay should then generate a better and more responsive human-computer symbiosis. The outcomes of this project will help to develop context and content aware networks that are better able to extract meaning and understanding from network data and behaviour.
Keywords: cognition; human computer interaction; information networks; knowledge based systems; sensor fusion; ubiquitous computing; cognition process; context informed intelligent information infrastructures; human communication; human-computer symbiosis; information fusion system; multidisciplinary project; situational awareness; Computers; Context; Monitoring; Real-time systems; Sonification; System-on-chip; Telecommunication traffic; context informed; information infrastructures; situational awareness (ID#: 15-6485)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166136&isnumber=7166109

 

Nasir, M.A.; Nefti-Meziani, S.; Sultan, S.; Manzoor, U., “Potential Cyber-Attacks Against Global Oil Supply Chain,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1-7, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166137
Abstract: The energy sector has been actively looking into cyber risk assessment at a global level, as it has a ripple effect: risk taken at one step in the supply chain has an impact on all the other nodes. Cyber-attacks not only hinder functional operations in an organization but also damage its reputation and the confidence of shareholders, resulting in financial losses. Organizations that are open to the idea of protecting their assets and information flow, and are equipped to respond quickly to any cyber incident, are the ones that prevail longer in the global market. As a contribution, we put forward a modular plan to mitigate or reduce cyber risks in the global supply chain by identifying potential cyber threats at each step along with their immediate countermeasures.
Keywords: globalisation; organisational aspects; petroleum industry; risk management; security of data; supply chain management; cyber incident; cyber risk assessment; cyber-attack; damaging effect; energy sector; financial losses; global market; global oil supply chain; global supply chain; information flow; organization; ripple effect; Companies; Computer hacking; Information management; Supply chains; Temperature sensors; cyber-attacks; cyber-attacks countermeasures; oil supply chain; threats to energy sector (ID#: 15-6486)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166137&isnumber=7166109

 

Dahri, K.; Rajput, S.; Memon, S.; Das Dhomeja, L., “Smart Activities Monitoring System (SAMS) for Security Applications,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp.1-5, 8-9 June 2015. doi:10.1109/CyberSA.2015.7166138
Abstract: In this paper, an Android based SAMS (Smart Activities Monitoring System) application for smart phones is proposed. The application was developed with the aim of increasing national security in Pakistan. In the last decade, various incidents including militant attacks and ransom demands have been reported in which cell phones played a central role in communication between the culprits. Tracking these criminals is very important, and the government needs to adopt technologies to track mobile phones if they are being used for dangerous activities. In this paper, an Android based application is presented which is designed and tested to track a suspect without his or her knowledge. The application tracks a smartphone by obtaining its current location and monitors a suspect remotely by retrieving information such as call logs and message logs. It also detects the face of the suspect, covertly captures a picture using the cell phone camera, and then sends it via multiple messages. Moreover, the monitoring user can also call the phone the culprit is using in stealth mode to hear the conversation happening in the suspect's surroundings without the suspect's knowledge.
Keywords: law administration; mobile computing; police data processing; security; smart phones; Android based application; SAMS; criminal activity; law enforcement agency; security application; smart activities monitoring system; smart phone; Cellular phones; Global Positioning System; Mobile communication; Monitoring; Servers; Smart phones; GPS location; security apps; smartphones; tracking (ID#: 15-6487)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166138&isnumber=7166109
 




International Conferences: IBCAST 2015, Islamabad

 

 
SoS Logo

International Conferences: IBCAST 2015, Islamabad


The Twelfth International Bhurban Conference on Applied Sciences & Technology (IBCAST) was held at the National Centre for Physics, Islamabad, Pakistan, on January 13-18, 2015. It was organized by the Centres of Excellence in Science & Applied Technologies (CESAT), Islamabad, in collaboration with Beihang University of Aeronautics & Astronautics, Beijing Institute of Technology, Nanjing University of Aeronautics & Astronautics and Northwestern Polytechnical University, Xian, China. Topics included Advanced Materials, Biomedical Sciences, Control & Signal Processing, Cyber Security, Fluid Dynamics, Underwater Technologies and Wireless Communication & Radar. The cybersecurity papers are cited here and were retrieved on September 3, 2015.


Saghar, K.; Kendall, D.; Bouridane, A., “RAEED: A Solution for Hello Flood Attack,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 248-253, 13-17 Jan. 2015. doi:10.1109/IBCAST.2015.7058512
Abstract: The hello flood attack has long been a problem in ad-hoc and wireless networks during data routing. Although numerous solutions have been proposed, they all have drawbacks, chiefly because formal modeling techniques have not been employed to confirm whether the solutions are immune to DoS attacks. We have earlier shown how formal modeling can be used efficiently to detect the vulnerabilities of existing routing protocols to DoS attacks. In this paper we propose a new protocol, RAEED (Robust formally Analysed protocol for wirEless sEnsor networks Deployment), which is able to address the problem of hello flood attacks. Using formal modeling we prove that RAEED avoids this type of attack, and computer simulations were carried out to support our findings. RAEED employs an improved bidirectional verification and the key exchange characteristics of INSENS and LEAP; it preserves security while reducing traffic. Its improvements are fewer messages exchanged, a lower percentage of messages lost, and a shorter key setup phase.
Keywords: computer network security; formal verification; mobile computing; routing protocols; telecommunication traffic; wireless sensor networks; DoS attacks; INSENS; LEAP; RAEED protocol; ad-hoc networks; bidirectional verification; computer simulations; data routing; formal modeling techniques; hello flood attack; key exchange characteristics; message exchange; messages lost; robust formally analysed protocol for wireless sensor networks deployment; security; traffic reduction; Amplitude shift keying; Computational modeling; Computer crime; Noise; Routing protocols; Wireless sensor networks; Formal Modeling; Routing Protocol; Security Attacks; Wireless Sensor Networks (WSN) (ID#: 15-6488)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058512&isnumber=7058466
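The bidirectional verification the abstract credits to RAEED can be illustrated with a toy simulation. This is a sketch of the general idea, not the authors' protocol; the node names, positions, ranges and helper functions below are invented. A node accepts a HELLO only if the link works in both directions, which defeats a distant attacker with a high-power transmitter:

```python
import math

class Node:
    def __init__(self, name, pos, tx_range):
        self.name, self.pos, self.tx_range = name, pos, tx_range

def can_reach(sender, receiver):
    """A transmission is heard only if the receiver lies within the sender's range."""
    dx = sender.pos[0] - receiver.pos[0]
    dy = sender.pos[1] - receiver.pos[1]
    return math.hypot(dx, dy) <= sender.tx_range

def accept_hello(node, sender):
    """Bidirectional verification: accept a HELLO only if the node's own
    normal-power reply can reach the sender, i.e. the link works both ways."""
    return can_reach(sender, node) and can_reach(node, sender)

# An ordinary neighbour, and a laptop-class attacker far away but with a
# powerful transmitter -- the classic hello-flood scenario.
node      = Node("n1",  (0, 0),  10)
neighbour = Node("n2",  (6, 0),  10)
attacker  = Node("adv", (80, 0), 100)

print(accept_hello(node, neighbour))  # True  - genuine two-way link
print(accept_hello(node, attacker))   # False - attacker hears nothing back
```

The attacker's HELLO is heard by every node, but a normal-power reply never reaches it, so the bogus neighbour is rejected before it can distort routing.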

 

Fatima, T.; Saghar, K.; Ihsan, A., “Evaluation of Model Checkers SPIN and UPPAAL for Testing Wireless Sensor Network Routing Protocols,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 263-267, 13-17 Jan. 2015. doi:10.1109/IBCAST.2015.7058514
Abstract: Formal modeling and verification have attracted considerable attention from researchers in recent years. Using formal methods one can find bugs and hidden errors in different systems, codes and protocols. Because formal models can detect worst case scenarios that computer simulations and other testing techniques cannot reach, they are often employed to detect flaws in security protocols, and many hidden errors have been found in encryption techniques and secure routing protocols by analyzing them with formal modeling and verification. Although many tools have been developed to perform formal verification, SPIN and UPPAAL are the two most frequently used by researchers to demonstrate previously unreported weaknesses. This paper analyzes these two model checkers in terms of learning time, ease of use, and their modeling and verification features. We then illustrate our findings by applying both tools to a wireless sensor network routing protocol. We claim that our paper can help future researchers decide which formal modeling tool is best in a particular scenario, thus saving a lot of time in decision making.
Keywords: cryptography; decision making; formal verification; routing protocols; telecommunication network reliability; telecommunication security; wireless sensor networks; SPIN; UPPAAL; decision making; encryption techniques; formal methods; formal modeling; formal verification; model checkers; secure routing protocols; security protocols; testing techniques; wireless sensor network routing protocols; Analytical models; Automata; Computational modeling; Model checking; Routing protocols; Wireless sensor networks; Formal Verification; Routing Protocols; Sensor Networks; Software Testing (ID#: 15-6489)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058514&isnumber=7058466

 

Kashif, U.A.; Memon, Z.A.; Balouch, A.R.; Chandio, J.A., “Distributed Trust Protocol for IaaS Cloud Computing,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 275-279, 13-17 Jan. 2015. doi:10.1109/IBCAST.2015.7058516
Abstract: Due to the economic benefits of cloud computing, consumers have rushed to adopt it; this rush, however, has also raised security concerns, which in turn cause trust issues in adopting cloud computing. Enterprises adopting the cloud no longer have control over data, applications and other computing resources that are outsourced to the cloud computing provider. In this paper we propose a novel technique that does not leave the consumer alone in the cloud environment. First, we present a theoretical analysis of a selected state-of-the-art technique and identify issues in IaaS cloud computing. Second, we propose a Distributed Trust Protocol for IaaS Cloud Computing to mitigate the trust issue between cloud consumer and provider. Our protocol is distributed in nature and lets the consumer check the integrity of the cloud computing platform residing on the provider's premises. We follow the rule of separating security duties between the premises of consumer and provider, and let the consumer be the actual owner of the platform. In our protocol, a user VM hosted in the IaaS cloud uses a Trusted Boot process following the specifications of the Trusted Computing Group (TCG) and utilizing the consumer's Trusted Platform Module (TPM) chip. The protocol targets Infrastructure as a Service (IaaS), the lowest service delivery model of cloud computing.
Keywords: cloud computing; formal specification; security of data; trusted computing; virtual machines; IaaS cloud computing; Infrastructure as a Service; TCG specification; TPM chip; Trusted Computing Group; cloud computing platform integrity checking; cloud consumer; cloud environment; cloud provider; computing resources; distributed trust protocol; economic benefit; security concern; security duty separation; service delivery model; trust issue mitigation; trusted boot process; trusted platform module chip; user VM; Hardware; Information systems; Security; Virtual machine monitors; Trusted cloud computing; cloud security and trust; trusted computing; virtualization (ID#: 15-6490)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058516&isnumber=7058466
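The trusted-boot integrity check the protocol builds on can be sketched as a TPM-style PCR extend, i.e. a hash chain over the boot components. This is a minimal illustration of the general mechanism, not the paper's protocol; SHA-256 and the component names below are assumptions:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = SHA-256(old PCR || SHA-256(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot(components):
    """Fold each boot-chain component into the PCR, in order."""
    pcr = b"\x00" * 32                       # PCRs start zeroed at reset
    for blob in components:
        pcr = extend(pcr, blob)
    return pcr

good_chain = [b"bootloader-v1", b"kernel-v1", b"initrd-v1"]
expected = measure_boot(good_chain)          # reference value the consumer trusts

# Any modification or reordering of the chain yields a different PCR value, so
# a consumer comparing a quoted PCR against `expected` detects tampering.
tampered = measure_boot([b"bootloader-v1", b"kernel-evil", b"initrd-v1"])
print(expected != tampered)                  # True
```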

 

Jalalzai, M.H.; Shahid, W.B.; Iqbal, M.M.W., “DNS Security Challenges and Best Practices to Deploy Secure DNS with Digital Signatures,” Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 280-285, 13-17 Jan. 2015. doi:10.1109/IBCAST.2015.7058517
Abstract: This paper discusses DNS security vulnerabilities and best practices to address DNS security challenges. The Domain Name System (DNS) is the foundation of the internet; it translates user friendly domain names, held in Resource Records (RR), into corresponding IP addresses and vice versa. Nowadays DNS services are used not merely to translate domain names but also to block spam and to support email authentication such as DKIM and the more recent DMARC; the TXT records found in DNS are largely about improving the security of services. Virtually every internet application uses DNS: if it does not work properly, internet communication collapses. Security of the DNS infrastructure is therefore one of the core requirements for any organization in the current cyber security arena. DNS is a favorite target for attackers because of the huge payoff, and a breach in DNS security affects the trustworthiness of the whole internet. If the DNS infrastructure is vulnerable and compromised, organizations lose revenue and face downtime, customer dissatisfaction, privacy loss, legal challenges and much more. DNS has become the largest distributed database, but at design time the only goal was to provide a scalable and available name resolution service; its security was overlooked. A number of security flaws therefore exist, and additional mechanisms are urgently required to address known vulnerabilities. Among these challenges, the most important are DNS data integrity and availability. For this purpose we introduce a cryptographic framework, configured on an open source platform by incorporating DNSSEC with BIND DNS software, which addresses the integrity and availability issues of DNS by establishing a DNS chain of trust using digitally signed DNS data.
Keywords: Internet; computer network security; cryptography; data integrity; data privacy; digital signatures; distributed databases; public domain software; Bind DNS software; DKIM; DMARC; DNS availability issues; DNS chain; DNS data integrity; DNS design; DNS infrastructures; DNS security; DNS security vulnerabilities; DNS services; DNSSEC; IP addresses; Internet application; Internet communication; Internet trustworthiness; cryptographic framework; customer dissatisfaction; cyber security arena; digital signatures; digitally signed DNS data; distributed database; domain name system; email authentication; index TXT services; named based resource records; open source platform; privacy loss; secure DNS; security flaws; user friendly domains; Best practices; Computer crime; Cryptography; Internet; Servers; Software; DNS Security; DNS Vulnerabilities; DNSSEC; Digital Signatures; Network and Computer Security; PKI (ID#: 15-6491)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058517&isnumber=7058466
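The DNS chain of trust that DNSSEC establishes can be sketched with a toy model: each parent zone publishes a digest of its child's key (a DS record), and a resolver walks the chain from a trusted root key down to the zone that signed the record. This is an illustrative sketch only; HMACs stand in for DNSSEC's public-key signatures, and the zone names and keys are invented:

```python
import hashlib, hmac

def ds_digest(key: bytes) -> bytes:
    """A DS record is, in essence, a digest of the child zone's key."""
    return hashlib.sha256(key).digest()

# Toy zone tree: each zone has a key; each parent publishes a DS digest of its
# child's key.  Record "signatures" are HMACs under the zone key, standing in
# for real DNSSEC public-key signatures.
zones = {
    ".":            {"key": b"root-key",    "ds": {}},
    "example.":     {"key": b"example-key", "ds": {}},
    "www.example.": {"key": b"www-key",     "ds": {}},
}
zones["."]["ds"]["example."] = ds_digest(zones["example."]["key"])
zones["example."]["ds"]["www.example."] = ds_digest(zones["www.example."]["key"])

def sign(zone, rr):
    return hmac.new(zones[zone]["key"], rr, hashlib.sha256).digest()

def validate(chain, rr, sig, trusted_root_key):
    """Walk the chain of trust from the root down to the signing zone."""
    key = trusted_root_key
    for parent, child in zip(chain, chain[1:]):
        if zones[parent]["ds"].get(child) != ds_digest(zones[child]["key"]):
            return False      # broken delegation: DS does not match child's key
        key = zones[child]["key"]
    return hmac.compare_digest(hmac.new(key, rr, hashlib.sha256).digest(), sig)

rr = b"www.example. A 192.0.2.1"
sig = sign("www.example.", rr)
print(validate([".", "example.", "www.example."], rr, sig, zones["."]["key"]))  # True
```

A forged record, or a delegation whose DS digest no longer matches the child's key, fails validation at the corresponding step of the walk.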

 

Islam, S.; Haq, I.U.; Saeed, A., “Secure End-to-End SMS Communication over GSM Networks,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 286-292, 13-17 Jan. 2015. doi:10.1109/IBCAST.2015.7058518
Abstract: In today's GSM networks, security mechanisms provided by network operators are limited to the wireless links only, leaving the information traveling over the wired links insecure to a large extent. Moreover, the encryption algorithms used over the wireless links provide a weak notion of security, so end-to-end security for SMS communication is not achieved in current GSM networks: an adversary is able to capture the traffic over the wireless link and decrypt it using specialized hardware. The Short Message Service (SMS) is used widely all over the world and may carry sensitive and confidential information such as financial transactions, while SMS spoofing applications through which any sender ID can be set are widely available. The objectives of this research include end-to-end confidentiality, authentication, message integrity and non-repudiation of SMS. The proposed scheme uses symmetric key and identity based techniques for encryption and key management. The overhead incurred by the added control information may increase the message length, but the computational delay due to cryptographic operations is negligible on mobile devices with 1 GHz+ processors. The proposed solution ensures end-to-end security even if the transmission is tapped, leaked or sniffed on either the wired or wireless links.
Keywords: cellular radio; cryptography; electronic messaging; message authentication; mobile computing; telecommunication security; GSM networks; SMS nonrepudiation; SMS spoofing applications; authentication; computational delay; confidential information; cryptographic operations; encryption algorithms; end-to-end confidentiality; end-to-end security; financial transactions; identity based techniques; key management; message integrity; message length; mobile devices; network operators; secure end-to-end SMS communication; security mechanisms; sender ID; sensitive information; short message service; symmetric key; wired links; wireless links; Encryption; Program processors; Receivers (ID#: 15-6492)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058518&isnumber=7058466
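A minimal sketch of the end-to-end protection the paper aims for: encrypt-then-MAC over a single SMS, using only the standard library. The toy SHA-256 counter-mode keystream below is for illustration only (a real deployment would use a vetted cipher), and the paper's identity-based key management is not reproduced here:

```python
import hashlib, hmac, os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode (illustrative only)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def protect_sms(enc_key, mac_key, plaintext: bytes) -> bytes:
    """Encrypt-then-MAC: confidentiality plus integrity for one SMS."""
    nonce = os.urandom(8)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sms(enc_key, mac_key, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:8], blob[8:-32], blob[-32:]
    if not hmac.compare_digest(hmac.new(mac_key, nonce + ct, hashlib.sha256).digest(), tag):
        raise ValueError("SMS failed integrity check")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))

ek, mk = os.urandom(32), os.urandom(32)
msg = protect_sms(ek, mk, b"transfer 500 to account 42")
print(open_sms(ek, mk, msg))  # b'transfer 500 to account 42'
```

Verifying the MAC before decrypting gives the message-integrity guarantee the abstract lists: a tampered ciphertext is rejected rather than decrypted to garbage.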

 

Siddiqui, R.A.; Grosvenor, R.I.; Prickett, P.W., “dsPIC-Based Advanced Data Acquisition System for Monitoring, Control and Security Applications,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 293-298, 13-17 Jan. 2015. doi:10.1109/IBCAST.2015.7058519
Abstract: This paper reports on the design and implementation of a data acquisition system based on the dsPIC microcontroller for monitoring, control and security applications. Data acquisition is the fundamental stage in any DSP, monitoring, digital control or security system; the efficiency and effectiveness of the system is defined by the quality of the acquired data, which in turn depends on the characteristics of the data acquisition system. There are two types of data acquisition, (a) digital and (b) analog, with different characteristics and system requirements. Microchip's dsPIC provides various on-chip integrated modules that enable efficient data acquisition, such as a 10/12-bit Analog to Digital Converter (ADC) with up to 1 Msps (million samples per second) sampling rate, simultaneous sampling and various trigger mechanisms, Timers, Input Capture (IC), external (hardware) and internal (software) interrupts, and processing capability of up to 30 MIPS (million instructions per second). A system is developed for acquisition of 16 analog signals with 10/12-bit resolution, simultaneous sampling of 4 signals, fixed and variable sampling rates, on-chip storage and real-time signal processing capabilities. The system also supports acquisition of digital signals with a time resolution of up to 33.33 ns and measures signal parameters such as frequency, time period, pulse width, duty cycle, and the delay and time difference between two signals. It can be customized according to system requirements and provides advanced data acquisition capabilities to low cost monitoring, control or security systems.
Keywords: analogue-digital conversion; data acquisition; digital control; digital signal processing chips; microcontrollers;10-12-bit analog-digital convertor;10-12-bit resolution; 16 analog signals; 30 MIPS; ADC; DSP; advanced data acquisition capabilities; analog data acquisition; chip storage; control-security system; delay time difference; digital control-security system; digital data acquisition; dsPIC microcontroller; dsPIC-based advanced data acquisition system; duty cycle; efficient data acquisition; external hardware; internal software; low cost monitoring; microchip dsPIC; on-chip integrated modules; pulse width; real-time signal processing; timers; variable sampling rate; Security; ADC; DSP; Data Acquisition; MIPS; Microchip; Monitoring; Security; dsPIC (ID#: 15-6493)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058519&isnumber=7058466

 

Arifeen, F.U.; Siddiqui, R.A.; Ashraf, S.; Waheed, S., “Inter-Cloud Authentication Through X.509 for Defense Organization,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 299-306, 13-17 Jan. 2015. doi:10.1109/IBCAST.2015.7058520
Abstract: Over recent years of research in cloud computing, different approaches have been adopted for inter-cloud authentication, and these approaches give successful results in identifying authentic requests. Defense organizations communicate with each other through legitimate requests; to establish security and privacy, a PKI-based authentication model is needed. This paper presents a new approach to implementing cloud-based PKI authentication inside the existing infrastructure of a defense organization. Security is the prime concern for any organization, its implementation requirements vary from organization to organization, and each organization embraces its own policies to implement it; understanding each other's security policies is thus a huge barrier and challenge for existing IT infrastructure. Inter-cloud authentication is made possible through this PKI-based model, which ensures all five security services, i.e. confidentiality, integrity, authentication, digital signature and non-repudiation. The PKI model is a multi-domain environment spanning various defense organizations and their Data Centers (DC) for facilitation and resource provisioning inside the cloud platform, and it utilizes the existing network infrastructure with its high intercommunication traffic between the Data Centers. In this model, a nationwide Certification Authority (CA) is implemented in the inter-cloud infrastructure, and all Data Centers communicate through this mechanism, with different authentication approaches for legitimate access through X.509 certificates.
Keywords: cloud computing; computer centres; computer network security; data integrity; data privacy; digital signatures; organisational aspects; public key cryptography; telecommunication traffic; IT infrastructure; PKI based authentication model; X.509; certification authority; cloud based PKI authentication; cloud platform; data center; data confidentiality; defense organization; digital signature; intercloud authentication; intercloud infrastructure; intercommunication traffic; multidomain atmosphere; network infrastructure; non-repudiation; resource provisioning; security policies; security services; Hardware; Organizations; Public key cryptography; Software; Virtual private networks; Certification Authority (CA); Data Centers; Inter-Cloud; Master CA; Public Key Infrastructure (PKI); VPN; X.509 Certificate Services (ID#: 15-6494)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058520&isnumber=7058466

 

Ishfaq, H.; Iqbal, W.; Bin Shahid, W., “Attaining Accessibility and Personalization with Socio-Captcha (SCAP),” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 307-311, 13-17 Jan. 2015. doi:10.1109/IBCAST.2015.7058521
Abstract: Many websites have made use of motion, videos, Flash, GIF animations and static images to implement Captchas in order to ensure that the entity trying to connect to their website or system is not a bot but a human being. A wide variety of Captcha types and solution methods are available, and a few are described in Section II. All of these Captcha systems can distinguish humans from bots but fail to provide personalization attributes while browsing the internet or using a networking application. This paper suggests a novel scheme, Socio-Captcha (SCAP), for generating Captchas that attain accessibility and personalization through attributes of the user's social media profile. The scheme relies on the Socio-Captcha application discussed in this paper.
Keywords: security of data; social networking (online); Internet; SCAP; Web sites; personalization attribute; social media profile; socio-captcha scheme; CAPTCHAs; Clothing; Electronic publishing; Facebook; Frequency modulation; Information services; Lead; accessibility; bot; captcha; human; personalization; social media; web (ID#: 15-6495)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058521&isnumber=7058466

 

Amin, M.; Afzal, M., “On the Vulnerability of EC DRBG,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 318-322, 13-17 Jan. 2015. doi:10.1109/IBCAST.2015.7058523
Abstract: Random number generation is an important element of any cryptographic function. The National Institute of Standards and Technology (NIST) has developed a few random number generators; the Dual Elliptic Curve Deterministic Random Bit Generator (Dual EC DRBG) is one of them. Over a period of time, various sources have highlighted that Dual EC DRBG has a vulnerability: its next output can be predicted with the help of previous output. However, very limited material is available to provide insight into the vulnerability. This paper provides a proof of concept of the vulnerability in Dual EC DRBG, explaining the working of the DRBG and the related flaw, and proposes a solution to overcome it.
Keywords: public key cryptography; random number generation; Dual EC DRBG vulnerability; NIST; National Institute of Standards and Technology; cryptographic function; dual elliptic curve deterministic random bit generator; Elliptic curves; Entropy; Generators; Random number generation; Elliptic Curves; Random Numbers (ID#: 15-6496)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058523&isnumber=7058466
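The flaw can be demonstrated end to end on a toy curve. The sketch below follows the published structure of Dual EC DRBG (state update s' = x(sP), output r = x(sQ)) but uses a tiny invented curve and omits the standard's truncation of the output, which in practice only adds a small brute-force step. If the attacker knows a backdoor scalar d with P = dQ, one observed output is enough to predict all future outputs:

```python
from math import gcd

p, a, b = 211, 0, 7                        # tiny curve y^2 = x^3 + 7 over GF(211)

def inv(x):
    return pow(x, p - 2, p)                # modular inverse (p is prime)

def add(P1, P2):
    """Elliptic-curve point addition; None is the point at infinity."""
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        m = (3 * x1 * x1 + a) * inv(2 * y1) % p
    else:
        m = (y2 - y1) * inv((x2 - x1) % p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, P):
    """Scalar multiplication by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def lift(x):
    """Return a curve point with the given x-coordinate, if one exists."""
    rhs = (x * x * x + a * x + b) % p
    for y in range(p):
        if y * y % p == rhs:
            return (x, y)
    return None

P, x0 = None, 0                            # find any base point by brute force
while P is None:
    P = lift(x0)
    x0 += 1

n, T = 1, P                                # order of P, by brute force
while T is not None:
    T = add(T, P)
    n += 1

d = next(k for k in range(2, n) if gcd(k, n) == 1)   # attacker's backdoor
Q = mul(pow(d, -1, n), P)                  # chosen so that P = d*Q

s = 5                                      # the generator's secret state
while mul(s, P) is None or mul(mul(s, P)[0], Q) is None:
    s += 1                                 # skip degenerate toy states

r = mul(s, Q)[0]                           # one output the attacker observes

R = lift(r)                                # attacker lifts r to a point +-(s*Q)
predicted_state = mul(d, R)[0]             # x(d*s*Q) = x(s*P), the next state
actual_state = mul(s, P)[0]
predicted_next = mul(predicted_state, Q)[0]
actual_next = mul(actual_state, Q)[0]
print(predicted_next == actual_next)       # True: the stream is now predictable
```

The fix proposed in such discussions is to choose Q verifiably at random, so that no one can know a scalar d relating P and Q.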

 

Tanveer, A.; Ali, A.; Paracha, M.A.; Raja, F.R., “Performance Analysis of AES-finalists Along with SHS in IPSEC VPN over 1Gbps Link,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp.323-332, 13-17 Jan. 2015. doi:10.1109/IBCAST.2015.7058524
Abstract: IPSEC is a suite of protocols designed to provide secure communication over the network layer (layer 3) of the TCP/IP model. Participating IPSEC gateways may have different algorithms installed, but RFC 4835 specifies mandatory algorithms that a gateway must implement so that participating gateways always have at least one algorithmic combination to agree upon; off-the-shelf IPSEC implementations implement only these mandatory algorithms. In this paper, the enhancements involve the selection of hashing and encryption algorithms that yield better performance for the given system. All AES finalists and SHS algorithms were embedded, after some modifications, in a 64-bit RHEL 6.2 Linux kernel (2.6.32) and Openswan 2.6.38 (a user-space agent that helps gateways negotiate security associations), and a performance analysis of these algorithms, with throughput as the main parameter, was carried out over a 1 Gbps link in an IPSEC VPN. For this purpose, all combinations of the block ciphers with different key lengths and the hashing algorithms were tested and analyzed under the same operating conditions. Comparative results are shown for every combination of the AES finalists with every hashing algorithm of SHS and MD5; furthermore, all the AES finalists were also tested without hashing algorithms.
Keywords: Linux; computer network security; cryptographic protocols; internetworking; operating system kernels; transport protocols; virtual private networks; AES finalist performance analysis; IPSEC VPN network layer; IPSEC gateway; Openswan 2.6.38; RHEL 6.2 Linux kernel; SHS algorithm; TCP-IP protocol model; advanced encryption standard; bit rate 1 Gbit/s; cipher blocking; encryption algorithm; hashing algorithm; off the shelve IPSEC implementation; secure communication; secure hash standard; user space agent; Authentication; Encryption; IP networks; Logic gates; Payloads (ID#: 15-6497)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058524&isnumber=7058466
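A rough feel for the kind of measurement the paper performs, comparing the throughput of SHS hash algorithms, can be had in user space with the standard library. This micro-benchmark is illustrative only: it says nothing about in-kernel IPSEC performance, and the payload size and round count are arbitrary choices:

```python
import hashlib, time

def throughput_mbps(algorithm: str, payload: bytes, rounds: int = 20) -> float:
    """Rough single-core hashing throughput, in MB/s, for one algorithm."""
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(algorithm, payload).digest()
    elapsed = time.perf_counter() - start
    return len(payload) * rounds / elapsed / 1e6

payload = b"\x5a" * (1 << 20)              # 1 MiB of test data (arbitrary)
for alg in ("md5", "sha1", "sha256", "sha384", "sha512"):
    print(f"{alg:>8}: {throughput_mbps(alg, payload):8.1f} MB/s")
```

On 64-bit machines, SHA-512 often outpaces SHA-256 despite the longer digest, which is exactly the kind of non-obvious result a throughput comparison surfaces.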

 

Javed, A.; Akhlaq, M., “Patterns in Malware Designed for Data Espionage and Backdoor Creation,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 338-342, 13-17 Jan. 2015. doi:10.1109/IBCAST.2015.7058526
Abstract: In the recent past, malware has become a serious cyber security threat that has targeted not only individuals and organizations but also the cyberspace of countries around the world. Amongst malware variants, trojans designed for data espionage and backdoor creation dominate the threat landscape. This necessitates an in-depth study of these malware aimed at extracting static features such as APIs, strings, IP addresses, URLs and email addresses commonly found in such malicious code. Hence, in this research paper, an endeavor has been made to establish a set of patterns, tagged as APIs and malicious strings, persistently present in these malware, by articulating an analysis framework.
Keywords: application program interfaces; feature extraction; invasive software; APIs; backdoor creation; cyber security threat; data espionage; malicious codes; malicious strings; malware; static feature extraction; trojans; Accuracy; Feature extraction; Lead; Malware; Sensitivity (ID#: 15-6498)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058526&isnumber=7058466
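Static feature extraction of the kind the paper describes, pulling APIs, IP addresses, URLs and email addresses out of a binary, can be sketched with regular expressions over the raw bytes. The patterns, API names and the sample blob below are hypothetical, chosen only to exercise each category:

```python
import re

# Hypothetical patterns for the static feature classes named in the abstract.
PATTERNS = {
    "api":   re.compile(rb"(?:CreateRemoteThread|WriteProcessMemory|RegSetValueEx)\w*"),
    "ip":    re.compile(rb"(?:\d{1,3}\.){3}\d{1,3}"),
    "url":   re.compile(rb"https?://[\w./-]+"),
    "email": re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def extract_features(blob: bytes):
    """Pull candidate indicators out of a raw sample, grouped by type."""
    return {name: sorted({m.decode(errors="replace") for m in rx.findall(blob)})
            for name, rx in PATTERNS.items()}

sample = (b"\x00MZ\x90junk WriteProcessMemory \x00 http://evil.example/drop "
          b"c2 at 203.0.113.7, report to ops@evil.example \xff\xfe")
for kind, hits in extract_features(sample).items():
    print(kind, hits)
```

Grouping the hits by feature class is what makes the output usable as a pattern set: recurring APIs and strings across many samples become the signatures the paper's framework is after.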

 

Saboor, A.; Aslam, B., “Analyses of Flow Based Techniques to Detect Distributed Denial of Service Attacks,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 354-362, 13-17 Jan. 2015. doi:10.1109/IBCAST.2015.7058529
Abstract: Distributed Denial of Service (DDoS) attacks involve sending huge volumes of network traffic to a victim system from multiple systems. Detecting such attacks has gained much attention in the current literature. Studies have shown that flow-based anomaly detection mechanisms give promising results compared to typical signature-based attack detection mechanisms, which have not been able to detect such attacks effectively, and a variety of flow-based DDoS detection algorithms have accordingly been put forward. We divide flow-based DDoS attack detection techniques broadly into two categories, namely packet header based and mathematical formulation based, and analyze one technique from each category, evaluating them with respect to their detection accuracy and capability. Finally, we suggest improvements that can yield results better than both previously proposed algorithms; our findings can also be applied to DDoS detection systems to refine their detection capability.
Keywords: computer network security; mathematical analysis; telecommunication traffic; flow-based anomaly detection mechanisms; flow-based distributed denial of service attack detection techniques; mathematical formulation; multiple systems; network traffic; packet header; signature based attack detection mechanisms; victim system; Correlation; Correlation coefficient; IP networks; Distributed Denial of Service Attack; Exploitation Tools; Flow-based attack detection; Intrusion Detection; cyber security (ID#: 15-6499)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058529&isnumber=7058466
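A common flow-based anomaly signal of the kind such detectors use is the entropy of the source-IP distribution per observation window: a flood from many (often spoofed) sources toward one victim makes it jump. This sketch is a generic illustration, not either of the techniques the paper analyzes; the threshold and the flow data are invented:

```python
import math
from collections import Counter

def source_entropy(flows):
    """Shannon entropy (bits) of the source-IP distribution in one window."""
    counts = Counter(src for src, dst in flows)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def is_ddos_window(flows, baseline, threshold=2.0):
    """Flag a window whose source entropy jumps well above the baseline."""
    return source_entropy(flows) - baseline > threshold

# Four regular clients vs. a flood from 100 distinct sources, same victim.
normal = [(f"10.0.0.{i % 4}", "203.0.113.5") for i in range(100)]
attack = [(f"198.51.100.{i}", "203.0.113.5") for i in range(100)]

baseline = source_entropy(normal)        # ~2 bits for four equal clients
print(is_ddos_window(normal, baseline))  # False
print(is_ddos_window(attack, baseline))  # True
```

Packet-header-based techniques refine the same idea with more fields (ports, flags), while mathematical-formulation-based ones replace the fixed threshold with a statistical model of the baseline.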

 

Raza, F.; Bashir, S.; Tauseef, K.; Shah, S.I., “Optimizing Nodes Proportion for Intrusion Detection in Uniform and Gaussian Distributed Heterogeneous WSN,” in Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, vol., no., pp. 623-628, 13-17 Jan. 2015. doi:10.1109/IBCAST.2015.7058571
Abstract: In wireless sensor networks (WSN), intrusion detection applications have gained significant importance because of diverse implementations, including tracking a malicious intruder on the battlefield. Network parameters such as allowable distance, sensing range, transmission range, and node density play an important role in designing a model for a specific application. Numerous models have been proposed to efficiently deploy WSNs for these applications. However, the differing requirements of different applications make it difficult to develop a generic model. Another important factor with a significant contribution to the performance of a WSN is the strategy adopted for distributing the sensor nodes in the area of interest. The most common method is to deploy the sensors through either a uniform or a Gaussian distribution. Several performance comparisons have been reported to evaluate the detection probability and analyze its dependency on various network parameters. Another aspect fundamental to the performance of a sensor network is heterogeneity. Practically, for economic or logistic reasons, it may not be possible to ensure availability of nodes with identical features, e.g., sensing range, transmission/detection capability, etc. It is, therefore, important to assess the detection performance of the network when the nodes do not all possess the same sensing range. In this paper we analyze the impact of various node densities on the detection probability in a uniform and Gaussian distributed heterogeneous network under the K-sensing model. Experimental results provide optimal values of node densities for efficient deployment in a heterogeneous WSN environment.
Keywords: Gaussian distribution; object detection; optimisation; safety systems; wireless sensor networks; K-sensing model; allowable distance; battlefield; detection probability evaluation; economic reasons; generic model; intrusion detection application performance; logistic reasons; malicious intruder tracking; node density; node proportion optimization; sensing range; sensor node distribution; transmission range; uniform-Gaussian distributed heterogeneous WSN; wireless sensor network parameter; Ad hoc networks; Communication system security; Intrusion detection; Sensors; Wireless communication; Wireless sensor networks (ID#: 15-6500)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058571&isnumber=7058466
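The K-sensing detection probability the paper evaluates can be estimated by Monte Carlo simulation: drop heterogeneous nodes under a chosen deployment, then count how often at least k nodes sense a random intruder. A sketch under assumed parameters (unit-square field, Gaussian spread of 0.2, and the specific radii below are all illustrative, not the paper's settings):

```python
import math
import random

def detection_probability(ranges, k, deployment="uniform", trials=2000, seed=7):
    """Monte Carlo estimate of K-sensing detection probability on a unit square:
    an intruder is detected when at least k nodes have it within range.

    `ranges` holds one sensing radius per node, so a heterogeneous network is
    expressed simply by mixing different radii in the list."""
    rng = random.Random(seed)

    def place():
        if deployment == "uniform":
            return rng.random(), rng.random()
        # Gaussian deployment centred on the field, clipped to the square.
        return (min(max(rng.gauss(0.5, 0.2), 0.0), 1.0),
                min(max(rng.gauss(0.5, 0.2), 0.0), 1.0))

    hits = 0
    for _ in range(trials):
        intruder = (rng.random(), rng.random())
        nodes = [place() for _ in ranges]
        sensed = sum(1 for node, r in zip(nodes, ranges)
                     if math.dist(node, intruder) <= r)
        hits += sensed >= k
    return hits / trials

# Heterogeneous network: half short-range nodes, half long-range nodes.
mixed = [0.10] * 50 + [0.20] * 50
```

Sweeping the proportion of long-range nodes in `mixed` against the resulting probability mirrors the kind of node-proportion optimization the paper reports.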



International Conferences: INFOCOM 2015, Kowloon, Hong Kong, China

 

 

International Conferences:

INFOCOM 2015

Kowloon, Hong Kong, China


The 2015 IEEE Conference on Computer Communications (INFOCOM) was held on April 26–May 1, 2015 in Kowloon, Hong Kong, China. Over 300 papers were presented at the conference on a variety of computer networking topics. The work cited here specifically relates to the Science of Security.  


He, Xiaofan; Dai, Huaiyu; Ning, Peng, “Improving Learning and Adaptation in Security Games by Exploiting Information Asymmetry,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 1787–1795, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218560
Abstract: With the advancement of modern technologies, the security battle between a legitimate system (LS) and an adversary is becoming increasingly sophisticated, involving complex interactions in unknown dynamic environments. Stochastic game (SG), together with multi-agent reinforcement learning (MARL), offers a systematic framework for the study of information warfare in current and emerging cyber-physical systems. In practical security games, each player usually has only incomplete information about the opponent, which induces information asymmetry. This work exploits information asymmetry from a new angle, considering how to exploit local information unknown to the opponent to the player's advantage. Two new MARL algorithms, termed minimax-PDS and WoLF-PDS, are proposed, which enable the LS to learn and adapt faster in dynamic environments by exploiting its private local information. The proposed algorithms are provably convergent and rational, respectively. Also, numerical results are presented to show their effectiveness through two concrete anti-jamming examples.
Keywords: Computers; Conferences; Games; Heuristic algorithms; Jamming; Security; Sensors (ID#: 15-6719)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218560&isnumber=7218353
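The minimax-PDS and WoLF-PDS algorithms themselves are not specified in the abstract. As a stand-in, the zero-sum structure of the paper's anti-jamming examples can be illustrated with fictitious play on a two-channel game, where each side best-responds to the opponent's empirical action frequencies; the game matrix and round count below are assumptions for illustration only:

```python
def fictitious_play(payoff, rounds=5000):
    """Fictitious play for a zero-sum matrix game.

    payoff[i][j] is the row player's (defender's) payoff when the defender
    transmits on channel i and the jammer jams channel j. Returns the
    empirical mixed strategies of both players."""
    n, m = len(payoff), len(payoff[0])
    row_counts, col_counts = [0] * n, [0] * m
    row_counts[0] += 1
    col_counts[0] += 1
    for _ in range(rounds):
        # Defender best-responds to the jammer's empirical mixture...
        i = max(range(n),
                key=lambda a: sum(payoff[a][b] * col_counts[b] for b in range(m)))
        # ...while the jammer best-responds to the defender's.
        j = min(range(m),
                key=lambda b: sum(payoff[a][b] * row_counts[a] for a in range(n)))
        row_counts[i] += 1
        col_counts[j] += 1
    return ([c / sum(row_counts) for c in row_counts],
            [c / sum(col_counts) for c in col_counts])

# Two-channel anti-jamming: the defender scores 1 when it avoids the jammer.
avoid_jammer = [[0, 1],
                [1, 0]]
defender, jammer = fictitious_play(avoid_jammer)
```

Both empirical strategies converge toward the 50/50 mixed equilibrium; the paper's contribution is exploiting private local information to learn such strategies faster than this symmetric baseline.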

 

Hu, Pengfei; Li, Hongxing; Fu, Hao; Cansever, Derya; Mohapatra, Prasant, “Dynamic Defense Strategy Against Advanced Persistent Threat with Insiders,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 747–755, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218444
Abstract: The landscape of cyber security has been reformed dramatically by the recently emerging Advanced Persistent Threat (APT). It is uniquely featured by a stealthy, continuous, sophisticated and well-funded attack process for long-term malicious gain, which renders the current defense mechanisms inapplicable. A novel design of defense strategy, continuously combating APT over a long time span with imperfect/incomplete information on the attacker's actions, is urgently needed. The challenge is even more escalated when APT is coupled with the insider threat (a major threat in cyber-security), where insiders could trade valuable information to the APT attacker for monetary gains. The interplay among the defender, APT attacker and insiders should be judiciously studied to shed insights on a more secure defense system. In this paper, we consider the joint threats from the APT attacker and the insiders, and characterize the aforementioned interplay as a two-layer game model, i.e., a defense/attack game between defender and APT attacker and an information-trading game among insiders. Through rigorous analysis, we identify the best response strategies for each player and prove the existence of Nash Equilibrium for both games. Extensive numerical study further verifies our analytic results and examines the impact of different system configurations on the achievable security level.
Keywords: Computer security; Computers; Cost function; Games; Joints; Nash equilibrium (ID#: 15-6720)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218444&isnumber=7218353

 

Hao, Zijiang; Tang, Yutao; Zhang, Yifan; Novak, Ed; Carter, Nancy; Li, Qun, “SMOC: A Secure Mobile Cloud Computing Platform,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 2668–2676, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218658
Abstract: Mobile devices are now ubiquitous in the modern world. In this paper, we propose a novel and practical mobile-cloud platform for smart mobile devices. Our platform allows users to run the entire mobile device operating system and arbitrary applications on a cloud-based virtual machine. It has two design fundamentals. First, applications can freely migrate between the user's mobile device and a backend cloud server. We design a file system extension to enable this feature, so users can freely choose to run their applications either in the cloud (for high security guarantees), or on their local mobile device (for better user experience). Second, in order to protect user data on the smart mobile device, we leverage hardware virtualization technology, which isolates the data from the local mobile device operating system. We have implemented a prototype of our platform using off-the-shelf hardware, and performed an extensive evaluation of it. We show that our platform is efficient, practical, and secure.
Keywords: Hardware; Keyboards; Mobile communication; Mobile handsets; Security; Virtual machine monitors; Virtualization
(ID#: 15-6721)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218658&isnumber=7218353

 

Chen, Fei; Xiang, Tao; Yang, Yuanyuan; Wang, Cong; Zhang, Shengyu, “Secure Cloud Storage Hits Distributed String Equality Checking: More Efficient, Conceptually Simpler, and Provably Secure,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 2389–2397, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218627
Abstract: Cloud storage has gained a remarkable success in recent years with an increasing number of consumers and enterprises outsourcing their data to the cloud. To assure the availability and integrity of the outsourced data, several protocols have been proposed to audit cloud storage. Despite the formally guaranteed security, the constructions employed heavy cryptographic operations as well as advanced concepts (e.g., bilinear maps over elliptic curves and digital signatures), and are thus too inefficient to admit wide applicability in practice. In this paper, we design a novel secure cloud storage protocol, which is conceptually and technically simpler and significantly more efficient than previous constructions. Inspired by a classic string equality checking protocol in distributed computing, our protocol uses only basic integer arithmetic (without advanced techniques and concepts). As simple as the protocol is, it supports both randomized and deterministic auditing to fit different applications. We further extend the proposed protocol to support data dynamics, i.e., adding, deleting and modifying data, using a novel technique. As a further contribution, we find a systematic way to design secure cloud storage protocols based on verifiable computation protocols. Theoretical and experimental analyses validate the efficacy of our protocol.
Keywords: Cloud computing; Computational modeling; Computers; Conferences; Protocols; Secure storage; Security
(ID#: 15-6722)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218627&isnumber=7218353
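The classic distributed string equality checking protocol the abstract credits compares short random fingerprints instead of full strings, using only modular integer arithmetic: equal strings always agree, and unequal strings collide with negligible probability. An editor's sketch of that primitive only (the prime below and the auditing protocol built around the primitive are assumptions, not the paper's scheme):

```python
import random

P = (1 << 61) - 1  # a Mersenne prime; collision probability <= len(data) / P

def fingerprint(data: bytes, r: int) -> int:
    """Evaluate the data's polynomial at a random point r, modulo P."""
    acc = 0
    for byte in data:
        acc = (acc * r + byte) % P
    return acc

def probably_equal(a: bytes, b: bytes, rng=random) -> bool:
    """Equal inputs always agree; unequal inputs agree with probability
    at most max(len(a), len(b)) / P over the random choice of r."""
    r = rng.randrange(2, P)
    return fingerprint(a, r) == fingerprint(b, r)
```

In an auditing setting, the client would keep only the random point and the expected fingerprint, letting it check a stored block without holding the block itself.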

 

Sun, Wenhai; Liu, Xuefeng; Lou, Wenjing; Hou, Y.Thomas; Li, Hui, “Catch You if You Lie to Me: Efficient Verifiable Conjunctive Keyword Search over Large Dynamic Encrypted Cloud Data,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 2110–2118, April 26 2015-May 1 2015. doi:10.1109/INFOCOM.2015.7218596
Abstract: Encrypted data search allows cloud to offer fundamental information retrieval service to its users in a privacy-preserving way. In most existing schemes, search result is returned by a semi-trusted server and usually considered authentic. However, in practice, the server may malfunction or even be malicious itself. Therefore, users need a result verification mechanism to detect the potential misbehavior in this computation outsourcing model and rebuild their confidence in the whole search process. On the other hand, cloud typically hosts large outsourced data of users in its storage. The verification cost should be efficient enough for practical use, i.e., it only depends on the corresponding search operation, regardless of the file collection size. In this paper, we are among the first to investigate the efficient search result verification problem and propose an encrypted data search scheme that enables users to conduct secure conjunctive keyword search, update the outsourced file collection and verify the authenticity of the search result efficiently. The proposed verification mechanism is efficient and flexible, which can be either delegated to a public trusted authority (TA) or be executed privately by data users. We formally prove the universally composable (UC) security of our scheme. Experimental result shows its practical efficiency even with a large dataset.
Keywords: Conferences; Cryptography; Indexes; Keyword search; Polynomials; Servers (ID#: 15-6723)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218596&isnumber=7218353

 

Chen, Zhili; Huang, Liusheng; Chen, Lin, “ITSEC: An Information-Theoretically Secure Framework for Truthful Spectrum Auctions,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 2065–2073, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218591
Abstract: Truthful auctions make bidders reveal their true valuations for goods to maximize their utilities. Currently, almost all spectrum auction designs are required to be truthful. However, disclosure of one's true value causes numerous security vulnerabilities. Secure spectrum auctions are thus called for to address such information leakage. Previous secure auctions either did not achieve enough security, or were very slow due to heavy computation and communication overhead. In this paper, inspired by the idea of secret sharing, we design an information-theoretically secure framework (ITSEC) for truthful spectrum auctions. As a distinguished feature, ITSEC not only achieves information-theoretic security for spectrum auction protocols in the sense of cryptography, but also greatly reduces both computation and communication overhead by ensuring security without using any encryption/decryption algorithm. To our knowledge, ITSEC is the first information-theoretically secure framework for truthful spectrum auctions in the presence of semi-honest adversaries. We also design and implement circuits for both single-sided and double spectrum auctions under the ITSEC framework. Extensive experimental results demonstrate that ITSEC achieves comparable performance in terms of computation with respect to spectrum auction mechanisms without any security measure, and incurs only limited communication overhead.
Keywords: Conferences; Cryptography; Logic gates; Privacy; Protocols; Random variables (ID#: 15-6724)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218591&isnumber=7218353
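ITSEC's construction is described only as "inspired by secret sharing." The information-theoretic primitive that phrase refers to can be illustrated with additive secret sharing of a bid: any proper subset of shares is statistically independent of the secret, so no encryption is needed. A sketch (the modulus and share count are illustrative, not ITSEC's parameters):

```python
import random

M = 2 ** 32  # all arithmetic is modulo M

def share(secret: int, n: int, rng=random) -> list:
    """Split `secret` into n additive shares; any n-1 of them are
    uniformly random and reveal nothing about the secret."""
    parts = [rng.randrange(M) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % M)
    return parts

def reconstruct(parts: list) -> int:
    return sum(parts) % M

bid = 1_499  # a bidder's private valuation
shares = share(bid, 5)
```

Because shares add homomorphically, auction circuits can compute on shared bids without any party ever holding a complete valuation.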

 

Ma, Jiefei; Le, Franck; Russo, Alessandra; Lobo, Jorge, “Detecting Distributed Signature-Based Intrusion: The Case of Multi-Path Routing Attacks,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 558–566, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218423
Abstract: Signature-based network intrusion detection systems (S-IDSs) have become an important security tool in the protection of an organisation's infrastructure against external intruders. By analysing network traffic, S-IDSs detect network intrusions. An organisation may deploy one or multiple S-IDSs, each working independently under the assumption that it can monitor all packets of a given flow to detect intrusion signatures. However, emerging technologies (e.g., Multi-Path TCP) violate this assumption, as traffic can be concurrently sent across different paths (e.g., WiFi, Cellular) to boost network performance. Attackers may exploit this capability and split malicious payloads across multiple paths to evade traditional signature-based network intrusion detection systems. Although multiple monitors may be deployed, none of them has full coverage of the network traffic to detect the intrusion signature. In this paper, we formalise this distributed signature-based intrusion detection problem as an asynchronous online exact string matching problem, and propose an algorithm for it. To demonstrate its effectiveness we conducted comprehensive experiments. Our results show that the behaviour of our algorithm depends only on the packet arrival rate: delay in detecting the signature grows linearly with the packet arrival rate, with small communication overhead.
Keywords: Automata; Computers; Conferences; Intrusion detection; Monitoring; Payloads; Synchronization (ID#: 15-6725)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218423&isnumber=7218353
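The evasion the paper targets is easy to reproduce: a signature split across two paths is invisible to any single-path monitor, yet appears once the byte streams are reassembled in sequence order. A toy sketch of the detection side (the offsets and the naive substring scan are simplifications; the paper's contribution is doing this matching asynchronously and online, which is not reproduced here):

```python
def reassemble_and_scan(fragments, signature: bytes) -> bool:
    """fragments: (byte_offset, payload) pairs observed on any path.

    Rebuilds the flow in sequence order, then scans for the signature."""
    stream = bytearray()
    for offset, chunk in sorted(fragments):
        end = offset + len(chunk)
        if end > len(stream):
            stream.extend(b"\x00" * (end - len(stream)))  # gap placeholder
        stream[offset:end] = chunk
    return signature in bytes(stream)

SIGNATURE = b"EVIL_SHELLCODE"
# The attacker alternates bytes between a WiFi path and a cellular path.
wifi     = [(i, SIGNATURE[i:i + 1]) for i in range(0, len(SIGNATURE), 2)]
cellular = [(i, SIGNATURE[i:i + 1]) for i in range(1, len(SIGNATURE), 2)]
```

Each path alone carries only every other byte and matches nothing; only the combined view of both monitors reveals the signature, which is the coordination problem the paper formalises.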

 

Xu, Qiang; Liao, Yong; Miskovic, Stanislav; Mao, Z. Morley; Baldi, Mario; Nucci, Antonio; Andrews, Thomas, “Automatic Generation of Mobile App Signatures from Traffic Observations,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 1481–1489, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218526
Abstract: Network management, traffic engineering, and security practices adopted in today's networking rely on knowledge of which applications' traffic is passing through the networks. These practices might fail with mobile apps, whose identity remains hidden in generic HTTP traffic. The main reason is that, unlike traditional applications, most mobile apps do not use specific protocols or IP ports with distinctive features. Many enterprises and service providers are in great need of regaining control over their networks, which increasingly carry mobile traffic. In this paper we propose FLOWR, a system that automatically identifies mobile apps by continually learning the apps' distinguishing features via traffic analysis. FLOWR focuses solely on key-value pairs in HTTP headers and intelligently identifies the pairs suitable for app signatures. Our system employs a custom supervised learning approach that leverages a very limited knowledge of app-signature seeds and autonomously grows its capacity for app identification. The approach is motivated by a simple but effective hypothesis that unknown app-identifying features should co-occur with the known signatures. Our experimental results show a significant growth in flow identification coverage provided by FLOWR. Specifically, we show that FLOWR can achieve identification of 86–95% of flows related to their generating apps.
Keywords: Computers; Conferences; FLOWR; IP networks; Mobile communication; Mobile computing; Protocols; Web services
(ID#: 15-6726)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218526&isnumber=7218353

 

Zhang, Chao; Niknami, Mehrdad; Chen, Kevin Zhijie; Song, Chengyu; Chen, Zhaofeng; Song, Dawn, “JITScope: Protecting Web Users from Control-Flow Hijacking Attacks,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 567–575, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218424
Abstract: Web browsers are one of the most important end-user applications to browse, retrieve, and present Internet resources. Malicious or compromised resources may endanger Web users by hijacking web browsers to execute arbitrary malicious code in the victims' systems. Unfortunately, the widely-adopted Just-In-Time compilation (JIT) optimization technique, which compiles source code to native code at runtime, significantly increases this risk. By exploiting JIT compiled code, attackers can bypass all currently deployed defenses. In this paper, we systematically investigate threats against JIT compiled code, and the challenges of protecting JIT compiled code. We propose a general defense solution, JITScope, to enforce Control-Flow Integrity (CFI) on both statically compiled and JIT compiled code. Our solution furthermore enforces the W⊕X policy on JIT compiled code, preventing the JIT compiled code from being overwritten by attackers. We show that our prototype implementation of JITScope on the popular Firefox web browser introduces a reasonably low performance overhead, while defeating existing real-world control flow hijacking attacks.
Keywords: Browsers; Engines; Instruments; JIT compiled code; JIT optimization technique; JITScope; Layout; Runtime; Safety; Security (ID#: 15-6727)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218424&isnumber=7218353

 

Lu, Zhuo; Sagduyu, Yalin E.; Li, Jason H., “Queuing the Trust: Secure Backpressure Algorithm Against Insider Threats in Wireless Networks,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 253–261, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218389
Abstract: The backpressure algorithm is known to provide throughput optimality in routing and scheduling decisions for multi-hop networks with dynamic traffic. The essential assumption in the backpressure algorithm is that all nodes are benign and obey the algorithm rules governing the information exchange and underlying optimization needs. Nonetheless, such an assumption does not always hold in realistic scenarios, especially in the presence of security attacks with intent to disrupt network operations. In this paper, we propose a novel mechanism, called virtual trust queuing, to protect backpressure algorithm based routing and scheduling protocols from various insider threats. Our objective is not to design yet another trust-based routing scheme that heuristically trades off security and performance, but to develop a generic solution with strong guarantees of attack resilience and throughput performance in the backpressure algorithm. To this end, we quantify a node's algorithm-compliance behavior over time and construct a virtual trust queue that maintains deviations from expected algorithm outcomes. We show that by jointly stabilizing the virtual trust queue and the real packet queue, the backpressure algorithm not only achieves resilience, but also sustains the throughput performance under an extensive set of security attacks.
Keywords: Algorithm design and analysis; Heuristic algorithms; Optimization; Queueing analysis; Routing; Scheduling; Throughput (ID#: 15-6728)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218389&isnumber=7218353
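The paper's virtual trust queue update rule is its contribution and is not reproduced here; this sketch only shows how a trust backlog can be folded into the standard backpressure forwarding weight, steering traffic away from suspected insiders (the queue values and penalty scale are hypothetical):

```python
def next_hop(node, queues, trust_backlog, neighbors):
    """Backpressure forwarding with a virtual trust queue penalty.

    Standard backpressure forwards to the neighbor with the largest
    positive queue differential; here a neighbor's accumulated deviation
    from expected algorithm behaviour (its virtual trust backlog) is
    subtracted from that weight before the comparison."""
    best, best_weight = None, 0.0
    for j in neighbors:
        weight = (queues[node] - queues[j]) - trust_backlog[j]
        if weight > best_weight:
            best, best_weight = j, weight
    return best  # None: no worthwhile neighbor, hold the packet this slot

queues = {"s": 10, "b": 2, "c": 2}
trust = {"b": 0.0, "c": 6.0}  # c has a history of dropping packets
```

With equal packet backlogs, the node forwards to the compliant neighbor `b`; if `b`'s trust backlog grows large enough, traffic shifts to `c`, and if every neighbor is distrusted the packet is held, which is how joint stabilization of both queues plays out per slot.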

 

Cui, Helei; Yuan, Xingliang; Wang, Cong, “Harnessing Encrypted Data in Cloud for Secure and Efficient Image Sharing from Mobile Devices,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 2659–2667, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218657
Abstract: In storage outsourcing, highly correlated datasets can occur commonly, where the rich information buried in correlated data can be useful for many cloud data generation/dissemination services. In light of this, we propose to enable a secure and efficient cloud-assisted image sharing architecture for mobile devices, by leveraging outsourced encrypted image datasets with privacy assurance. Different from traditional image sharing, the proposed design aims to save the transmission cost from mobile clients, by directly utilizing outsourced correlated images to reproduce the image of interest inside the cloud for immediate dissemination. While the benefits are obvious, how to leverage the encrypted image datasets makes the problem particularly challenging. To tackle the problem, we first propose a secure and efficient index design that allows the mobile client to securely find from the encrypted image datasets the candidate selection pertaining to the image of interest for sharing. We then design two specialized encryption mechanisms that support the secure image reproduction inside the cloud directly from the encrypted candidate selection. We formally analyze the security strength of the design. Our experiments show that up to 90% of the transmission cost at the mobile client can be saved, while achieving all service requirements and security guarantees.
Keywords: Encryption; Feature extraction; Indexes; Mobile communication; Servers (ID#: 15-6729)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218657&isnumber=7218353

 

Zhang, Kuan; Liang, Xiaohui; Lu, Rongxing; Yang, Kan; Shen, Xuemin Sherman, “Exploiting Mobile Social Behaviors for Sybil Detection,” in Computer Communications (INFOCOM), 2015 IEEE Conference on,  vol., no., pp. 271–279, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218391
Abstract: In this paper, we propose a Social-based Mobile Sybil Detection (SMSD) scheme to detect Sybil attackers from their abnormal contacts and pseudonym changing behaviors. Specifically, we first define four levels of Sybil attackers in mobile environments according to their attacking capabilities. We then exploit mobile users' contacts and their pseudonym changing behaviors to distinguish Sybil attackers from normal users. To alleviate the storage and computation burden of mobile users, the cloud server is introduced to store mobile user's contact information and to perform the Sybil detection. Furthermore, we utilize a ring structure associated with mobile user's contact signatures to resist the contact forgery by mobile users and cloud servers. In addition, investigating mobile user's contact distribution and social proximity, we propose a semi-supervised learning with Hidden Markov Model to detect the colluded mobile users. Security analysis demonstrates that the SMSD can resist the Sybil attackers from the defined four levels, and the extensive trace-driven simulation shows that the SMSD can detect these Sybil attackers with high accuracy.
Keywords: Aggregates; Computers; Hidden Markov models; Mobile communication; Mobile computing; Resists; Servers
(ID#: 15-6730)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218391&isnumber=7218353

 

Wang, Bing; Song, Wei; Lou, Wenjing; Hou, Y.Thomas, “Inverted Index Based Multi-Keyword Public-Key Searchable Encryption with Strong Privacy Guarantee,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 2092–2100, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218594
Abstract: With the growing awareness of data privacy, more and more cloud users choose to encrypt their sensitive data before outsourcing them to the cloud. Search over encrypted data is therefore a critical function facilitating efficient cloud data access given the high data volume that each user has to handle nowadays. The inverted index is one of the most efficient searchable index structures and has been widely adopted in plaintext search. However, securing an inverted index and its associated search schemes is not a trivial task. A major challenge exposed from the existing efforts is the difficulty of protecting the user's query privacy. The challenge is rooted in two facts: 1) the existing solutions use a deterministic trapdoor generation function for queries; and 2) once a keyword is searched, the encrypted inverted list for this keyword is revealed to the cloud server. We denote this second property in the existing solutions as the one-time-only search limitation. Additionally, conjunctive multi-keyword search, which is the most common form of query nowadays, is not supported in those works. In this paper, we propose a public-key searchable encryption scheme based on the inverted index. Our scheme preserves the high search efficiency inherited from the inverted index while lifting the one-time-only search limitation of the previous solutions. Our scheme features a probabilistic trapdoor generation algorithm and protects the search pattern. In addition, our scheme supports conjunctive multi-keyword search. Compared with the existing public key based schemes that heavily rely on expensive pairing operations, our scheme is more efficient by using only multiplications and exponentiations. To meet stronger security requirements, we strengthen our scheme with an efficient oblivious transfer protocol that hides the access pattern from the cloud. The simulation results demonstrate that our scheme is suitable for practical usage with moderate overhead.
Keywords: Encryption; Indexes; Polynomials; Privacy; Public key; Servers (ID#: 15-6731)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218594&isnumber=7218353

 

Salinas, Sergio; Luo, Changqing; Chen, Xuhui; Li, Pan, “Efficient Secure Outsourcing of Large-Scale Linear Systems of Equations,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 1035–1043, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218476
Abstract: Solving large-scale linear systems of equations (LSEs) is one of the most common and fundamental problems in big data. But such problems are often too expensive to solve for resource-limited users. Cloud computing has been proposed as a timely, efficient, and cost-effective way of solving such computing tasks. Nevertheless, one critical concern in cloud computing is data privacy. In particular, in many cases, clients' LSEs contain private data that should remain hidden from the cloud for ethical, legal, or security reasons. Many previous works on secure outsourcing of LSEs have high computational complexity. More importantly, they share a common serious problem, i.e., a huge number of external memory I/O operations. This problem has been largely neglected in the past, but in fact is of particular importance and may eventually render those outsourcing schemes impractical. In this paper, we develop an efficient and practical secure outsourcing algorithm for solving large-scale LSEs, which has both low computational complexity and low memory I/O complexity and can protect clients' privacy well. We implement our algorithm on a real-world cloud server and a laptop. We find that the proposed algorithm offers significant time savings for the client (up to 65%) compared to previous algorithms.
Keywords: Computational complexity; Computers; Outsourcing; Privacy; Random access memory; Symmetric matrices
(ID#: 15-6732)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218476&isnumber=7218353
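The abstract states only the goals (low compute, low I/O, privacy). The general outsourcing pattern the work belongs to can be sketched with a toy masking step: the client applies a random invertible row transform before shipping the system out, and the cloud's solution is unchanged. This is far weaker than the paper's actual scheme and purely illustrative:

```python
import random
from fractions import Fraction

def solve(A, b):
    """Gauss-Jordan elimination over exact rationals (the 'cloud' side)."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]   # normalize pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:          # eliminate the column
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [row[n] for row in M]

def mask(A, b, rng):
    """Client side: permute and rescale the equations (an invertible left
    transform), so the cloud never sees the original rows of A or b."""
    n = len(A)
    perm = list(range(n))
    rng.shuffle(perm)
    scale = [rng.choice([1, 2, 3, 5, 7]) for _ in range(n)]
    A2 = [[scale[i] * A[perm[i]][j] for j in range(n)] for i in range(n)]
    b2 = [scale[i] * b[perm[i]] for i in range(n)]
    return A2, b2

A, b = [[2, 1], [1, 3]], [5, 10]  # true solution x = (1, 3)
A2, b2 = mask(A, b, random.Random(42))
```

Since row permutation and scaling are invertible, `solve(A2, b2)` returns the same solution vector as `solve(A, b)`; a production scheme must hide far more structure than this toy transform does.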

 

Yang, Lei; Peng, Pai; Dang, Fan; Wang, Cheng; Li, Xiang-Yang; Liu, Yunhao, “Anti-Counterfeiting via Federated RFID Tags' Fingerprints and Geometric Relationships,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp.1966–1974, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218580
Abstract: RFID has been widely adopted as an effective method for anti-counterfeiting. Legacy systems based on security protocols are either too heavy to be affordable by passive tags or suffer from various protocol-layer attacks, e.g., reverse engineering, cloning, and side-channel attacks. In this work, we present a novel anti-counterfeiting system, TagPrint, using COTS RFID tags and readers. Achieving low-cost and offline genuineness validation utilizing passive tags has been a daunting task. Our system achieves these goals by leveraging a few federated tags' fingerprints and geometric relationships. In TagPrint, we exploit a new kind of fingerprint, called a phase fingerprint, extracted from the phase value of the backscattered signal, provided by the COTS RFID readers. To further solve the separation challenge, we devise a geometric solution to validate the genuineness. We have implemented a prototype of TagPrint using COTS RFID devices. The system has been tested extensively over 6,000 tags. The results show that our new fingerprint exhibits a good fit to a uniform distribution and the system achieves a surprising Equal Error Rate of 0.1% for anti-counterfeiting.
Keywords: Antennas; Counterfeiting; Cryptography; Fingerprint recognition; Phase measurement; Radiofrequency identification; Anti-counterfeiting; Phase fingerprint; RFID; Tag-Print (ID#: 15-6733)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218580&isnumber=7218353

 

Niu, Jianwei; Gu, Fei; Zhou, Ruogu; Xing, Guoliang; Xiang, Wei, “VINCE: Exploiting Visible Light Sensing for Smartphone-Based NFC Systems,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 2722–2730, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218664
Abstract: This paper presents VINCE — a novel visible light sensing design for smartphone-based Near Field Communication (NFC) systems. VINCE encodes information as different brightness levels of smartphone screens, while receivers capture the light signal via light sensors. In contrast to RF technologies, the direction and distance of such a Visible Light Communication (VLC) link can be easily controlled, preserving communication privacy and security. As a result, VINCE can be used in a wide range of NFC applications such as contactless payments and device pairing. We experimentally profile the impact of screen brightness levels and refresh rates of smartphones, and then use the results to guide the design of light intensity encoding scheme of VINCE. We adopt several signal processing techniques and empirically derive a model to deal with the significant variation of received light intensity caused by noises and low screen refresh rates. To improve the communication reliability, VINCE adopts a feedback-based retransmission scheme, and dynamically adjusts the number of encoding brightness levels based on the current light channel condition. We also derive an analytical model that characterizes the relation among the distance, SNR (Signal to Noise Ratio), and BER (Bit Error Rate) of VINCE. Our design and theoretical model are validated via extensive evaluations using a hardware implementation of VINCE on Android smartphones and the Arduino platform.
Keywords: Brightness; Decoding; Encoding; Receivers; Sensors; Signal to noise ratio; (ID#: 15-6734)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218664&isnumber=7218353
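The brightness-level encoding the abstract describes can be sketched roughly as follows. This is a toy illustration with invented names; VINCE's actual symbol alphabet, screen control, and error handling are considerably more involved.

```python
# Toy sketch of brightness-level encoding, VINCE-style: pack groups of bits
# into symbols and emit each symbol as a normalized screen brightness.
# Level count and function names are illustrative assumptions, not the paper's.

def encode(bits, n_levels=4):
    """Pack bits into symbols, one symbol per brightness level (0.0-1.0)."""
    bits_per_symbol = n_levels.bit_length() - 1  # 4 levels -> 2 bits/symbol
    symbols = []
    for i in range(0, len(bits), bits_per_symbol):
        chunk = bits[i:i + bits_per_symbol]
        value = int("".join(map(str, chunk)), 2)
        symbols.append(value / (n_levels - 1))   # normalized brightness
    return symbols

def decode(levels, n_levels=4):
    """Recover bits from observed brightness levels."""
    bits_per_symbol = n_levels.bit_length() - 1
    bits = []
    for lvl in levels:
        value = round(lvl * (n_levels - 1))
        bits.extend(int(b) for b in format(value, f"0{bits_per_symbol}b"))
    return bits

message = [1, 0, 1, 1, 0, 0]
assert decode(encode(message)) == message  # lossless round trip
```

Raising `n_levels` packs more bits per symbol, which mirrors the paper's dynamic adjustment of the number of brightness levels to the current channel condition.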

 

Zhang, Shuo; He, Fei; Gu, Ming, “VeRV: A Temporal and Data-Concerned Verification Framework for the Vehicle Bus Systems,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 1167–1175, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218491
Abstract: As a part of the international standard IEC 61375, the multifunction vehicle bus (MVB) has been used in most modern train control systems. It is highly desirable to check the temporal properties of the data transmitted on the bus; however, we are not aware of any published work on this problem. We propose VeRV, the first temporal and data-concerned verification framework for vehicle bus systems. A domain-specific language, called VeSpec, is proposed to specify the packet formats and the desired properties. The language is expressive, modular and easy to use. Given a VeSpec script, VeRV automatically generates a runtime analyzer. We have applied our technique to a real tube train system and succeeded in diagnosing a real failure in that system. This industrial application illustrates the effectiveness and efficiency of our technique.
Keywords: Automata; History; Java; Monitoring; Temperature measurement; Temperature sensors; Vehicles; Vehicle bus systems; domain-specific language; online monitoring; runtime verification (ID#: 15-6735)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218491&isnumber=7218353

 

Niu, Ben; Li, Qinghua; Zhu, Xiaoyan; Cao, Guohong; Li, Hui, “Enhancing Privacy Through Caching in Location-Based Services,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 1017–1025, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218474
Abstract: Privacy protection is critical for Location-Based Services (LBSs). In most previous solutions, users query service data from the untrusted LBS server when needed, and discard the data immediately after use. However, the data can be cached and reused to answer future queries. This prevents some queries from being sent to the LBS server and thus improves privacy. Although a few previous works recognize the usefulness of caching for better privacy, they use caching in a fairly straightforward way, and do not show the quantitative relation between caching and privacy. In this paper, we propose a caching-based solution to protect location privacy in LBSs, and rigorously explore how much caching can improve privacy. Specifically, we propose an entropy-based privacy metric which for the first time incorporates the effect of caching on privacy. Then we design two novel caching-aware dummy selection algorithms which enhance location privacy through maximizing both the privacy of the current query and the dummies' contribution to cache. Evaluations show that our algorithms provide much better privacy than previous caching-oblivious and caching-aware solutions.
Keywords: Algorithm design and analysis; Computers; Entropy; Measurement; Mobile communication; Privacy; Servers (ID#: 15-6736)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218474&isnumber=7218353
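As a rough illustration of an entropy-based location-privacy metric of the kind the abstract mentions: the attacker's uncertainty over the candidate locations (real query plus dummies) can be measured in bits. The function and the uniform-belief example below are our own simplification, not the paper's actual metric.

```python
import math

# Illustrative sketch (our construction, not the paper's): Shannon entropy of
# the attacker's belief over k candidate locations. More dummies -- and more
# cache hits that keep real queries off the server -- mean higher uncertainty.

def location_entropy(probs):
    """Shannon entropy (bits) of the attacker's belief over candidates."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# With k equally likely candidates the entropy is log2(k):
uniform_4 = [0.25] * 4
assert abs(location_entropy(uniform_4) - 2.0) < 1e-9  # log2(4) = 2 bits
```

A caching-aware dummy selection would then pick dummies that both flatten this distribution for the current query and populate the cache for future ones.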

 

Roos, Stefanie; Strufe, Thorsten, “On the Impossibility of Efficient Self-Stabilization in Virtual Overlays with Churn,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 298–306, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218394
Abstract: Virtual overlays generate topologies for greedy routing, like rings or hypercubes, on connectivity restricted networks. They have been proposed to achieve efficient content discovery in the Darknet mode of Freenet, for instance, which provides a private and secure communication platform for dissidents and whistle-blowers. Virtual overlays create tunnels between nodes with neighboring addresses in the topology. The routing performance hence is directly related to the length of the tunnels, which have to be set up and maintained at the cost of communication overhead in the absence of an underlying routing protocol. In this paper, we show the impossibility of efficiently maintaining sufficiently short tunnels. Specifically, we prove that in a dynamic network either the maintenance or the routing eventually exceeds polylogarithmic cost in the number of participants. Our simulations additionally show that the length of the tunnels increases quickly if standard maintenance protocols are applied. Thus, we show that virtual overlays can only offer efficient routing at the price of high maintenance costs.
Keywords: Maintenance engineering; Network topology; Random processes; Random variables; Routing; Topology; Zinc (ID#: 15-6737)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218394&isnumber=7218353





International Conferences: Incident Management and Forensics (IMF), Germany, 2015

 

 

International Conferences:

Incident Management and Forensics (IMF)

Germany, 2015


The 2015 Ninth International Conference on IT Security Incident Management & IT Forensics (IMF) was held 18-20 May 2015 in Magdeburg, Germany. Papers were presented on forensic trends, memory and file system analysis, database forensics, detection of encrypted content, response selection in automated incident handling, mobile payment fraud, forensics education, and evidence modeling.


Lösche, Ulf; Morgenstern, Maik; Pilz, Hendrik, “Platform Independent Malware Analysis Framework,” in IT Security Incident Management & IT Forensics (IMF), 2015 Ninth International Conference on, vol., no., pp. 109-113, 18-20 May 2015. doi:10.1109/IMF.2015.21
Abstract: Over the past years, malicious software has evolved into a persistent threat on all major computer platforms. Due to the high number of new threats released every day, security researchers have developed automatic systems to analyze and classify unknown pieces of software. While these techniques are technically mature on the Windows platform, they still have to be improved on many other platforms such as Linux and Mac OS X. As the process of malware analysis is very similar on all platforms, we have developed a platform-independent framework to easily implement malware analysis on a new platform. This paper covers our experience with malware analysis and presents our generic approach, which can be applied on any platform.
Keywords: Androids; Humanoid robots; Linux; Malware; Monitoring; Operating systems; Virtual machine monitors; Android; Dynamic analysis; Forensic; Linux; Mac OS X; Malware analysis; Platform independent; Sandbox; Virtualization; Windows (ID#: 15-6738)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195811&isnumber=7195793

 

Thurner, Simon; Grun, Marcel; Schmitt, Sven; Baier, Harald, “Improving the Detection of Encrypted Data on Storage Devices,” in IT Security Incident Management & IT Forensics (IMF), 2015 Ninth International Conference on, vol., no., pp. 26-39, 18-20 May 2015. doi:10.1109/IMF.2015.12
Abstract: The detection of persistently stored encrypted data plays an increasingly important role in digital forensics. This is especially true during live analysis of IT systems, when the encrypted data structures are temporarily decrypted in main memory and thus can be accessed as plaintext. One method commonly used to detect the presence of encrypted data on a storage device is the calculation of entropy. However, this method has a significant drawback: both random and compressed data have a very similar entropy compared to encrypted data, which yields a high false positive rate. That is why entropy is not very suitable to differentiate between these types of data. In this work we suggest both a workflow for detection of encrypted data structures on a storage device and an improved classification algorithm. The classification part of the workflow is based on statistical tests. For the convenience of the investigator an important goal is to minimize the number of falsely classified unencrypted data structures (e.g. compressed data classified as encrypted data). Our approach to achieve this goal is to combine different statistical tests. As a practical proof of concept we provide and evaluate a tool for automated analysis of storage devices that implements a multitude of statistical tests for improved detection of encrypted data, compared to both the application of only one such test and the calculation of entropy. More precisely, our tool is able to reliably distinguish high-entropy file formats (i.e. DOCX, JPG, PDF, ZIP) from encrypted files (i.e. a TrueCrypt container).
Keywords: Ciphers; Data structures; Encryption; Entropy; Generators; Reliability; digital forensics; encryption detection; entropy; statistical tests (ID#: 15-6739)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195804&isnumber=7195793
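A minimal sketch of combining statistical tests with entropy, in the spirit of the paper's approach. The thresholds and function names below are our own illustrative choices, not the paper's tool: the point is that a chi-square uniformity test can reject data that an entropy check alone would misclassify.

```python
import math

# Sketch (our construction, not the paper's tool): combine byte entropy with
# a chi-square test for byte-value uniformity. Compressed data often scores
# high on entropy but deviates from uniformity, which a combined test catches.

def byte_counts(data):
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    return counts

def byte_entropy(data):
    """Shannon entropy in bits per byte (0.0 .. 8.0)."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in byte_counts(data) if c)

def chi_square(data):
    """Chi-square statistic against a uniform byte distribution."""
    expected = len(data) / 256
    return sum((c - expected) ** 2 / expected for c in byte_counts(data))

def looks_encrypted(data, entropy_min=7.9, chi_max=330.0):
    # ~255 degrees of freedom: a chi-square value near 255 suggests uniform
    # bytes; the 330 cutoff is an illustrative threshold, not the paper's.
    return byte_entropy(data) > entropy_min and chi_square(data) < chi_max

uniform = bytes(range(256)) * 64          # perfectly uniform, ciphertext-like
assert looks_encrypted(uniform)
assert not looks_encrypted(b"A" * 1024)   # constant data: entropy 0
```

The paper combines many such tests; this sketch shows only why one extra test already reduces false positives relative to entropy alone.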

 

Schiefer, Michael, “Smart Home Definition and Security Threats,” in IT Security Incident Management & IT Forensics (IMF), 2015 Ninth International Conference on, vol., no., pp. 114-118, 18-20 May 2015. doi:10.1109/IMF.2015.17
Abstract: The home of the future should be a smart one, supporting us in our daily life. Up to now, only a few security incidents in that area are known. According to several security analyses, this is more a result of the limited adoption of Smart Home products than of the security of such systems. Given that Smart Homes are becoming more and more popular, we consider current incidents and analyses to estimate potential security threats in the future. Definitions of a Smart Home drift widely apart, so we first define Smart Home for ourselves and additionally provide a way to categorize the large mass of products into smaller groups.
Keywords: Cameras; Heating; Internet; Monitoring; Security; Smart homes; Web pages; internet of things; security threats; smart home (ID#: 15-6740)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195812&isnumber=7195793

 

Ossenbühl, Sven; Steinberger, Jessica; Baier, Harald, “Towards Automated Incident Handling: How to Select an Appropriate Response against a Network-Based Attack?,” in IT Security Incident Management & IT Forensics (IMF), 2015 Ninth International Conference on, vol., no., pp. 51-67, 18-20 May 2015. doi:10.1109/IMF.2015.13
Abstract: The increasing number of network-based attacks has evolved into one of the top concerns responsible for network infrastructure and service outages. To counteract these threats, computer networks are monitored to detect malicious traffic and initiate suitable reactions. However, initiating a suitable reaction is a process of selecting an appropriate response related to the identified network-based attack, and this selection requires taking into account the economics of a reaction, e.g., its risks and benefits. The literature describes several response selection models, but they are not widely adopted; in addition, these models and their evaluations are often not reproducible due to closed testing data. In this paper, we introduce a new response selection model, called REASSESS, that mitigates network-based attacks by incorporating an intuitive response selection process that evaluates the negative and positive impacts associated with each countermeasure. We compare REASSESS with the response selection models of IE-IRS, ADEPTS, CS-IRS, and TVA and show that REASSESS is able to select the most appropriate response to an attack in consideration of the positive and negative impacts, and thus reduces the effects caused by a network-based attack. Further, we show that REASSESS is aligned with the NIST incident life cycle. We expect REASSESS to help organizations select the most appropriate response to a detected network-based attack, and hence contribute to mitigating such attacks.
Keywords: Adaptation models; Biological system modeling; Delays; Internet; NIST; Network topology; Security; automatic mitigation; cyber security; intrusion response systems; network security (ID#: 15-6741)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195806&isnumber=7195793
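The benefit-versus-cost weighing the abstract describes can be caricatured in a few lines. The candidate responses and weights below are invented for illustration; REASSESS's actual model is considerably richer than a single scalar score.

```python
# Hypothetical sketch of response selection by weighing positive against
# negative impacts, in the spirit of REASSESS (not the paper's actual model).

def select_response(candidates):
    """candidates: dict mapping response name -> (positive_impact, negative_impact).
    Returns the response with the best net impact."""
    return max(candidates,
               key=lambda name: candidates[name][0] - candidates[name][1])

responses = {
    "drop_traffic": (0.9, 0.6),  # stops the attack but disrupts legitimate users
    "rate_limit":   (0.7, 0.2),  # partial mitigation, low collateral damage
    "alert_only":   (0.1, 0.0),  # no disruption, but almost no mitigation
}
assert select_response(responses) == "rate_limit"
```

Net scores here are 0.3, 0.5, and 0.1 respectively, so the moderate response wins; a real model would also weigh attack severity and the NIST incident phase.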

 

Kier, Christof; Madlmayr, Gerald; Nawratil, Alexander; Schafferer, Michael; Schanes, Christian; Grechenig, Thomas, “Mobile Payment Fraud: A Practical View on the Technical Architecture and Starting Points for Forensic Analysis of New Attack Scenarios,” in IT Security Incident Management & IT Forensics (IMF), 2015 Ninth International Conference on, vol., no., pp. 68-76, 18-20 May 2015. doi:10.1109/IMF.2015.14
Abstract: As payment cards and mobile devices are equipped with Near Field Communication (NFC) technology, electronic payment transactions at physical Point of Sale (POS) environments are changing. Payment transactions no longer require the customer to insert their card into a slot of the payment terminal; the customer can simply swipe the payment card or mobile phone in front of a dedicated zone of the terminal to initiate a payment transaction. Secure Elements (SEs) in mobile phones and NFC payment cards should keep sensitive application data in a safe place to protect it from abuse by attackers. Although the hardware and operating system of such a chip have to go through an intensive process of security testing, the current integration of such chips in mobile phones easily allows attackers to access the stored information. In this paper we present the implementation of two different proof-of-concept attacks. From the analysis of these attack scenarios, we propose various starting points for forensic analysis in order to detect such fraudulent transactions. The presented concept should lead to fewer fraudulent transactions as well as better-protected evidence in cases of fraud.
Keywords: Credit cards; Google; ISO Standards; Relays; Security; Smart phones; EMV Payment; Mobile Payment; NFC Transaction; Payment Fraud (ID#: 15-6742)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195807&isnumber=7195793

 

Bellin, Knut; Creutzburg, Reiner, “Conception of a Master Course for IT and Media Forensics Part II: Android Forensics,” in IT Security Incident Management & IT Forensics (IMF), 2015 Ninth International Conference on, vol., no., pp. 96-105, 18-20 May 2015. doi:10.1109/IMF.2015.19
Abstract: The growth of Android in the mobile sector and the interest in investigating these devices from a forensic point of view have rapidly increased. Many companies have security problems with mobile devices in their own IT infrastructure. To respond to these incidents, it is important to have professionally trained staff. Furthermore, it is necessary to train existing employees in the practical applications of mobile forensics, owing to the fact that many companies are entrusted with very sensitive data. Inspired by these facts, this paper addresses training approaches and practical exercises for investigating Android mobile devices.
Keywords: Androids; Forensics; Humanoid robots; Mobile communication; Oxygen; Smart phones; mobile forensics training education Android small scale digital device (ID#: 15-6743)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195810&isnumber=7195793

 

Kiltz, Stefan; Dittmann, Jana; Vielhauer, Claus, “Supporting Forensic Design — A Course Profile to Teach Forensics,” in IT Security Incident Management & IT Forensics (IMF), 2015 Ninth International Conference on, vol., no., pp. 85-95, 18-20 May 2015. doi:10.1109/IMF.2015.16
Abstract: There is a growing demand for experts with dedicated knowledge of forensics, especially in the domain of digital and digitised forensics, alongside a general shortage of digital forensics teaching. Further, there is a prominent lack of standardisation in designing a curriculum [18]. We address this by offering the profile ForensikDesign@Informatik [23] at the bachelor's degree level. In teaching digital and digitised forensics, we propose a model-based approach combining the practitioner's and the computer scientist's views [19], also to address the standardisation issue. We identify three main application areas: teaching conventional digital forensic examinations using existing tools and methods following the model-based approach, the design of new forensic tools and methods, and system design to achieve a desired degree of forensic readiness in conflict with a desired degree of anonymity. The last two application areas, we believe, also justify teaching at university level. We set an international focus and highlight the science part of forensic science. Selected legal aspects are addressed for both motivational and comparative purposes. We implement different teaching strategies and provide dedicated resources (technical, organisational and personnel). Finally, we outline the two options for the profile ForensikDesign@Informatik, depending on the level of commitment by the students.
Keywords: Computational modeling; Data models; Digital forensics; Documentation; Education; Security; Existing and planned teaching programs with goals and concepts; basic and emerging trends to provide education; from theory to practical approaches (ID#: 15-6744)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195809&isnumber=7195793

 

Ramisch, Felix; Rieger, Martin, “Recovery of SQLite Data Using Expired Indexes,” in IT Security Incident Management & IT Forensics (IMF), 2015 Ninth International Conference on, vol., no., pp. 19-25, 18-20 May 2015. doi:10.1109/IMF.2015.11
Abstract: SQLite databases have tremendous forensic potential. In addition to active data, expired data remain in the database file if the secure_delete option is not applied. Tests of available forensic tools show that indexes are not considered, although they may complete the recovery of table structures. Algorithms for their recovery, and for combining them with each other or with table data, are worked out. A new tool, SQLite Index Recovery, was developed for this study. Use with test data and data from Apple Mail shows that the recovery of indexes is possible and enriches the recovery of ordinary table data.
Keywords: File systems; Forensics; Indexes; Metadata; Oxygen; Postal services; Apple Mail; SQLite; database; expired data; forensic tool; free block; index recovery (ID#: 15-6745)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195803&isnumber=7195793
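Where such expired data lives can be illustrated with the freeblock chain of an SQLite b-tree page: per the public SQLite file format, bytes 1-2 of the page header hold the offset of the first freeblock, and each freeblock begins with the offset of the next one. The walker below is our own sketch of that chain, not the paper's tool.

```python
import struct

# Sketch: walk the freeblock chain of one SQLite b-tree page. Freed (expired)
# record space lives in these freeblocks until overwritten, which is what
# recovery tools such as the paper's SQLite Index Recovery exploit.

def walk_freeblocks(page):
    """Return the offsets of all freeblocks on a b-tree page."""
    offsets = []
    offset = struct.unpack(">H", page[1:3])[0]   # header bytes 1-2: first freeblock
    while offset:
        offsets.append(offset)
        offset = struct.unpack(">H", page[offset:offset + 2])[0]  # next in chain
    return offsets

# Synthetic 512-byte page: a freeblock at offset 100 chained to one at 300.
page = bytearray(512)
page[1:3] = struct.pack(">H", 100)
page[100:102] = struct.pack(">H", 300)
page[300:302] = struct.pack(">H", 0)             # 0 terminates the chain
assert walk_freeblocks(bytes(page)) == [100, 300]
```

A real tool would then carve deleted index or table cells out of each freeblock's payload and match them against the recovered schema.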

 

Gruhn, Michael, “Windows NT pagefile.sys Virtual Memory Analysis,” in IT Security Incident Management & IT Forensics (IMF), 2015 Ninth International Conference on, vol., no., pp. 3-18, 18-20 May 2015. doi:10.1109/IMF.2015.10
Abstract: As hard disk encryption, RAM disks, persistent data avoidance technology and memory resident malware become more widespread, memory analysis becomes more important. In order to provide more virtual memory than is actually physically present on a system, an operating system may transfer frames of memory to a page file on persistent storage. Current memory analysis software does not incorporate such page files and thus misses important information. We therefore present a detailed analysis of Windows NT paging. We use dynamic gray-box analysis, in which we place known data into virtual memory and examine where it is mapped to, in either the physical memory or the page file, and cross-reference these findings with the Windows NT Research Kernel source code. We demonstrate how to decode the non-present page table entries, and accurately reconstruct the complete virtual memory space, including non-present memory pages on Windows NT systems using 32-bit, PAE or IA32e paging. Our analysis approach can be used to analyze other operating systems as well.
Keywords: Forensics; Hardware; Kernel; Random access memory; Resource management; Digital Forensics; Pagefile Analysis; Virtual Memory Analysis; Windows NT Paging (ID#: 15-6746)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195802&isnumber=7195793
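A toy decoder for the kind of non-present page table entry the paper reconstructs, assuming the commonly documented 32-bit x86 "software PTE" layout (bit 0 = valid, a small pagefile-number field in the low bits, page offset in bits 12-31). The exact field positions here are an assumption to verify against the kernel sources the paper cross-references.

```python
# Illustrative decoder for a 32-bit non-present ("software") page table entry.
# Field positions follow the layout commonly described for 32-bit Windows NT
# and are an assumption for illustration, not taken from the paper.

PAGE_SIZE = 4096

def decode_software_pte(pte):
    """Map a non-present PTE to (pagefile number, byte offset in that pagefile)."""
    if pte & 1:
        raise ValueError("PTE is valid (page is present in RAM), not paged out")
    pagefile_number = (pte >> 1) & 0xF          # which page file
    pagefile_offset = (pte >> 12) * PAGE_SIZE   # page frame index -> byte offset
    return pagefile_number, pagefile_offset

# A page stored at frame 0x123 of pagefile 0:
num, off = decode_software_pte(0x123 << 12)
assert (num, off) == (0, 0x123 * PAGE_SIZE)
```

With such a mapping, a memory analysis tool can pull the missing page out of pagefile.sys instead of treating the virtual address as unreadable.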

 

Dewald, Andreas, “Characteristic Evidence, Counter Evidence and Reconstruction Problems in Forensic Computing,” in IT Security Incident Management & IT Forensics (IMF), 2015 Ninth International Conference on, vol., no., pp. 77-82, 18-20 May 2015. doi:10.1109/IMF.2015.15
Abstract: Historically, forensic computing (as digital forensics) developed pragmatically, driven by specific technical needs. Indeed, in comparison with other forensic sciences the field still is rather immature and has many deficits, such as the unclear terminology used in court. In this paper, we introduce notions of (digital) evidence, characteristic evidence, and (characteristic) counter evidence, as well as the definitions of two fundamental forensic reconstruction problems. We show the relation of the observability of the different types of evidence to the solvability of those problems. By doing this, we wish to exemplify the usefulness of formalization in the establishment of a precise terminology. While this will not replace all terminological shortcomings, it (1) may provide the basis for a better understanding between experts, and (2) helps to understand the significance of different types of digital evidence to answer questions in an investigation.
Keywords: Computational modeling; Computers; Digital forensics; Electronic mail; Hard disks; Radiation detectors; characteristic evidence; counter evidence; digital forensics; evidence; reconstruction; terminology (ID#: 15-6747)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195808&isnumber=7195793

 

Freiling, Felix; Gruhn, Michael, “What is Essential Data in Digital Forensic Analysis?,” in IT Security Incident Management & IT Forensics (IMF), 2015 Ninth International Conference on, vol., no., pp. 40-48, 18-20 May 2015. doi:10.1109/IMF.2015.20
Abstract: In his seminal work on file system forensic analysis, Carrier defined the notion of essential data as "those that are needed to save and retrieve files." He argues that essential data is therefore more trustworthy since it has to be correct in order for the user to use the file system. In many practical settings, however, it is unclear whether a specific piece of data is essential because either file system specifications are ambiguous or the importance of a specific data field depends on the operating system that processes the file system data. We therefore revisit Carrier's definition and show that there are two types of essential data: strong and weak. While strongly essential corresponds to Carrier's definition, weakly essential refers to application specific interpretations. We empirically show the amount of strongly and weakly essential data in DOS/MBR and GPT partition systems, thereby complementing and extending Carrier's findings.
Keywords: Computers; Data structures; Digital forensics; Metadata; Operating systems; Standards; file system; forensic investigations; operating systems (ID#: 15-6748)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195805&isnumber=7195793
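Carrier's notion can be made concrete with a DOS/MBR partition entry: the starting LBA and sector count are essential in the strong sense, since no operating system can locate the partition without them, while the legacy CHS fields are ignored by modern systems and are at most weakly essential. The parsing sketch below is our illustration, not the paper's tooling.

```python
import struct

# Sketch: parse one 16-byte DOS/MBR partition table entry. The format string
# skips the 3-byte CHS start/end fields, which modern systems ignore
# (weakly essential at best); start LBA and sector count are strongly essential.

def parse_mbr_entry(entry16):
    boot_flag, ptype, start_lba, num_sectors = struct.unpack("<B3xB3xII", entry16)
    return {
        "bootable": bool(boot_flag & 0x80),
        "type": ptype,                # e.g. 0x83 = Linux (interpretation, not location)
        "start_lba": start_lba,       # essential: locates the partition
        "num_sectors": num_sectors,   # essential: bounds the partition
    }

# Synthetic entry: bootable Linux partition at LBA 2048, 409600 sectors.
entry = struct.pack("<B3xB3xII", 0x80, 0x83, 2048, 409600)
parsed = parse_mbr_entry(entry)
assert parsed["start_lba"] == 2048 and parsed["bootable"]
```

The strong/weak distinction shows up directly here: corrupting `start_lba` breaks every reader, while corrupting the skipped CHS bytes usually changes nothing.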

 

Merkel, Ronny, “Latent Fingerprint Aging from a Hyperspectral Perspective: First Qualitative Degradation Studies Using UV/VIS Spectroscopy,” in IT Security Incident Management & IT Forensics (IMF), 2015 Ninth International Conference on, vol., no., pp. 121-135, 18-20 May 2015. doi:10.1109/IMF.2015.18
Abstract: Latent print age estimation is an important topic in the emerging field of digitized crime scene forensics. While several capturing devices have recently been studied towards this goal, hyperspectral imaging in the UV/VIS (ultraviolet and visible light) range of the electromagnetic spectrum has not been investigated so far. Addressing this research gap, a first qualitative evaluation of the aging behavior of 30 latent print time series from 6 different donors is conducted, utilizing an optical reflection spectrometer. Results show more unpredictable aging tendencies in the ultraviolet spectral range, whereas a general logarithmic trend from prior work (using non-spectral capturing devices) is confirmed for the visible light band. Furthermore, a different behavior of eccrine and sebaceous print components is found, especially in the ultraviolet band, where sebaceous components seem to become reflective to the emitted radiation and might furthermore be utilized for studying longer aging periods in contrast to eccrine prints. Overall, the combined degradation information of the ultraviolet and visible light bands seems to provide the most reliable results for measuring a reproducible aging trend, serving as a potential opportunity to address the strong influence of different sweat compositions on the aging behavior of latent prints.
Keywords: Aging; Degradation; Estimation; Fingerprint recognition; Hyperspectral imaging; Lipidomics; Optical surface waves; UV/VIS spectroscopy; age estimation; digitized crime scene forensics; eccrine vs. sebaceous; hyperspectral imaging; latent fingerprints (ID#: 15-6749)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195813&isnumber=7195793
 




International Conferences: NSysS 2015, Bangladesh

 

 

International Conferences:

NSysS 2015, Bangladesh


The 2015 International Conference on Networking Systems and Security (NSysS) was held in Dhaka, Bangladesh on 5-7 January 2015. Research papers on computer networks, networking systems, and security were presented. The cited works are the ones most related to Science of Security.


Ahmed, Shamir; Rizvi, A.S.M.; Mansur, Rifat Sabbir; Amin, Md. Rafatul; Islam, A.B.M. Alim Al, “User Identification Through Usage Analysis of Electronic Devices,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp.1-6, 5-7 Jan. 2015. doi:10.1109/NSysS.2015.7043518
Abstract: Different aspects of the usage of electronic devices vary significantly from person to person; therefore, rigorous usage analysis shows promise for identifying a user and thereby securing the device. Different state-of-the-art approaches have investigated individual aspects of usage, such as typing speed and dwell time, in isolation for identifying a user. However, investigating multiple aspects of usage in combination is yet to be explored in the literature. Therefore, in this paper, we investigate multiple aspects of usage in combination to identify a user. We perform the investigation over real users by letting them interact with an Android application that we developed specifically for this purpose. Our investigation reveals a key finding: considering multiple aspects of usage in combination provides improved performance in identifying a user. This improvement holds up to a certain number of usage aspects considered in the identification task.
Keywords: Android (operating system); authorisation; graphical user interfaces; Android application; device security; dwelling time; electronic device usage analysis; performance improvement; typing speed; user identification task; Clustering algorithms; Measurement; Mobile handsets; Presses; Pressing; Security; Standards (ID#: 15-6501)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043518&isnumber=7042935
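The paper's key finding — that combining usage aspects beats any single one — can be illustrated with a toy nearest-profile matcher. The feature names, profiles, and distance metric below are invented for illustration, not the paper's classifier.

```python
import math

# Hypothetical sketch: identify a user by comparing several usage aspects
# jointly (e.g. typing speed and key-press duration) against stored profiles.
# All names, values, and the Euclidean metric are illustrative assumptions.

def distance(sample, profile):
    """Euclidean distance over the shared usage features."""
    return math.sqrt(sum((sample[k] - profile[k]) ** 2 for k in profile))

def identify(sample, profiles):
    """Return the enrolled user whose profile is closest to the sample."""
    return min(profiles, key=lambda user: distance(sample, profiles[user]))

profiles = {
    "alice": {"typing_speed": 5.1, "press_ms": 110.0},
    "bob":   {"typing_speed": 2.8, "press_ms": 190.0},
}
observed = {"typing_speed": 5.0, "press_ms": 120.0}
assert identify(observed, profiles) == "alice"
```

With only one feature, two users with similar typing speed collide; adding the second feature separates them, which is the combination effect the paper reports.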

 

Akter, M.; Rahman, M.O.; Islam, M.N.; Habib, M.A., “Incremental Clustering-Based Object Tracking in Wireless Sensor Networks,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp.1-6, 5-7 Jan. 2015. doi:10.1109/NSysS.2015.7043534
Abstract: The growing significance of moving object tracking has been actively pursued in the Wireless Sensor Network (WSN) community for the past decade. As a consequence, a number of methods approaching the problem from different angles have been developed, with relatively satisfying performance. Amongst those, clustering-based object tracking has shown significant results, making the network scalable and energy efficient for large-scale WSNs. As of now, static-cluster-based object tracking is the most common approach for large-scale WSNs. However, as static clusters are restricted from sharing information globally, tracking can be lost at the boundary regions of static clusters. In this paper, an Incremental Clustering Algorithm is proposed in conjunction with a Static Clustering Technique to track an object consistently throughout the network, solving the boundary problem. The proposed research follows a Gaussian Adaptive Resonance Theory (GART) based Incremental Clustering that creates and updates clusters incrementally to incorporate incessant motion patterns without corrupting the previously learned clusters. The objective of this research is to continue tracking at the boundary region in an energy-efficient way as well as to ensure robust and consistent object tracking throughout the network. The network lifetime performance metric shows significant improvements for Incremental Static Clustering at the boundary regions over existing clustering techniques.
Keywords: object tracking; wireless sensor networks; GART based incremental clustering; Gaussian adaptive resonance theory; WSN; clustering based object tracking; incremental clustering algorithm; incremental clustering-based object tracking; static clustering technique; wireless sensor networks; Algorithm design and analysis; Clustering algorithms; Energy efficiency; Heuristic algorithms; Object tracking; Wireless sensor networks; Adaptive Resonance Theory; Energy-efficiency; Incremental Clustering; Object Tracking; Wireless Sensor Networks (WSN) (ID#: 15-6502)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043534&isnumber=7042935
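A minimal incremental-clustering sketch in the spirit of the GART-based approach the abstract describes: a new observation joins the nearest existing cluster if it is close enough (a vigilance-like threshold), otherwise it seeds a new cluster, and previously learned clusters are never discarded. The threshold and running-mean update are simplified stand-ins for the full Gaussian ART model.

```python
# Simplified incremental clustering (our stand-in for Gaussian ART): clusters
# are [centroid, count] pairs over a 1-D position, updated in place.

def update_clusters(clusters, point, threshold=5.0):
    """Assign point to the nearest cluster within threshold, else create one."""
    if clusters:
        best = min(clusters, key=lambda c: abs(c[0] - point))
        if abs(best[0] - point) <= threshold:
            best[1] += 1
            best[0] += (point - best[0]) / best[1]  # incremental mean update
            return clusters
    clusters.append([float(point), 1])              # new cluster, old ones kept
    return clusters

clusters = []
for position in [1.0, 2.0, 30.0, 31.0]:             # object crosses a boundary
    update_clusters(clusters, position)
assert len(clusters) == 2                           # {1,2} and {30,31}
```

Because clusters are only ever refined or added, an object crossing a static-cluster boundary extends the learned pattern instead of invalidating it, which is the property the paper relies on.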

 

Al Islam, A.B.M.A.; Hyder, C.S.; Zubaer, K.H., “Digging the Innate Reliability of Wireless Networked Systems,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp. 1-10, 5-7 Jan. 2015. doi:10.1109/NSysS.2015.7042946
Abstract: The network reliability of wireless networks has a prominent impact on the successful advancement of the networking paradigm. A complete understanding of network reliability demands an in-depth analysis, which is yet to be attempted in the literature. Therefore, we present a comprehensive study of network reliability in this paper. Our step-by-step stochastic study, from node-level to network-level reliability, reveals a novel finding: the network reliability of a wireless network generally follows a Gaussian distribution. We validate the finding through exhaustive numerical simulation and ns-2 simulation.
Keywords: Gaussian distribution; numerical analysis; radio networks; stochastic processes; telecommunication network reliability; Gaussian distribution; network reliability; ns-2 simulation; numerical simulation; stochastic study; wireless networked systems; wireless networks; Ad hoc networks; Batteries; Numerical simulation; Reliability; Shape; Weibull distribution; Wireless networks (ID#: 15-6503)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042946&isnumber=7042935
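The Gaussian finding can be reproduced qualitatively with a toy Monte Carlo simulation, assuming independent node survival: the number of surviving nodes is binomial, hence approximately Gaussian for a large network. This is a stand-in for intuition only, not the paper's stochastic model; all parameters are illustrative.

```python
import random
import statistics

# Toy illustration: with n independent nodes each alive with probability p,
# the surviving-node count is Binomial(n, p), which is approximately Gaussian
# for large n (mean n*p, stdev sqrt(n*p*(1-p))).

random.seed(7)  # deterministic for reproducibility
n_nodes, p_alive, trials = 200, 0.9, 2000
survivors = [sum(random.random() < p_alive for _ in range(n_nodes))
             for _ in range(trials)]

mean = statistics.mean(survivors)
stdev = statistics.pstdev(survivors)
# Binomial moments: mean = 200 * 0.9 = 180, stdev = sqrt(200*0.9*0.1) ~= 4.24
assert abs(mean - 180) < 2 and abs(stdev - 4.24) < 1.0
```

A histogram of `survivors` is visibly bell-shaped, matching the paper's observation that network-level reliability aggregates node-level randomness into a Gaussian.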

 

Khan, F.H.; Ali, M.E.; Dev, H., “A Hierarchical Approach for Identifying User Activity Patterns from Mobile Phone Call Detail Records,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp. 1-6, 5-7 Jan. 2015. doi:10.1109/NSysS.2015.7043535
Abstract: With the increasing use of mobile devices, it is now possible to collect diverse data about the day-to-day activities of a user's personal life. Call Detail Records (CDRs) are a dataset available at large scale, as they are already constantly collected by mobile operators, mostly for billing purposes. By examining this data it is possible to analyze the activities of people in urban areas and discover the behavioral patterns of their daily life. These datasets can be used for many applications, from urban and transportation planning to predictive analytics of human behavior. In our research work, we have proposed a hierarchical analytical model in which a CDR dataset is used to find facts about the daily activities of urban users in multiple layers. In our model, only the raw CDR data are used as the input to the initial layer, and the outputs from each layer are combined with the original CDR data as new input to the next layer to find more detailed facts, e.g., traffic density in different areas on working days and holidays. Thus, the output of each layer depends on the results of the previous layers. This model utilized one month of CDR data collected from Dhaka, one of the most densely populated cities in the world. The main focus of this research is to explore the usability of these types of datasets for innovative applications, such as urban planning and traffic monitoring and prediction, in a fashion appropriate for densely populated areas of developing countries.
Keywords: mobile handsets; telecommunication network planning; Dhaka city; mobile devices; mobile operator; mobile phone call detail records; traffic monitoring; transportation planning; urban planning; Analytical models; Cities and towns; Data models; Employment; Mobile handsets; Poles and towers; Transportation (ID#: 15-6504)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043535&isnumber=7042935
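The layered model described above — raw CDRs feed the first layer, and each later layer consumes earlier outputs together with the raw data — can be sketched with toy records (all identifiers and values below are hypothetical):

```python
from collections import Counter

# Hypothetical toy CDR records: (user_id, cell_tower, hour_of_day).
cdrs = [("u1", "towerA", 9), ("u2", "towerA", 9), ("u3", "towerA", 18),
        ("u1", "towerB", 2), ("u4", "towerA", 10), ("u5", "towerB", 9)]

# Layer 1: aggregate raw CDRs into per-tower, per-hour call volume.
layer1 = Counter((tower, hour) for _, tower, hour in cdrs)

# Layer 2: consume the layer-1 output to derive a coarser fact,
# e.g. which tower is busiest during working hours (9-17).
working = Counter()
for (tower, hour), n in layer1.items():
    if 9 <= hour <= 17:
        working[tower] += n

busiest = working.most_common(1)[0][0]
print(busiest)  # "towerA" in this toy data
```

Each layer's output is a new, more abstract fact derived from the one below it, which mirrors the dependency between layers that the abstract describes.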

 

Ferdous, S.M.; Rahman, M.S., “A Metaheuristic Approach for Application Partitioning in Mobile System,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp. 1-6, 5-7 Jan. 2015. doi:10.1109/NSysS.2015.7043520
Abstract: Mobile devices such as smartphones are extremely popular now. In spite of their huge popularity, the computational ability of mobile devices is still low. Computational offloading is a way to transfer some of the heavy computational tasks to a server (cloud) so that the efficiency and usability of the system increase. In this paper, we have developed a metaheuristic approach for application partitioning to maximize throughput and performance. Preliminary experiments suggest that our approach is better than the traditional all-cloud and all-mobile approaches.
Keywords: cloud computing; mobile computing; optimisation; smart phones; application partitioning; computational offloading; computational tasks transfer; metaheuristic approach; mobile devices; mobile system; performance maximization; smartphones; throughput maximization; Computers; Mobile communication; Mobile computing; Mobile handsets; Partitioning algorithms; Servers; Throughput (ID#: 15-6505)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043520&isnumber=7042935
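The abstract does not name its metaheuristic, so the sketch below substitutes a generic simulated-annealing search over a binary mobile/cloud assignment, with hypothetical task costs and a transfer penalty for each dependency cut by the partition:

```python
import math
import random

# Hypothetical per-task costs: execution on the phone vs. in the cloud,
# plus a transfer penalty for every dependency cut by the partition.
local  = [4, 8, 6, 9, 3]
remote = [1, 2, 2, 3, 1]
deps   = [(0, 1), (1, 2), (2, 3), (3, 4)]
XFER   = 5

def cost(assign):                      # 0 = run on mobile, 1 = offload
    c = sum(local[i] if a == 0 else remote[i] for i, a in enumerate(assign))
    return c + sum(XFER for u, v in deps if assign[u] != assign[v])

def anneal(steps=4000, seed=7):
    rng = random.Random(seed)
    assign = [0] * len(local)          # start with everything on the phone
    best, best_cost = assign[:], cost(assign)
    temp = 10.0
    for _ in range(steps):
        cand = assign[:]
        cand[rng.randrange(len(cand))] ^= 1     # flip one task's placement
        delta = cost(cand) - cost(assign)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            assign = cand
            if cost(assign) < best_cost:
                best, best_cost = assign[:], cost(assign)
        temp = max(0.1, temp * 0.998)
    return best, best_cost

best, best_cost = anneal()
print(best, best_cost)
```

In this toy instance the all-cloud assignment (cost 9) is optimal because every task is cheaper remotely and no dependency is cut; it is the coupling between placement and transfer cost that makes exhaustive search impractical for larger task graphs and motivates a metaheuristic.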

 

Zohra, F.T.; Rahman, A., “Mathematical Analysis of Self-Pruning and a New Dynamic Probabilistic Broadcast for MANETs,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp.1-9, 5-7 Jan. 2015. doi:10.1109/NSysS.2015.7042947
Abstract: The self-pruning broadcasting algorithm exploits neighbor knowledge to reduce redundant retransmissions in mobile ad hoc wireless networks (MANETs). Although in self-pruning only a subset of nodes forwards the message based on a certain forwarding rule, it belongs to the category of reliable broadcasting algorithms, in which a broadcast message is guaranteed (at least algorithmically) to reach all the nodes in the network. In this paper, we develop an analytical model to determine the expected number of forwarding nodes required to complete a broadcast in the self-pruning algorithm. The derived expression is a function of various network parameters (such as network density and distance between nodes) and radio transceiver parameters (such as transmission range). Moreover, the developed mathematical expression provides a better understanding of the highly complex packet forwarding pattern of the self-pruning algorithm and valuable insight for designing a new broadcasting heuristic. The proposed heuristic is a dynamic probabilistic broadcast in which the rebroadcast probability of each node is dynamically determined from a developed mathematical expression. Extensive simulation experiments have been conducted to validate the accuracy of the analytical model, as well as to evaluate the efficiency of the proposed heuristic. Performance analysis shows that the proposed heuristic outperforms the static probabilistic broadcasting algorithm and an existing solution proposed by Bahadili.
Keywords: electronic messaging; mobile ad hoc networks; probability; radio transceivers; redundancy; telecommunication network reliability; MANET; complex packet forwarding pattern; dynamic probabilistic broadcasting algorithm; mathematical expression analysis; message forwarding; mobile ad hoc wireless network; radio transceiver parameter; rebroadcast probability; self-pruning broadcasting algorithm reliability; static probabilistic broadcasting algorithm; Ad hoc networks; Broadcasting; Equations; Heuristic algorithms; Mathematical model; Probabilistic logic; Protocols (ID#: 15-6506)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042947&isnumber=7042935
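The paper derives its rebroadcast probability from an analytical expression not reproduced here; the sketch below only illustrates the general density-adaptive idea with a simple stand-in rule (the target of three forwarders per neighborhood is an arbitrary assumption, not the paper's formula):

```python
def rebroadcast_probability(num_neighbors, target_forwarders=3.0):
    # Hypothetical density-adaptive rule: aim for a fixed expected number
    # of forwarders per neighborhood, so nodes in dense regions
    # rebroadcast with lower probability than nodes in sparse regions.
    if num_neighbors <= 0:
        return 1.0
    return min(1.0, target_forwarders / num_neighbors)

for n in (1, 3, 10, 30):
    print(n, rebroadcast_probability(n))
```

A static probabilistic broadcast would use one fixed probability everywhere; a dynamic rule like this adapts the probability per node, which is the distinction the paper's evaluation turns on.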

 

Ahmad, S.; Alam, K.M.R.; Rahman, H.; Tamura, S., “A Comparison Between Symmetric and Asymmetric Key Encryption Algorithm Based Decryption Mixnets,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp. 1-5, 5-7 Jan. 2015. doi:10.1109/NSysS.2015.7043532
Abstract: This paper presents a comparison between symmetric and asymmetric key encryption algorithm based decryption mixnets through simulation. Mix-servers involved in a decryption mixnet receive independently and repeatedly encrypted messages as their input, then successively decrypt and shuffle them to generate a new altered output from which finally the messages are regained. Thus mixnets confirm unlinkability and anonymity between senders and the receiver of messages. Both symmetric (e.g. onetime pad, AES) and asymmetric (e.g. RSA and ElGamal cryptosystems) key encryption algorithms can be exploited to accomplish decryption mixnets. This paper evaluates both symmetric (e.g. ESEBM: enhanced symmetric key encryption based mixnet) and asymmetric (e.g. RSA and ElGamal based) key encryption algorithm based decryption mixnets. Here they are evaluated based on several criteria such as: the number of messages traversing through the mixnet, the number of mix-servers involved in the mixnet and the key length of the underlying cryptosystem. Finally mixnets are compared on the basis of the computation time requirement for the above mentioned criteria while sending messages anonymously.
Keywords: electronic messaging; message authentication; public key cryptography; AES; ElGamal based decryption mixnet; RSA based decryption mixnet; asymmetric key encryption algorithm based decryption mixnet; message encryption; message sending; onetime pad; symmetric key encryption algorithm based decryption mixnet; Algorithm design and analysis; Encryption; Generators; Public key; Receivers; Servers; Anonymity; ElGamal; Mixnet; Privacy; Protocol; RSA; Symmetric key encryption algorithm (ID#: 15-6507)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043532&isnumber=7042935
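A decryption mixnet's pipeline — each server strips one encryption layer, then shuffles — can be shown with a toy symmetric scheme. The sketch uses one fixed XOR pad per server as a stand-in for the paper's ESEBM/RSA/ElGamal layers; a real mixnet would never reuse a one-time pad across messages:

```python
import random

def xor_bytes(data, key):
    return bytes(a ^ b for a, b in zip(data, key))

rng = random.Random(7)
MSG_LEN = 12
# Toy stand-in: each of the three mix servers holds one symmetric key
# (a fixed XOR pad). This only illustrates the decrypt-and-shuffle flow.
server_keys = [bytes(rng.randrange(256) for _ in range(MSG_LEN))
               for _ in range(3)]

def sender_encrypt(msg, keys):
    # Wrap the message in one layer per server, innermost = last server.
    for key in reversed(keys):
        msg = xor_bytes(msg, key)
    return msg

def mix_server(batch, key, rng):
    out = [xor_bytes(c, key) for c in batch]   # strip this server's layer
    rng.shuffle(out)                           # break input/output linkage
    return out

plaintexts = [b"vote:alice..", b"vote:bob....", b"vote:carol.."]
batch = [sender_encrypt(m, server_keys) for m in plaintexts]
for key in server_keys:
    batch = mix_server(batch, key, rng)

print(sorted(batch) == sorted(plaintexts))  # True: messages regained, order hidden
```

The final batch contains exactly the original messages, but the per-server shuffles mean no single server can link a sender's input to an output — the unlinkability property the abstract describes.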

 

Sayeed, S.D.; Hasan, M.S.; Rahman, M.S., “Measuring Topological Robustness of Scale-Free Networks Using Biconnected Components,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp. 1-6, 5-7 Jan. 2015. doi:10.1109/NSysS.2015.7042945
Abstract: Models of complex networks are dependent on various properties of networks like connectivity, accessibility, efficiency, robustness, degree distribution etc. Network robustness is a parameter that reflects attack tolerance of a network in terms of connectivity. In this paper we have tried to measure the robustness of a network in such a way that gives a better idea of both stability and reliability of a network. In some previous works, the existence of a giant connected component is considered as an indicator of structural robustness of the entire system. In this paper we show that the size of a largest biconnected component can be a better parameter for measurement of robustness of a complex network. Our experimental study exhibits that scale-free networks are more vulnerable to sustained targeted attacks and more resilient to random failures.
Keywords: complex networks; network theory (graphs); random processes; reliability; stability; biconnected component; complex networks; giant connected component; network robustness measure; random failures; reliability; scale-free networks; stability; structural robustness; topological robustness measure; Artificial neural networks; Bridges; Complex networks; Graph theory; Robustness; Size measurement (ID#: 15-6508)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042945&isnumber=7042935
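Biconnectivity, the property behind the paper's robustness metric, can be tested directly on small graphs: a graph is biconnected if it remains connected after the removal of any single vertex. A small sketch (a brute-force check, fine for toy graphs; Tarjan's articulation-point algorithm would be used at scale):

```python
from collections import deque

def reachable(adj, start, excluded=None):
    # BFS over an adjacency dict, optionally ignoring one deleted vertex.
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v != excluded and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def is_biconnected(adj):
    # Connected, and still connected after deleting any single vertex.
    nodes = list(adj)
    if len(nodes) < 3 or len(reachable(adj, nodes[0])) != len(nodes):
        return False
    for v in nodes:
        rest = [u for u in nodes if u != v]
        if len(reachable(adj, rest[0], excluded=v)) != len(nodes) - 1:
            return False
    return True

cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # ring of 4
path  = {0: [1], 1: [0, 2], 2: [1]}                     # vertex 1 is a cut vertex
print(is_biconnected(cycle), is_biconnected(path))      # True False
```

The path graph fails because removing its middle vertex disconnects it — exactly the single-point fragility that the largest-biconnected-component metric is designed to expose.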

 

Nur, F.N.; Sharmin, S.; Razzaque, M.A.; Islam, M.S., “A Duty Cycle Directional MAC Protocol for Wireless Sensor Networks,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp. 1-9, 5-7 Jan. 2015. doi:10.1109/NSysS.2015.7042950
Abstract: The directional transmission and reception of data packets in sensor networks minimize interference and thereby increase network throughput, and thus Directional Sensor Networks (DSNs) are gaining popularity. However, the use of directional antennas has introduced new problems in designing the medium access control (MAC) protocol in DSNs, including the synchronization of the antenna directions of a sender-receiver pair. In this paper, we have developed a duty cycle MAC protocol for DSNs, namely DCD-MAC, that synchronizes each pair of parent-child nodes and schedules their transmissions in such a way that transmissions from child nodes minimize collisions and the nodes are awake only when they have transmission-reception activities. The proposed DCD-MAC is fully distributed and exploits only localized information to ensure a weighted share of the transmission slots among the child nodes. We perform extensive simulations to study the performance of DCD-MAC, and the results show that our protocol outperforms a state-of-the-art directional MAC protocol in terms of throughput and network lifetime.
Keywords: access protocols; directive antennas; radiofrequency interference; wireless sensor networks; MAC protocol; directional antenna; directional sensor networks; directional transmission; interference; medium access control protocol; Data transfer; Directional antennas; Media Access Protocol; Resource management; Synchronization; Wireless sensor networks (ID#: 15-6509)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042950&isnumber=7042935

 

Sadat, N.; Mohiuddin, M.T.; Uddin, Y.S., “On Bounded Message Replication in Delay Tolerant Networks,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp. 1-10, 5-7 Jan. 2015. doi:10.1109/NSysS.2015.7042952
Abstract: Delay tolerant networks (DTNs) are wireless networks in which, at any given time instance, the probability that there is an end-to-end path from a source to a destination is low. Conventional solutions do not generally work in DTNs because they assume that the network is stable most of the time and that failures of links between nodes are infrequent. Therefore, the store-carry-and-forward paradigm is used for routing messages in DTNs. To deal with DTNs, researchers have suggested flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention, which can significantly degrade their performance. For this reason, a family of multi-copy protocols called Spray routing was proposed, which can achieve both good delays and low transmissions. Spray routing algorithms generate only a small, carefully chosen number of copies to ensure that the total number of transmissions is small and controlled. Spray and Wait sprays a number of copies into the network, and then waits till one of these nodes meets the destination. In this paper, we propose a set of spraying heuristics that dictate how replicas are shared among nodes. These heuristics are based on delivery probabilities derived from contact histories.
Keywords: delay tolerant networks; electronic messaging; probability; radio links; radio networks; routing protocols; telecommunication network reliability; DTN spraying heuristics; bounded message replication; delay tolerant network link failure; flooding-based routing scheme; multicopy protocol; spray routing protocol; store carry and forward paradigm; wireless network probability; Binary trees; Delays; History; Probabilistic logic; Routing; Routing protocols; Spraying; Delay tolerant network; Spray and Wait; routing protocol (ID#: 15-6510)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042952&isnumber=7042935
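The baseline that such spraying heuristics build on, binary Spray and Wait, is easy to sketch: a node holding n > 1 copies hands half of them to the first fresh relay it meets and keeps the rest, until every carrier holds a single copy and switches to the "wait" phase:

```python
def split(n):
    # Binary Spray and Wait rule: hand over half the copies, keep the rest.
    give = n // 2
    return n - give, give

carriers = [8]                        # the source starts with L = 8 copies
while any(c > 1 for c in carriers):
    nxt = []
    for c in carriers:
        if c > 1:
            keep, give = split(c)     # an encounter with a fresh relay
            nxt += [keep, give]
        else:
            nxt.append(c)             # one copy left: wait for the destination
    carriers = nxt

print(carriers)  # eight carriers, one copy each
```

The total number of transmissions is bounded by L - 1 copy handovers, which is the "small and controlled" property the abstract attributes to Spray routing; the paper's contribution is choosing *which* relays receive copies using contact-history delivery probabilities.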

 

Zaman, M.; Siddiqui, T.; Amin, M.R.; Hossain, M.S., “Malware Detection in Android by Network Traffic Analysis,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp. 1-5, 5-7 Jan. 2015. doi:10.1109/NSysS.2015.7043530
Abstract: A common behavior of mobile malware is transferring sensitive information of the cell phone user to malicious remote servers. In this paper, we describe and demonstrate in full detail a method for detecting malware based on this behavior. For this, we first create an App-URL table that logs all attempts made by all applications to communicate with remote servers. Each entry in this log preserves the application id and the URI that the application contacted. From this log, with the help of a reliable and comprehensive domain blacklist, we can detect rogue applications that communicate with malicious domains. We further propose a behavioral analysis method using syscall tracing. Our work can be integrated with behavioral analysis to build an intelligent malware detection model.
Keywords: Android (operating system); invasive software; mobile computing; program diagnostics; telecommunication traffic; App-URL table; URI; behavioral analysis method; cell phone user; domain blacklist; intelligent malware detection model; malicious remote servers; mobile malware detection; sensitive information transfer; syscall tracing; Androids; Humanoid robots; Malware; Mobile communication; Ports (Computers); Servers; Uniform resource locators; ADB; Android; Busybox; malware detection; netstat; pcap (ID#: 15-6511)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043530&isnumber=7042935
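The App-URL matching step described in the abstract reduces to joining a contact log against a domain blacklist. A minimal sketch (the application ids, URIs, and blacklist entries are hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical App-URL table: (app_id, contacted URI), as logged on-device.
app_url_table = [
    ("com.example.game",    "http://ads.example.com/track"),
    ("com.example.flash",   "http://evil-c2.example.org/upload?imei=123"),
    ("com.example.weather", "https://api.example.com/forecast"),
]

blacklist = {"evil-c2.example.org", "malware-drop.example.net"}

def flag_rogue_apps(table, blacklist):
    # Flag any application whose contacted host appears on the blacklist.
    rogue = set()
    for app_id, uri in table:
        host = urlparse(uri).hostname
        if host in blacklist:
            rogue.add(app_id)
    return rogue

print(flag_rogue_apps(app_url_table, blacklist))  # {'com.example.flash'}
```

In practice the blacklist's coverage dominates detection quality, which is why the paper proposes combining this lookup with syscall-based behavioral analysis.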

 

Tanjeem, F.; Uddin, M.Y.S.; Rahman, A.K.M.A., “Wireless Media Access Depending on Packet Size Distribution over Error-Prone Channels,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp. 1-7, 5-7 Jan. 2015. doi:10.1109/NSysS.2015.7043519
Abstract: An ad hoc network is a decentralized type of network in which wireless devices are allowed to discover each other and communicate in peer-to-peer fashion without involving central access points. In most ad hoc networks, nodes compete for access to a shared wireless medium, often resulting in collisions (interference). IEEE 802.11, a well-known standard, uses a medium access control (MAC) protocol to support delivery of radio data packets for both ad hoc networks and infrastructure-based networks. But designing a MAC protocol for ad hoc wireless networks is challenging, particularly when the protocol needs to achieve optimal performance both in terms of throughput and efficiency of packet delivery. An error-prone channel has a significant impact on the probability of unsuccessful transmission, which is often ignored by previous research. The standard DCF (Distributed Coordination Function) operation of IEEE 802.11, enacted by the binary exponential back-off (BEB) algorithm, cannot differentiate collision from corruption and therefore sets forth a (time) separation between multiple nodes accessing the channel by (appropriately) adjusting the contention window (CW) upon a failure. This leads to increased delay in an error-prone network when nodes are not contending at all. Since packet corruption depends on the bit error rate (BER) and the length of packets, packet size can have a significant impact on throughput in an error-prone environment. In this paper, we analyze the effect of packet size in determining the optimal CW to improve throughput and efficiency for error-prone networks. We propose a dynamic learning based scheme to adaptively select a CW sub-range instead of the whole selection range for different packet distributions. To validate our scheme, extensive simulations have been done, and the results show significant improvement in E2E delay performance.
Keywords: access protocols; ad hoc networks; error statistics; peer-to-peer computing; telecommunication congestion control; wireless LAN; wireless channels; BEB algorithm; BER; CW; DCF operation; E2E delay performance; IEEE 802.11 standard; MAC protocol; ad hoc network collision; binary exponential back-off algorithm; bit error rate; contention window; distributed coordination function; dynamic learning; error-prone channel; medium access control protocol; packet size distribution; peer to peer communication; radio data packet delivery; unsuccessful transmission probability; wireless device; wireless media access; Ad hoc networks; Delays; IEEE 802.11 Standards; Network topology; Protocols; Throughput; Wireless communication (ID#: 15-6512)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043519&isnumber=7042935
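The link between BER, packet length, and throughput that motivates the paper can be seen from first principles: a frame is delivered only if every one of its bits survives, so the success probability is (1 - BER)^(8L), and useful throughput trades payload share against loss probability. A small sketch with assumed numbers (the BER and overhead are illustrative, not from the paper):

```python
def frame_success_probability(nbytes, ber):
    # A frame survives only if all 8 * nbytes of its bits survive.
    return (1.0 - ber) ** (8 * nbytes)

def goodput(payload_bytes, ber, overhead_bytes=40):
    # Expected useful bytes delivered per byte put on the air,
    # assuming a hypothetical fixed 40-byte header overhead.
    p = frame_success_probability(payload_bytes + overhead_bytes, ber)
    return payload_bytes * p / (payload_bytes + overhead_bytes)

# On an error-prone channel there is an interior optimum frame size:
# tiny frames waste capacity on headers, huge frames rarely survive.
for size in (20, 200, 2000):
    print(size, round(goodput(size, ber=1e-4), 3))
```

At BER 10⁻⁴ the mid-sized frame wins: small frames are dominated by header overhead while large frames are dominated by corruption, which is why a CW (and back-off) policy tuned without regard to packet-size distribution leaves throughput on the table.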

 

Yanhaona, M.N.; Prodhan, A.T.; Grimshaw, A.S., “An Agent-Based Distributed Monitoring Framework,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp. 1-10, 5-7 Jan. 2015. doi:10.1109/NSysS.2015.7043515
Abstract: In compute clusters, monitoring of infrastructure and application components is essential for performance assessment, failure detection, problem forecasting, better resource allocation, and several other reasons. Present day trends towards larger and more heterogeneous clusters, rise of virtual data-centers, and greater variability of usage suggest that we have to rethink how we do monitoring. We need solutions that will remain scalable in the face of unforeseen expansions, can work in a wide-range of environments, and be adaptable to changes of requirements. We have developed an agent-based framework for constructing such monitoring solutions. Our framework deals with all scalability and flexibility issues associated with monitoring and leaves only the use-case specific task of data generation to the specific solution. This separation of concerns provides a versatile design that enables a single monitoring solution to work in a range of environments; and, at the same time, enables a range of monitoring solutions exhibiting different behaviors to be constructed by varying the tunable parameters of the framework. This paper presents the design, implementation, and evaluation of our novel framework.
Keywords: computer centres; distributed processing; multi-agent systems; pattern clustering; system monitoring; agent-based distributed monitoring framework; application components; data generation; failure detection; heterogeneous clusters; infrastructure monitoring; performance assessment; problem forecasting; resource allocation; virtual data-centers; Fault tolerance; Heart beat; Monitoring; Quality of service; Receivers; Routing; Scalability; autonomous systems; cluster monitoring; distributed systems; flexibility; scalability (ID#: 15-6513)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043515&isnumber=7042935

 

Kabir, K.S.; Ahmad, I.; Al Amin, A.; Zaber, M.; Choudhury, T.; Talukder, B.M.S.B.; Al Islam, A.B.M.A., “Q-Nerve: Propagating Signal of a Damaged Nerve Using Quantum Networking,” in Networking Systems and Security (NSysS), 2015 International Conference on, vol., no., pp. 1-10, 5-7 Jan. 2015. doi:10.1109/NSysS.2015.7042944
Abstract: Aiding paralyzed people by using technology to transmit signals from the brain to a paralyzed part of the body has been a matter of great interest in recent times. Classical approaches in this regard still experience several limitations and sometimes become hazardous to living bodies. Besides, existing literature points out that there are many nerve signals that are not amenable to the classical approaches but can be amenable to quantum approaches. By addressing these two points, we propose a new system to propagate the signal of a damaged nerve using quantum networking. We name our proposed system Q-Nerve. Q-Nerve exploits a quantum network based artificial connection between the brain and other organs to bypass a damaged nerve. Subsequently, we propose a more sophisticated version of Q-Nerve that aims to exploit a synergy between the ability of quantum computing to accumulate neural signals and the ability of quantum networking to pass the signal instantaneously. Further, we extend the proposed system for other brain and nerve related problems that require numerous logical computations.
Keywords: medical signal detection; medical signal processing; neurophysiology; quantum computing; artificial brain-organ connection; brain-related problems; damaged nerve signal propagation; instant signal transmission; nerve signals; nerve-related problems; neural signal accumulation; paralyzed people; quantum approaches; quantum computing ability; quantum network exploitation; quantum networking; sophisticated Q-Nerve version; Measurement by laser beam; Photonics; Quantum computing; Quantum entanglement; Receivers; Surface emitting lasers (ID#: 15-6514)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042944&isnumber=7042935


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Publications of Interest

 

 

Publications of Interest

 

The Publications of Interest section contains bibliographical citations, abstracts if available, and links on specific topics and research problems of interest to the Science of Security community.

How recent are these publications?

These bibliographies include recent scholarly research on topics which have been presented or published within the past year. Some represent updates from work presented in previous years; others are new topics.

How are topics selected?

The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness to current researchers.

How can I submit or suggest a publication?

Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.

Submissions and suggestions may be sent to: news@scienceofsecurity.net

(ID#:15-7301)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Adversary Models and Privacy, 2014

 

 

Adversary Models and Privacy

2014


The need to understand adversarial behavior in light of new technologies is always important. Using models to understand their behavior is an important element in the Science of Security, particularly in the context of threats to privacy—data privacy, location privacy, and other forms. The research presented here was performed in 2014 and recovered on June 30, 2015.



Wei Wang; Qian Zhang, “A Stochastic Game for Privacy Preserving Context Sensing on Mobile Phone,” INFOCOM, 2014 Proceedings IEEE, vol., no., pp. 2328-2336, 27 April-2 May 2014. doi:10.1109/INFOCOM.2014.6848177
Abstract: The proliferation of sensor-equipped smartphones has enabled an increasing number of context-aware applications that provide personalized services based on users' contexts. However, most of these applications aggressively collect users' sensing data without providing clear statements on the usage and disclosure strategies of such sensitive information, which raises severe privacy concerns and leads to some initial investigation on privacy preservation mechanisms design. While most prior studies have assumed static adversary models, we investigate the context dynamics and call attention to the existence of intelligent adversaries. In this paper, we first identify the context privacy problem with consideration of the context dynamics and malicious adversaries with capabilities of adjusting their attacking strategies, and then formulate the interactive competition between users and adversaries as a zero-sum stochastic game. In addition, we propose an efficient minimax learning algorithm to obtain the optimal defense strategy. Our evaluations on real smartphone context traces of 94 users validate the proposed algorithm.
Keywords: data privacy; learning (artificial intelligence);minimax techniques; smart phones; stochastic games; ubiquitous computing; attacking strategy; context dynamics; context privacy problem; context-aware application; disclosure strategy; intelligent adversary; interactive competition; minimax learning algorithm; mobile phone; optimal defense strategy; personalized services; privacy preservation mechanisms design; privacy preserving context sensing; sensor-equipped smartphones; static adversary model; user context; user sensing data; zero-sum stochastic game; Context; Context-aware services; Games; Privacy; Sensors; Smart phones; Stochastic processes (ID#: 15-6301)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848177&isnumber=6847911 
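The paper formulates the user-adversary interaction as a zero-sum stochastic game solved by a minimax learning algorithm. As a much simpler stand-in, the sketch below runs plain fictitious play — each side best-responding to the opponent's empirical action mixture — on matching pennies, where the empirical strategies approach the minimax solution (0.5, 0.5):

```python
# Matching pennies: the row player's payoff; the column player gets the negative.
payoff = [[1, -1],
          [-1, 1]]

def fictitious_play(payoff, rounds=20000):
    row_counts, col_counts = [1, 1], [1, 1]   # empirical action histories
    for _ in range(rounds):
        # Row maximizes expected payoff against the column's empirical mix;
        # column minimizes it against the row's empirical mix.
        row = max((0, 1), key=lambda i: payoff[i][0] * col_counts[0]
                                        + payoff[i][1] * col_counts[1])
        col = min((0, 1), key=lambda j: payoff[0][j] * row_counts[0]
                                        + payoff[1][j] * row_counts[1])
        row_counts[row] += 1
        col_counts[col] += 1
    total = sum(row_counts)
    return [c / total for c in row_counts]

mix = fictitious_play(payoff)
print([round(p, 3) for p in mix])   # approaches [0.5, 0.5], the minimax strategy
```

In the paper's setting the game additionally has state (the evolving context), so the learner must compute a minimax policy per state; fictitious play on a single matrix game only illustrates the equilibrium concept being learned.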


Ren-Hung Hwang; Fu-Hui Huang, “SocialCloaking: A Distributed Architecture for K-Anonymity Location Privacy Protection,” Computing, Networking and Communications (ICNC), 2014 International Conference on, vol., no., pp. 247-251, 3-6 Feb. 2014. doi:10.1109/ICCNC.2014.6785340
Abstract: As location information becomes commonly available in smart phones, applications of Location Based Service (LBS) has also become very popular and are widely used by smart phone users. Since the query of LBS contains user's location, it raises a privacy concern of exposure of user's location. K-anonymity is a commonly adopted technique for location privacy protection. In the literature, a centralized architecture which consists of a trusted anonymity server is widely adopted. However, this approach exhibits several apparent weaknesses, such as single point of failure, performance bottleneck, serious security threats, and not trustable to users, etc. In this paper, we re-examine the location privacy protection problem in LBS applications. We first provide an overview of the problem itself, to include types of query, privacy protection methods, adversary models, system architectures, and their related works in the literature. We then discuss the challenges of adopting a distributed architecture which does not need to set up a trusted anonymity server and propose a solution by combining unique features of structured peer-to-peer architecture and trust relationships among users of their on-line social networking relations.
Keywords: data privacy; mobile computing; query processing; social networking (online); trusted computing; K-anonymity location privacy protection; LBS query; SocialCloaking; adversary model; centralized architecture; distributed architecture; failure point; location information; location-based service; on-line social networking relation; security threat; smart phones; structured peer-to-peer architecture; system architecture; trust relationship; trusted anonymity server; user location; Computer architecture; Mobile communication; Mobile handsets; Peer-to-peer computing; Privacy; Servers; Trajectory; Distributed Anonymity Server Architecture; Location Based Service; Location Privacy; Peer-to-Peer; Social Networking (ID#: 15-6302)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785340&isnumber=6785290 
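The k-anonymity cloaking idea common to these architectures — report a region covering at least k users instead of an exact location — can be sketched independently of where the anonymizer runs (trusted server or peer group). All coordinates below are hypothetical:

```python
def cloak(user_loc, all_locs, k):
    # Report a bounding box covering the k users nearest the querier
    # (including the querier), so the LBS cannot single out one of them.
    ux, uy = user_loc
    ordered = sorted(all_locs,
                     key=lambda p: max(abs(p[0] - ux), abs(p[1] - uy)))
    group = ordered[:k]
    xs = [p[0] for p in group]
    ys = [p[1] for p in group]
    return (min(xs), min(ys)), (max(xs), max(ys))

users = [(0, 0), (1, 1), (2, 0), (10, 10), (11, 9)]
box = cloak((0, 0), users, k=3)
print(box)  # ((0, 0), (2, 1)): any of 3 users could be the querier
```

The centralized design computes this box at a trusted anonymity server that sees everyone's location; the paper's distributed design instead assembles the group from peers (e.g. social contacts), avoiding the single point of failure and trust the abstract criticizes.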


Kulkarni, S.; Saha, S.; Hockenbury, R., “Preserving Privacy in Sensor-Fog Networks,” Internet Technology and Secured Transactions (ICITST), 2014 9th International Conference for, vol., no., pp. 96-99, 8-10 Dec. 2014. doi:10.1109/ICITST.2014.7038785
Abstract: To address the privacy-utility tradeoff associated with wireless sensor networks in general, and a smart television remote in particular, we study and test usability factors and privacy aspects associated with the current framework models of a TV remote, and port the paradigm of Fog computing to arrive at an optimal solution. A Fog node, being closer to the end-devices not only mitigates the problem of latency but also enables computationally expensive operations, which were earlier possible only at cloud-side. We explore various adversary models, which can potentially compromise our framework and suggest measures to help avoid them.
Keywords: digital television; distributed algorithms; public key cryptography; wireless sensor networks; TV remote; fog computing; privacy; public key cryptography; sensor-fog network; smart television remote; wireless sensor network; Accelerometers; Accuracy; Computational modeling; Feature extraction; Privacy; Public key cryptography; TV; fog; smart; utility (ID#: 15-6303)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7038785&isnumber=7038754 


Kui Xu; Danfeng Yao; Perez-Quinones, M.A.; Link, C.; Geller, E.S., “Role-Playing Game for Studying User Behaviors in Security: A Case Study on Email Secrecy,” Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom), 2014 International Conference on, vol., no., pp. 18-26, 22-25 Oct. 2014. doi: (not provided)
Abstract: Understanding the capabilities of adversaries (e.g., how much the adversary knows about a target) is important for building strong security defenses. Computing an adversary's knowledge about a target requires new modeling techniques and experimental methods. Our work describes a quantitative analysis technique for modeling an adversary's knowledge about private information at workplace. Our technical enabler is a new emulation environment for conducting user experiments on attack behaviors. We develop a role-playing cyber game for our evaluation, where the participants take on the adversary role to launch ID theft attacks by answering challenge questions about a target. We measure an adversary's knowledge based on how well he or she answers the authentication questions about a target. We present our empirical modeling results based on the data collected from a total of 36 users.
Keywords: Internet; behavioural sciences computing; computer games; data privacy; message authentication; unsolicited e-mail; ID theft attack; email secrecy; quantitative analysis technique; role-playing cyber game; security defenses; user behavior; Authentication; Educational institutions; Electronic mail; Games; Privacy; Servers; Social network services (ID#: 15-6304)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7014546&isnumber=7011734 


Nagendrakumar, S.; Aparna, R.; Ramesh, S., “A Non-Grouping Anonymity Model for Preserving Privacy in Health Data Publishing,” Science Engineering and Management Research (ICSEMR), 2014 International Conference on, vol., no., pp. 1, 6, 27-29 Nov. 2014. doi:10.1109/ICSEMR.2014.7043554
Abstract: Publishing health data can lead to privacy breaches, since such data contain sensitive information about individuals. Privacy-preserving data publishing (PPDP) addresses the problem of revealing sensitive data when extracting useful data. Existing privacy models are group-based anonymity models, so they consider an individual's privacy only in a group-based manner, and those groups become the hunting ground for adversaries: all data re-identification attacks are based on groups of records. The root of our approach is that the k-anonymity problem can be viewed as a clustering problem; although k-anonymity does not prescribe the number of clusters, it requires that each group contain at least k records. We propose a Non-Grouping Anonymity model, which provides a basic level of anonymization that prevents an individual from being re-identified from the published data.
Keywords: data privacy; electronic publishing; medical information systems; pattern clustering; security of data; PPDP; anonymization; clustering approach; data re-identification attacks; group based anonymity model; health data publishing privacy; k-anonymity problem; nongrouping anonymity model; privacy breaches; privacy model; privacy preserving data publishing; sensitive data; sensitive information; Data models; Data privacy; Loss measurement; Privacy; Publishing; Taxonomy; Vegetation; Anonymity; Privacy in Data Publishing; data Privacy; data Utility (ID#: 15-6305)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043554&isnumber=7043537 
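The abstract's observation that k-anonymity can be viewed as clustering can be sketched generically: records are clustered into groups of at least k, and each quasi-identifier is released only as its group's range. The toy Python sketch below is a generic illustration of that view, not the authors' Non-Grouping Anonymity model:

```python
def k_anonymize(ages, k):
    """Release each record only as the [min, max] range of a group of >= k."""
    ordered = sorted(ages)
    if len(ordered) < k:
        raise ValueError("need at least k records")
    n_groups = len(ordered) // k
    # Group boundaries: consecutive runs of k, remainder folded into the last.
    bounds = [i * k for i in range(n_groups)] + [len(ordered)]
    result = []
    for lo, hi in zip(bounds, bounds[1:]):
        group = ordered[lo:hi]
        result.extend([(group[0], group[-1])] * len(group))
    return result

print(k_anonymize([23, 25, 31, 38, 40, 52, 57], 3))
# -> [(23, 31), (23, 31), (23, 31), (38, 57), (38, 57), (38, 57), (38, 57)]
```

Each released record is indistinguishable from at least k - 1 others in its group, which is exactly the re-identification barrier the abstract refers to.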


Tiwari, P.K.; Chaturvedi, S., “Publishing Set Valued Data via M-Privacy,” Advances in Engineering and Technology Research (ICAETR), 2014 International Conference on, vol., no., pp. 1, 6, 1-2 Aug. 2014. doi:10.1109/ICAETR.2014.7012814
Abstract: Achieving data security in distributed databases is very important, and as the use of distributed databases grows, the security issues surrounding them become more complex. M-privacy is an effective technique for securing distributed databases. Set-valued data provide huge opportunities for a variety of data mining tasks, yet most existing publishing techniques for set-valued data rely on horizontal-division-based privacy models. The differential privacy method takes the opposite approach: it provides a stronger privacy guarantee and is also independent of an adversary's background knowledge and computational capability. Because set-valued data have high dimensionality, no single existing data publishing approach for differential privacy achieves both utility and scalability. This work provides detailed information about this new threat and assistance in resolving it. We first introduce the concept of m-privacy, which guarantees that the anonymized data satisfy a given privacy check against any group of up to m colluding data providers. We then present heuristic approaches that exploit the monotonicity of privacy constraints to efficiently check m-privacy for a group of records. Next, we present a data-provider-aware anonymization approach with adaptive m-privacy checking strategies to ensure high utility and m-privacy of the anonymized data. Finally, we propose secure multi-party computation protocols for set-valued data publishing with m-privacy.
Keywords: data mining; data privacy; distributed databases; adaptive m-privacy inspection strategies; anonymous data; computational capability; confidentiality constraints monotonicity; data mining tasks; data provider-aware anonymization approach; data security; distributed database security; environment information; heuristic approach; horizontal division based privacy models; privacy check; privacy guarantee; privacy method; secured multiparty calculation protocols; set-valued data publishing techniques; threat; Algorithm design and analysis; Computational modeling; Data privacy; Distributed databases; Privacy; Publishing; Taxonomy; privacy; set-valued dataset (ID#: 15-6306)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7012814&isnumber=7012782 
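The m-privacy guarantee described above can be sketched as a brute-force check: the anonymized group must still satisfy the underlying privacy constraint after discounting the records contributed by any coalition of up to m colluding providers. A minimal illustrative sketch, where the `constraint` predicate is a placeholder (a stand-in for, e.g., k-anonymity) and the paper's monotonicity-based pruning is omitted:

```python
from itertools import combinations

def is_m_private(records, providers, m, constraint):
    """records: list of (provider_id, record); constraint: records -> bool.

    Holds if the constraint survives removal of any coalition of up to m
    colluding providers' own records (brute force; no heuristic pruning).
    """
    for size in range(m + 1):
        for coalition in combinations(providers, size):
            remaining = [rec for prov, rec in records if prov not in coalition]
            if not constraint(remaining):
                return False
    return True

# Placeholder constraint: at least 3 records must remain after collusion.
records = [("A", 1), ("A", 2), ("B", 3), ("C", 4), ("C", 5)]
print(is_m_private(records, ["A", "B", "C"], 1, lambda rs: len(rs) >= 3))  # -> True
```

With m = 2 the same data fails, since providers A and C together can discount all but one record.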


Hui Cui; Yi Mu; Man Ho Au, “Public-Key Encryption Resilient against Linear Related-Key Attacks Revisited,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 268, 275, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.37
Abstract: Wee (PKC'12) proposed a generic public-key encryption scheme in the setting of related-key attacks. Bellare, Paterson and Thomson (Asiacrypt'12) provided a framework enabling related-key attack (RKA) secure cryptographic primitives for a class of non-linear related-key derivation functions. However, in both constructions, the instantiations that achieve full (not weak) RKA security are given in the scenario where the private key is composed of a single element; in other words, each element of the private key shares the same modification, which is impractical in the real world. In this paper, we concentrate on the security of public-key encryption schemes under linear related-key attacks in the setting of multi-element private keys (that is, the private key is composed of more than one element), where an adversary is allowed to tamper with any part of the private key stored in a hardware device and subsequently observe the outcome of the public-key encryption system under the modified private key. We define the security model for RKA-secure public-key encryption schemes as chosen-ciphertext and related-key attack (CC-RKA) security, meaning that a public-key encryption scheme remains secure even when an adversary is allowed to query the decryption oracle on linear shifts of any component of the private key. We then present a concrete public-key encryption scheme with a private key formed of several elements, whose CC-RKA security holds under the decisional BDH assumption in the standard model.
Keywords: public key cryptography; Asiacrypt12; CC-RKA security; PKC12; chosen-cipher text; decisional BDH assumption; decryption oracle; linear related-key secure cryptographic primitives; multielement private keys; nonlinear related-key derivation functions; public-key encryption; standard model; Encryption; Hardware; Identity-based encryption; Resistance; Linear related-key attack; Public-key encryption (ID#: 15-6307)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011260&isnumber=7011202 


Oya, S.; Troncosoy, C.; Perez-Gonzalez, F., “Understanding the Effects of Real-World Behavior in Statistical Disclosure Attacks,” Information Forensics and Security (WIFS), 2014 IEEE International Workshop on, vol., no., pp. 72, 77, 3-5 Dec. 2014. doi:10.1109/WIFS.2014.7084306
Abstract: High-latency anonymous communication systems prevent passive eavesdroppers from inferring communicating partners with certainty. However, disclosure attacks allow an adversary to recover users' behavioral profiles when communications are persistent. Understanding how the system parameters affect the privacy of the users against such attacks is crucial. Earlier work in the area analyzes the performance of disclosure attacks in controlled scenarios, where a certain model of the users' behavior is assumed. In this paper, we analyze the profiling accuracy of one of the most efficient disclosure attacks, the least squares disclosure attack, in realistic scenarios. We generate real traffic observations from datasets of different nature and find that the models considered in previous work do not fit this realistic behavior. We relax previous hypotheses on the behavior of the users and extend previous performance analyses, validating our results with real data and providing new insights into the parameters that affect the protection of the users in the real world.
Keywords: data privacy; least squares approximations; security of data; statistical analysis; high-latency anonymous communication systems; least squares disclosure attack; passive eavesdroppers; profiling accuracy; real-world behavior; statistical disclosure attacks; user privacy; Analytical models; Approximation methods; Conferences; Electronic mail; Forensics; Performance analysis; Receivers; anonymity; mixes; performance analysis (ID#: 15-6308)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7084306&isnumber=7084286 


Ramachandran, S.; Chithan, S.; Ravindran, S., “A Cost-Effective Approach Towards Storage and Privacy Preserving for Intermediate Data Sets in Cloud Environment,” Recent Trends in Information Technology (ICRTIT), 2014 International Conference on, vol., no., pp. 1, 5, 10-12 April 2014. doi:10.1109/ICRTIT.2014.6996145
Abstract: Cloud computing offers a pay-as-you-go model in which users pay only for the resources they consume. Many large applications utilize cloud computing and generate many essential intermediate results for future use. Storing all intermediate results is not cost-efficient, yet an adversary may cross-reference multiple intermediate results to steal information, and encrypting every intermediate result would increase the user's computation cost. The main aim of this system is to provide a cost-effective approach for storing intermediate results and preserving their privacy.
Keywords: cloud computing; data privacy; cloud environment; computation cost; cost efficient approach; intermediate data set; pay-as-you-go model; privacy preservation; resource consumption; storage preservation; Cloud computing; Computational efficiency; Computational modeling; Data privacy; Encryption; Privacy; storage strategy (ID#: 15-6309)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6996145&isnumber=6996087 


Sidorov, V.; Wee Keong Ng, “Model of an Encrypted Cloud Relational Database Supporting Complex Predicates in WHERE Clause,” Cloud Computing (CLOUD), 2014 IEEE 7th International Conference on, vol., no., pp. 667, 672, June 27 2014–July 2 2014. doi:10.1109/CLOUD.2014.94
Abstract: Even though the concept of Database-as-a-Service (DaaS) is becoming more popular and offers significant expenditure cuts, enterprises are still reluctant to migrate their data storage and processing to the cloud. One reason is the lack of solid security guarantees. An encrypted database is one of the major approaches to addressing the security of cloud data processing. However, in order to provide processing capabilities over encrypted data, multiple techniques need to be combined and adjusted to work together. This paper introduces a modular and extensible framework model of an encrypted database that makes it possible to execute a wide range of queries, including those with complex arithmetic expressions, retaining data privacy even against an adversary gaining full access to the database server. The proposed model could be used as a basis for encrypted database systems with various functional requirements.
Keywords: cloud computing; cryptography; relational databases; DaaS; WHERE clause; cloud data processing security; cloud relational database encryption; database-as-a-service; Data models; Databases; Encryption; Numerical models; Servers; cloud database security; complex query predicates; querying encrypted data (ID#: 15-6310)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6973800&isnumber=6973706 
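One standard building block behind evaluating WHERE equality predicates over encrypted data is deterministic keyed tokenization, which lets the server match ciphertexts without ever seeing plaintexts. The sketch below illustrates that generic technique with HMAC-based tokens; it is an assumption about one plausible component, not the specific framework of the paper:

```python
import hashlib
import hmac

KEY = b"client-secret-key"  # hypothetical key, held by the client only

def eq_token(value: str) -> str:
    """Deterministic keyed token: equal plaintexts yield equal tokens."""
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

# The client tokenizes sensitive columns before upload...
rows = [{"name_tok": eq_token(n)} for n in ["alice", "bob", "alice"]]

# ...so the untrusted server can evaluate  WHERE name = 'alice'
# by comparing tokens, never seeing a plaintext name.
matches = [r for r in rows if r["name_tok"] == eq_token("alice")]
print(len(matches))  # -> 2
```

Deterministic tokens leak equality patterns, which is why frameworks like the one above layer different encryption schemes per predicate type (range, arithmetic, equality) rather than using a single scheme throughout.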


Ajish, S.; Rajasree, R., “Secure Mail using Visual Cryptography (SMVC),” Computing, Communication and Networking Technologies (ICCCNT), 2014 International Conference on, vol., no., pp. 1, 7, 11-13 July 2014. doi:10.1109/ICCCNT.2014.6963148
Abstract: E-mail messaging is one of the most popular uses of the Internet, letting users exchange messages within a short span of time. Although the security of e-mail messages is an important issue, no such security is supported by the Internet standards. One well-known scheme, called PGP (Pretty Good Privacy), is used for personal security of e-mail messages, but there is an attack on the CFB mode encryption used by OpenPGP. To overcome the attacks and improve security, a new model is proposed: "Secure Mail using Visual Cryptography" (SMVC). In SMVC, the message to be transmitted is converted into a grayscale image, and (2, 2) visual cryptographic shares are generated from that image. The shares are encrypted using a chaos-based image encryption algorithm using wavelet transform and authenticated using a public-key-based image authentication method. One share is sent to a server and the second to the recipient's mailbox. The two shares travel over two different transmission media, so a man-in-the-middle attack is not possible, and an adversary holding only one of the two shares has absolutely no information about the message. At the receiver side, the two shares are fetched, decrypted, and stacked to regenerate the grayscale image, from which the message is reconstructed.
Keywords: chaos; data privacy; electronic mail; image processing; message authentication; public key cryptography; wavelet transforms; (2, 2) visual cryptography; CFB mode encryption; Internet standards; OpenPGP; SMVC; chaos-based image encryption algorithm; e-mail messaging; gray scale image; personal security; pretty good privacy; public key based image authentication method; recipient mail box; receiver side; secure mail using visual cryptography; transmission medium; wavelet transform; Electronic mail; Encryption; Heuristic algorithms; Receivers; Visualization; Wavelet transforms; chaos based image encryption algorithm; dynamic s-box algorithm; low frequency wavelet coefficient; pretty good privacy; visual cryptography; wavelet decomposition (ID#: 15-6311)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6963148&isnumber=6962988 
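The abstract's claim that one share alone carries "absolutely no information" is the defining property of (2, 2) secret sharing. Real (2, 2) visual cryptography splits a grayscale image into pixel-pattern shares; the simplest analogue of the same property is XOR-based sharing, sketched here:

```python
import os

def make_shares(message: bytes):
    """(2, 2) split: a uniformly random share and its XOR with the message."""
    share1 = os.urandom(len(message))
    share2 = bytes(m ^ r for m, r in zip(message, share1))
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    """Stacking the shares (XOR) recovers the message exactly."""
    return bytes(a ^ b for a, b in zip(share1, share2))

s1, s2 = make_shares(b"meet at noon")
print(combine(s1, s2))  # -> b'meet at noon'
```

Because `share1` is uniformly random and independent of the message, either share on its own is statistically indistinguishable from random bytes, which is why sending the shares over two separate channels defeats an eavesdropper on either one.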


Pandit, A.; Polina, P.; Kumar, A., “CLOPRO: A Framework for Context Cloaking Privacy Protection,” Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, vol., no., pp. 782, 787, 7-9 April 2014. doi:10.1109/CSNT.2014.164
Abstract: Smartphones, loaded with users' personal information, have become the primary computing device for many, making privacy an increasingly important issue. To protect the privacy of context-based service users, we propose the CLOPRO (Context Cloaking Privacy Protection) framework, which uses two non-colluding servers. Each service request has three parameters: identity, context, and the actual query. The proposed, integrated framework achieves identity privacy, context privacy, and query privacy to reduce the risk of an adversary linking all three parameters. The methodology is as follows: users with similar queries are clustered together, and a unique ID for each cluster ensures identity privacy; the centroid of the location coordinates replaces the actual locations of the users in the cluster for location privacy; and query abstraction at multiple levels ensures query privacy. The refined query is then sent to the service provider for processing. The effectiveness of the proposed approach is established by analyzing the CLOPRO privacy protection model and comparing it with other approaches.
Keywords: data protection; pattern clustering; query processing; smart phones; CLOPRO privacy protection model; context based service users; context cloaking privacy protection; context privacy; identity privacy; noncolluding servers; query abstraction; query privacy; smartphones; Clustering algorithms; Context; Mobile communication; Mobile handsets; Privacy; Servers; Time factors; Abstraction; Anonymization; Clustering; Context Cloaking; Location Based Services; Privacy Protection (ID#: 15-6312)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821506&isnumber=6821334 


Paverd, A.; Martin, A.; Brown, I., “Privacy-Enhanced Bi-Directional Communication in the Smart Grid Using Trusted Computing,” Smart Grid Communications (SmartGridComm), 2014 IEEE International Conference on, vol., no., pp. 872, 877, 3-6 Nov. 2014. doi:10.1109/SmartGridComm.2014.7007758
Abstract: Although privacy concerns in smart metering have been widely studied, relatively little attention has been given to privacy in bi-directional communication between consumers and service providers. Full bi-directional communication is necessary for incentive-based demand response (DR) protocols, such as demand bidding, in which consumers bid to reduce their energy consumption. However, this can reveal private information about consumers. Existing proposals for privacy-enhancing protocols do not support bi-directional communication. To address this challenge, we present a privacy-enhancing communication architecture that incorporates all three major information flows (network monitoring, billing and bi-directional DR) using a combination of spatial and temporal aggregation and differential privacy. The key element of our architecture is the Trustworthy Remote Entity (TRE), a node that is singularly trusted by mutually distrusting entities. The TRE differs from a trusted third party in that it uses Trusted Computing approaches and techniques to provide a technical foundation for its trustworthiness. An automated formal analysis of our communication architecture shows that it achieves its security and privacy objectives with respect to a previously defined adversary model. This is therefore the first application of privacy-enhancing techniques to bi-directional smart grid communication between mutually distrusting agents.
Keywords: data privacy; energy consumption; incentive schemes; invoicing; power engineering computing; power system measurement; protocols; smart meters; smart power grids; trusted computing; TRE; automated formal analysis; bidirectional DR information flow; billing information flow; differential privacy; energy consumption reduction; incentive-based demand response protocol; network monitoring information flow; privacy-enhanced bidirectional smart grid communication architecture; privacy-enhancing protocol; smart metering; spatial aggregation; temporal aggregation; trusted computing; trustworthy remote entity; Bidirectional control; Computer architecture; Monitoring; Privacy; Protocols; Security; Smart grids (ID#: 15-6313)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7007758&isnumber=7007609 
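Of the three information flows, spatial and temporal aggregation are the simplest to illustrate: individual meter readings leave the trusted aggregator only as totals across households (for network monitoring) or across a billing window (for billing). A toy sketch with made-up readings, as a generic illustration rather than the TRE protocol itself:

```python
# Hypothetical per-interval readings (kWh); values are made up.
readings = {
    "house1": [1.2, 0.9, 1.1],
    "house2": [0.4, 0.6, 0.5],
    "house3": [2.0, 1.8, 2.2],
}

def spatial_aggregate(readings):
    """Per-interval totals across households (network monitoring flow)."""
    return [sum(vals) for vals in zip(*readings.values())]

def temporal_aggregate(readings, household):
    """Per-household total over the whole billing window (billing flow)."""
    return sum(readings[household])

print(spatial_aggregate(readings))
print(temporal_aggregate(readings, "house1"))
```

Either aggregate alone hides the fine-grained consumption profile of any single household at any single interval, which is the signal most useful for inferring occupancy and appliance usage.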


Minami, K., “Preventing Denial-of-Request Inference Attacks in Location-Sharing Services,” Mobile Computing and Ubiquitous Networking (ICMU), 2014 Seventh International Conference on, vol., no., pp. 50, 55, 6-8 Jan. 2014. doi:10.1109/ICMU.2014.6799057
Abstract: Location-sharing services (LSSs), such as Google Latitude, have become popular recently. However, location information is sensitive, and access to it must be controlled carefully. We previously studied an inference problem against an adversary who performs inference based on a Markov model representing a user's mobility patterns. However, the Markov model does not capture the fact that a denial of a request enforced by the LSS itself implies that the target user is visiting some private location. In this paper, we develop an algorithmic model for representing this new class of inference attacks and conduct experiments with a real location dataset to show that the threats posed by denial-of-request inference attacks are real and significant.
Keywords: Global Positioning System; Markov processes; telecommunication security; Google Latitude; LSS; Markov model; denial-of-request inference attacks prevention; location-sharing services; private location; user mobility patterns; Global Positioning System; Hospitals; Inference algorithms; Libraries; Privacy; Trajectory (ID#: 15-6314)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799057&isnumber=6799045 
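The inference step the paper extends can be sketched as conditioning a Markov next-location prediction on the extra observation "request denied ⇒ user is at a private location". The transition probabilities and location names below are invented for illustration:

```python
# Invented transition probabilities between locations.
transitions = {
    "home":     {"home": 0.6, "office": 0.3, "hospital": 0.1},
    "office":   {"home": 0.5, "office": 0.4, "hospital": 0.1},
    "hospital": {"home": 0.7, "office": 0.2, "hospital": 0.1},
}
PRIVATE = {"hospital"}  # locations the LSS hides by denying requests

def next_location_beliefs(last_seen, denied):
    """Adversary's belief over the next location, given the last observed
    location and whether the LSS denied the current request."""
    beliefs = dict(transitions[last_seen])
    if denied:
        # A denial implies the user is at some private location:
        # renormalize all probability mass onto the private set.
        mass = sum(p for loc, p in beliefs.items() if loc in PRIVATE)
        beliefs = {loc: (p / mass if loc in PRIVATE else 0.0)
                   for loc, p in beliefs.items()}
    return beliefs

print(next_location_beliefs("home", denied=True))  # all mass on "hospital"
```

In this toy model a single denial collapses a 10% prior on the private location to certainty, which is exactly why the paper treats the denial itself as an information channel.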


Lopez, J.M.; Ruebsamen, T.; Westhoff, D., “Privacy-Friendly Cloud Audits with Somewhat Homomorphic and Searchable Encryption,” Innovations for Community Services (I4CS), 2014 14th International Conference on, vol., no., pp. 95, 103, 4-6 June 2014. doi:10.1109/I4CS.2014.6860559
Abstract: In this paper, we provide privacy enhancements for a software agent-based audit system for clouds and propose a general privacy-enhancing cloud audit concept, which we present based on a recently proposed framework. This framework introduces the use of audit agents to collect digital evidence from different sources in cloud environments. The elicitation and storage of such evidence raises new privacy concerns for cloud customers, since it may reveal sensitive information about the utilization of cloud services. We remedy this by applying Somewhat Homomorphic Encryption (SHE) and Public-Key Searchable Encryption (PEKS) to the collection of digital evidence. By considering prominent audit event use cases, we show that the amount of cleartext information provided to an evidence-storing entity, and subsequently to a third-party auditor, can be shaped into a good balance between i) the customers' privacy and ii) the fact that stored information may need to have probative value. We believe that the administrative domain responsible for the evidence-storing database falls under the "honest-but-curious" adversary model and thus should answer the auditor's queries for a given cloud audit use case purely by performing operations on encrypted digital evidence data.
Keywords: cloud computing; public key cryptography; software agents; PEKS; SHE; cloud computing; cloud services; privacy-friendly cloud audits; public-key searchable encryption; searchable encryption; software agent-based audit system; somewhat homomorphic encryption; third-party auditor; Encryption; IP networks; Monitoring; Privacy; Public key; Audit; Cloud Computing; Computing on Encrypted Data; Evidence; Searchable Encryption; Somewhat Homomorphic Encryption (ID#: 15-6315)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6860559&isnumber=6860533 


Srihari Babu, D.V.; Reddy, P.C., “Secure Policy Agreement for Privacy Routing in Wireless Communication System,” Control, Instrumentation, Communication and Computational Technologies (ICCICCT), 2014 International Conference on, vol., no., pp. 739, 744, 10-11 July 2014. doi:10.1109/ICCICCT.2014.6993057
Abstract: Security and privacy are major issues that put the successful operation of wireless communication systems at risk in ad hoc and sensor networks. Message confidentiality can be assured through message or content encryption, but source location privacy is much harder to address. A number of schemes and policies have been proposed to protect privacy in wireless networks, but none of them provides complete security for both data packets and control packets. This paper proposes a secure policy agreement approach for open-privacy routing in wireless communication using a location-centric communication model, achieving efficient security and privacy against both internal and external adversaries. To evaluate our proposal, we analyze its security, privacy, and performance against alternative techniques. Simulation results show that the proposed policy is more efficient and offers better privacy than prior work.
Keywords: ad hoc networks; cryptography; data privacy; telecommunication network routing; wireless channels; wireless sensor networks; ad hoc networks; complete security property; content encryption; control packets; data packets; external adversary pretenders; internal adversary pretenders; location-centric communication; message confidentiality; message encryption; open-privacy routing; secure policy agreement; sensor networks; source location privacy; successful operation employment; wireless communication system; Mobile ad hoc networks; Privacy; Public key; Routing; Routing protocols; MANET; Privacy Routing; Secure policy; Wireless Communication (ID#: 15-6316)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6993057&isnumber=6992918 


Zheng Jiangyu; Tan Xiaobin; Cliff, Z.; Niu Yukun; Zhu Jin, “A Cloaking-Based Approach to Protect Location Privacy in Location-Based Services,” Control Conference (CCC), 2014 33rd Chinese, vol., no., pp. 5459, 5464, 28-30 July 2014. doi:10.1109/ChiCC.2014.6895872
Abstract: With the widespread use of mobile devices, location-based service (LBS) applications have become increasingly popular, introducing the new security challenge of protecting users' location privacy. On one hand, a user wants to report a location as far as possible from his real location to protect his location privacy; on the other hand, to obtain high quality of service (QoS), users are required to report their locations as accurately as possible. To achieve a dedicated tradeoff between the privacy requirement and the QoS requirement, we propose a novel approach based on a cloaking technique. We also discuss the disadvantages of the traditional general system model and propose an improved one. The basic idea of our approach is to select a sub-area of the generated cloaking area as the user's reported location. The sub-area may not contain the user's real location, which prevents an adversary from mounting an attack with side information. Specifically, by defining an objective function over a novel location privacy metric and a QoS metric, we convert the privacy issue into an optimization problem, and we propose a heuristic algorithm to reduce the complexity of the optimization. Through privacy-preserving analysis and comparison with related work [8], we demonstrate the effectiveness and efficiency of our approach.
Keywords: data protection; invisibility cloaks; mobility management (mobile radio); optimisation; quality of service; smart phones; telecommunication security; QoS metric; cloaking-based approach; heuristic algorithm; location privacy metric; location-based services; mobile devices; optimization problem; privacy preserving analysis; privacy requirement; security; user location privacy protection; Complexity theory; Heuristic algorithms; Measurement; Optimization; Privacy; Quality of service; Servers; Cloaking Area; Location Privacy; Location-based Services; k-anonymity (ID#: 15-6317)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895872&isnumber=6895198 
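The conversion of the privacy/QoS tradeoff into an optimization problem can be sketched as scoring each candidate sub-area by a privacy metric minus a weighted QoS penalty and reporting the best-scoring one. The metrics and weight below are illustrative placeholders, not the paper's definitions:

```python
import math

def choose_reported_location(real, candidates, lam=0.5):
    """Pick the candidate sub-area centre maximizing privacy gain minus a
    lam-weighted QoS penalty (both metrics are placeholders)."""
    def objective(cand):
        d = math.hypot(cand[0] - real[0], cand[1] - real[1])
        privacy = math.log1p(d)   # privacy gain, diminishing returns
        qos_penalty = d           # service degrades with reporting error
        return privacy - lam * qos_penalty
    return max(candidates, key=objective)

real = (0.0, 0.0)
candidates = [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0), (10.0, 0.0)]
print(choose_reported_location(real, candidates))        # -> (1.0, 0.0)
print(choose_reported_location(real, candidates, lam=5)) # -> (0.0, 0.0)
```

Raising `lam` makes QoS dominate, pulling the reported location back toward the real one, which is the knob the paper's objective function exposes.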


Sam, M.M.; Vijayashanthi, N.; Sundhari, A., “An Efficient Pseudonymous Generation Scheme with Privacy Preservation for Vehicular Communication,” Intelligent Computing Applications (ICICA), 2014 International Conference on, vol., no., pp. 109, 117, 6-7 March 2014. doi:10.1109/ICICA.2014.32
Abstract: Vehicular ad-hoc network (VANET) communication has recently become an increasingly popular research topic in wireless networking as well as the automotive industries. The goal of VANET research is to develop a vehicular communication system enabling quick and cost-efficient distribution of data for the benefit of passenger safety and comfort. Location privacy in VANETs, however, remains a pressing issue. A popular approach recommended for VANETs is that vehicles periodically change their pseudonyms when they broadcast safety messages. We use an effective pseudonym changing at proper location (PCP) strategy (e.g., at a road intersection when the traffic light turns red, or in a free parking lot near a shopping mall) to achieve provable location privacy. In addition, we use bilinear pairing for self-delegated key generation. The current threat model primarily considers an adversary who can track a vehicle; such an adversary can utilize additional characteristic factors to track a vehicle, motivating the exploration of new location-privacy-enhanced techniques under this stronger threat model.
Keywords: telecommunication security; vehicular ad hoc networks; VANET communication; VANET research; bilinear pairing; effective pseudonym changing; location-privacy-enhanced techniques; privacy preservation; pseudonymous generation scheme; road intersection; self-delegated key generation; vehicular ad-hoc network; vehicular communication; vehicular communication system; wireless networking; Analytical models; Authentication; Privacy; Roads; Safety; Vehicles; Vehicular ad hoc networks; Group- Signature-Based (GSB); Pseudonym Changing at Proper Location (PCP); RoadSide Units (RSUs); Trusted Authority (TA) (ID#: 15-6318)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6965022&isnumber=6964987 


Depeng Li; Aung, Z.; Williams, J.; Sanchez, A., “P2DR: Privacy-Preserving Demand Response System in Smart Grids,” Computing, Networking and Communications (ICNC), 2014 International Conference on, vol., no., pp. 41, 47, 3-6 Feb. 2014. doi:10.1109/ICCNC.2014.6785302
Abstract: Demand response programs are widely used to balance supply and demand in smart grids, resulting in a reliable electric power system. Unfortunately, privacy violation is a pressing challenge that increasingly affects demand response programs, because power usage and operational data can be misused to infer personal information about customers. Without a consistent privacy preservation mechanism, adversaries can capture, model, and divulge customers' behavior and activities at almost every level of society. This paper investigates a set of new privacy threat models focusing on financial rationality versus inconvenience. Furthermore, we design and implement a privacy protection protocol based on attribute-based encryption. To demonstrate its feasibility, the protocol is adopted in several kinds of demand response programs. Real-world experiments show that our scheme incurs only a light overhead while addressing the formidable privacy challenges that customers face in demand response systems.
Keywords: cryptographic protocols; data privacy; power system reliability; smart power grids; P2DR; attributed-based encryptions; customer personal information; financial rationality verse inconvenience; operational data; power usage; privacy protection protocol; privacy threat; privacy-preserving demand response system; reliable electric power system; smart grids; substantially light overhead; supply demand balance; Control systems; Data privacy; Encryption; Load management; Protocols; Consumer privacy; Demand Response; Privacy Preservation; Smart Grids (ID#: 15-6319)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785302&isnumber=6785290 


Gaofeng He; Ming Yang; Xiaodan Gu; Junzhou Luo; Yuanyuan Ma, “A Novel Active Website Fingerprinting Attack against Tor Anonymous System,” Computer Supported Cooperative Work in Design (CSCWD), Proceedings of the 2014 IEEE 18th International Conference on, vol., no., pp. 112, 117, 21-23 May 2014. doi:10.1109/CSCWD.2014.6846826
Abstract: Tor is a popular anonymizing network, and existing work shows that it preserves users' privacy well against website fingerprinting attacks. However, based on our extensive analysis, we find that it is the overlap of web objects in returned web pages that obfuscates the traffic features, thus degrading the attack detection rate. In this paper, we propose a novel active website fingerprinting attack under Tor's local adversary model. The main idea is that the attacker can delay HTTP requests originated by users for a certain period to isolate the responding traffic segments containing different web objects. We deployed our attack in PlanetLab in an experiment lasting one month. The SVM multi-classification algorithm was then applied to the collected datasets with the introduced features to identify the visited website among the 100 top-ranked websites in Alexa. Compared to the state-of-the-art work, the classification result is improved from 48.5% to 65% by delaying at most 10 requests. We also analyzed the timing characteristics of Tor traffic to prove the stealth of our attack. The results show that anonymity in Tor is not as strong as expected and should be enhanced in the future.
Keywords: Web sites; pattern classification; security of data; support vector machines; Alexa; HTTP requests; PlanetLab; SVM multiclassification algorithm; Tor anonymous system; Tor traffic; Web objects; Web pages; novel active Website fingerprinting attack; timing characteristics; traffic features; Accuracy; Browsers; Delays; Fingerprint recognition; Protocols; Support vector machines; Tor; active website fingerprinting; anonymous communication; pattern recognition; privacy; traffic analysis (ID#: 15-6320)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846826&isnumber=6846800 


Le Ny, J.; Touati, A.; Pappas, G.J., “Real-Time Privacy-Preserving Model-Based Estimation of Traffic Flows,” Cyber-Physical Systems (ICCPS), 2014 ACM/IEEE International Conference on, vol., no., pp. 92, 102, 14-17 April 2014.  doi:10.1109/ICCPS.2014.6843714
Abstract: Road traffic information systems rely on data streams provided by various sensors, e.g., loop detectors, cameras, or GPS, containing potentially sensitive location information about private users. This paper presents an approach to enhance real-time traffic state estimators using fixed sensors with a privacy-preserving scheme providing formal guarantees to the individuals traveling on the road network. Namely, our system implements differential privacy, a strong notion of privacy that protects users against adversaries with arbitrary side information. In contrast to previous privacy-preserving schemes for trajectory data and location-based services, our procedure relies heavily on a macroscopic hydrodynamic model of the aggregated traffic in order to limit the impact on estimation performance of the privacy-preserving mechanism. The practicality of the approach is illustrated with a differentially private reconstruction of a day of traffic on a section of I-880 North in California from raw single-loop detector data.
Keywords: data privacy; real-time systems; road traffic; state estimation; traffic information systems; data streams; real-time privacy-preserving model; real-time traffic state estimators; road network; road traffic information systems; traffic flow estimation; Data privacy; Density measurement; Detectors; Privacy; Roads; Vehicles; Differential privacy; intelligent transportation systems; privacy-preserving data assimilation (ID#: 15-6321)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843714&isnumber=6843703 


Mardziel, P.; Alvim, M.S.; Hicks, M.; Clarkson, M.R., “Quantifying Information Flow for Dynamic Secrets,” Security and Privacy (SP), 2014 IEEE Symposium on, vol., no., pp. 540, 555, 18-21 May 2014. doi:10.1109/SP.2014.41
Abstract: A metric is proposed for quantifying leakage of information about secrets and about how secrets change over time. The metric is used with a model of information flow for probabilistic, interactive systems with adaptive adversaries. The model and metric are implemented in a probabilistic programming language and used to analyze several examples. The analysis demonstrates that adaptivity increases information flow.
Keywords: cryptography; high level languages; interactive systems; probability; dynamic secrets; information flow; information leakage; interactive systems; probabilistic programming language; probabilistic systems; Adaptation models; Automata; Context; History; Measurement; Probabilistic logic; Security; dynamic secret; gain function; probabilistic programming; quantitative information flow; vulnerability (ID#: 15-6322)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956586&isnumber=6956545 
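The leakage notions in this paper (and in the winning Alvim et al. paper above) can be made concrete with a small worked example. The sketch below computes multiplicative Bayes (min-entropy) leakage for a tiny channel; the prior and channel matrix are made up for illustration and are not taken from either paper.

```python
# Multiplicative Bayes (min-entropy) leakage for a toy channel, a standard
# quantitative-information-flow quantity. C[x][y] = P(output y | secret x).
prior = [0.5, 0.5]                 # uniform prior over two secrets
C = [[0.8, 0.2],
     [0.3, 0.7]]

V_prior = max(prior)               # adversary's best one-shot guess a priori: 0.5
V_post = sum(max(prior[x] * C[x][y] for x in range(2))
             for y in range(2))    # best guess after each observable: 0.4 + 0.35
leakage = V_post / V_prior         # multiplicative leakage: 0.75 / 0.5 = 1.5
```

A leakage of 1.5 means observing the channel output makes the adversary's best guess 1.5 times more likely to succeed; additive leakage would instead take the difference V_post - V_prior.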


Chaohui Du; Guoqiang Bai, “Attacks on Physically-Embedded Data Encryption for Embedded Devices,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 967, 972, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.128
Abstract: Data encryption is the primary method to protect embedded devices in hostile environments. The security of traditional data encryption algorithms relies on keeping the keys secret, and they always require a lot of arithmetic and logical computations, which may not be suitable for area-critical or power-critical embedded devices. At TrustCom 2013, Hou et al. proposed to use a physical unclonable function (PUF) to build a novel physically-embedded data encryption (PEDE) for embedded devices. The PEDE is lightweight since all it does is xor-ing the plaintext with the output of a PUF. As the PUF is unique and unclonable, only the original physical device can decrypt the cipher text. Without possessing the original PEDE device, adversaries could not determine anything about the plaintext even if both the secret key and the cipher text are available to them. In this paper, we show that the existing PEDE architecture is sensitive to environmental variations, so the decrypted plaintext does not equal the original plaintext. Besides this lack of reliability, we also show that the existing PEDE architecture is vulnerable to known-plaintext and modeling attacks. To address these issues, we propose a secure and robust PEDE architecture.
Keywords: cryptography; PEDE architecture; arithmetic computations; cipher text; embedded devices; known-plaintext attack; logical computations; modeling attack; physically-embedded data encryption; secret key; Computer architecture; Delays; Encryption; Generators; Robustness; Embedded device; Encryption; Known-plaintext attack; Modeling attack; Physical effect; Physical unclonable function; Reliability; Security (ID#: 15-6323)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011354&isnumber=7011202
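The PEDE core described in the abstract (xor-ing plaintext with a PUF output) can be sketched as follows. The hash-based `toy_puf` is a hypothetical software stand-in: a real PUF derives its response from unclonable physical variation, not from a stored identifier, and the cited paper's attacks exploit exactly the gap between the two.

```python
import hashlib

def toy_puf(challenge: bytes, device_id: bytes) -> bytes:
    # Stand-in for a hardware PUF: deterministic per device in this sketch.
    return hashlib.sha256(device_id + challenge).digest()

def pede_xor(data: bytes, challenge: bytes, device_id: bytes) -> bytes:
    # PEDE core: xor data with the PUF response (the same op encrypts and
    # decrypts). Handles data up to one 32-byte digest here, for brevity.
    pad = toy_puf(challenge, device_id)
    return bytes(b ^ k for b, k in zip(data, pad))

ct = pede_xor(b"secret", b"chal-1", b"device-A")
assert pede_xor(ct, b"chal-1", b"device-A") == b"secret"   # same device decrypts
assert pede_xor(ct, b"chal-1", b"device-B") != b"secret"   # other device cannot
```

Because a real PUF's response drifts with temperature and voltage, the pad recomputed at decryption time may differ from the pad used at encryption time, which is the reliability problem the paper identifies.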
 




Biometric Encryption and Privacy, 2014

 
SoS Logo

Biometric Encryption and Privacy

2014


The use of biometric encryption to control access and authentication is well established. New concerns about privacy create new issues for biometric encryption, however. The increased use of Cloud architectures compounds the problem of providing continuous re-authentication. The research cited here examines these issues. All work was presented in 2014.



Omar, M.N.; Salleh, M.; Bakhtiari, M., “Biometric Encryption to Enhance Confidentiality in Cloud Computing,” Biometrics and Security Technologies (ISBAST), 2014 International Symposium on, vol., no., pp. 45, 50, 26-27 Aug. 2014. doi:10.1109/ISBAST.2014.7013092
Abstract: Virtualization is the base technology of Cloud computing; it enables the Cloud to provide hardware and software services to users on demand. Many companies migrate to Cloud computing for reasons such as processor capability, bus speed, storage and memory size, and the reduced cost of dedicated servers. However, virtualization and Cloud computing contain many security weaknesses that affect biometric data confidentiality in the Cloud, such as VM escape, hopping, mobility, and diversity monitoring. Furthermore, the privacy of a particular user is an issue in biometric data, e.g., the face recognition data of famous and important people. This paper therefore proposes biometric encryption to improve the confidentiality of biometric data in Cloud computing. The paper also discusses virtualization for Cloud computing as well as biometric encryption, reviews the security weaknesses of Cloud computing, and shows how biometric encryption can improve confidentiality in the Cloud computing environment.
Keywords: biometrics (access control); cloud computing; cryptography; virtualisation; VM ware; biometric data confidentiality; biometric encryption; cloud computing; face reorganization data; hardware services; software services; virtualization technology; Bioinformatics; Biometrics (access control); Cloud computing; Encryption; Hardware; Virtualization; Biometric Encryption; Cloud computing; Virtualization (ID#: 15-5994)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013092&isnumber=7013076

 

Valarmathi, R.; Sathiya Priya, S.S.; Kumar, P.K.; Sivamangai, N.M., “A Biometric Encryption Using Face Recognition System for Watch List,” Information Communication and Embedded Systems (ICICES), 2014 International Conference on, vol., no., pp. 1, 5, 27-28 Feb. 2014. doi:10.1109/ICICES.2014.7034018
Abstract: In recent years, it has become necessary to protect individual privacy from unauthenticated persons. This paper presents a biometric encryption scheme that uses a face recognition system to identify known and unsuspected persons in watch-list applications. Face recognition is performed with the eigenface approach via the PCA algorithm to simplify the authentication process; the eigenfaces are then secured by generating a cryptographic key (from an RNG) and binding that key to the facial images. Faces are accepted as known or unknown after verification against the present database.
Keywords: biometrics (access control); cryptography; data privacy; face recognition; principal component analysis; PCA algorithm; RNG; authentication process; biometric encryption; cryptographic key; eigen face approach; face recognition system; facial images; individual privacy protection; watch list; Databases; Encryption; Face; Face recognition; Feature extraction; Vectors; Biometric Encryption; Face Recognition; Privacy (ID#: 15-5995)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7034018&isnumber=7033740

 

Sharma, S.; Balasubramanian, V., “A Biometric Based Authentication and Encryption Framework for Sensor Health Data in Cloud,” Information Technology and Multimedia (ICIMU), 2014 International Conference on, vol., no., pp. 49, 54, 18-20 Nov. 2014. doi:10.1109/ICIMU.2014.7066602
Abstract: Use of a remote healthcare monitoring application (HMA) can not only enable a healthcare seeker to live a normal life while receiving treatment but also prevent critical healthcare situations through early intervention. For this to happen, the HMA has to provide continuous monitoring through sensors attached to the patient's body or in close proximity to the patient. Owing to the elastic nature of the cloud, the implementation of HMAs in the cloud has recently become an area of intense research. Although a cloud-based implementation provides scalability, the patient's health data is highly sensitive and requires a high level of privacy and security for cloud-based shared storage. In addition, protecting the real-time arrival of large volumes of sensor data from continuous monitoring of the patient poses a bigger challenge. In this work, we propose a self-protective security framework for our cloud-based HMA. Our framework (1) protects the sensor data in the cloud from unauthorized access and (2) self-protects the data, using biometrics, in case of breached access. The framework is detailed in the paper using mathematical formulations and algorithms.
Keywords: biometrics (access control); cloud computing; cryptography; data privacy; health care; medical information systems; message authentication; patient monitoring; biometric based authentication; cloud-based HMA; cloud-based shared storage; encryption framework; privacy; remote healthcare monitoring application; self-protective security framework; sensor health data; Authentication; Bismuth; Encryption; Fingerprint recognition; Fingers; Medical services; Monitoring; Biometric; Biosensor; Cloud; Data Protection; Healthcare; Sensor data; self-protective (ID#: 15-5996)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7066602&isnumber=7066586

 

Sedenka, J.; Balagani, K.S.; Phoha, V.; Gasti, P., “Privacy-Preserving Population-Enhanced Biometric Key Generation from Free-Text Keystroke Dynamics,” Biometrics (IJCB), 2014 IEEE International Joint Conference on, vol., no., pp. 1, 8, Sept. 29 2014–Oct. 2 2014. doi:10.1109/BTAS.2014.6996244
Abstract: Biometric key generation techniques are used to reliably generate cryptographic material from biometric signals. Existing constructions require users to perform a particular activity (e.g., type or say a password, or provide a handwritten signature), and are therefore not suitable for generating keys continuously. In this paper we present a new technique for biometric key generation from free-text keystroke dynamics. This is the first technique suitable for continuous key generation. Our approach is based on a scaled parity code for key generation (and subsequent key reconstruction), and can be augmented with the use of population data to improve security and reduce key reconstruction error. In particular, we rely on linear discriminant analysis (LDA) to obtain a better representation of discriminable biometric signals. To update the LDA matrix without disclosing user's biometric information, we design a provably secure privacy-preserving protocol (PP-LDA) based on homomorphic encryption. Our biometric key generation with PP-LDA was evaluated on a dataset of 486 users. We report equal error rate around 5% when using LDA, and below 7% without LDA.
Keywords: biometrics (access control); cryptographic protocols; private key cryptography; LDA matrix update; PP-LDA; continuous key generation; cryptographic material generation; discriminable biometric signal representation; free-text keystroke dynamics; homomorphic encryption; key reconstruction; key reconstruction error reduction; linear discriminant analysis; population data; privacy-preserving population-enhanced biometric key generation; provably secure privacy-preserving protocol; scaled parity code; security improvement; user biometric information; Cryptography; Error correction codes; Feature extraction; Measurement; Protocols; Vectors (ID#: 15-5997)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6996244&isnumber=6996217

 

Abidin, A.; Mitrokotsa, A., “Security Aspects of Privacy-Preserving Biometric Authentication Based on Ideal Lattices and Ring-LWE,” Information Forensics and Security (WIFS), 2014 IEEE International Workshop on, vol., no., pp. 60, 65, 3-5 Dec. 2014. doi:10.1109/WIFS.2014.7084304
Abstract: In this paper, we study the security of two recently proposed privacy-preserving biometric authentication protocols that employ packed somewhat homomorphic encryption schemes based on ideal lattices and ring-LWE, respectively. These two schemes have the same structure and have distributed architecture consisting of three entities: a client server, a computation server, and an authentication server. We present a simple attack algorithm that enables a malicious computation server to learn the biometric templates in at most 2N-τ queries, where N is the bit-length of a biometric template and τ the authentication threshold. The main enabler of the attack is that a malicious computation server can send an encryption of the inner product of the target biometric template with a bitstring of his own choice, instead of the securely computed Hamming distance between the fresh and stored biometric templates. We also discuss possible countermeasures to mitigate the attack using private information retrieval and signatures of correct computation.
Keywords: biometrics (access control); client-server systems; cryptographic protocols; message authentication; Hamming distance; attack algorithm; authentication server; authentication threshold; client server; distributed architecture; homomorphic encryption scheme; malicious computation server; privacy-preserving biometric authentication protocol; private information retrieval; ring-LWE; security aspects; target biometric template; Authentication; Encryption; Protocols; Public key; Servers; Privacy-preserving biometric authentication; hill climbing attack; lattices; ring-LWE; somewhat homomorphic encryption (ID#: 15-5998)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7084304&isnumber=7084286
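The intuition behind the Abidin–Mitrokotsa attack can be shown in plaintext, with the cryptographic layer omitted. In the sketch below, a malicious computation server substitutes the inner product of the stored template with a mask of its own choice for the honest Hamming-distance computation; probing with unit-vector masks then reveals one template bit per query. (This simplified variant uses N queries; the paper's 2N-τ bound accounts for working within the authentication-threshold constraint. The template value is hypothetical.)

```python
# Secret stored template, known only to the honest protocol (hypothetical).
template = [1, 0, 1, 1, 0, 0, 1, 0]

def rigged_response(mask):
    # What the malicious computation server arranges to be revealed: the
    # inner product <template, mask> instead of the Hamming distance.
    return sum(t & m for t, m in zip(template, mask))

recovered = []
for i in range(len(template)):
    mask = [0] * len(template)
    mask[i] = 1                      # probe a single bit position
    recovered.append(rigged_response(mask))

assert recovered == template         # full template leaked, bit by bit
```

The countermeasures the paper discusses (private information retrieval, signatures of correct computation) aim to stop the server from substituting its own input into the verified computation.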

 

Barman, S.; Chattopadhyay, S.; Samanta, D., “An Approach to Cryptographic Key Distribution through Fingerprint Based Key Distribution Center,” Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on, vol., no., pp. 1629, 1635, 24-27 Sept. 2014. doi:10.1109/ICACCI.2014.6968299
Abstract: In information and communication technology, security of information is provided with cryptography. In cryptography, key management is an important part of the whole system as the security lies on secrecy of cryptographic key. Symmetric cryptography uses same key (secret key) for message encryption as well as cipher text decryption. Distribution of the secret key is the main challenge in symmetric cryptography. In symmetric cryptography, key distribution center (KDC) takes the responsibility to distribute the secret key between the communicating parties to establish a secure communication among them. In the traditional KDC, a unique key is used between communicating parties for the purpose of distributing session keys. In this respect, our proposed approach uses fingerprint biometrics of communicating parties for the purpose of unique key generation and distribute session key with the fingerprint based key of user. As the key is generated from fingerprint of user, there is no scope of attacks to break the unique key. In this way, the unique key is associated with biometric data of communicating party and the key is not need to remember by that party. This approach converts the knowledge based authentication to biometric based authentication of KDC. At the same time, our approach protects the privacy of fingerprint identity as the identity of user is not disclosed even when the KDC is compromised.
Keywords: cryptography; data privacy; fingerprint identification; message authentication; biometric based authentication; cipher text decryption; cryptographic key distribution; fingerprint based key distribution center; fingerprint biometrics; fingerprint identity privacy; information security; key management; knowledge based authentication; message encryption; secret key distribution; symmetric cryptography; Bioinformatics; Biometrics (access control); Cryptography; Feature extraction; Fingerprint recognition; Image matching; Vectors; Cryptographic key; Cryptography; Fingerprint; Fingerprint based cryptographic key; Key Distribution Center; Secret key (ID#: 15-5999)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6968299&isnumber=6968191

 

Yuan Tian; Al-Rodhaan, M.; Biao Song; Al-Dhelaan, A.; Ting Huai Ma, “Somewhat Homomorphic Cryptography for Matrix Multiplication Using GPU Acceleration,” Biometrics and Security Technologies (ISBAST), 2014 International Symposium on, vol., no., pp. 166, 170, 26-27 Aug. 2014. doi:10.1109/ISBAST.2014.7013115
Abstract: Homomorphic encryption has become a popular research topic since the cloud computing paradigm emerged. This paper discusses the design of a GPU-assisted homomorphic cryptosystem for matrix operations. Our proposed scheme is based on an n*n matrix multiplication which is computationally homomorphic. We use a more efficient GPU programming scheme with an extension of DGHV homomorphism, and prove that the result of verification does not leak any information about the inputs or the output during encryption and decryption. The performance results are obtained from executions on a machine equipped with a GeForce GTX 765M GPU. We use three basic parallel algorithms to form efficient solutions which accelerate the speed of encryption and evaluation. Although fully homomorphic encryption is still not practical for real-world applications at the current stage, this work shows the possibility of improving the performance of homomorphic encryption and brings this target one step closer.
Keywords: cryptography; graphics processing units; matrix multiplication; parallel algorithms; DGHV homomorphism; GPU acceleration; GPU programming; GeForce GTX 765M GPU; decryption; homomorphic cryptography; matrix multiplication; parallel algorithms; Acceleration; Educational institutions; Encryption; Graphics processing units; Public key; Cloud; Cryptography; GPU; Homomorphic encryption; Matrix multiplication; Privacy; Security (ID#: 15-6000)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013115&isnumber=7013076

 

Lakhera, M., “Enhancing Security of Stored Biometric Data,” Computational Intelligence on Power, Energy and Controls with their impact on Humanity (CIPECH), 2014 Innovative Applications of, vol., no., pp. 515, 518, 28-29 Nov. 2014. doi:10.1109/CIPECH.2014.7019043
Abstract: A biometric system is vulnerable to different types of attacks aimed at undermining the reliability of the verification process. These attacks exploit either impersonation or irrevocability. Impersonation means that information stored in the database can be abused to construct an artificial biometric and substitute it for fake authentication; irrevocability means that once compromised, a biometric cannot be updated, reissued, or destroyed. In this paper we present a general architecture, based on digital signatures, that guarantees privacy protection of biometric data. We specifically focus on securing biometric data at the time of authentication and storage.
Keywords: biometrics (access control); data privacy; security of data; artificial biometric; biometric data security; digital signature; privacy protection; verification process reliability; Bioinformatics; Biometrics (access control); Data mining; Databases; Feature extraction; Receivers; Security; Encryption; Password; Private Key; Public Key; Verification (ID#: 15-6001)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019043&isnumber=7018206

 

Bissessar, D.; Adams, C.; Dong Liu, “Using Biometric Key Commitments to Prevent Unauthorized Lending of Cryptographic Credentials,” Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, vol., no., pp. 75, 83, 23-24 July 2014. doi:10.1109/PST.2014.6890926
Abstract: We present a technique that uses privacy enhancing technologies and biometrics to prevent the unauthorized lending of credentials. Current credential schemes suffer the weakness that issued credentials can be transferred between users. Our technique ensures the biometric identity of the individual executing the Issue and Show protocols of an existing credential system in a manner analogous to the enrollment and verification steps in traditional biometric systems. During Issue we create Pedersen commitments on biometrically derived keys obtained from fuzzy extractors. This issue-time commitment is sealed into the issued credential. During Show a verification-time commitment is generated. Correspondence of keys is verified using a zero-knowledge proof of knowledge. The proposed approach preserves the security of the underlying credential system, protects the privacy of the biometric, and generalizes to multiple biometric modalities. We illustrate the usage of our technique by showing how it can be incorporated into digital credentials and anonymous credentials.
Keywords: cryptography; data privacy; Pedersen commitments; anonymous credentials; biometric identity; biometric key commitments; biometric modalities; credential schemes; credential system; cryptographic credentials; digital credentials; fuzzy extractors; issue protocol; issue-time commitment;  show protocol; Data mining; Encryption; Measurement; Privacy; Protocols; biometrics; non-transferability; privacy enhancing technologies (ID#: 15-6002)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890926&isnumber=6890911
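The Pedersen commitments this paper seals into issued credentials can be sketched in a few lines. The group parameters below are toy values chosen for illustration (a real deployment uses standardized large-prime groups and generators whose relative discrete log is unknown); the "biometrically derived key" is a stand-in for a fuzzy-extractor output.

```python
import secrets

P = 2**127 - 1      # Mersenne prime modulus (toy size, illustration only)
G, H = 5, 7         # hypothetical generators; H's dlog base G must be unknown

def commit(message: int, blinding: int) -> int:
    # Pedersen commitment C = g^m * h^r mod p: hiding (r is random) and
    # binding (opening to a different m would require a discrete log).
    return pow(G, message, P) * pow(H, blinding, P) % P

def verify(com: int, message: int, blinding: int) -> bool:
    return com == commit(message, blinding)

key_digest = 1234567890              # stand-in for a biometrically derived key
r = secrets.randbelow(P - 1)         # fresh blinding factor
c = commit(key_digest, r)
assert verify(c, key_digest, r)          # opens correctly at Show time
assert not verify(c, key_digest + 1, r)  # cannot open to another key
```

In the paper's protocol the issue-time and show-time commitments are never opened directly; equality of the underlying keys is instead established with a zero-knowledge proof of knowledge.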

 

Uluagac, A.S.; Wenyi Liu; Beyah, R., “A Multi-Factor Re-Authentication Framework with User Privacy,” Communications and Network Security (CNS), 2014 IEEE Conference on, vol., no., pp. 504, 505, 29-31 Oct. 2014. doi:10.1109/CNS.2014.6997526
Abstract: Continuous re-authentication of users is a must to protect long-duration connections against any malicious activity. Users can be re-authenticated in numerous ways. One popular way is an approach that requires the presentation of two or more authentication factors (i.e., knowledge, possession, identity), called multi-factor authentication (MFA). Given the market dominance of ubiquitous computing systems (e.g., cloud), MFA systems have become vital in re-authenticating users. The knowledge factor (i.e., passwords) is the most ubiquitous authentication factor; however, forcing a user to re-enter the primary factor, a password, at frequent intervals could significantly lower the usability of the system. Unfortunately, an MFA system with a possession factor (e.g., security tokens) usually depends on the distribution of some specific device, which is cumbersome and not user-friendly. Similarly, MFA systems with an identity factor (e.g., physiological biometrics, keystroke patterns) suffer from relatively low deployability, are highly intrusive, and expose users' sensitive information to untrusted servers. These servers can keep physically identifying elements of users long after the user ends the relationship with the server. To address these concerns, in this poster, we introduce our initial design of a privacy-preserving multi-factor re-authentication framework. The first factor is a password while the second factor is a hybrid profile of user behavior with a large combination of host- and network-based features. Our initial results are very promising, as our framework can successfully validate legitimate users while detecting impostors.
Keywords: authorisation; cryptography; data privacy; MFA system; authentication factor; knowledge factor; possession factor; privacy-preserving multifactor re-authentication framework; ubiquitous computing system; user privacy; Authentication; Cloud computing; Educational institutions; Encryption; Privacy; Servers; Usability; Fully Homomorphic Encryption; Fuzzy Hashing; Privacy-Preserving Reauthentication; Re-authentication in Cloud (ID#: 15-6003)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6997526&isnumber=6997445

 

Al-Jaberi, M.F.; Zainal, A., “Data Integrity and Privacy Model in Cloud Computing,” Biometrics and Security Technologies (ISBAST), 2014 International Symposium on, vol., no., pp. 280, 284, 26-27 Aug. 2014. doi:10.1109/ISBAST.2014.7013135
Abstract: Cloud computing is the future of the computing industry and is believed to be the next generation of computing technology. Among the major concerns in cloud computing are data integrity and privacy. Clients require their data to be safe and private from any tampering or unauthorized access. Various algorithms and protocols (MD5, AES, and RSA-based PHE) are implemented by the components of this model to provide the maximum levels of integrity management and privacy preservation for data stored in a public cloud such as Amazon S3. The impact of the algorithms and protocols used to ensure data integrity and privacy is studied to test the performance of the proposed model. The prototype system showed that data integrity and privacy are ensured against unauthorized parties. The model reduces the burden of checking the integrity of data stored in cloud storage by utilizing a third-party integrity-checking service, and applies security mechanisms that ensure the privacy and confidentiality of data stored in cloud computing. This paper proposes an architecture-based model that provides data integrity verification and privacy preservation in cloud computing.
Keywords: authorisation; cloud computing; data integrity; data privacy; unauthorized access; Cloud computing; Computational modeling; Data models; Data privacy; Encryption; Amazon S3 (ID#: 15-6004)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013135&isnumber=7013076 
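The third-party integrity check the abstract describes reduces to comparing a stored digest against one recomputed over the cloud copy. A minimal sketch (using SHA-256 in place of the paper's MD5, since MD5 is collision-broken; the record contents are hypothetical):

```python
import hashlib

def digest(data: bytes) -> str:
    # Client keeps this reference digest; the integrity-checking service
    # recomputes it over the data fetched from cloud storage.
    return hashlib.sha256(data).hexdigest()

stored = b"patient record v1"
reference = digest(stored)           # retained by the client / checker

assert digest(stored) == reference                 # untampered copy passes
assert digest(b"patient record v2") != reference   # any tampering is detected
```

Delegating the recomputation to an independent checking service is what relieves the client of downloading and hashing its own data, at the cost of trusting that service with read access.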
 




 

Compositional Security, 2014

 

 
SoS Logo

Compositional Security

2014


The Hard Problem of composability and scalability remains a focus of the four Science of Security Lablets. However, aside from them, there was relatively little research reported in 2014 in this area.



Rafnsson, W.; Sabelfeld, A., “Compositional Information-Flow Security for Interactive Systems,” Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, vol., no., pp. 277, 292, 19-22 July 2014. doi:10.1109/CSF.2014.27
Abstract: To achieve end-to-end security in a system built from parts, it is important to ensure that the composition of secure components is itself secure. This work investigates the compositionality of two popular conditions of possibilistic noninterference. The first condition, progress-insensitive noninterference (PINI), is the security condition enforced by practical tools like JSFlow, Paragon, sequential LIO, Jif, Flow Caml, and SPARK Examiner. We show that this condition is not preserved under fair parallel composition: composing a PINI system fairly with another PINI system can yield an insecure system. We explore constraints that allow recovering compositionality for PINI. Further, we develop a theory of compositional reasoning. In contrast to PINI, we show that the second condition, progress-sensitive noninterference (PSNI), behaves well under composition, with and without fairness assumptions. Our work is performed within a general framework for nondeterministic interactive systems.
Keywords: interactive systems; security of data; Flow Caml; JSFlow; Jif; PINI system; Paragon; SPARK Examiner; compositional information-flow security; compositional reasoning; end-to-end security; nondeterministic interactive systems; parallel composition; possibilistic noninterference; progress-insensitive noninterference; sequential LIO; Cognition; Computational modeling; Interactive systems; Security; Semantics; Sensitivity; Synchronization (ID#: 15-6038)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957117&isnumber=6957090

 

Cong Sun; Ning Xi; Jinku Li; Qingsong Yao; Jianfeng Ma, “Verifying Secure Interface Composition for Component-Based System Designs,” Software Engineering Conference (APSEC), 2014 21st Asia-Pacific , vol.1, no., pp. 359, 366, 1-4 Dec. 2014. doi:10.1109/APSEC.2014.60
Abstract: Information flow security has been considered a critical requirement on software systems, especially when heterogeneous components from different parties cooperate to achieve end-to-end enforcement of data confidentiality. Enforcing information flow security properties on complicated systems faces a great challenge because the properties cannot be preserved under composition, and most current approaches are not scalable enough. To address this problem, there have been several recent efforts on compositional information flow analyses developed for different abstraction levels, but these approaches have rarely been incorporated into the process of system design. Integrating security enforcement with the model-based development process can give the designer the ability to verify information flow security in the early stages of system development. We propose a compositional information flow verification which is integrated with model-based system design in SysML via an automated model translation from semi-formal behavior and structure models to interface automata. Our compositional approach is general enough to support complex security lattices and a variety of indistinguishability relations. The evaluation results show the usability of our approach on practical system designs and the scalability of the compositional verification.
Keywords: automata theory; object-oriented programming; security of data; software architecture; SysML; abstraction levels; automated model translation; complex security lattices; component-based system designs; compositional information flow analyses; data confidentiality; heterogeneous components; information flow security; interface automata; model-based development process; model-based system design; secure interface composition verification; security enforcement; semiformal behavior; software systems; structure models; Artificial intelligence; Automata; Component architectures; Lattices; Modeling; Security; Unified modeling language; component-based design; information flow; interface automata; model translation; model-based development; noninterference; systems modeling language (ID#: 15-6039)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7091331&isnumber=7091273

 

Alzubaidi, W.Kh.; Longzheng Cai; Alyawer, S.A., “Enhance the Performance of ICMP Protocol by Reduction the IP over Ethernet Naming Architecture,” Computer and Information Sciences (ICCOINS), 2014 International Conference on, vol., no., pp. 1, 6, 3-5 June 2014. doi:10.1109/ICCOINS.2014.6868392
Abstract: This study addresses the Internet Control Message Protocol (ICMP) performance problems arising in Internet protocol (IP) over Ethernet networks. These problems arise because of the compatibility issue between two different compositional protocols: the IP and Ethernet protocol. The motivation behind addressing the compatibility problem is to improve the security and performance of networks by studying the link compatibility between the Ethernet and IP protocols. The findings of this study have given rise to proposals for modifications. A reduction in the current naming architecture design is advocated to improve the performance and security of IP over Ethernet networks. The use of the IP address, as one flat address for the naming architecture, is proposed instead of using both the IP and Media Access Control (MAC) addresses. The proposed architecture is evaluated through a simulated cancellation of the use of the Address Resolution Protocol (ARP) protocol.
Keywords: IP networks; computer network performance evaluation; computer network security; local area networks; transport protocols; ARP protocol; Ethernet naming architecture; Ethernet protocol; ICMP performance problems; ICMP protocol performance; IP address; IP over Ethernet networks; IP protocols; Internet control message protocol; address resolution protocol; compatibility problem; compositional protocols; flat address; link compatibility; naming architecture design; network performance; security; Computer architecture; Computers; Ethernet networks; IP networks; Protocols; Security; Unicast; Ethernet Networks; ICMP; IP; MAC; Naming Architecture; Performance (ID#: 15-6040)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868392&isnumber=6868339

 

Modersheim, S.; Katsoris, G., “A Sound Abstraction of the Parsing Problem,” Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, vol., no., pp. 259, 273, 19-22 July 2014. doi:10.1109/CSF.2014.26
Abstract: In formal verification, cryptographic messages are often represented by algebraic terms. This abstracts not only from the intricate details of the real cryptography, but also from the details of the non-cryptographic aspects: the actual formatting and structuring of messages. We introduce a new algebraic model to include these details and define a small, simple language to precisely describe message formats. We support fixed-length fields, variable-length fields with offsets, tags, and encodings into smaller alphabets like Base64, thereby covering both classical formats as in TLS and modern XML-based formats. We define two reasonable properties for a set of formats used in a protocol suite. First, each format should be unambiguous: any string can be parsed in at most one way. Second, the formats should be pairwise disjoint: a string can be parsed as at most one of the formats. We show how to easily establish these properties for many practical formats. By replacing the formats with free function symbols we obtain an abstract model that is compatible with all existing verification tools. We prove that the abstraction is sound for unambiguous, disjoint formats: there is an attack in the concrete message model if there is one in the abstract message model. Finally we present highlights of a practical case study on TLS.
Keywords: XML; cryptography; formal verification; grammars; program compilers; Base64; TLS; XML-based formats; abstract model; algebraic model; algebraic terms; concrete message model; cryptographic messages; fixed-length fields; formal verification; message formats; noncryptographic aspects; parsing problem; sound abstraction; variable-length fields; Abstracts; Algebra; Encoding; Encryption; Law; Protocols; Security protocols; compositional reasoning; formal verification; message formats; soundness (ID#: 15-6041)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957116&isnumber=6957090
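The two format properties the paper establishes, unambiguity and pairwise disjointness, can be illustrated with a toy tag-length-value encoding. This is a hypothetical sketch, not the paper's format language: the tag and length layout here are invented for demonstration.

```python
# Toy illustration of the two format properties from the paper:
# unambiguity (any string parses in at most one way) and pairwise
# disjointness (a string matches at most one format). Formats here
# are simple tag||length||payload encodings, chosen for illustration.

def encode(tag: int, payload: bytes) -> bytes:
    # fixed 1-byte tag, 1-byte length field, then the payload
    assert 0 <= len(payload) < 256
    return bytes([tag, len(payload)]) + payload

def parse(tag: int, msg: bytes):
    # returns the payload if msg is well-formed for this format,
    # else None; the explicit length field makes parsing unambiguous
    if len(msg) < 2 or msg[0] != tag:
        return None
    if msg[1] != len(msg) - 2:
        return None
    return msg[2:]

# two formats with distinct tags are pairwise disjoint:
# no string parses under both
m = encode(0x01, b"client-hello")
assert parse(0x01, m) == b"client-hello"
assert parse(0x02, m) is None
```

Because every byte of the message is accounted for by the tag, the length field, and the payload, at most one parse exists per format, which is exactly the precondition the paper's soundness result requires.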

 

Tzimiropoulos, G.; Alabort-i-Medina, J.; Zafeiriou, S.P.; Pantic, M., “Active Orientation Models for Face Alignment In-the-Wild,” Information Forensics and Security, IEEE Transactions on, vol. 9, no. 12, pp. 2024, 2034, Dec. 2014. doi:10.1109/TIFS.2014.2361018
Abstract: We present Active Orientation Models (AOMs), generative models of facial shape and appearance, which extend the well-known paradigm of Active Appearance Models (AAMs) for the case of generic face alignment under unconstrained conditions. Robustness stems from the fact that the proposed AOMs employ a statistically robust appearance model based on the principal components of image gradient orientations. We show that when incorporated within standard optimization frameworks for AAM learning and fitting, this kernel Principal Component Analysis results in robust algorithms for model fitting. At the same time, the resulting optimization problems maintain the same computational cost. As a result, the main similarity of AOMs with AAMs is the computational complexity. In particular, the project-out version of AOMs is as computationally efficient as the standard project-out inverse compositional algorithm, which is admittedly one of the fastest algorithms for fitting AAMs. We verify experimentally that: 1) AOMs generalize well to unseen variations and 2) outperform all other state-of-the-art AAM methods considered by a large margin. This performance improvement brings AOMs at least on par with other contemporary methods for face alignment. Finally, we provide MATLAB code at http://ibug.doc.ic.ac.uk/resources.
Keywords: computational complexity; face recognition; optimisation; principal component analysis; AAM learning; AAMs; AOMs; MATLAB code; active appearance models; active orientation models; computational complexity; computational cost; face alignment in-the-wild; facial shape; generative models; generic face alignment; image gradient orientations; kernel principal component analysis; model fitting; optimization frameworks; project-out inverse compositional algorithm; unconstrained conditions; Active appearance model; Deformable models; Face; Principal component analysis; Robustness; Shape; Active Appearance Models; Active Orientation Models; Active orientation models; Face alignment; active appearance models; face alignment (ID#: 15-6042)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6914605&isnumber=6953163

 

Litian Xiao; Mengyuan Li; Ming Gu; Jiaguang Sun, “A Hierarchy Framework on Compositional Verification for PLC Software,” Software Engineering and Service Science (ICSESS), 2014 5th IEEE International Conference on, vol., no., pp. 204, 207, 27-29 June 2014. doi:10.1109/ICSESS.2014.6933545
Abstract: The correctness verification of embedded control software has become an important research topic in the embedded system field. The paper analyses the present situation of correctness verification of control software as well as the limitations of existing technologies. To meet the high reliability and high security requirements of control software, the paper proposes a hierarchical framework and architecture for control software (PLC program) verification. The framework combines the technologies of testing, model checking and theorem proving. The paper introduces the construction, flow and key elements of the architecture.
Keywords: control engineering computing; embedded systems; program verification; programmable controllers; theorem proving; PLC program verification; PLC software; compositional verification; control software verification; correctness verification; embedded control software; embedded system field; hierarchical framework; model checking; security requirements; theorem proving; Computer architecture; Computer bugs; Mathematical model; Model checking; Semantics; Software; PLC software; compositional verification; hierarchy framework; verification architecture (ID#: 15-6043)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933545&isnumber=6933501

 

Herber, P., “The RESCUE Approach —Towards Compositional Hardware/Software Co-verification,” High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, vol., no., pp. 721, 724, 20-22 Aug. 2014. doi:10.1109/HPCC.2014.109
Abstract: In the last decades, there has been a lot of work on formal verification techniques for embedded hardware/software systems. The main barrier to the application of these techniques in industrial practice is the state-space explosion problem, i.e., the lack of scalability of formal verification. To tackle this problem, we propose a modular verification framework that supports the whole design flow of embedded HW/SW systems, combining a variety of verification techniques ranging from formal hardware verification over software verification to system verification. We target the system level design language SystemC, which has become the de facto standard in HW/SW co-design, but severely lacks support for automated and comprehensive verification. To achieve a modular and automatable verification flow, we start with a definition of an intermediate representation for SystemC (SysCIR). Then, we process the SysCIR by a set of modular engines. First, we aim at developing innovative slicing and abstraction engines, which significantly reduce the semantic state space. Second, we aim at providing a set of transformation engines that target a variety of verification tools. In particular, we combine hardware, software and system verification techniques in order to cope with the different models of computation inherently intertwined in embedded HW/SW systems.
Keywords: C language; embedded systems; formal verification; hardware-software codesign; state-space methods; HW/SW codesign; RESCUE approach; SysCIR; SystemC; abstraction engine; automatable verification flow; automated verification; comprehensive verification; de facto standard; design flow; embedded HW/SW system; embedded hardware/software system; formal hardware verification; formal verification technique; hardware/software co-verification; industrial application; innovative slicing; intermediate representation; lacking scalability; modular engine; modular verification framework; software verification; state-space explosion problem; system level design language; system verification technique; transformation engine; verification tool; Computational modeling; Embedded systems; Engines; Hardware; Model checking; Semantics; Formal Verification; Hardware/Software Co-Design (ID#: 15-6044)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056823&isnumber=7056577
 




Consumer Privacy in the Smart Grid, 2014

 

 
SoS Logo

Consumer Privacy in the Smart Grid

2014


Concerns about consumer privacy and electric power usage have affected utilities' fielding of smart meters. Research on securing power meter readings while protecting consumer privacy is designed to help alleviate those concerns. The research presented here was published in 2014.



Kumar, V.; Hussain, M., “Secure Communication for Advance Metering Infrastructure in Smart Grid,” India Conference (INDICON), 2014 Annual IEEE, vol., no., pp. 1, 6, 11-13 Dec. 2014.  doi:10.1109/INDICON.2014.7030600
Abstract: The electrical power industry is in the process of integration with a bidirectional information and power flow infrastructure commonly called the smart grid. Advanced metering infrastructure (AMI) is an important component of the smart grid in which data and signals are transferred from the consumer smart meter to the smart grid and vice versa. Cyber security must be considered before implementing AMI applications. To deliver smart meter data and manage messages securely, a unique security mechanism is needed to ensure the integration of availability and privacy. In such security mechanisms, the cryptographic overhead, including certificates and signatures, is quite significant for an embedded device like a smart meter in smart grid AMI compared to normal personal computers in a regular enterprise network. Additionally, cryptographic operations contribute significant computational cost when the recipient end verifies the message in each communication. We propose a lightweight and flexible protocol for secure communication between smart meters and the smart grid infrastructure. The proposed protocol authenticates both the control center and the smart meter and also securely exchanges a secret key (session key) between the two entities for secure communication between them. The proposed protocol helps mitigate several types of attacks on the smart grid by identifying the origin of attacks against AMI. The proposed protocol was tested for security and no attack was found. Its performance is also found to be better than the existing mechanism.
Keywords: cryptographic protocols; data communication; electricity supply industry; load flow; power engineering computing; private key cryptography; smart meters; smart power grids; advance metering infrastructure; attack mitigation; bidirectional information infrastructure; consumer smart meter; cryptographic overhead; cyber security; data management; electrical power industry; flexible protocol; message management; power flow infrastructure; secret key exchange; secure communication; smart grid AMI; unique security mechanism; Authentication; Protocols; Public key; Smart grids; Smart meters; AMI; smart grid; smart meter (ID#: 15-6359)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7030600&isnumber=7030354
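The kind of lightweight mutual authentication and session-key exchange the paper targets can be sketched with a pre-shared-key challenge-response. This is a hypothetical illustration, not the paper's actual protocol: the key name, message layout, and derivation label are assumptions.

```python
import hmac, hashlib, os

# Hypothetical sketch: a smart meter and the control center each prove
# knowledge of a pre-shared key via an HMAC over fresh nonces, then
# derive a shared session key. All names here are illustrative.
PSK = b"pre-shared-meter-key"

def auth_response(nonce_peer: bytes, nonce_own: bytes) -> bytes:
    # proves possession of the PSK without ever transmitting it
    return hmac.new(PSK, nonce_peer + nonce_own, hashlib.sha256).digest()

def session_key(nonce_meter: bytes, nonce_center: bytes) -> bytes:
    # both ends derive the same session key from the exchanged nonces
    return hmac.new(PSK, b"session" + nonce_meter + nonce_center,
                    hashlib.sha256).digest()

n_meter, n_center = os.urandom(16), os.urandom(16)
# a party without the PSK cannot forge a valid response
forged = hmac.new(b"wrong-key", n_center + n_meter, hashlib.sha256).digest()
assert not hmac.compare_digest(auth_response(n_center, n_meter), forged)
# both entities independently derive the same session key
assert session_key(n_meter, n_center) == session_key(n_meter, n_center)
```

Symmetric-key HMAC avoids the certificate and signature overhead the abstract identifies as costly for embedded meters, which is the design pressure the paper describes.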

 

Lei Yang; Hao Xue; Fengjun Li, “Privacy-Preserving Data Sharing in Smart Grid Systems,” Smart Grid Communications (SmartGridComm), 2014 IEEE International Conference on, vol., no., pp. 878, 883, 3-6 Nov. 2014. doi:10.1109/SmartGridComm.2014.7007759
Abstract: The smart grid systems aim to integrate conventional power grids with modern information communication technology. While intensive research efforts have been focused on ensuring data correctness in AMI data collection and protecting data confidentiality in smart grid communications, less effort has been devoted to privacy protection in smart grid data management and sharing. In smart grid data management, the Advanced Metering Infrastructure (AMI) collects high-frequency energy consumption data, which often contains rich inhabitant and lifestyle information about the end consumers. The data is often shared with various stakeholders, such as the generators, distributors and marketers. However, the utility may not have consent of the users to share potentially sensitive data. In this paper, we develop comprehensive mechanisms to enable privacy-preserving smart data management. First, we analyze the privacy threats and consumer identifiability issues associated with high-frequency AMI data. We then present the first solution based on data sanitization, which eliminates sensitive/identifiable information before sharing usage data with external peers. Meanwhile, we present solutions based on secure multi-party computing to enable external peers to perform aggregate/statistical operations on original metering data in a privacy-preserving manner. Experiments on real-world consumption data demonstrate the validity and effectiveness of the proposed solutions.
Keywords: power engineering computing; power system measurement; smart power grids; AMI data collection; advanced metering infrastructure; consumer identifiability; data confidentiality; data sanitization; energy consumption data; information communication technology; power grids; privacy-preserving data sharing; smart grid communications; smart grid data management; smart grid systems; Aggregates; Data privacy; Energy consumption; Privacy; Servers; Smart grids; Smart meters (ID#: 15-6360)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7007759&isnumber=7007609
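The data-sanitization idea, stripping identifiable fields and coarsening high-frequency data before sharing usage data with external peers, can be sketched as follows. The field names and the one-hour granularity are assumptions for illustration, not the paper's scheme.

```python
def sanitize(record: dict) -> dict:
    # drop identifying fields and coarsen the timestamp before sharing
    # a usage record with external peers (field names are hypothetical)
    shared = {k: v for k, v in record.items()
              if k not in ("meter_id", "address")}
    # reduce high-frequency timestamps to hourly buckets, limiting
    # the lifestyle inference possible from fine-grained traces
    shared["hour"] = record["timestamp"] // 3600
    del shared["timestamp"]
    return shared

rec = {"meter_id": "M-042", "address": "10 Elm St",
       "timestamp": 7265, "kwh": 1.8}
out = sanitize(rec)
assert "meter_id" not in out and "address" not in out
assert out == {"hour": 2, "kwh": 1.8}
```

Aggregate statistics over the sanitized records remain usable by distributors and marketers, while the identifying detail never leaves the utility, which matches the split the abstract describes between sanitization and secure multi-party computation.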

 

Sook-Chin Yip; KokSheik Wong; Phan, R.C.-W.; Su-Wei Tan; Ku, I.; Wooi-Ping Hew, “A Privacy-Preserving and Cheat-Resilient Electricity Consumption Reporting Scheme for Smart Grids,” Computer, Information and Telecommunication Systems (CITS), 2014 International Conference on, vol., no., pp. 1, 5, 7-9 July 2014. doi:10.1109/CITS.2014.6878971
Abstract: One of the significant benefits of smart grid as compared to conventional power grid is the capability to collect fine-grained data from each smart meter remotely, thereby enabling utility provider to balance load effectively and offer time-adaptive pricing schemes. Nevertheless, the ability to read fine-granular measurements constitutes a serious privacy threat to the consumers. Therefore, it is crucial to consider the privacy issues in smart grid in order to preserve consumers' privacy and protect data integrity. In this paper, we propose a Privacy-Preserving and Cheat-Resilient (PPCR) electricity consumption reporting scheme for smart grid communication. PPCR adopts incremental hash function to conceal consumers' energy usage data from unauthorized parties, as well as the utility companies. The proposed scheme enables utility provider to perform data integrity check without disrupting smart grid services such as load management and billing. Security analysis is conducted to demonstrate that PPCR withstands malicious operations, preserves consumers' privacy and is robust to adversaries' cheating and smart meters malfunction.
Keywords: cryptography; data integrity; data protection; power consumption; power engineering computing; power system measurement; power system security; pricing; resource allocation; smart power grids; PPCR scheme; consumer energy usage data; consumer privacy preservation; data integrity check; data integrity protection; fine-grained data collection; fine-granular measurements; incremental hash function; load balancing; privacy-preserving and cheat-resilient electricity consumption reporting scheme; security analysis; smart grid communication; smart meter malfunction; smart power grids; time-adaptive pricing schemes; utility provider; Companies; Cryptography; Electricity; Privacy; Smart grids; Smart meters; cheat-resilient; integrity; privacy; smart grid (ID#: 15-6361)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6878971&isnumber=6878950
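The role an incremental hash plays in integrity checking can be sketched with a simple hash chain over successive readings: each digest commits to all earlier ones, so tampering anywhere invalidates every later checkpoint. This is a toy sketch under an assumed shared seed; PPCR's actual construction is not reproduced here.

```python
import hashlib

def chain(readings, seed: bytes = b"shared-secret") -> bytes:
    # incremental hash over successive meter readings: digest i
    # depends on digest i-1, so the utility can check integrity of
    # the whole series from the final value alone (toy sketch; the
    # seed stands in for whatever secret the scheme would share)
    h = seed
    for r in readings:
        h = hashlib.sha256(h + repr(r).encode()).digest()
    return h

readings = [3.2, 3.4, 2.9]
assert chain(readings) == chain(list(readings))          # deterministic
assert chain(readings) != chain([3.2, 3.5, 2.9])         # tamper detected
```

Because verification touches only hashes, never the plaintext readings held by unauthorized parties, billing and load management can proceed without exposing fine-grained usage, which is the property the abstract claims.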

 

Alharbi, K.; Xiaodong Lin; Jun Shao, “A Framework for Privacy-Preserving Data Sharing in the Smart Grid,” Communications in China (ICCC), 2014 IEEE/CIC International Conference on, vol., no., pp. 214, 219, 13-15 Oct. 2014. doi:10.1109/ICCChina.2014.7008274
Abstract: Distributed energy resources, featuring small-scale power generation technologies and renewable energy sources, have been considered a necessary supplement to the smart grid. In order for the merged power grid to remain smart, the data generated at the consumer side should be shared among the energy resources. However, this approach raises difficulties in protecting consumers' privacy. To deal with the problem, in this paper, we propose a new framework for data sharing in the smart grid using a combination of homomorphic encryption and proxy re-encryption techniques. The proposed framework allows the energy resources to analyze the consumers' data while preserving the consumers' privacy. Another good property of our proposed framework is that the consumers' data is transmitted over the smart grid only once. To the best of our knowledge, our framework is the first attempt to address this important data-sharing problem in the smart grid.
Keywords: cryptography; data privacy; electric power generation; power system analysis computing; power system security; renewable energy sources; smart power grids; distributed energy resources; homomorphic encryption technique; power grid; privacy-preserving data sharing; proxy reencryption technique; renewable energy sources; small-scale power generation technologies; smart grid; Electricity; Encryption; Energy resources; Public key; Servers; Smart grids (ID#: 15-6362)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7008274&isnumber=7008220
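The additively homomorphic property such frameworks rely on can be demonstrated with a toy Paillier cryptosystem: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so an aggregator can total meter readings without decrypting any of them. The tiny primes below are for demonstration only; a real deployment would use 2048-bit moduli and a vetted library, and the paper's exact scheme (including its proxy re-encryption layer) is not reproduced here.

```python
import random
from math import gcd

# Toy Paillier: Enc(a) * Enc(b) mod n^2 decrypts to a + b.
p, q = 61, 53
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)  # valid because g = n + 1

def enc(m: int) -> int:
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n  # L(x) = (x - 1) / n
    return (L * mu) % n

a, b = 120, 345  # two meter readings, summed under encryption
assert dec(enc(a)) == a
assert dec((enc(a) * enc(b)) % n2) == a + b
```

This is why the framework can let energy resources compute over consumer data while the data itself stays encrypted end to end.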

 

Inshil Doh; Jiyoung Lim; Kijoon Chae, “Service Security for Smart Grid System,” Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2014 Eighth International Conference on, vol., no., pp. 427, 432, 2-4 July 2014. doi:10.1109/IMIS.2014.61
Abstract: One of the major application service areas of M2M (Machine-to-Machine) is the smart grid system. In smart grid systems, information and communications technology is used to gather and act on information related to the behaviors of consumers and suppliers. The technology aims to improve the efficiency, reliability, and economics of the production and distribution of electricity. To make the system more reliable, security is a very important issue. In this paper, we propose a secure system architecture and a mechanism to provide security for smart grid system robustness and reliability.
Keywords: mobile communication; power distribution economics; power distribution reliability; power system security; smart power grids; M2M; electricity distribution economics; electricity distribution efficiency; electricity distribution reliability; electricity production economics; electricity production efficiency; electricity production reliability; information-communication technology; machine-to-machine; service security; smart grid system reliability; smart grid system robustness; Encryption; Privacy; Servers; Smart grids; Smart meters; M2M; privacy; security; smart grid; system architecture (ID#: 15-6363)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975501&isnumber=6975399

 

Beussink, A.; Akkaya, K.; Senturk, I.F.; Mahmoud, M.M.E.A., “Preserving Consumer Privacy on IEEE 802.11s-Based Smart Grid AMI Networks Using Data Obfuscation,” Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, vol., no., pp. 658, 663, April 27 2014–May 2 2014. doi:10.1109/INFCOMW.2014.6849309
Abstract: While the newly envisioned Smart(er) Grid (SG) will result in a more efficient and reliable power grid, its use of fine-grained meter data has widely raised concerns of consumer privacy. In this paper, we propose to implement a data obfuscation approach to preserve consumer privacy and assess its feasibility on large-scale Advanced Metering Infrastructure (AMI) network built upon the new IEEE 802.11s wireless mesh standard. We first propose a secure obfuscation value distribution approach on this 802.11s-based wireless mesh network. Using obfuscation values provided via this approach, the meter readings are obfuscated to protect consumer privacy from eavesdroppers and the utility companies while preserving the utility companies' ability to use the data for state estimation. We assessed the impact of using this privacy approach on the data throughput and delay. Simulation results have shown that the impact of our approach on the network performance is acceptable.
Keywords: power system measurement; power system reliability; power system security; power system state estimation; smart power grids; wireless mesh networks; IEEE 802.11s wireless mesh standard; IEEE 802.11s-based smart grid AMI network; SG; advanced metering infrastructure; consumer privacy preservation; fine-grained meter data; power grid reliability; secure data obfuscation value distribution approach; state estimation; Companies; Data privacy; Logic gates; Privacy; Security; State estimation; Vectors (ID#: 15-6364)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849309&isnumber=6849127
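The core obfuscation idea, hiding individual readings while preserving the aggregate the utility needs for state estimation, can be sketched with zero-sum masks. This is a hypothetical centralized sketch; the paper's contribution is distributing the obfuscation values securely over the 802.11s mesh, which is not modeled here.

```python
import random

def obfuscate(readings):
    # each meter adds a random obfuscation value; the values are
    # chosen to sum to zero, so an eavesdropper sees distorted
    # per-meter readings while the aggregate used for state
    # estimation is preserved exactly
    masks = [random.randint(-1000, 1000) for _ in readings[:-1]]
    masks.append(-sum(masks))  # last mask cancels the rest
    return [r + m for r, m in zip(readings, masks)]

readings = [12, 7, 30, 5]
obfuscated = obfuscate(readings)
assert sum(obfuscated) == sum(readings)  # aggregate unchanged
```

Since only sums enter state estimation, the utility's computation is unaffected, which is why the simulation results in the paper can show acceptable impact on throughput and delay rather than on estimation accuracy.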

 

Paverd, A.; Martin, A.; Brown, I., “Privacy-Enhanced Bi-Directional Communication in the Smart Grid Using Trusted Computing,” Smart Grid Communications (SmartGridComm), 2014 IEEE International Conference on, vol., no., pp. 872, 877, 3-6 Nov. 2014. doi:10.1109/SmartGridComm.2014.7007758
Abstract: Although privacy concerns in smart metering have been widely studied, relatively little attention has been given to privacy in bi-directional communication between consumers and service providers. Full bi-directional communication is necessary for incentive-based demand response (DR) protocols, such as demand bidding, in which consumers bid to reduce their energy consumption. However, this can reveal private information about consumers. Existing proposals for privacy-enhancing protocols do not support bi-directional communication. To address this challenge, we present a privacy-enhancing communication architecture that incorporates all three major information flows (network monitoring, billing and bi-directional DR) using a combination of spatial and temporal aggregation and differential privacy. The key element of our architecture is the Trustworthy Remote Entity (TRE), a node that is singularly trusted by mutually distrusting entities. The TRE differs from a trusted third party in that it uses Trusted Computing approaches and techniques to provide a technical foundation for its trustworthiness. An automated formal analysis of our communication architecture shows that it achieves its security and privacy objectives with respect to a previously defined adversary model. This is therefore the first application of privacy-enhancing techniques to bi-directional smart grid communication between mutually distrusting agents.
Keywords: data privacy; energy consumption; incentive schemes; invoicing; power engineering computing; power system measurement; protocols; smart meters; smart power grids; trusted computing; TRE; automated formal analysis; bidirectional DR information flow; billing information flow; differential privacy; energy consumption reduction; incentive-based demand response protocol; network monitoring information flow; privacy-enhanced bidirectional smart grid communication architecture; privacy-enhancing protocol; smart metering; spatial aggregation; temporal aggregation; trustworthy remote entity; Bidirectional control; Computer architecture; Monitoring; Privacy; Protocols; Security; Smart grids (ID#: 15-6365)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7007758&isnumber=7007609

 

Ying Bi; Jamalipour, Abbas, “A Voluntary-Based Real-Time Incentive Scheme for Smart Grid Demand Management,” Telecommunications (ICT), 2014 21st International Conference on, vol., no., pp. 447, 451, 4-7 May 2014. doi:10.1109/ICT.2014.6845156
Abstract: In power systems, consumers' power demands are highly desirable information for grid operators for asset management and grid operation. However, users may hesitate to report such private information due to the potential privacy leakage risk and other extra cost. A compulsory scheme which forces consumers to reveal private data, or a punishment scheme in which consumers are penalized for unwillingness to disclose, might not be desired. Given the importance of demand information, in this paper, we acknowledge consumers' ownership rights over their private data and propose a novel voluntary-based real-time incentive scheme (RTIS) to promote demand management in the smart grid. In RTIS, the Load Serving Entity (LSE) plays the role of power retailer. The LSE rewards cooperating consumers with a discounted electricity retail price to compensate the consumers' extra cost associated with participation. By carefully selecting a discount rate, RTIS ensures that the LSE can collect sufficient demand response for load anticipation without detriment to its market revenue. Simulation results confirm that our proposed scheme can achieve satisfactory social welfare even compared with compulsory demand upload schemes.
Keywords: asset management; demand side management; electricity supply industry; incentive schemes; power system economics; power system security; smart power grids; LSE; RTIS; asset management; compulsory-based scheme; demand response; discount rate selection; discounted electricity retail price; grid operation; load anticipation; load serving entity; ownership rights; potential privacy leakage risk; power demand; power industry; power system; punishment scheme; smart grid demand management; social welfare; voluntary-based real-time incentive scheme; Aggregates; Electricity; Electricity supply industry; Load modeling; Power demand; Real-time systems; Smart grids (ID#: 15-6366)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845156&isnumber=6845063

 

Mohassel, R.R.; Fung, A.S.; Mohammadi, F.; Raahemifar, K., “A Survey on Advanced Metering Infrastructure and Its Application in Smart Grids,” Electrical and Computer Engineering (CCECE), 2014 IEEE 27th Canadian Conference on, vol., no., pp. 1, 8, 4-7 May 2014. doi:10.1109/CCECE.2014.6901102
Abstract: This survey paper is an excerpt of a more comprehensive study on Smart Grid (SG) and the role of Advanced Metering Infrastructure (AMI) in SG. The survey was carried out as part of a feasibility study for the creation of a Net-Zero community in a city in Ontario, Canada. SG is not a single technology; rather it is a combination of different areas of engineering, communication and management. This paper intends to focus on AMI, which is responsible for collecting all the data and information from loads and consumers, as the foundation for SG. AMI is also responsible for implementing control signals and commands to perform necessary control actions, including Demand Side Management (DSM). In this paper we introduce SG and its features, establish the relation between SG and AMI, explain three main subsystems of AMI and discuss related security issues.
Keywords: smart meters; smart power grids; AMI; Canada; Ontario; SG; advanced metering infrastructure; net-zero community; smart grids; Privacy; Reliability; Security; Smart grids; Smart meters; Wireless communication; Zigbee; Advanced metering; Smart Grid; smart metering (ID#: 15-6367)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901102&isnumber=6900900

 

Ratliff, L.J.; Dong, R.; Ohlsson, H.; Cardenas, A.A.; Sastry, S.S., “Privacy and Customer Segmentation in the Smart Grid,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 2136, 2141, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039714
Abstract: In the electricity grid, networked sensors which record and transmit increasingly high-granularity data are being deployed. In such a setting, privacy concerns are a natural consideration. In order to obtain the consumer's valuation of privacy, we design a screening mechanism consisting of a menu of contracts offered to the energy consumer with varying guarantees of privacy. The screening process is a means to segment customers. Finally, we design insurance contracts using the probability of a privacy breach to be offered by third-party insurance companies.
Keywords: consumer protection; contracts; data privacy; insurance; smart power grids; customer segmentation; electricity grid; energy consumer; insurance contracts; privacy breach probability; privacy guarantees; screening mechanism; smart grid; third-party insurance companies; Companies; Contracts; Data privacy; Insurance; Measurement; Privacy; Smart grids (ID#: 15-6368)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039714&isnumber=7039338

 

Kishimoto, H.; Okamura, S., “Secure Consolidation of Charging Information over Smart Grid Using ID Federation,” Information Theory and its Applications (ISITA), 2014 International Symposium on, vol., no., pp. 226, 230, 26-29 Oct. 2014. doi:  (not provided)
Abstract: In the current power system, a bill for electricity used through an outlet is charged not to the consumer but to the manager of the outlet. This research presents schemes in which the bill for electricity is charged to the consumer instead. Even if a consumer uses outlets outside the home, all bills for power consumption at home and elsewhere are consolidated by the electric utility the consumer contracts with. In this paper, we define security requirements and evaluate whether the proposed schemes satisfy them. We take consumer privacy and billing accuracy as the security requirements, and use identity federation and digital signatures to satisfy them. We first propose a basic scheme and then improve its efficiency. We prove that the number of verifications and the computational complexity are reduced in the improved scheme.
Keywords: computational complexity; data privacy; invoicing; power engineering computing; security of data; smart power grids; ID federation; charging information secure consolidation; computational complexity; consumer contract; current power system; electric utility; electricity bill; power consumption; security requirements; smart grid; Electricity; Manganese; Power demand; Power industry; Public key;  Smart grids (ID#: 15-6369)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979837&isnumber=6979787

 

Depeng Li; Aung, Z.; Williams, J.; Sanchez, A., “P2DR: Privacy-Preserving Demand Response System in Smart Grids,” Computing, Networking and Communications (ICNC), 2014 International Conference on, vol., no., pp. 41, 47, 3-6 Feb. 2014. doi:10.1109/ICCNC.2014.6785302
Abstract: Demand response programs are widely used to balance supply and demand in smart grids, resulting in a more reliable electric power system. Unfortunately, privacy violation is a pressing challenge that increasingly affects demand response programs, because power usage and operational data can be misused to infer customers' personal information. Without a consistent privacy preservation mechanism, adversaries can capture, model and divulge customers' behavior and activities at almost every level of society. This paper investigates a set of new privacy threat models focusing on financial rationality versus inconvenience. Furthermore, we design and implement a privacy protection protocol based on attribute-based encryption. To demonstrate its feasibility, the protocol is adopted in several kinds of demand response programs. Real-world experiments show that our scheme incurs only a light overhead yet addresses the formidable privacy challenges that customers face in demand response systems.
Keywords: cryptographic protocols; data privacy; power system reliability; smart power grids; P2DR; attributed-based encryptions; customer personal information; financial rationality verse inconvenience; operational data; power usage; privacy protection protocol; privacy threat; privacy-preserving demand response system; reliable electric power system; smart grids; substantially light overhead; supply demand balance; Control systems; Data privacy; Encryption; Load management; Protocols; Consumer privacy; Demand Response; Privacy Preservation; Smart Grids (ID#: 15-6370)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785302&isnumber=6785290

 

Brettschneider, Daniel; Toenjes, Ralf; Roer, Peter; Hoelker, Daniel, “Distributed Algorithm for Energy Management in Smart Grids,” WTC 2014; World Telecommunications Congress 2014; Proceedings of, vol., no., pp. 1, 6, 1-3 June 2014. doi:(not provided)
Abstract: The German energy turnaround results in a trend towards an increasing amount of renewable energy sources. Together with technological innovations of novel producers, consumers and storages, power grids are facing great challenges, e.g. compensation of supply fluctuations or efficient control of consumers. Smart grids promise to overcome these drawbacks. In this paper we present an algorithm for energy management in smart grids on a street level, e.g. participating consumers are connected to one local power transformer, based on well-known scheduling algorithms. The algorithm controls the shiftable and adaptable demand of smart homes that are connected via a distributed middleware. The results show that the algorithm can reduce or even eliminate the stated problems and offers privacy and robustness.
Keywords:  (not provided) (ID#: 15-6371)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6840002&isnumber=6839998

 

Hansen, J.; Knudsen, J.; Annaswamy, A.M., “Demand Response in Smart Grids: Participants, Challenges, and a Taxonomy,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 4045, 4052, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7040018
Abstract: In recent decades, moves toward higher integration of Renewable Energy Resources have called for fundamental changes in both the planning and operation of the overall power grid. One such change is the incorporation of Demand Response (DR), the process by which consumers can adjust their demand in a flexible manner. This paper presents a survey of various aspects of DR including the different types of participants, as well as the underlying challenges and the overall potential of DR when it comes to large-scale implementations. Benefits of DR as reported in the literature for performance metrics such as frequency control and price control, as well as methods for ensuring privacy are discussed. A quantitative taxonomy of DR recently proposed in the literature based on the inherent magnitude, run-time, and integral constraints is discussed and its integration with economic dispatch is explored.
Keywords: demand side management; smart power grids; DR; demand response; economic dispatch; frequency control; performance metrics; price control; smart grids; Batteries; Buildings; Electric potential; Electricity; Frequency control; Power grids; Reliability (ID#: 15-6372)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7040018&isnumber=7039338

 

Junggab Son; Donghyun Kim; Sejin Lee; Heekuck Oh; Tokuta, A.; Melikyan, H., “Trade-off between Service Granularity and User Privacy in Smart Meter Operation,” Mobile Ad-hoc and Sensor Networks (MSN), 2014 10th International Conference on, vol., no., pp. 288, 293, 19-21 Dec. 2014. doi:10.1109/MSN.2014.46
Abstract: The term “smart grid” refers to the next generation power supply system. A smart meter, an essential component of the grid system, is installed at each housing unit and acts as an agent for the unit. While the smart meter is a key enabler of great opportunities and conveniences in the smart grid, it is susceptible to various cyber-security attacks, especially privacy invasion by electricity providers. Trusted third party (TTP) and homomorphic encryption are two favorite tools to deal with this issue in the literature. Unfortunately, the use of a TTP does not completely eliminate the privacy risk. On the other hand, the use of homomorphic encryption makes it harder for the providers to support various services whose demand can be highly diversified. In this paper, we introduce a drastically new approach to deal with the consumer privacy issue in the smart grid. Our key idea is to let each consumer determine the frequency of the measurement report. In this way, each consumer can responsibly make a trade-off between the level of privacy preservation and the quality of the services it will receive.
Keywords: cryptography; power system security; smart meters; smart power grids; TTP; consumer privacy issue; cyber-security attacks; electricity providers; grid system; homomorphic encryption; housing unit; next generation power supply system; privacy invasion; privacy preservation; service granularity; service quality; smart grid; smart meter operation; trusted third party; Electricity; Encryption; Privacy; Real-time systems; Smart grids; Smart grid; service granularity; smart meter; user privacy (ID#: 15-6373)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7051783&isnumber=7051734
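The paper's core idea, letting each consumer choose how often the meter reports, can be illustrated with a minimal consumer-side sketch (not from the paper itself): raw samples are summed into reports at a consumer-chosen interval, so coarser intervals hide in-home activity at the cost of service granularity.

```python
# Illustrative sketch of consumer-controlled report granularity.
# All names and the sample data below are invented for the example.

def aggregate_readings(samples, samples_per_report):
    """Sum consecutive raw samples (e.g., Wh per minute) into one report."""
    reports = []
    for i in range(0, len(samples), samples_per_report):
        reports.append(sum(samples[i:i + samples_per_report]))
    return reports

raw = [5, 7, 6, 40, 42, 41, 6, 5, 7, 6, 5, 6]  # per-minute usage; spike = appliance on

fine = aggregate_readings(raw, 1)    # full granularity: appliance activity visible
coarse = aggregate_readings(raw, 6)  # 6-minute reports: the spike is blended in
print(coarse)                        # [141, 35]
```

Total consumption (and hence billing accuracy) is preserved at every granularity; only the temporal detail, which enables activity inference, is reduced.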

 

Aman, S.; Chelmis, C.; Prasanna, V., “Addressing Data Veracity in Big Data Applications,” Big Data (Big Data), 2014 IEEE International Conference on, vol., no., pp. 1, 3, 27-30 Oct. 2014. doi:10.1109/BigData.2014.7004473
Abstract: Big data applications such as in smart electric grids, transportation, and remote environment monitoring involve geographically dispersed sensors that periodically send back information to central nodes. In many cases, data from sensors is not available at central nodes at a frequency that is required for real-time modeling and decision-making. This may be due to physical limitations of the transmission networks, or due to consumers limiting frequent transmission of data from sensors located at their premises for security and privacy concerns. Such scenarios lead to partial data problem and raise the issue of data veracity in big data applications. We describe a novel solution to the problem of making short term predictions (up to a few hours ahead) in absence of real-time data from sensors in Smart Grid. A key implication of our work is that by using real-time data from only a small subset of influential sensors, we are able to make predictions for all sensors. We thus reduce the communication complexity involved in transmitting sensory data in Smart Grids. We use real-world electricity consumption data from smart meters to empirically demonstrate the usefulness of our method. Our dataset consists of data collected at 15-min intervals from 170 smart meters in the USC Microgrid for 7 years, totaling 41,697,600 data points.
Keywords: Big Data; power engineering computing; smart power grids; Big Data applications; USC Microgrid; communication complexity; data veracity; electricity consumption data; remote environment monitoring; sensory data transmission; smart electric grids; transportation; Big data; Data models; Intelligent sensors; Predictive models; Real-time systems; Smart meters; prediction model; smart grid (ID#: 15-6374)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004473&isnumber=7004197

 

Lei Yang; Xu Chen; Junshan Zhang; Poor, H.V., “Optimal Privacy-Preserving Energy Management for Smart Meters,” INFOCOM, 2014 Proceedings IEEE, vol., no., pp. 513, 521, April 27 2014–May 2 2014. doi:10.1109/INFOCOM.2014.6847975
Abstract: Smart meters, designed for information collection and system monitoring in smart grid, report fine-grained power consumption to utility providers. With these highly accurate profiles of energy usage, however, it is possible to identify consumers' specific activity or behavior patterns, thereby giving rise to serious privacy concerns. In this paper, this concern is addressed by using battery energy storage. Beyond privacy protection, batteries can also be used to cut down the electricity bill. From a holistic perspective, a dynamic optimization framework is designed for consumers to strike a tradeoff between the smart meter data privacy and the electricity bill. In general, a major challenge in solving dynamic optimization problems lies in the need of the knowledge of the future electricity consumption events. By exploring the underlying structure of the original problem, an equivalent problem is derived, which can be solved by using only the current observations. An online control algorithm is then developed to solve the equivalent problem based on the Lyapunov optimization technique. To overcome the difficulty of solving a mixed-integer nonlinear program involved in the online control algorithm, the problem is further decomposed into multiple cases and the closed-form solution to each case is derived accordingly. It is shown that the proposed online control algorithm can optimally control the battery operations to protect the smart meter data privacy and cut down the electricity bill, without the knowledge of the statistics of the time-varying load requirement and the electricity price processes. The efficacy of the proposed algorithm is demonstrated through extensive numerical evaluations using real data.
Keywords: data privacy; integer programming; nonlinear programming; power consumption; smart meters; smart power grids; Lyapunov optimization technique; battery energy storage; dynamic optimization framework; dynamic optimization problems; electricity bill; electricity consumption; information collection; mixed-integer nonlinear program; online control algorithm; optimal privacy-preserving energy management; power consumption; privacy protection; smart grid; smart meter data privacy; time-varying load requirement; Algorithm design and analysis; Batteries; Data privacy; Electricity; Optimization; Privacy; Smart meters; Battery; Cost Saving; Data Privacy; Load Monitor; Smart Grid; Smart Meter (ID#: 15-6375)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847975&isnumber=6847911
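A much simpler battery policy than the paper's Lyapunov-based controller already conveys the privacy mechanism: the battery charges when household load is below a target and discharges when above, so the meter-visible draw stays flat and appliance signatures are masked. This greedy sketch is an assumption-laden stand-in, not the authors' algorithm.

```python
# Greedy load-masking sketch; units and values are invented for illustration.

def mask_load(load, target, capacity, soc=0.0):
    """Return the meter-visible draw per slot; soc is battery state of charge."""
    visible = []
    for demand in load:
        delta = target - demand          # >0: room to charge, <0: must discharge
        if delta > 0:                    # charge from the grid
            delta = min(delta, capacity - soc)
        else:                            # discharge to cover demand
            delta = max(delta, -soc)
        soc += delta
        visible.append(demand + delta)   # grid sees demand plus battery flow
    return visible

household = [1, 1, 6, 6, 1, 1]           # kW; an appliance runs in slots 2-3
print(mask_load(household, target=3, capacity=10))  # [3, 3, 3, 5, 3, 3]
```

The residual bump in slot 3 shows why capacity limits matter: once the battery is drained, the true load leaks through, which is exactly the tension the paper's optimization framework manages.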

 

Prasad, R.S.; Semwal, S., “Multi Point Sensing (MPS): A Solution for Resolving Complexity in NIALM Applications for Indian Domestic Consumers,” Power Electronics, Drives and Energy Systems (PEDES), 2014 IEEE International Conference on, vol., no., pp. 1, 6, 16-19 Dec. 2014. doi:10.1109/PEDES.2014.7042066
Abstract: The Government of India has decided to install smart meters in fourteen states. Smart meters are required to identify home appliances to fulfill various tasks in the smart grid environment. Both intrusive and non-intrusive methods have been suggested for identification. However, the intrusive method is not suitable for cost and privacy reasons. On the other hand, techniques using non-intrusive appliance load monitoring (NIALM) have yet to result in meaningful practical implementation. Two major challenges in NIALM research are the choice of features (load signatures of appliances) and the appropriate algorithm. Both have a direct impact on the cost of the smart meter. In this paper, we address these two issues and propose a procedure with only four features and a simple algorithm to identify appliances. Our experimental setup, based on the recommended specifications for internal electrical wiring in Indian residences, used common household appliances' load signatures: active and reactive powers, harmonic components and their magnitudes. We show that these four features are essential and sufficient for implementing NIALM with a simple algorithm. We introduce a new approach of ‘multi point sensing’ and ‘group control’ rather than the ‘single point sensing’ and ‘individual control’ used so far in NIALM techniques.
Keywords: demand side management; smart meters; smart power grids; Indian domestic consumers; Indian residences; MPS; NIALM applications; group control; harmonic components; household appliances; internal electrical wiring; load signatures; multipoint sensing; nonintrusive appliance load monitoring; reactive powers; smart meter; Feature extraction; Harmonic analysis; Home appliances; Monitoring; Reactive power; Sensors; Smart meters; Demand Response Management; Load signature; NIALM; Smart grid; Smart meter (ID#: 15-6376)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042066&isnumber=7041944
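A four-feature NIALM classifier of the kind the abstract describes can be sketched as nearest-neighbor matching against appliance signatures. The signature values and appliance names below are invented for the example; the paper's measured signatures and algorithm will differ.

```python
import math

# Each signature: (active power W, reactive power var,
#                  dominant harmonic order, harmonic magnitude)
signatures = {
    "fan":    (60, 25, 3, 0.8),
    "fridge": (150, 90, 5, 1.5),
    "heater": (1500, 5, 3, 0.2),
}

def identify(observed):
    """Match an observed load-change feature vector to the nearest signature."""
    def dist(sig):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(observed, sig)))
    return min(signatures, key=lambda name: dist(signatures[name]))

print(identify((148, 88, 5, 1.4)))  # fridge
```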

 

Alamatsaz, N.; Boustani, A.; Jadliwala, M.; Namboodiri, V., “AgSec: Secure and Efficient CDMA-Based Aggregation for Smart Metering Systems,” Consumer Communications and Networking Conference (CCNC), 2014 IEEE 11th, vol., no., pp. 489, 494, 10-13 Jan. 2014. doi:10.1109/CCNC.2014.6866615
Abstract: Security and privacy concerns in the future power grid have recently received tremendous focus from security advocates. Most existing security mechanisms utilize cryptographic techniques that are computationally expensive and bandwidth intensive. However, aggregating the large outputs of these cryptographic algorithms has not been considered thoroughly. Smart Grid Networks (SGN) generally have limitations on bandwidth, network capacity and energy. Hence, utilizing data aggregation algorithms, the limited bandwidth can be efficiently utilized. Most of the aggregation algorithms use statistical functions such as minimum, maximum, and average before transmitting data over the network. Existing aggregation algorithms, in SGNs, are generally expensive in terms of communication overhead, processing load and delay. However, our proposed CDMA-based data aggregation method provides access to all the data of all the smart meters in the root node, which in this case is the Utility Center, while keeping the smart metering data secure. The efficiency of the proposed method is confirmed by mathematical analysis.
Keywords: code division multiple access; cryptography; mathematical analysis; smart meters; smart power grids; telecommunication security; AgSec; SGN; communication overhead; cryptographic techniques; data aggregation; efficient CDMA; mathematical analysis; network capacity; network energy; power grid; privacy concerns; secure CDMA; security concerns; smart grid networks; smart metering systems; statistical functions; utility center; Cryptography; Delays; Multiaccess communication; Protocols; Smart grids; Smart meters (ID#: 15-6377)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866615&isnumber=6866537
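The CDMA-style aggregation at the heart of AgSec can be illustrated with orthogonal Walsh codes (a toy version; the paper's actual scheme adds security machinery on top): each meter spreads its reading with its own code, the chips from all meters are summed into one aggregate signal in the network, and the utility despreads that single signal to recover every individual reading.

```python
H = [[1, 1, 1, 1],
     [1, -1, 1, -1],
     [1, 1, -1, -1],
     [1, -1, -1, 1]]                  # 4x4 Hadamard matrix: mutually orthogonal rows

readings = [12, 7, 30, 4]             # one reading per meter

# Spread and sum: aggregate[j] = sum_i readings[i] * H[i][j]
aggregate = [sum(r * code[j] for r, code in zip(readings, H)) for j in range(4)]

# Despread meter i: correlate the aggregate with its code; |code|^2 = 4
recovered = [sum(a * c for a, c in zip(aggregate, H[i])) // 4 for i in range(4)]
print(recovered)                      # [12, 7, 30, 4]
```

Orthogonality is what lets a single transmitted aggregate carry all meters' data without a per-meter round trip, which is the bandwidth saving the abstract claims.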

 

Peer, C.D.; Engel, D.; Wicker, S.B., “Hierarchical Key Management for Multi-Resolution Load Data Representation,” Smart Grid Communications (SmartGridComm), 2014 IEEE International Conference on, vol., no., pp. 926, 932, 3-6 Nov. 2014. doi:10.1109/SmartGridComm.2014.7007767
Abstract: It has been shown that information about a consumer's actions, beliefs and preferences can be extracted from high resolution load data. This information can be used in ways that violate consumer privacy. In order to increase consumer control over this information, it has been suggested that load data be represented in multiple resolutions, with each resolution secured with a different key. To make this approach work in the real-world, a suitable key management needs to be employed. In this paper, we consider a combination of multi-resolution load data representation with hierarchical key management. Emphasis is placed on a privacy-aware design that gives the end-user the freedom to decide which entity is allowed to access user related data and at what granularity.
Keywords: data structures; load management; power generation control; smart power grids; consumer actions; consumer control; consumer privacy; hierarchical key management; high resolution load data; multiresolution load data representation; privacy-aware design; Encryption; Privacy; Smart grids; Smart meters; Wavelet transforms (ID#: 15-6378)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7007767&isnumber=7007609
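One standard way to realize hierarchical keys for multi-resolution data, sketched here as an assumption (the paper's exact construction may differ), is a one-way hash chain: the key for a coarser resolution is derived by hashing the finer-resolution key, so a party holding the key for resolution r can derive keys for every coarser resolution but not the finer ones.

```python
import hashlib

def coarser_key(key: bytes) -> bytes:
    """Derive the next-coarser resolution's key; one-way by SHA-256."""
    return hashlib.sha256(key).digest()

master = b"fine-grained-resolution-key"   # e.g., protects 1-minute load data
keys = [master]
for _ in range(3):                        # 15-minute, hourly, daily keys
    keys.append(coarser_key(keys[-1]))

# A utility granted only keys[2] can derive keys[3], but the one-wayness of
# the hash prevents it from recovering keys[1] or keys[0].
print(coarser_key(keys[2]) == keys[3])    # True
```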

 

Yesudas, R.; Clarke, R., “Identifying Consumer Requirements as an Antidote to Resistance to Smart Meters,” Innovative Smart Grid Technologies Conference Europe (ISGT-Europe), 2014 IEEE PES, vol., no., pp. 1, 6, 12-15 Oct. 2014. doi:10.1109/ISGTEurope.2014.7028789
Abstract: Energy efficiency has been the primary motivation for the introduction of smart meters. But current smart metering projects are facing barriers to adoption from consumers, arising from the failure of project sponsors to understand consumers and their requirements. Consumers view smart meters with suspicion, perceiving them to be energy suppliers' efforts to maximise their profits at the expense of consumer costs, choice, health and privacy. For emergent systems like automated metering infrastructure (AMI) to avoid battling to convince consumers of their benefits, it is essential to have user-centric analysis performed before expensive infrastructures are designed and deployed. Various categories of consumers will have their own particular perspectives, and different expectations about how the system should help them to appropriately manage their energy usage. Hence it is essential to segment energy consumers and identify the requirements for each group. In this paper we look at a number of user-centric methods. We then analyse the effectiveness of combining Contextual Design (CD), focus groups and problem extraction to provide insights into energy consumer needs. Based on the analysis we outline a functional specification for a smart meter that would satisfy the energy requirements for a segment of electricity consumers with medical needs.
Keywords: energy conservation; metering; smart meters; AMI; CD; automated metering infrastructure; consumer costs; consumer requirements; contextual design; electricity consumer; energy consumers; energy efficiency; smart meters; user-centric analysis; Business; Context; Electricity; Interviews; Relays; Smart grids; Smart meters; Contextual Design; User centered design; consumer segments; requirement elicitation; smart meter (ID#: 15-6379)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028789&isnumber=7028730

 

Azghandi, S.; Hopkinson, K.M.; McTasney, R.J., “An Empirical Model for Smart Meters Using Data Security,” Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES, vol., no., pp. 1, 5, 19-22 Feb. 2014. doi:10.1109/ISGT.2014.6816417
Abstract: Consumer concern regarding the privacy of their electric power usage behavior has been a major sticking point, disrupting utility fielding of smart-meters in many municipalities and regional service areas. Securing power meter readings in a way that addresses these privacy issues would alleviate public concerns and facilitate the implementation of an Advanced Metering Infrastructure (AMI). This paper proposes an empirical secure data transmission model by examining the parameters that affect the required time to transmit secured data for a network of smart meters and collectors. In this paper, the data security is accomplished using Partial Homomorphic Encryption (PHE), and the transmission of data is facilitated by configuring the smart meters and collectors hierarchically. A case study compares PHE simulation program execution times running on various Advanced RISC Machine-based (ARM-based) boards and virtual machines to determine the efficiency by which the smart meters meet a reasonable meter reading polling time for a service area.
Keywords: cryptography; data privacy; power system security; reduced instruction set computing; smart meters; virtual machines; AMI; ARM-based board; PHE simulation program; advanced RISC machine-based board; advanced metering infrastructure; electric power usage behavior; empirical secure data transmission model; municipality service area; partial homomorphic encryption; power meter reading security; regional service area; secured data transmission; smart meter; virtual machine; Clocks; Computational modeling; Data communication; Data models; Encryption; ARM-based boards; Data privacy; Efficiency; Hierarchical networks; Partial Homomorphic Encryption; Smart meters (ID#: 15-6380)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816417&isnumber=6816367
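The Partial Homomorphic Encryption the paper benchmarks on ARM boards is typically a Paillier-style scheme. As a hedged illustration only (toy parameters, deliberately insecure key sizes, not the paper's implementation), textbook Paillier shows the property that makes hierarchical meter aggregation possible: multiplying ciphertexts adds the underlying readings without decrypting them.

```python
import math, random

p, q = 293, 433                                # toy primes; real keys are 2048+ bits
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # L(g^lam mod n^2)^-1 mod n

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:                 # r must be a unit mod n
        r = random.randrange(2, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

c1, c2 = encrypt(120), encrypt(335)            # two meter readings
total = decrypt(c1 * c2 % n2)                  # ciphertext product = plaintext sum
print(total)                                   # 455
```

This additive property is why a collector can sum encrypted readings from a neighborhood of meters while the individual readings remain hidden from it.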
 



Encryption Audits, 2014

 

SoS Logo

Encryption Audits, 2014

Encryption audits not only test the validity and effectiveness of protection schemes, they also potentially provide data for developing and improving metrics about data security. The works cited here were presented in 2014.



Lopez, J.M.; Ruebsamen, T.; Westhoff, D., “Privacy-Friendly Cloud Audits with Somewhat Homomorphic and Searchable Encryption,” Innovations for Community Services (I4CS), 2014 14th International Conference on, vol., no., pp. 95, 103, 4-6 June 2014. doi:10.1109/I4CS.2014.6860559
Abstract: In this paper, we provide privacy enhancements for a software agent-based audit system for clouds. We also propose a general privacy-enhancing cloud audit concept, which we present based on a recently proposed framework. This framework introduces the use of audit agents for collecting digital evidence from different sources in cloud environments. Obviously, the elicitation and storage of such evidence leads to new privacy concerns for cloud customers, since it may reveal sensitive information about the utilization of cloud services. We remedy this by applying Somewhat Homomorphic Encryption (SHE) and Public-Key Searchable Encryption (PEKS) to the collection of digital evidence. By considering prominent audit event use cases, we show that the amount of cleartext information provided to an evidence-storing entity, and subsequently to a third-party auditor, can be shaped into a good balance between i) the customers' privacy and ii) the fact that stored information may need to have probative value. We believe that the administrative domain responsible for an evidence-storing database falls under the “honest-but-curious” adversary model and thus should answer the auditor's queries for a given cloud audit use case by performing operations purely on encrypted digital evidence data.
Keywords: cloud computing; public key cryptography; software agents; PEKS; SHE; cloud services; privacy-friendly cloud audits; public-key searchable encryption; searchable encryption; software agent-based audit system; somewhat homomorphic encryption; third-party auditor; Encryption; IP networks; Monitoring; Privacy; Public key; Audit; Cloud Computing; Computing on Encrypted Data; Evidence; Searchable Encryption; Somewhat Homomorphic Encryption (ID#: 15-6005)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6860559&isnumber=6860533

 

Kun-Lin Tsai; Jiu-Soon Tan; Fang-Yie Leu; Yi-Li Huang, “A Group File Encryption Method Using Dynamic System Environment Key,” Network-Based Information Systems (NBiS), 2014 17th International Conference on, vol., no., pp. 476, 483, 10-12 Sept. 2014. doi:10.1109/NBiS.2014.22
Abstract: File encryption is an effective way for an enterprise to prevent its data from being lost. However, the data may still be deliberately or inadvertently leaked out by the insiders or customers. When the sensitive data are leaked, it often results in huge monetary damages and credit loss. In this paper, we propose a novel group file encryption/decryption method, named the Group File Encryption Method using Dynamic System Environment Key (GEMS for short), which provides users with auto crypt, authentication, authorization, and auditing security schemes by utilizing a group key and a system environment key. In the GEMS, the important parameters are hidden and stored in different devices to avoid them from being cracked easily. Besides, it can resist known-key and eavesdropping attacks to achieve a very high security level, which is practically useful in securing an enterprise's and a government's private data.
Keywords: authorisation; business data processing; cryptography; file organisation; message authentication; GEMS; auditing security scheme; authentication; authorization; autocrypt; decryption method; dynamic system environment key; eavesdropping attack; group file encryption; security level; Authentication; Cloud computing; Computers; Encryption; Servers; DRM; group file encryption; security; system environment key (ID#: 15-6006)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023997&isnumber=7023898

 

Sumalatha, M.R.; Hemalathaa, S.; Monika, R.; Ahila, C., “Towards Secure Audit Services for Outsourced Data in Cloud,” Recent Trends in Information Technology (ICRTIT), 2014 International Conference on, vol., no., pp. 1, 6, 10-12 April 2014. doi:10.1109/ICRTIT.2014.6996214
Abstract: The rapid growth in the field of Cloud Computing introduces a myriad of security hazards to information and data. Data outsourcing relieves the responsibility of local data storage and maintenance, but introduces security implications. A third party service provider stores and maintains the data, application or infrastructure of the cloud user. Auditing methods and infrastructures in the cloud play an important role in cloud security strategies. As data and applications deployed in the cloud become more sensitive, the requirement for auditing systems to provide rapid analysis and quick responses becomes inevitable. In this work we provide a privacy-preserving data integrity protection mechanism by allowing public auditing for cloud storage with the assistance of the data owner's identity. This guarantees the auditing can be done by the third party without fetching the entire data from the cloud. A data protection scheme is also outlined, by providing a method to allow for data to be encrypted in the cloud without loss of accessibility or functionality for the authorized users.
Keywords: auditing; authorisation; cloud computing; cryptography; data protection; outsourcing; storage management; auditing methods; auditing systems requirement; authorized users; cloud security strategies; cloud storage; data encryption; data maintenance; data outsourcing; data owner identity; local data storage; privacy-preserving data integrity protection; public auditing; secure audit services; security hazards; security implications; third party service provider; Authentication; Cloud computing; Encryption; Information technology; Public key; Audit service; Cloud storage; Identity; Integrity; Privacy (ID#: 15-6007)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6996214&isnumber=6996087

 

Bei Pei; Changsong Chen; Changsheng Wan, “A XOR Based Public Auditing Scheme for Proof-of-Storage,” Broadband and Wireless Computing, Communication and Applications (BWCCA), 2014 Ninth International Conference on, vol., no., pp. 558, 565, 8-10 Nov. 2014. doi:10.1109/BWCCA.2014.140
Abstract: Public auditing has vital significance in cloud computing. However, current public auditing schemes are based on bilinear maps and are costly. This paper proposes an XOR-based public auditing scheme that is much more efficient than current bilinear-map-based schemes.
Keywords: auditing; cloud computing; security of data; XOR-based public auditing scheme; bilinear map; proof-of-storage; Authentication; Cloud computing; Encryption; Materials; Protocols; Servers; XOR; Publicly Auditable; Proof-of-storage (ID#: 15-6008)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7016135&isnumber=7015998
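An XOR-based proof-of-storage check can be illustrated in a few lines (a simplified stand-in for the paper's protocol, which includes additional machinery): before outsourcing, the client keeps the XOR of a random subset of blocks; later it challenges the server to return the XOR of exactly those blocks and compares, without ever retrieving the full file.

```python
import os, functools, random

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return functools.reduce(
        lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

file_blocks = [os.urandom(16) for _ in range(8)]            # outsourced data
challenge = random.sample(range(8), 3)                      # indices to challenge
expected = xor_blocks([file_blocks[i] for i in challenge])  # kept by the client

# An honest server recomputes the same XOR from its stored copy.
proof = xor_blocks([file_blocks[i] for i in challenge])
print(proof == expected)                                    # True
```

XOR is far cheaper than the pairing operations of bilinear-map schemes, which is the efficiency argument the abstract makes.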

 

Rewadkar, D.N.; Ghatage, S.Y., “Cloud Storage System Enabling Secure Privacy Preserving Third Party Audit,” Control, Instrumentation, Communication and Computational Technologies (ICCICCT), 2014 International Conference on, vol., no., pp. 695, 699, 10-11 July 2014. doi:10.1109/ICCICCT.2014.6993049
Abstract: Cloud computing is a revolutionary new approach to how computing services are produced and consumed. It is an abstraction of the concept of pooling resources and presenting them as virtual resources. Using cloud computing resources, data, computations, and services can be shared over a scalable network of nodes; these nodes may represent datacenters, end user computers and web services. On the same note, cloud storage refers to storing data on remote storage located in another organization's infrastructure. The data storage is maintained and managed by that organization, and the user pays for the storage space used. Outsourcing data ultimately relinquishes the user's control of the data, leaving its fate in the control of the cloud server. As the data is stored on the cloud server, the storage correctness of the data is put at risk. The cloud server is managed by a cloud service provider, which is a different administrative entity, so ensuring data integrity is of prime importance. This article studies the problems of ensuring data storage correctness and proposes an efficient and secure method to address these issues. A third party auditor is securely introduced, who will, on the user's behalf, periodically verify the integrity of the data stored on the cloud server. There will not be any online burden on the user, and the security of the data will be maintained, as the data will not be shared directly with the third party auditor. A homomorphic encryption scheme is used to encrypt the data that will be shared with the TPA. The results can be further extended to enable the third party auditor to do multiple auditing.
Keywords: cloud computing; cryptography; data integrity; data privacy; storage management;  cloud server; cloud storage system; data storage correctness; homomorphic encryption scheme; pooling resources; secure privacy preserving TPA; secure privacy preserving third party auditor; security of data; virtual resources; Cloud computing; Encryption; Privacy; Secure storage; Servers; Cloud Storage; Data integrity; ElGamal; Homomorphic encryption; Third Party Auditing (ID#: 15-6009)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6993049&isnumber=6992918
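The Rewadkar and Ghatage paper above relies on an ElGamal-based homomorphic scheme so the auditor can work over encrypted data. As a hedged illustration of the property such schemes depend on (a textbook sketch, not the paper's actual protocol), the toy code below shows ElGamal's multiplicative homomorphism: multiplying two ciphertexts component-wise yields an encryption of the product of the plaintexts. The parameters p and g are assumed toy values and are far too small for real use.

```python
# Textbook ElGamal over a tiny prime group, illustrating its
# multiplicative homomorphism: E(m1) * E(m2) decrypts to m1 * m2 mod p.
# Toy parameters for illustration only -- never use in practice.
import random

p = 467                          # small prime (illustrative only)
g = 2                            # group element (illustrative only)
x = random.randrange(2, p - 1)   # private key
h = pow(g, x, p)                 # public key

def encrypt(m):
    r = random.randrange(2, p - 1)          # fresh randomness per message
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def decrypt(c):
    c1, c2 = c
    s = pow(c1, x, p)                        # shared secret g^(r*x)
    return (c2 * pow(s, p - 2, p)) % p       # s^(-1) via Fermat's little theorem

m1, m2 = 5, 7
a = encrypt(m1)
b = encrypt(m2)
product_ct = ((a[0] * b[0]) % p, (a[1] * b[1]) % p)   # multiply under encryption
assert decrypt(product_ct) == (m1 * m2) % p
```

Note that the homomorphic product decrypts correctly only while the true product stays below p, which is why deployed schemes use much larger moduli.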

 

Cindhamani, J.; Punya, N.; Ealaruvi, R.; Dhinesh Babu, L.D., “An Enhanced Data Security and Trust Management Enabled Framework for Cloud Computing Systems,” Computing, Communication and Networking Technologies (ICCCNT), 2014 International Conference on, vol., no., pp. 1, 5, 11-13 July 2014. doi:10.1109/ICCCNT.2014.6963097
Abstract: Cloud computing is an emerging and advanced technology in IT enterprise which provides services on demand. Cloud computing includes many advantages such as flexibility, improved performance and low cost. Besides its advantages, cloud has many security issues and challenges. In this paper, we propose an enhanced frame work for data security in cloud which follows the security polices such as integrity, confidentiality and availability. The data is stored in cloud by using 128 bit encryption and RSA algorithm, then we use the trust management i.e., Trusted Party Auditor (TPA) which audits the data instead of client. Thus, we show how efficiently the data can be secured related to performance analysis.
Keywords: cloud computing; data integrity; public key cryptography; trusted computing; 128 bit encryption; RSA algorithm; TPA; cloud computing systems; data availability; data confidentiality; enhanced data security; security polices; trust management enabled framework; trusted party auditor; Algorithm design and analysis; Authentication; Cloud computing; Encryption;128 bit encryption; RSA algorithm; TPA (ID#: 15-6010)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6963097&isnumber=6962988

 

Garkoti, G.; Peddoju, S.K.; Balasubramanian, R., “Detection of Insider Attacks in Cloud Based e-Healthcare Environment,” Information Technology (ICIT), 2014 International Conference on, vol., no., pp. 195, 200, 22-24 Dec. 2014. doi:10.1109/ICIT.2014.43
Abstract: In recent years, Cloud computing has been receiving great attention from various business and research organizations as it promises to provide large storage facilities and highly managed remote services. Due to its characteristics like on-demand self service, rapid elasticity, ubiquitous network access and resource pooling, it shows high potential for providing e-Healthcare solutions. It can offer various financial and functional benefits to e-Healthcare which includes providing storage flexibility for the rapidly growing healthcare data, reduced cost, better accessibility, improved quality of care and enhancement in medical research. However at the same time, it faces many technical challenges like privacy, reliability, security etc. In the Cloud based ehealthcare environment where the patient's data is transferred between entities, maintaining the security of data becomes a priority. Cryptographic techniques can only provide a secure channel of communication but it fails to provide security at end points. Security attacks may be accomplished by the malicious insider at the end points. A malicious insider may modify the patient's data resulting in a false examination. The paper provides a detective approach for such attacks in the healthcare organizations. Our work is focused with the detection of insider attacks for preventing false examination of patient's health records and assuring the accountability of data usage. Watermarking can be used for detection of modification by an insider attack but does not provide accountability of data usage. Hence our approach combines the functionalities of cryptographic techniques and watermarking together with an accountability framework for providing transparency of patient's data usage.
Keywords: cloud computing; cryptography; electronic health records; health care; watermarking; cloud based e-healthcare environment; cryptographic techniques; data usage accountability; malicious insider attack detection; medical research; on-demand self service; patient health records; remote services; research organizations; resource pooling; secure communication channel; ubiquitous network access; Cloud computing; Medical diagnostic imaging; Medical services; Organizations; Watermarking; Cloud; audit; encryption; medical images; security (ID#: 15-6011)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7033321&isnumber=7033273

 

Sarralde, Javier Lopez; Yarza, Jose Miguel, “Cyber Security Applied to P&C IEDs,” T&D Conference and Exposition, 2014 IEEE PES, vol., no., pp. 1, 5, 14-17 April 2014. doi:10.1109/TDC.2014.6863537
Abstract: This paper highlights basic cyber security features that Protection and Control Intelligent Electronic Devices (P&C IEDs) should implement considering the current cyber security standardization efforts. Although it can be said that all functional aspects regarding the securization of IEDs are covered by these standards, there are still some gaps or lack of definition. Currently, it's difficult to install IEDs from different manufacturers within the same cyber security system. This paper emphasizes those aspects requiring additional definition and implementation of interoperability.
Keywords: Audit Trail; Authentication; Authorization; Centralized; Cyber Security; Encryption; IEC 62351; IEEE 1686; NERC CIP; P&C IED
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6863537&isnumber=6863147

 

Miyoung Jang; Min Yoon; Deulnyeok Youn; Jae Woo Chang, “Clustering-Based Query Result Authentication for Encrypted Databases in Cloud,” High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC, CSS, ICESS), 2014 IEEE Intl Conf on, vol., no., pp. 1076, 1082, 20-22 Aug. 2014. doi:10.1109/HPCC.2014.181
Abstract: Due to advancement in cloud computing technology, the research on the outsourced database has been spotlighted. Consequently, it is becoming more important to guarantee the correctness and completeness of query result in this environment. The existing data encryption schemes do not consider data distribution when encrypting original data. And existing query result integrity methods have limitation of verification object transmission overheads. To resolve these problems, we propose a clustering-based data transformation technique and a privacy-aware query authentication index. Our clustering-based data transformation scheme is designed to select anchors based on data distribution. For the integrity of query results, our query result authentication index stores an encrypted signature for each anchor and compares the anchor signature with the verification data from the data owner. Through performance evaluation, we show that our method outperforms the existing method up to 15 times in terms of query processing time and verification.
Keywords: cloud computing; cryptography; digital signatures; pattern clustering; query processing; anchor signature; cloud computing technology; clustering-based data transformation technique; clustering-based query result authentication; data distribution; data encryption schemes; encrypted databases; encrypted signature; query processing time; query result authentication index; verification data; verification object transmission overheads; Authentication; Data structures; Encryption; Indexes; Query processing; Database outsourcing; database transformation technique; hash-based signature index; query result integrity auditing method (ID#: 15-6012)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056877&isnumber=7056577

 

Durga Priya, G.; Prathibha, S., “Assuring Correctness for Securing Outsourced Data Repository in Cloud Environment,” Advanced Communication Control and Computing Technologies (ICACCCT), 2014 International Conference on, vol., no., pp. 1745, 1748, 8-10 May 2014. doi:10.1109/ICACCCT.2014.7019407
Abstract: The data storage in the cloud environment offers users infrastructure flexibility, quicker deployment of applications and data, cost effectiveness, adaptation of cloud resources to real needs, enhanced productivity, etc. In spite of these beneficial factors, several disadvantages to the widespread adoption of cloud computing remain. Among them, assurance of the correctness of the outsourced data and the matter of concealment take the major part. In order to avoid a security hazard for the outsourced data, we propose dynamic audit services that enable integrity verification of data. An Interactive Proof System (IPS) is introduced to protect the privacy of the data. The DataOwner stores a large amount of data in the cloud after encrypting the data for auditing purposes. An Authorized Application (AA) manipulates the outsourced data and helps the cloud users to access the services. Our system provides secure auditing while the data owner outsources the data to the cloud. After performing auditing operations, security solutions are enhanced for the purpose of detecting malicious users with the help of a Certificate Authority, using hash values and a TimeStamp.
Keywords: authorisation; cloud computing; data integrity; data privacy; DataOwner; IPS; TimeStamp; authorized application; certificate authority; cloud computing environment; data encryption; data integrity verification; data storage; dynamic audit services; hash values; interactive proof system; outsourced data repository security; Cryptography; Data privacy; Audit service; Certificate Authority; Data security; Dynamic operations; Hash Verification; Time stamp (ID#: 15-6013)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019407&isnumber=7019129

 

Miao Yingkai; Chen Jia, “A Kind of Identity Authentication under Cloud Computing Environment,” Intelligent Computation Technology and Automation (ICICTA), 2014 7th International Conference on, vol., no., pp. 12, 15, 25-26 Oct. 2014. doi:10.1109/ICICTA.2014.10
Abstract: An identity authentication scheme is proposed combining with biometric encryption, public key cryptography of homomorphism and predicate encryption technology under the cloud computing environment. Identity authentication scheme is proposed based on the voice and homomorphism technology. The scheme is divided into four stages, register and training template stage, voice login and authentication stage, authorization stage, and audit stage. The results prove the scheme has certain advantages in four aspects.
Keywords: authorisation; cloud computing; public key cryptography; audit stage; authorization stage; biometric encryption; cloud computing environment; encryption technology; homomorphism technology; identity authentication scheme; public key cryptography; register and training template stage; voice login and authentication stage; voice technology; Authentication; Cloud computing; Encryption; Servers; Spectrogram; Training; homomorphism; identity authentication (ID#: 15-6014)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7003473&isnumber=7003462

 

Yen-Hung Kuo; Tzu-Wei Yeh; Guang-Yan Zheng; Jyun-Kai Wu; Chao-Chin Yang; Jia-Ming Lin, “Open Stack Secure Enterprise File Sync and Share Turnkey Solution,” Cloud Computing Technology and Science (CloudCom), 2014 IEEE 6th International Conference on, vol., no., pp. 1015, 1020, 15-18 Dec. 2014. doi:10.1109/CloudCom.2014.17
Abstract: The Enterprise File Sync and Share (EFSS) is one of the most important services to provide enterprises' employees with cloud file sync, share, and collaboration services. To take enterprises' concerns into account, such as security, privacy, compliance, and regulation, the existing EFSS solutions are either using private (on-premise) or hybrid cloud service model to provide their services. They usually emphasize that files stored in the solutions are encrypted on transfer and at rest and events occurred in the service are logged as the audit trail. However, support of data encryption and audit trail are not capable of protecting enterprise sensitive data from not well addressed security issues of the EFSS service. The security issues, including employee privacy protection, management of share links and synchronized cloud files, and the secure enterprise directory integration, are pointed out in this article. To address these issues, this work proposes and develops a scalable Secure EFSS service which can be deployed on the on-premise Open Stack cloud infrastructure to securely provide employees with EFSS service. Designs of an integrated security approach are introduced in this article, including data and metadata isolations, Distinct Share Link utility, encryption key management for personal and shared files, sandbox-based cloud file synchronization, and out-of-band authentication method.
Keywords: cloud computing; security of data; data encryption; employee privacy protection; encryption key management; integrated security approach; open stack secure enterprise file sync and share turnkey Solution; out-of-band authentication method; sandbox-based cloud file synchronization; scalable secure EFSS service; secure enterprise directory integration; security issues; share link utility; share links; synchronized cloud files; Authentication; Databases; Encryption; File systems; Synchronization; Open Stack; enterprise file sync and share; security (ID#: 15-6015)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7037799&isnumber=7036227

 

Jin Li; Xiaofeng Chen; Xhafa, F.; Barolli, L., “Secure Deduplication Storage Systems with Keyword Search,” Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, vol., no., pp. 971, 977, 13-16 May 2014. doi:10.1109/AINA.2014.118
Abstract: Data deduplication is an attractive technology to reduce storage space and upload bandwidth for increasing vast amount of duplicated and redundant data. In a cloud storage system with data deduplication, duplicate copies of data will be eliminated and only one copy will be kept in the storage. To protect the confidentiality of sensitive data while supporting deduplication, the convergent encryption technique has been proposed to encrypt the data before outsourcing. However, the issue of keyword search over encrypted data in deduplication storage system has to be addressed for efficient data utilization. This paper firstly proposes two constructions which support secure keyword search in this scenario. In these constructions, the integrity of the data can be realized by just checking the convergent key, without other traditional integrity auditing mechanisms. Security analysis demonstrates that our keyword search schemes are secure in terms of the definitions specified in the proposed security model.
Keywords: cloud computing; cryptography; data compression; data integrity; secure storage; cloud storage system; convergent key; data deduplication; data utilization; encryption technique; integrity auditing mechanisms; secure deduplication storage systems; secure keyword search; security analysis; storage space; Cloud computing; Encryption; Indexes; Keyword search; Servers; Deduplication; distributed storage system; reliability; secret sharing (ID#: 15-6016)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838769&isnumber=6838626

 

Rathanam, G.J.; Sumalatha, M.R., “Dynamic Secure Storage System in Cloud Services,” Recent Trends in Information Technology (ICRTIT), 2014 International Conference on, vol., no., pp. 1, 5, 10-12 April 2014. doi:10.1109/ICRTIT.2014.6996175
Abstract: Nowadays storage systems are now exposed to wide numbers of threat while handling the information in cloud service. Therefore we design a secured storage system for ensuring security and dynamic operation in the environment. The data is stored in the server using dynamic data operation with partitioning method. Improved Adaptive Huffman Technique and Improved RSA Double Encryption Technique also used which enables the user to access process in a secure manner and efficient way. The system does a verification to prevent the loss of data and ensures security with storage integrity method. An efficient distributed storage auditing mechanism is implemented to overcome the limitations in handling the data loss. Security in this service enforces error localization and easy detection of misbehaving server. In nature the data are dynamic in cloud service; hence this process aims to store the data with reduced computational cost, space and time consumption.
Keywords: cloud computing; public key cryptography; storage management; RSA double encryption technique; adaptive Huffman technique; cloud services; distributed storage auditing mechanism; dynamic data operation; dynamic secure storage system; partitioning method; storage integrity method; Cloud computing; Encryption; Secure storage; Servers; Spread spectrum communication; Vegetation; Data Security; Data Storage; Huffman Technique; Partitioning; RSA Technique; Ternary Tree (ID#: 15-6017)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6996175&isnumber=6996087

 

Balamurugan, B.; Venkata Krishna, P.; Rajya, L.G.V.; Saravana Kumar, N., “Layered Storage Architecture for Health System Using Cloud,” Advanced Communication Control and Computing Technologies (ICACCCT), 2014 International Conference on, vol., no., pp. 1795, 1800, 8-10 May 2014. doi:10.1109/ICACCCT.2014.7019419
Abstract: Cloud computing is a paradigm shift from traditional computing offers services that can use ubiquitous internet to transmit data and other functionalities. The health care plays a vital role involved in mundane activities of human life. In proposed work, the efficient framework for health care system using Cloud is achieved through the segmentation algorithm and deployed. Albeit, the layered design of data storage is well-organized for huge critical data and developed with high level security and access control. The framework overcomes the impact created by attacks withstand along with security and privacy flaws of the Cloud. The integration and data sharing of hospital information is made possible using hybrid Cloud. Our framework utilizes the algebraic way of data possession for Cloud auditing those results is cost effective method for overall health care systems with high level of security standards.
Keywords: auditing; authorisation; cloud computing; data privacy; medical information systems; standards; ubiquitous computing; access control; cloud auditing; data possession; health care system; high level security; hospital information data sharing; human life; hybrid cloud; layered data storage design; layered storage architecture; mundane activities; paradigm shift; privacy flaws; security flaws; security standards; segmentation algorithm; traditional computing; ubiquitous internet; Encryption; Hospitals; Servers; Standards; Cloud; Cryptology; Data storage system; Information security; Storage area networks; privacy (ID#: 15-6018)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019419&isnumber=7019129

 

Albakoush, Y.A.; Ismail, R.; Abu Bakar, A., “A Hybrid Architecture for Database Intrusion Preventer,” Information Technology and Multimedia (ICIMU), 2014 International Conference on, vol., no., pp. 21, 26, 18-20 Nov. 2014. doi:10.1109/ICIMU.2014.7066597
Abstract: Database management systems come with several security mechanisms such as access control, encryption and auditing. These mechanisms are very important to protect databases against various threats. However, such mechanisms may not be sufficient in dealing with database intrusion. Therefore, the prevention of any intrusion attempts as well as detecting them pose an important research issue. Despite the proposal of many techniques previously, the design and implementation of database reliable intrusion detection or prevention systems remains a substantial demand and a vital research topic. In this paper, a Hybrid Architecture for Database Intrusion Preventer (HyDBIP) has been proposed. The proposed system comprises of Signature-based, Anomaly-based and Anomaly Query Classifier (AQC) models work together to complement each other.
Keywords: SQL; authorisation; database management systems; digital signatures; pattern classification; AQC models; HyDBIP; access control; anomaly query classifier; anomaly-based classifier; auditing; database intrusion preventer; database management systems; database protection; encryption; hybrid architecture; intrusion detection; intrusion prevention systems; security mechanisms; signature-based classifier; Database systems; Information technology; Intrusion detection; Sensitivity; Servers; Anomly-based; Database Security; Intrusion Detection; Intrusion Prevention; SQL Injection; Signature-based (ID#: 15-6019)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7066597&isnumber=7066586

 

Yong Wang; Vangury, K.; Nikolai, J., “MobileGuardian: A Security Policy Enforcement Framework for Mobile Devices,” Collaboration Technologies and Systems (CTS), 2014 International Conference on,  vol., no., pp. 197, 202, 19-23 May 2014. doi:10.1109/CTS.2014.6867564
Abstract: Mobile devices such as smartphones and tablets are widely used for personal and business uses. Compared to personal mobile subscribers, enterprises have more concerns about mobile device security. The challenges an enterprise may face include unlimited access to corporate resources, lack of encryption on corporate data, unwillingness to backup data, etc. Many of these issues have been resolved by auditing and enforcing security policies in enterprise networks. However, it is difficult to audit and enforce security policies on mobile devices. A substantial discrepancy exists between enterprise security policy administration and security policy enforcement. In this paper, we propose a framework, MobileGuardian, for security policy enforcement on mobile devices. Security policy enforcement is further divided into four issues, i.e., sensitive data isolation, security policy formulation, security policy testing, and security policy execution. The proposed framework is secure, flexible, and scalable. It can be adopted on any mobile platforms to implement access control, data confidentiality, security, and integrity.
Keywords: mobile computing; security of data; MobileGuardian framework; access control; data confidentiality; data integrity; data security; enterprise networks; enterprise security policy administration; mobile device security; mobile devices; personal mobile subscribers; security policy enforcement framework; security policy execution; security policy formulation; security policy testing; sensitive data isolation; smart phones; tablet computers; Access control; Business; Mobile communication; Smart phones; Testing; enforcement; formulation; isolation; mobile device; security policy; testing (ID#: 15-6020)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6867564&isnumber=6867522
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Homomorphism, 2014

 

 
SoS Logo

Homomorphism

2014


Homomorphic encryption shows promise, but continues to demand a heavy processing load in practice. Research into homomorphism is focused on creating greater efficiencies, as well as elaborating on the underlying theory. The work cited here was presented in 2014.
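As a concrete, hedged illustration of the additive homomorphism several of the papers below build on (a textbook sketch under toy assumptions, not any cited paper's construction), the following minimal Paillier implementation shows that multiplying two ciphertexts modulo n^2 yields an encryption of the sum of the plaintexts. The primes are illustrative only and far too small for real use.

```python
# Minimal textbook Paillier, illustrating additive homomorphism:
# E(m1) * E(m2) mod n^2 decrypts to m1 + m2 mod n.
# Toy primes for illustration only -- never use in practice.
import math
import random

p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1                        # standard choice of generator
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)             # since L(g^lam mod n^2) = lam when g = n + 1

def L(u):
    return (u - 1) // n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:   # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

m1, m2 = 41, 100
c_sum = (encrypt(m1) * encrypt(m2)) % n2   # add under encryption
assert decrypt(c_sum) == (m1 + m2) % n
```

The heavy cost the intro mentions is visible even here: every encryption is an exponentiation modulo n^2, which is why efficiency remains an active research theme.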



Ebrahimi, J.B.; Siavoshani, M.J., “Linear Index Coding via Graph Homomorphism,” Control, Decision and Information Technologies (CoDIT), 2014 International Conference on, vol., no., pp. 158, 163, 3-5 Nov. 2014. doi:10.1109/CoDIT.2014.6996886
Abstract: In [1], [2] it is shown that the minimum broadcast rate of a linear index code over a finite field Fq is equal to an algebraic invariant of the underlying digraph, called minrankq. In [3], it is proved that for F2 and any positive integer k, minrankq(G) ≤ k if and only if there exists a homomorphism from the complement of the graph G to the complement of a particular undirected graph family called “graph family {Gk}”. As observed in [2], by combining these two results one can relate the linear index coding problem of undirected graphs to the graph homomorphism problem. In [4], a direct connection between linear index coding problem and graph homomorphism problem is introduced. In contrast to the former approach, the direct connection holds for digraphs as well and applies to any field size. More precisely, in [4], a graph family {Hkq} has been introduced and shown that whether or not the scalar linear index of a digraph G is less than or equal to k is equivalent to the existence of a graph homomorphism from the complement of G to the complement of Hkq. In this paper, we first study the structure of the digraphs Hkq defined in [4]. Analogous to the result of [2] about undirected graphs, we prove that Hkq are vertex transitive digraphs. Using this, and by applying a lemma of Hell and Nesetril [5], we derive a class of necessary conditions for digraphs G to satisfy lindq(G) ≤ k. Particularly, we obtain new lower bounds on lindq(G). Our next result is about the computational complexity of scalar linear index of a digraph. It is known that deciding whether the scalar linear index of an undirected graph is equal to k or not is NP-complete for k ≥ 3 and is polynomially decidable for k = 1, 2 [3]. For digraphs, it is shown in [6] that for the binary alphabet, the decision problem for k = 2 is NP-complete. We use graph homomorphism framework to extend this result to arbitrary alphabet.
Keywords: computational complexity; directed graphs; encoding; NP-complete; algebraic invariant; computational complexity; graph homomorphism; linear index coding; minimum broadcast rate; scalar linear index; undirected graph; vertex transitive digraph; Color; Computational complexity; Educational institutions; Encoding; Indexes; Receivers; Vectors; Index coding; computational complexity of the minrank; graph homomorphism; linear index coding; minrank of a graph (ID#: 15-6021)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6996886&isnumber=6996851

 

Ebrahimi, J.B.; Siavoshani, M.J., “On Index Coding and Graph Homomorphism,” Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 541, 545, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970890
Abstract: In this work, we study the problem of index coding from graph homomorphism perspective. We show that the minimum broadcast rate of an index coding problem for different variations of the problem such as non-linear, scalar, and vector index code, can be upper bounded by the minimum broadcast rate of another index coding problem when there exists a homomorphism from the complement of the side information graph of the first problem to that of the second problem. As a result, we show that several upper bounds on scalar and vector index code problem are special cases of one of our main theorems. For the linear scalar index coding problem, it has been shown in [1] that the binary linear index of a graph is equal to a graph theoretical parameter called minrank of the graph. For undirected graphs, in [2] it is shown that minrank(G) = k if and only if there exists a homomorphism from G to a predefined graph Gk. Combining these two results, it follows that for undirected graphs, all the digraphs with linear index of at most k coincide with the graphs G for which there exists a homomorphism from G to Gk. In this paper, we give a direct proof to this result that works for digraphs as well. We show how to use this classification result to generate lower bounds on scalar and vector index. In particular, we provide a lower bound for the scalar index of a digraph in terms of the chromatic number of its complement. Using our framework, we show that by changing the field size, linear index of a digraph can be at most increased by a factor that is independent from the number of the nodes.
Keywords: binary codes; directed graphs; graph theory; linear codes; nonlinear codes; binary linear index; chromatic number; digraphs; field size; graph homomorphism; graph theoretical parameter; linear scalar index coding problem; minimum broadcast rate; minrank; nonlinear codes; scalar codes; side information graph; undirected graphs; upper bound; vector index code problem; Educational institutions; Encoding; Indexes; Network coding; Receivers; Upper bound; Vectors (ID#: 15-6022)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970890&isnumber=6970773

 

Guozi Sun; Siqi Huang; Wan Bao; Yitao Yang; Zhiwei Wang, “A Privacy Protection Policy Combined with Privacy Homomorphism in the Internet of Things,” Computer Communication and Networks (ICCCN), 2014 23rd International Conference on, vol., no., pp. 1, 6, 4-7 Aug. 2014. doi:10.1109/ICCCN.2014.6911856
Abstract: Recently, IOT (Internet of Things) develops very rapidly. However, the personal privacy protection is one of directly important factors that impact the large-scale applications of IOT. To solve this problem, this paper proposes a privacy protection policy based on privacy homomorphism. It can protect the security of personal information well by processing the needs of users without acquiring of plaintext. In another aspect, it also greatly improves the performance of the original multiplication homomorphism algorithm.
Keywords: Internet; Internet of Things; data privacy; multiplication homomorphism algorithm; personal information; personal privacy protection; plaintext; privacy homomorphism; privacy protection policy; Algorithm design and analysis; Encryption; Privacy; IOT; homomorphism; personal privacy; security (ID#: 15-6023)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911856&isnumber=6911704

 

Miao Yingkai; Chen Jia, “A Kind of Identity Authentication under Cloud Computing Environment,” Intelligent Computation Technology and Automation (ICICTA), 2014 7th International Conference on, vol., no., pp. 12, 15, 25-26 Oct. 2014. doi:10.1109/ICICTA.2014.10
Abstract: An identity authentication scheme is proposed combining with biometric encryption, public key cryptography of homomorphism and predicate encryption technology under the cloud computing environment. Identity authentication scheme is proposed based on the voice and homomorphism technology. The scheme is divided into four stages, register and training template stage, voice login and authentication stage, authorization stage, and audit stage. The results prove the scheme has certain advantages in four aspects.
Keywords: authorisation; cloud computing; public key cryptography; audit stage; authorization stage; biometric encryption; cloud computing environment; encryption technology; homomorphism technology; identity authentication scheme; public key cryptography; register and training template stage; voice login and authentication stage; voice technology; Authentication; Cloud computing; Encryption; Servers; Spectrogram; Training; homomorphism; identity authentication (ID#: 15-6024)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7003473&isnumber=7003462

 

Shu Qin Ren; Tany, B.H.M.; Sundaram, S.; Taining Wang; Khin Mi Mi Aung, “Homomorphic Exclusive-Or Operation Enhance Secure Searching on Cloud Storage,” Cloud Computing Technology and Science (CloudCom), 2014 IEEE 6th International Conference on, vol., no., pp. 989, 994, 15-18 Dec. 2014. doi:10.1109/CloudCom.2014.86
Abstract: Enterprise cloud tenants would store their outsourced cloud data in encrypted form for data privacy and security. However, flexible data access functions such as data searching is usually sacrificed as a result. Thus, enterprise tenants demand secure data retrieval and computation solution from the cloud provider, which will allow them to utilize cloud services without the risks of leaking private data to outsiders and even service providers. In this paper, we propose an exclusive-or (XOR) homomorphism encryption scheme to support secure keyword searching on encrypted data. First, this scheme specifies a new data protection method by encrypting the data and randomizing it by performing XOR operation with a random bit-string. Second, this scheme can effectively protect data-in-transit against passive attack such as cipher text analysis due to the randomization. Third, this scheme is lightweight and only requires a symmetric encryption scheme and bitwise operations, which requires processing time in the order of milliseconds.
Keywords: cloud computing; cryptography; data protection; information retrieval; outsourcing; storage management; XOR homomorphism encryption scheme; bitwise operations; cloud services; computation solution; data access functions; data privacy; data retrieval; data security; data-in-transit protection; enterprise cloud tenants; homomorphic exclusive-or operation enhance secure searching; outsourced cloud data storage; passive attack; random bit-string; secure keyword searching; symmetric encryption scheme; Ciphers; Cloud computing; Electronic mail; Encryption; Servers; Silicon; Cloud storage; Secure searching; XOR-homomorphism encryption (ID#: 15-6025)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7037795&isnumber=7036227
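The XOR-homomorphic property underlying this kind of scheme is easy to demonstrate. The sketch below is generic Python, not the authors' implementation: XOR-ing two ciphertexts randomized with one-time bit-strings yields an encryption of the XOR of the plaintexts under the XOR of the keys.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(msg: bytes, key: bytes) -> bytes:
    # One-time-pad-style XOR encryption with a random bit-string.
    return xor_bytes(msg, key)

# Two messages encrypted under independent random keys.
m1, m2 = b"cloudkey", b"searches"
k1, k2 = secrets.token_bytes(8), secrets.token_bytes(8)
c1, c2 = encrypt(m1, k1), encrypt(m2, k2)

# XOR-homomorphic property: the XOR of ciphertexts equals the XOR of
# the plaintexts encrypted under the XOR of the keys.
assert xor_bytes(c1, c2) == encrypt(xor_bytes(m1, m2), xor_bytes(k1, k2))
```

This is what lets a server compare randomized ciphertexts (for keyword matching) without ever seeing a plaintext.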

 

Wadhwa, D.; Dabas, P., “A Coherent Dynamic Remote Integrity Check on Cloud Data Utilizing Homomorphic Cryptosystem,” Confluence The Next Generation Information Technology Summit (Confluence), 2014 5th International Conference, vol., no., pp. 91, 96, 25-26 Sept. 2014. doi:10.1109/CONFLUENCE.2014.6949264
Abstract: Checking remote data integrity in climacteric cloud computing infrastructure is a valuable matter of concern. As the idea of cloud computing entered into wide implementation, data access becomes a major security issue. One of the various privacy concerns that are possibly taken into consideration relates to the maintenance of cloud data integrity. Directing users to check data integrity under public auditability is a task to be greatly considered. It is made through a third party verifier who provides the client a proof of whether the data placed on the server has been altered or not. We proposed a dynamic data integrity checking mechanism in which the proof of correct data possession can be made from the server on demand. The verifier, on behalf of the client, can make a call to the server for verifying the correctness of the stored data at any time. This protocol is designed keeping in mind the dynamic nature of the cloud, as the data placed on the server changes very frequently. Thus, a dynamic data integrity approach is adopted here which includes the RSA encryption system as a method for public cryptosystem. A multiplicative homomorphic property, an idea towards integrity checking of cloud data, is implemented here. The beauty of applying and including the homomorphism property in our protocol is that the proof can be generated by the third party verifier without having any clue of the original data. Our research is dually aimed at (a) generating proof of correct data possession in a dynamic cloud environment and (b) providing high security of cloud data through a homomorphic cryptosystem. The proposed technique is implemented in a very productive and cost effective manner. The testing results of the proposed work are propitious and favorable.
Keywords: cloud computing; cryptographic protocols; data integrity; public key cryptography; RSA encryption system; climacteric cloud computing infrastructure; cloud data; coherent dynamic remote integrity check; correct data possession proof; dynamic data integrity checking mechanism; homomorphic cryptosystem; multiplicative homomorphic property; proof generation; public cryptosystem; server on demand; stored data correctness verification; third party verifier; Ciphers; Cloud computing; Educational institutions; Protocols; Servers; RSA cryptosystem; data possession; dynamic cloud data; homomorphism; integrity checking; verifier (ID#: 15-6026)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6949264&isnumber=6949036
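The multiplicative homomorphic property of RSA that such protocols exploit can be checked directly. The toy parameters below are illustrative only (real deployments use large primes), and the code is a generic sketch of the property, not the authors' protocol.

```python
# Toy RSA with small primes; illustration only, not secure parameters.
p, q = 61, 53
n = p * q                   # 3233
phi = (p - 1) * (q - 1)     # 3120
e = 17
d = pow(e, -1, phi)         # modular inverse (Python 3.8+)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

m1, m2 = 7, 11
# Multiplicative homomorphism: Enc(m1) * Enc(m2) = Enc(m1 * m2) mod n,
# so a product relation can be verified without decrypting the factors.
assert (enc(m1) * enc(m2)) % n == enc((m1 * m2) % n)
assert dec((enc(m1) * enc(m2)) % n) == (m1 * m2) % n
```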

 

Gobel, A.; Goldberg, L.A.; McQuillan, C.; Richerby, D.; Yamakami, T., “Counting List Matrix Partitions of Graphs,” Computational Complexity (CCC), 2014 IEEE 29th Conference on, vol., no., pp. 56, 65, 11-13 June 2014. doi:10.1109/CCC.2014.14
Abstract: Given a symmetric DxD matrix M over {0, 1, *}, a list M-partition of a graph G is a partition of the vertices of G into D parts which are associated with the rows of M. The part of each vertex is chosen from a given list in such a way that no edge of G is mapped to a 0 in M and no non-edge of G is mapped to a 1 in M. Many important graph-theoretic structures can be represented as list M-partitions including graph colourings, split graphs and homogeneous sets, which arise in the proofs of the weak and strong perfect graph conjectures. Thus, there has been quite a bit of work on determining for which matrices M computations involving list M-partitions are tractable. This paper focuses on the problem of counting list M-partitions, given a graph G and given lists for each vertex of G. We give an algorithm that solves this problem in polynomial time for every (fixed) matrix M for which the problem is tractable. The algorithm relies on data structures such as sparse-dense partitions and sub cube decompositions to reduce each problem instance to a sequence of problem instances in which the lists have a certain useful structure that restricts access to portions of M in which the interactions of 0s and 1s is controlled. We show how to solve the resulting restricted instances by converting them into particular counting constraint satisfaction problems (#CSPs) which we show how to solve using a constraint satisfaction technique known as "arc-consistency". For every matrix M for which our algorithm fails, we show that the problem of counting list M-partitions is #P-complete. Furthermore, we give an explicit characterisation of the dichotomy theorem — counting list M-partitions is tractable (in FP) if and only if the matrix M has a structure called a derectangularising sequence. Finally, we show that the meta-problem of determining whether a given matrix has a derectangularising sequence is NP-complete.
Keywords: computational complexity; constraint satisfaction problems; data structures; directed graphs; graph colouring; matrix algebra; #P-complete problem; CSPs; NP-complete problem; arc-consistency; counting constraint satisfaction problems; counting list matrix partitions; data structures; derectangularising sequence; graph colourings; graph-theoretic structures; homogeneous sets; meta-problem; perfect graph conjectures; polynomial time; sparse-dense partitions; split graphs; subcube decompositions; undirected graph; Complexity theory; Data structures; Educational institutions; Europe; Partitioning algorithms; Standards; Symmetric matrices; counting complexity; dichotomy theorem; graph algorithms; graph homomorphism (ID#: 15-6027)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875475&isnumber=6875460
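The counting problem itself is simple to state. The sketch below is a brute-force counter (exponential time, unlike the paper's polynomial-time algorithm) applied to the split-graph matrix M mentioned in the abstract and a 3-vertex path; it is purely illustrative.

```python
from itertools import product

def count_list_M_partitions(M, adj, lists):
    """Count assignments of vertices to parts (rows of M) such that no
    edge maps to a '0' entry of M and no non-edge maps to a '1' entry,
    with each vertex restricted to its list."""
    n = len(adj)
    count = 0
    for assign in product(*(lists[v] for v in range(n))):
        ok = True
        for u in range(n):
            for v in range(u + 1, n):
                entry = M[assign[u]][assign[v]]
                if adj[u][v] and entry == '0':
                    ok = False
                elif not adj[u][v] and entry == '1':
                    ok = False
                if not ok:
                    break
            if not ok:
                break
        count += ok
    return count

# M for split graphs: part 0 must be a clique ('1'), part 1 an
# independent set ('0'); '*' imposes no constraint between parts.
M = [['1', '*'],
     ['*', '0']]
# Path on 3 vertices: edges 0-1 and 1-2.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
lists = [[0, 1]] * 3
print(count_list_M_partitions(M, adj, lists))  # → 3
```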

 

Thendral, G.; Valliyammai, C., “Dynamic Auditing and Updating Services in Cloud Storage,” Recent Trends in Information Technology (ICRTIT), 2014 International Conference on, vol., no., pp. 1, 6, 10-12 April 2014. doi:10.1109/ICRTIT.2014.6996181
Abstract: The cloud is an innovative service platform. In this computing model, all resources, both hardware and software, are delivered as a service over the Internet. Since the information is outsourced to the cloud server and maintained at an anonymous location, the data may be altered or modified, whether through failures or through the fraudulence of a mischievous server. To achieve data integrity, data verification and auditing techniques must be employed. The proposed work performs dynamic auditing for integrity verification and data dynamics in cloud storage with lower computation and communication cost, using techniques such as tagging, a hash tag table, and arbitrary sampling. It also supports timely anomaly detection and updates to outsourced data.
Keywords: cloud computing; data integrity; formal verification; Internet; anonymous place; auditing techniques; cloud storage; communication cost; computing standard; data dynamics; data integrity verification; dynamic auditing; innovative service platform; mischievous server; updating services; Cloud computing; Cryptography; Data models; Heuristic algorithms; Market research; Protocols; Servers; Audit service; Cloud Storage; Data Dynamics; Homomorphism  (ID#: 15-6028)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6996181&isnumber=6996087
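A minimal sketch of challenge-response auditing with per-block tags and arbitrary sampling is shown below. It uses symmetric HMAC tags rather than the homomorphic constructions discussed elsewhere in this section, so here the verifier must hold the key; all names and parameters are illustrative, not the paper's protocol.

```python
import hashlib
import hmac
import os
import random

def tag_block(key: bytes, index: int, block: bytes) -> bytes:
    # Per-block tag binds the block content to its position.
    return hmac.new(key, index.to_bytes(8, 'big') + block,
                    hashlib.sha256).digest()

# Owner side: split the data into blocks and build a hash-tag table.
key = os.urandom(32)
blocks = [os.urandom(64) for _ in range(100)]
tag_table = {i: tag_block(key, i, b) for i, b in enumerate(blocks)}

# Verifier side: challenge a random sample of block indices; the
# server returns those blocks, which are checked against the tags.
challenge = random.sample(range(len(blocks)), 10)
proof = {i: blocks[i] for i in challenge}          # honest server
assert all(hmac.compare_digest(tag_block(key, i, proof[i]), tag_table[i])
           for i in challenge)

# A corrupted block in the sample is detected.
bad = dict(proof)
bad[challenge[0]] = b'\x00' * 64
assert not all(hmac.compare_digest(tag_block(key, i, bad[i]), tag_table[i])
               for i in challenge)
```

Sampling keeps the audit cost constant per challenge; repeated random challenges catch corruption with high probability.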

 

Legaux, J.; Loulergue, F.; Jubertie, S., “Development Effort and Performance Trade-off in High-Level Parallel Programming,” High Performance Computing & Simulation (HPCS), 2014 International Conference on, vol., no., pp. 162, 169, 21-25 July 2014. doi:10.1109/HPCSim.2014.6903682
Abstract: Research on high-level parallel programming approaches systematically evaluates the performance of applications written using these approaches and informally argues that high-level parallel programming languages or libraries increase the productivity of programmers. In this paper we present a methodology for evaluating the trade-off between the programming effort and the performance of applications developed using different programming models. We apply this methodology to several implementations of a function solving the all nearest smaller values problem. The high-level implementation is based on a new version of the BSP homomorphism algorithmic skeleton.
Keywords: parallel programming; parallelising compilers; software performance evaluation; BSP homomorphism algorithmic skeleton; application performance trade-off; bulk synchronous parallelism; development effort; high-level parallel programming; programming models; Arrays; Libraries; Measurement; Parallel programming; Semantics; Skeleton; C++; algorithmic skeletons; software metrics (ID#: 15-6029)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903682&isnumber=6903651
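For reference, the all nearest smaller values problem used as the paper's benchmark has a classic sequential O(n) stack solution; the paper's contribution is the parallel BSP-skeleton version, not this sketch.

```python
def all_nearest_smaller_values(xs):
    """For each element, return the nearest value to its left that is
    strictly smaller, or None; classic O(n) stack algorithm."""
    stack, result = [], []
    for x in xs:
        # Pop everything not smaller than x; the new top (if any) is
        # the nearest smaller value to the left.
        while stack and stack[-1] >= x:
            stack.pop()
        result.append(stack[-1] if stack else None)
        stack.append(x)
    return result

print(all_nearest_smaller_values([0, 8, 4, 12, 2, 10, 6, 14]))
# → [None, 0, 0, 4, 0, 2, 2, 6]
```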

 

Genest, D.; Legeay, M.; Loiseau, S.; Bechade, C., “A Graphical Language to Query Conceptual Graphs,” Tools with Artificial Intelligence (ICTAI), 2014 IEEE 26th International Conference on, vol., no., pp. 304, 308, 10-12 Nov. 2014. doi:10.1109/ICTAI.2014.53
Abstract: This paper presents a general query language for conceptual graphs. First, we introduce kernel query graphs. A kernel query graph can be used to express an "or" between two sub-graphs, or an "option" on an optional sub-graph. Second, we propose a way to express two kinds of queries (ask and select) using kernel query graphs. Third, the answers of queries are computed by an operation based on graph homomorphism: the projection from a kernel query graph.
Keywords: graph theory; knowledge representation; visual languages; graph homomorphism; graphical language; kernel query graphs; query conceptual graphs; sub-graphs; Database languages; Kernel; Niobium; Standards; Tin; Visualization; Vocabulary; Conceptual graph; Query language; SPARQL (ID#: 15-6030)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984489&isnumber=6983902
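Projection is built on graph homomorphism, which the brute-force sketch below illustrates on plain directed graphs; conceptual-graph projection adds label-compatibility checks on top of this. The code is a generic illustration, not the paper's algorithm.

```python
from itertools import product

def homomorphisms(query, target):
    """All mappings of query vertices to target vertices that preserve
    edges (brute force over all vertex images)."""
    qv = sorted({v for e in query for v in e})
    tv = sorted({v for e in target for v in e})
    found = []
    for images in product(tv, repeat=len(qv)):
        f = dict(zip(qv, images))
        if all((f[a], f[b]) in target for a, b in query):
            found.append(f)
    return found

# Query: a single directed edge; target: a 2-cycle a->b->a.
query = {(1, 2)}
target = {('a', 'b'), ('b', 'a')}
print(len(homomorphisms(query, target)))  # → 2
```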

 

Aditya, S.K.; Premkumar, K.; Anitha, R.; Mukherjee, S., “Combined Security Framework for Multi-Cloud Environment,” Internet Technology and Secured Transactions (ICITST), 2014 9th International Conference for, vol., no., pp. 100, 105, 8-10 Dec. 2014. doi:10.1109/ICITST.2014.7038786
Abstract: Cloud computing is a field which has been rapidly developing over the past few years. The fact that cloud can offer both storage and computation at low rates makes it popular among corporations and IT industries. This also makes it a very attractive proposition for the future. But in spite of its promise and potential, security in the cloud proves to be a cause for concern to the business sector. This is due to the outsourcing of data onto third party managed cloud platform. These security concerns also make the use of cloud services less flexible. In this paper, we provide a secure framework that allows data to be stored securely in the cloud while at the same time allowing operations to be performed on it without any compromise of the sensitive parts of the data. A combination of searchable encryption with Partial Homomorphism is proposed. The strengths and practicality of the suggested solution have been tested experimentally and results are discussed.
Keywords: cloud computing; cryptography; IT industries; business sector; cloud computing; cloud services; combined security framework; data outsourcing; data storage; multicloud environment; partial homomorphism; searchable encryption; third party managed cloud platform; Ciphers; Cloud computing; Databases; Encryption; Cloud Computing; Cloud Security; Homomorphic Encryption; Searchable Encryption; Secure Socket Layer (ID#: 15-6031)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7038786&isnumber=7038754
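One common way to realize searchable encryption, sketched below with illustrative names, is an index keyed by deterministic keyword trapdoors: the server matches opaque tokens without ever learning the keywords. The paper's exact construction may differ; this is a generic sketch.

```python
import hashlib
import hmac
import os

def token(key: bytes, word: str) -> bytes:
    # Deterministic keyword trapdoor; the server only ever sees tokens.
    return hmac.new(key, word.encode(), hashlib.sha256).digest()

key = os.urandom(32)
docs = {1: "alpha beta", 2: "beta gamma", 3: "gamma delta"}

# Client builds an encrypted index mapping tokens to document ids
# (document bodies would be stored separately, encrypted).
index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(token(key, word), set()).add(doc_id)

# Search: the client sends only the token; the server matches it
# against the index without learning the keyword itself.
print(sorted(index.get(token(key, "beta"), set())))  # → [1, 2]
```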

 

Yuan Tian; Al-Rodhaan, M.; Biao Song; Al-Dhelaan, A.; Ting Huai Ma, “Somewhat Homomorphic Cryptography for Matrix Multiplication Using GPU Acceleration,” Biometrics and Security Technologies (ISBAST), 2014 International Symposium on, vol., no., pp. 166, 170, 26-27 Aug. 2014. doi:10.1109/ISBAST.2014.7013115
Abstract: Homomorphic encryption has become a popular research topic since the cloud computing paradigm emerged. This paper discusses the design of a GPU-assisted homomorphic cryptograph for matrix operations. Our proposed scheme is based on an n*n matrix multiplication which is computationally homomorphic. We use a more efficient GPU programming scheme extending the DGHV homomorphism, and prove that the result of verification does not leak any information about the inputs or the output during encryption and decryption. The performance results are obtained from executions on a machine equipped with a GeForce GTX 765M GPU. We use three basic parallel algorithms to form efficient solutions which accelerate the speed of encryption and evaluation. Although fully homomorphic encryption is still not practical for real-world applications at the current stage, this work shows the possibility of improving the performance of homomorphic encryption and moves one step closer to that target.
Keywords: cryptography; graphics processing units; matrix multiplication; parallel algorithms; DGHV homomorphism; GPU acceleration; GPU programming; GeForce GTX 765M GPU; decryption; homomorphic cryptography; Acceleration; Educational institutions; Encryption; Graphics processing units; Public key; Cloud; Cryptography; GPU; Homomorphic encryption; Matrix multiplication; Privacy; Security (ID#: 15-6032)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013115&isnumber=7013076
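The DGHV scheme that this work extends can be illustrated with a toy symmetric variant over the integers. The parameters below are far too small to be secure and serve only to show why addition and multiplication of ciphertexts carry through, as long as the accumulated noise stays below the secret modulus.

```python
import random

# Toy DGHV-style somewhat homomorphic encryption over the integers:
# c = m + 2r + p*q, with odd secret p; decryption is (c mod p) mod 2.
p = 10007  # odd secret key (illustrative size only)

def enc(m: int) -> int:
    r = random.randint(1, 10)        # small noise
    q = random.randint(100, 1000)
    return m + 2 * r + p * q

def dec(c: int) -> int:
    return (c % p) % 2

m1, m2 = 1, 0
c1, c2 = enc(m1), enc(m2)
# Addition of ciphertexts is XOR of the plaintext bits; multiplication
# is AND -- valid while the noise term stays well below p.
assert dec(c1 + c2) == (m1 ^ m2)
assert dec(c1 * c2) == (m1 & m2)
```

GPU acceleration, as in the paper, targets exactly these bulk additions and multiplications on large ciphertexts.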

 

Soonhwa Sung, “Confidential Aggregation for Wireless Transmissions,” Information Networking (ICOIN), 2014 International Conference on, vol., no., pp. 390, 394, 10-12 Feb. 2014. doi:10.1109/ICOIN.2014.6799711
Abstract: Wireless sensor networks require secure data aggregation within a cluster. The previous scheme offered no efficient encrypted-data aggregation in a multi-layer cluster environment. More specifically, that scheme does not support a multi-layer cluster environment because it assumes one-hop communication to the base station, and it pre-installs keys for verification and data aggregation in the Cluster Head before deployment, limiting the flexibility of system deployment and aggregation. Nor does it support dynamic key management, which would bring more flexibility to data aggregation. This paper therefore proposes data confidentiality for wireless sensor transmission with three layers: public, Sensor Key Translation, and confidential, which operate together to solve these problems. The paper extends privacy homomorphism functions to support dynamic data aggregation, and sensors can be moved to another cluster using the three layers.
Keywords: cryptography; pattern clustering; telecommunication network management; wireless sensor networks; base station; cluster head; confidential aggregation; confidential layers; data confidentiality; dynamic data aggregation; dynamic key management; encrypted-data aggregation; multilayer cluster environment; privacy homomorphism functions; public layers; secure data aggregation; sensor key translation layers; wireless sensor network; wireless transmissions; Communication system security; Data privacy; Encryption; Wireless communication; Wireless sensor networks; Cluster Head(CH); confidential aggregation; homomorphic encryption; symmetric homomorphic scheme; three-layer cluster interaction (ID#: 15-6033)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799711&isnumber=6799467
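Additively homomorphic aggregation of the kind privacy homomorphism enables can be sketched with a simple modular key-stream construction (illustrative only, not the paper's three-layer scheme): the cluster head sums ciphertexts without decrypting, and the base station recovers only the total.

```python
import random

M = 2 ** 32  # modulus large enough to hold the aggregate sum

def enc(reading: int, key: int) -> int:
    # Additively homomorphic: Enc(a) + Enc(b) = Enc(a + b) mod M.
    return (reading + key) % M

# Each sensor encrypts its reading with its own pairwise key.
readings = [17, 23, 5, 41]
keys = [random.randrange(M) for _ in readings]
ciphers = [enc(m, k) for m, k in zip(readings, keys)]

# The cluster head aggregates ciphertexts without decrypting them.
agg = sum(ciphers) % M

# The base station, knowing all keys, recovers only the sum.
total = (agg - sum(keys)) % M
assert total == sum(readings)  # 86
```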

 

Yingming Zhao; Yue Pan; Sanchao Wang; Junxing Zhang, “An Anonymous Voting System Based on Homomorphic Encryption,” Communications (COMM), 2014 10th International Conference on, vol., no., pp. 1, 4, 29-31 May 2014. doi:10.1109/ICComm.2014.6866682
Abstract: In this paper we present an electronic voting system based on homomorphic encryption to ensure anonymity, privacy, and reliability in voting. Homomorphic encryption is a subset of privacy homomorphism: it can compute on encrypted data directly, with the result of the operation remaining encrypted. For this reason it has a wide range of applications including secure multi-party computation, database encryption, and electronic voting. We make use of the homomorphic encryption mechanism to design and implement an electronic voting system that supports the separation of privileges among voters, tellers, and announcers. Our experimental results show the system not only ensures anonymity in voting but also prevents cheating during the counting process.
Keywords: algebra; cryptography; data privacy; database management systems; government data processing; algebraic operations; anonymity; anonymous voting system; ciphertext; counting process; database encryption; electronic voting system; homomorphic encryption mechanism; privacy homomorphism; reliability; secure multiparty computation; Additives; Electronic voting; Electronic voting systems; Encryption; Privacy; homomorphic encryption (ID#: 15-6034)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866682&isnumber=6866648
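Additive homomorphism is what makes encrypted tallying work. The sketch below uses a toy Paillier cryptosystem, a common choice for such voting systems (the paper does not necessarily use it, and these primes are far too small for real use): multiplying ballot ciphertexts adds the votes under encryption.

```python
import random
from math import gcd

# Toy Paillier cryptosystem (tiny primes; illustration only).
p, q = 293, 433
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)  # valid because g = n + 1 (Python 3.8+)

def enc(m: int) -> int:
    while True:
        r = random.randrange(1, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# Each ballot encrypts 1 (yes) or 0 (no); the teller multiplies the
# ciphertexts, which adds the votes under encryption.
ballots = [1, 0, 1, 1, 0]
tally_cipher = 1
for b in ballots:
    tally_cipher = (tally_cipher * enc(b)) % n2
print(dec(tally_cipher))  # → 3
```

Because every ballot ciphertext is randomized, identical votes look different on the wire, which is what preserves anonymity.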

 

Drake, R.; Pu, K.Q., “Using Document Space for Relational Search,” Information Reuse and Integration (IRI), 2014 IEEE 15th International Conference on, vol., no., pp. 841, 844, 13-15 Aug. 2014. doi:10.1109/IRI.2014.7051977
Abstract: In this paper, we present a family of methods and algorithms to efficiently integrate text indexing and keyword search from information retrieval to support search in relational databases. We propose a bi-directional transformation that maps relational database instances to document collections. The transformation is shown to be a homomorphism of keyword search. Thus, any search of tuple networks by a keyword query can be efficiently executed as a search for documents, and vice versa. By this construction, we demonstrate that indexing and search technologies developed for documents can naturally be reduced and integrated into relational database systems.
Keywords: indexing; query processing; relational databases; text analysis; bidirectional transformation; document collections; document space; information retrieval; keyword query; keyword search homomorphism; relational database systems; relational search; text indexing; tuple networks; Couplings; Encoding; Indexing; Keyword search; Relational databases; Search problems (ID#: 15-6035)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7051977&isnumber=7051718
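The row-to-document transformation can be sketched in a few lines: each tuple becomes a bag of terms (table name, attribute names, values), and an ordinary inverted index then answers keyword queries over the "documents". This is illustrative code, not the authors' system.

```python
# Sample relational rows as (table, attributes) pairs.
rows = [
    ("employee", {"name": "alice", "dept": "security"}),
    ("employee", {"name": "bob", "dept": "storage"}),
    ("dept", {"dept": "security", "site": "hq"}),
]

def to_document(table, attrs):
    """Map a tuple to a set of searchable terms."""
    terms = {table}
    for k, v in attrs.items():
        terms.add(k)
        terms.add(str(v))
    return terms

docs = [to_document(t, a) for t, a in rows]

# Inverted index from terms to document positions.
index = {}
for i, terms in enumerate(docs):
    for term in terms:
        index.setdefault(term, set()).add(i)

def search(*keywords):
    """Conjunctive keyword search over the documents."""
    hits = set(range(len(docs)))
    for kw in keywords:
        hits &= index.get(kw, set())
    return sorted(hits)

print(search("security"))              # → [0, 2]
print(search("employee", "security"))  # → [0]
```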

 

Chou, R.A.; Bloch, M.R.; Kliewer, J., “Low-Complexity Channel Resolvability Codes for the Symmetric Multiple-Access Channel,” Information Theory Workshop (ITW), 2014 IEEE, vol., no., pp. 466, 470, 2-5 Nov. 2014. doi:10.1109/ITW.2014.6970875
Abstract: We investigate channel resolvability for the l-user multiple-access channel (MAC) with two different families of encoders. The first family consists of invertible extractors, while the second one consists of injective group homomorphisms, and was introduced by Hayashi for the point-to-point channel resolvability. The main benefit of these two families is to provide explicit low-complexity channel resolvability codes in the case of symmetric MACs. Specifically, we provide two examples of families of invertible extractors suitable for MAC resolvability with uniform input distributions, one based on finite-field multiplication, which can be implemented in O(n log n) for a limited range of values of the encoding blocklength n, and a second based on modified Toeplitz matrices, which can be implemented in O(n log n) for a wider range of values of n. We also provide an example of family of injective group homomorphisms based on finite-field multiplication suitable for MAC resolvability with uniform input distributions, which can be implemented in O(n log n) for some values of n.
Keywords: Toeplitz matrices; channel coding; multi-access systems; MAC resolvability; finite-field multiplication; injective group homomorphisms; invertible extractors; low-complexity channel resolvability codes; point-to-point channel resolvability; symmetric multiple access channel; Computers; Electronic mail; Encoding; Random variables; Silicon; Vectors; Zinc (ID#: 15-6036)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970875&isnumber=6970773
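A binary Toeplitz matrix is determined entirely by its first row and first column, which is what makes the construction compact. The naive O(mn) multiplication over GF(2) below illustrates the extractor's structure; the O(n log n) implementations in the paper use faster convolution-style techniques instead.

```python
def toeplitz_hash(first_row, first_col, x):
    """Multiply a binary Toeplitz matrix (given by its first row and
    first column) into bit-vector x over GF(2)."""
    m, n = len(first_col), len(first_row)
    assert len(x) == n and first_row[0] == first_col[0]
    out = []
    for i in range(m):
        bit = 0
        for j in range(n):
            # Entry T[i][j] depends only on the diagonal i - j.
            t = first_col[i - j] if i >= j else first_row[j - i]
            bit ^= t & x[j]
        out.append(bit)
    return out

first_row = [1, 0, 1, 1]   # n = 4 input bits
first_col = [1, 1, 0]      # m = 3 output bits
print(toeplitz_hash(first_row, first_col, [1, 0, 1, 0]))  # → [0, 1, 1]
```

The map is linear over GF(2), the property that makes such families usable as extractors.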
 

 

Feng Zhao; Chao Li; Chun Feng Liu, “A Cloud Computing Security Solution Based on Fully Homomorphic Encryption,” Advanced Communication Technology (ICACT), 2014 16th International Conference on, vol., no., pp. 485, 488, 16-19 Feb. 2014. doi:10.1109/ICACT.2014.6779008
Abstract: With the rapid development of cloud computing, more and more users deposit their data and applications in the cloud. But the development of cloud computing is hindered by many cloud security problems. Cloud computing has many characteristics, e.g. multi-user operation, virtualization, scalability and so on. Because of these new characteristics, traditional security technologies cannot make cloud computing fully safe. Cloud computing security has therefore become a current research focus and is the research direction of this paper [1]. To solve the problem of data security in cloud computing systems, a new data security solution based on a fully homomorphic encryption algorithm is introduced, and its application scenarios are constructed. This new solution is well suited to the processing and retrieval of encrypted data, offers broad applicability, and secures both data transmission and storage in cloud computing [2].
Keywords: cloud computing; cryptography; cloud computing security solution; cloud security problem; data security solution; data storage; data transmission; encrypted data processing; encrypted data retrieval; fully homomorphic encryption algorithm; security technologies; Cloud computing; Encryption; Safety; Cloud security; Cloud service; Distributed implementation; Fully homomorphic encryption (ID#: 15-6037)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779008&isnumber=6778899 
 




I/O Systems Security, 2014

 

 

I/O Systems Security

2014


Management of I/O devices is a critical part of the operating system, and entire I/O subsystems are devoted to its operation. These subsystems contend both with the movement toward standard interfaces for a wide range of devices, which makes it easier to add newly developed devices to existing systems, and with the development of entirely new types of devices to which existing standard interfaces are difficult to apply. Typically, when accessing files, a security check is performed when the file is created or opened, and the check is not repeated unless the file is closed and reopened. If an opened file is passed to an untrusted caller, the security system can, but is not required to, prevent the caller from accessing the file. Research into I/O security addresses the need to provide adequate security economically and at scale. The works cited here were published or presented in 2014.
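The check-at-open semantics described above can be sketched with a hypothetical in-memory API (SecureFile, acl, and contents are all invented for illustration): the permission check runs once, when the handle is created, so whoever later holds the handle can read even after access is revoked.

```python
class SecureFile:
    """Hypothetical illustration of check-at-open semantics."""

    def __init__(self, path, user, acl, contents):
        # Security check performed only at open time.
        if user not in acl.get(path, set()):
            raise PermissionError(f"{user} may not open {path}")
        self._data = contents[path]

    def read(self):
        # No re-check: any caller holding the handle can read, even
        # if permissions changed after the open.
        return self._data

contents = {"secret.txt": b"top secret"}
acl = {"secret.txt": {"alice"}}

handle = SecureFile("secret.txt", "alice", acl, contents)
acl["secret.txt"].clear()   # revoke access after the open
print(handle.read())        # → b'top secret'  (still readable)

try:
    SecureFile("secret.txt", "bob", acl, contents)
except PermissionError as e:
    print(e)                # new opens are denied
```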



Yeongjin Jang, Chengyu Song, Simon P. Chung, Tielei Wang, Wenke Lee; “A11y Attacks: Exploiting Accessibility in Operating Systems,” CCS ’14, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 103-115. doi:10.1145/2660267.2660295
Abstract: Driven in part by federal law, accessibility (a11y) support for disabled users is becoming ubiquitous in commodity OSs. Some assistive technologies such as natural language user interfaces in mobile devices are welcomed by the general user population. Unfortunately, adding new features in modern, complex OSs usually introduces new security vulnerabilities. Accessibility support is no exception. Assistive technologies can be defined as computing subsystems that either transform user input into interaction requests for other applications and the underlying OS, or transform application and OS output for display on alternative devices. Inadequate security checks on these new I/O paths make it possible to launch attacks from accessibility interfaces. In this paper, we present the first security evaluation of accessibility support for four of the most popular computing platforms: Microsoft Windows, Ubuntu Linux, iOS, and Android. We identify twelve attacks that can bypass state-of-the-art defense mechanisms deployed on these OSs, including UAC, the Yama security module, the iOS sandbox, and the Android sandbox. Further analysis of the identified vulnerabilities shows that their root cause is that the design and implementation of accessibility support involves inevitable trade-offs among compatibility, usability, security, and (economic) cost. These trade-offs make it difficult to secure a system against misuse of accessibility support. Based on our findings, we propose a number of recommendations to either make the implementation of all necessary security checks easier and more intuitive, or to alleviate the impact of missing/incorrect checks. We also point out open problems and challenges in automatically analyzing accessibility support and identifying security vulnerabilities.
Keywords: accessibility, assistive technology, attacks (ID#: 15- 6633)
URL:  http://doi.acm.org/10.1145/2660267.2660295


Lisa J. K. Durbeck, Peter M. Athanas, Nicholas J. Macias; “Secure-by-Construction Composable Componentry for Network Processing,” HotSoS ’14, Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 27. doi:10.1145/2600176.2600203
Abstract: Techniques commonly used for analyzing streaming video, audio, SIGINT, and network transmissions, at less-than-streaming rates, such as data decimation and ad-hoc sampling, can miss underlying structure, trends and specific events held in the data. This work presents a secure-by-construction approach  for the upper-end data streams with rates from 10- to 100 Gigabits per second. The secure-by-construction approach strives to produce system security through the composition of individually secure hardware and software components. The proposed network processor can be used not only at data centers but also within networks and onboard embedded systems at the network periphery for a wide range of tasks, including preprocessing and data cleansing, signal encoding and compression, complex event processing, flow analysis, and other tasks related to collecting and analyzing streaming data. Our design employs a four-layer scalable hardware/software stack that can lead to inherently secure, easily constructed specialized high-speed stream processing.  This work addresses the following contemporary problems: (1) There is a lack of hardware/software systems providing stream processing and data stream analysis operating at the target data rates; for high-rate streams the implementation options are limited: all-software solutions can't attain the target rates[1]. 
GPUs and GPGPUs are also infeasible: they were not designed for I/O at 10-100Gbps; they also have asymmetric resources for input and output and thus cannot be pipelined[4, 2], whereas custom chip-based solutions are costly and inflexible to changes, and FPGA-based solutions are historically hard to program[6]; (2) There is a distinct advantage to utilizing high-bandwidth or line-speed analytics to reduce time-to-discovery of information, particularly ones that can be pipelined together to conduct a series of processing tasks or data tests without impeding data rates; (3) There is potentially significant network infrastructure cost savings possible from compact and power-efficient analytic support deployed at the network periphery on the data source or one hop away; (4) There is a need for agile deployment in response to changing objectives; (5) There is an opportunity to constrain designs to use only secure components to achieve their specific objectives.  We address these five problems in our stream processor design to provide secure, easily specified processing for low-latency, low-power 10-100Gbps in-line processing on top of a commodity high-end FPGA-based hardware accelerator network processor. With a standard interface a user can snap together various filter blocks, like Legos™, to form a custom processing chain. The overall design is a four-layer solution in which the structurally lowest layer provides the vast computational power to process line-speed streaming packets, and the uppermost layer provides the agility to easily shape the system to the properties of a given application. Current work has focused on design of the two lowest layers, highlighted in the design detail in Figure 1. 
The two layers shown in Figure 1 are the embeddable portion of the design; these layers, operating at up to 100Gbps, capture both the low- and high frequency components of a signal or stream, analyze them directly, and pass the lower frequency components, residues to the all-software upper layers, Layers 3 and 4; they also optionally supply the data-reduced output up to Layers 3 and 4 for additional processing. Layer 1 is analogous to a systolic array of processors on which simple low-level functions or actions are chained in series[5]. Examples of tasks accomplished at the lowest layer are: (a) check to see if Field 3 of the packet is greater than 5, or (b) count the number of X.75 packets, or (c) select individual fields from data packets. Layer 1 provides the lowest latency, highest throughput processing, analysis and data reduction, formulating raw facts from the stream; Layer 2, also accelerated in hardware and running at full network line rate, combines selected facts from Layer 1, forming a first level of information kernels. Layer 2 is comprised of a number of combiners intended to integrate facts extracted from Layer 1 for presentation to Layer 3. Still resident in FPGA hardware and hardware-accelerated, a Layer 2 combiner is comprised of state logic and soft-core microprocessors. Layer 3 runs in software on a host machine, and is essentially the bridge to the embeddable hardware; this layer exposes an API for the consumption of information kernels to create events and manage state. The generated events and state are also made available to an additional software Layer 4, supplying an interface to traditional software-based systems. As shown in the design detail, network data transitions systolically through Layer 1, through a series of light-weight processing filters that extract and/or modify packet contents. All filters have a similar interface: streams enter from the left, exit the right, and relevant facts are passed upward to Layer 2. 
The output at the end of the chain in Layer 1 shown in Figure 1 can be (a) left unconnected (for purely monitoring activities), (b) redirected into the network (for bent-pipe operations), or (c) passed to another identical processor for extended processing on a given stream (scalability).
Keywords: 100 Gbps, embedded hardware, hardware-software co-design, line-speed processor, network processor, secure-by-construction, stream processing (ID#: 15-6634)
URL:  http://doi.acm.org/10.1145/2600176.2600203


S. T. Choden Konigsmark, Leslie K. Hwang, Deming Chen, Martin D. F. Wong; “System-of-PUFs: Multilevel Security for Embedded Systems,” CODES ’14, Proceedings of the 2014 International Conference on Hardware/Software Codesign and System Synthesis, October 2014, Article No. 27. doi:10.1145/2656075.2656099
Abstract: Embedded systems continue to provide the core for a wide range of applications, from smart-cards for mobile payment to smart-meters for power-grids. The resource and power dependency of embedded systems continues to be a challenge for state-of-the-art security practices. Moreover, even theoretically secure algorithms are often vulnerable in their implementation. With decreasing cost and complexity, physical attacks are an increasingly important threat. This threat led to the development of Physically Unclonable Functions (PUFs), which are disordered physical systems with various applications in hardware security. However, consistent security oriented design of embedded systems remains a challenge, as most formalizations and security models are concerned with isolated physical components or high-level concepts. We provide four unique contributions: (i) We propose a system-level security model to overcome the chasm between secure components and requirements of high-level protocols; this enables synergy between component-oriented security formalizations and theoretically proven protocols. (ii) An analysis of current practices in PUF protocols using the proposed system-level security model; we identify significant issues and expose assumptions that require costly security techniques. (iii) A System-of-PUF (SoP) that utilizes the large PUF design-space to achieve security requirements with minimal resource utilization; SoP requires 64% less gate-equivalent units than recently published schemes. (iv) A multilevel authentication protocol based on SoP which is validated using our system-level security model and which overcomes current vulnerabilities. Furthermore, this protocol offers breach recognition and recovery.
Keywords: hardware authentication, physically unclonable functions (ID#: 15-6635)
URL:  http://doi.acm.org/10.1145/2656075.2656099
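For readers unfamiliar with PUF-based authentication, the following is a minimal, hypothetical sketch of a generic challenge-response protocol; it is not the paper's System-of-PUF construction, and the `SimulatedPUF` class simply hashes a stored secret as a software stand-in for the physical disorder a real PUF exploits.

```python
import hashlib
import os

class SimulatedPUF:
    """Toy stand-in for a PUF: hashes a device-unique secret with the
    challenge. A real PUF derives this mapping from manufacturing
    variation rather than from a stored secret."""
    def __init__(self):
        self._device_entropy = os.urandom(16)  # models physical disorder

    def respond(self, challenge: bytes) -> bytes:
        return hashlib.sha256(self._device_entropy + challenge).digest()

# Enrollment: the verifier records challenge-response pairs (CRPs)
# while the device is still trusted.
device = SimulatedPUF()
crp_db = {}
for _ in range(4):
    c = os.urandom(8)
    crp_db[c] = device.respond(c)

# Authentication: replay a stored challenge; each CRP is used only once.
challenge, expected = crp_db.popitem()
authenticated = device.respond(challenge) == expected
```

Because a cloned device lacks the same physical disorder (here, a different `_device_entropy`), its responses will not match the enrolled CRPs.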


Ahmed M. Azab, Peng Ning, Jitesh Shah, Quan Chen, Rohan Bhutkar, Guruprasad Ganesh, Jia Ma, Wenbo Shen; “Hypervision Across Worlds: Real-time Kernel Protection from the ARM TrustZone Secure World,” CCS ’14, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 90-102. doi:10.1145/2660267.2660350
Abstract: TrustZone-based Real-time Kernel Protection (TZ-RKP) is a novel system that provides real-time protection of the OS kernel using the ARM TrustZone secure world. TZ-RKP is more secure than current approaches that use hypervisors to host kernel protection tools. Although hypervisors provide privilege and isolation, they face fundamental security challenges due to their growing complexity and code size. TZ-RKP puts its security monitor, which represents its entire Trusted Computing Base (TCB), in the TrustZone secure world, a safe, isolated environment dedicated to security services. Hence, the security monitor is safe from attacks that can potentially compromise the kernel, which runs in the normal world. Using the secure world for kernel protection has been crippled by the lack of control over targets that run in the normal world. TZ-RKP solves this prominent challenge using novel techniques that deprive the normal world of the ability to control certain privileged system functions. These functions are forced to route through the secure world for inspection and approval before being executed. TZ-RKP's control of the normal world is non-bypassable. It can effectively stop attacks that aim at modifying or injecting kernel binaries. It can also stop attacks that involve modifying the system memory layout, e.g., through memory double mapping. This paper presents the implementation and evaluation of TZ-RKP, which has gone through rigorous and thorough evaluation of effectiveness and performance. It is currently deployed on the latest models of the Samsung Galaxy series smart phones and tablets, which clearly demonstrates that it is a practical real-world system.
Keywords: arm trustzone, integrity monitoring, kernel protection (ID#: 15-6636)
URL:  http://doi.acm.org/10.1145/2660267.2660350


Tongxin Li, Xiaoyong Zhou, Luyi Xing, Yeonjoon Lee, Muhammad Naveed, XiaoFeng Wang, Xinhui Han; “Mayhem in the Push Clouds: Understanding and Mitigating Security Hazards in Mobile Push-Messaging Services,” CCS ’14, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 978-989. doi:10.1145/2660267.2660302
Abstract: Push messaging is among the most important mobile-cloud services, offering critical support to a wide spectrum of mobile apps. This service needs to coordinate complicated interactions between developer servers and their apps at a large scale, making it error prone. Despite its importance, however, little has been done to understand the security risks of the service. In this paper, we report the first security analysis on those push-messaging services, which reveals the pervasiveness of subtle yet significant security flaws in them, affecting billions of mobile users. Through even the most reputable services like Google Cloud Messaging (GCM) and Amazon Device Messaging (ADM), the adversary running carefully-crafted exploits can steal sensitive messages from a target device, stealthily install or uninstall any apps on it, remotely lock out its legitimate user or even completely wipe out her data. This is made possible by the vulnerabilities in those services' protection of device-to-cloud interactions and the communication between their clients and subscriber apps on the same devices. Our study further brings to light questionable practices in those services, including weak cloud-side access control and extensive use of PendingIntent, as well as the impacts of the problems, which cause popular apps or system services like Android Device Manager, Facebook, Google+, Skype, PayPal etc. to leak out sensitive user data or unwittingly act on the adversary's command. To mitigate this threat, we developed a technique that helps the app developers establish end-to-end protection of the communication with their apps, over the vulnerable messaging services they use.
Keywords: android security, end-to-end protection, mobile cloud security, mobile push-messaging services, security analysis (ID#: 15-6637)
URL: http://doi.acm.org/10.1145/2660267.2660302


Musard Balliu, Mads Dam, Roberto Guanciale; “Automating Information Flow Analysis of Low Level Code,” CCS ’14, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1080-1091. doi:10.1145/2660267.2660322
Abstract: Low level code is challenging: It lacks structure, it uses jumps and symbolic addresses, the control flow is often highly optimized, and registers and memory locations may be reused in ways that make typing extremely challenging. Information flow properties create additional complications: They are hyperproperties relating multiple executions, and the possibility of interrupts and concurrency, and use of devices and features like memory-mapped I/O requires a departure from the usual initial-state final-state account of noninterference. In this work we propose a novel approach to relational verification for machine code. Verification goals are expressed as equivalence of traces decorated with observation points. Relational verification conditions are propagated between observation points using symbolic execution, and discharged using first-order reasoning. We have implemented an automated tool that integrates with SMT solvers to automate the verification task. The tool transforms ARMv7 binaries into an intermediate, architecture-independent format using the BAP toolset by means of a verified translator. We demonstrate the capabilities of the tool on a separation kernel system call handler, which mixes hand-written assembly with gcc-optimized output, a UART device driver and a crypto service modular exponentiation routine.
Keywords: formal verification, information flow security, machine code, symbolic execution (ID#: 15-6638)
URL: http://doi.acm.org/10.1145/2660267.2660322


Frederico Araujo, Kevin W. Hamlen, Sebastian Biedermann, Stefan Katzenbeisser; “From Patches to Honey-Patches: Lightweight Attacker Misdirection, Deception, and Disinformation,” CCS ’14, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 942-953. doi:10.1145/2660267.2660329
Abstract: Traditional software security patches often have the unfortunate side-effect of quickly alerting attackers that their attempts to exploit patched vulnerabilities have failed. Attackers greatly benefit from this information; it expedites their search for unpatched vulnerabilities, it allows them to reserve their ultimate attack payloads for successful attacks, and it increases attacker confidence in stolen secrets or expected sabotage resulting from attacks. To overcome this disadvantage, a methodology is proposed for reformulating a broad class of security patches into honey-patches — patches that offer equivalent security but that frustrate attackers' ability to determine whether their attacks have succeeded or failed. When an exploit attempt is detected, the honey-patch transparently and efficiently redirects the attacker to an unpatched decoy, where the attack is allowed to succeed. The decoy may host aggressive software monitors that collect important attack information, and deceptive files that disinform attackers. An implementation for three production-level web servers, including Apache HTTP, demonstrates that honey-patching can be realized for large-scale, performance-critical software applications with minimal overheads.
Keywords: honeypots, intrusion detection and prevention (ID#: 15-6639)
URL:  http://doi.acm.org/10.1145/2660267.2660329


Shijun Zhao, Qianying Zhang, Guangyao Hu, Yu Qin, Dengguo Feng; “Providing Root of Trust for ARM TrustZone using On-Chip SRAM,” TrustED ’14, Proceedings of the 4th International Workshop on Trustworthy Embedded Devices, November 2014, Pages 25-36. doi:10.1145/2666141.2666145
Abstract: We present the design, implementation and evaluation of the root of trust for the Trusted Execution Environment (TEE) provided by ARM TrustZone based on the on-chip SRAM Physical Unclonable Functions (PUFs). We first implement a building block which provides the foundations for the root of trust: secure key storage and a truly random source. The building block doesn't require on- or off-chip secure non-volatile memory to store secrets, but provides a high level of security: resistance to physical attackers capable of controlling all external interfaces of the system on chip (SoC). Based on the building block, we build the root of trust consisting of seal/unseal primitives for secure services running in the TEE, and a software-only TPM service running in the TEE which provides rich TPM functionalities for the rich OS running in the normal world of TrustZone. The root of trust resists software attackers capable of compromising the entire rich OS. Besides, both the building block and the root of trust run on the powerful ARM processor. In short, we leverage the on-chip SRAM, commonly available on mobile devices, to achieve a low-cost, secure, and efficient design of the root of trust.
Keywords: on-chip sram, root of trust, tpm service, trusted execution environment, trustzone (ID#: 15-6640)
URL:  http://doi.acm.org/10.1145/2666141.2666145


Richard Joiner, Thomas Reps, Somesh Jha, Mohan Dhawan, Vinod Ganapathy; “Efficient Runtime-Enforcement Techniques for Policy Weaving,” FSE 2014, Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, November 2014, Pages 224-234. doi: 10.1145/2635868.2635907
Abstract: Policy weaving is a program-transformation technique that rewrites a program so that it is guaranteed to be safe with respect to a stateful security policy. It utilizes (i) static analysis to identify points in the program at which policy violations might occur, and (ii) runtime checks inserted at such points to monitor policy state and prevent violations from occurring. The promise of policy weaving stems from the possibility of blending the best aspects of static and dynamic analysis components. Therefore, a successful instantiation of policy weaving requires a careful balance and coordination between the two. In this paper, we examine the strategy of using a combination of transactional introspection and statement indirection to implement runtime enforcement in a policy-weaving system. Transactional introspection allows the state resulting from the execution of a statement to be examined and, if the policy would be violated, suppressed. Statement indirection serves as a light-weight runtime analysis that can recognize and instrument dynamically generated code that is not available to the static analysis. These techniques can be implemented via static rewriting so that all possible program executions are protected against policy violations. We describe our implementation of transactional introspection and statement indirection for policy weaving, and report experimental results that show the viability of the approach in the context of real-world JavaScript programs executing in a browser.
Keywords: Security policy enforcement, dynamic runtime verification, speculative execution, statement indirection, transactional introspection (ID#: 15-6641)
URL: http://doi.acm.org/10.1145/2635868.2635907


Hai Van Pham, Philip Moore, Khang Dinh Tran; “Context Matching with Reasoning and Decision Support using Hedge Algebra with Kansei Evaluation,” SoICT ’14, Proceedings of the Fifth Symposium on Information and Communication Technology, December 2014, Pages 202-210. doi:10.1145/2676585.2676598
Abstract: There have been far-reaching societal and geo-political developments in healthcare domains locally, nationally, and globally. Healthcare systems are essentially patient centric and decision driven, with the clinician focus being on the identification of the best treatment options for patients in uncertain environments. Decision-support systems must focus on knowledge-based decisions using both tacit and explicit knowledge. Decisions are generally made using a qualitative approach in which linguistic (semantic) terms are used to express parameters and preferences to determine the optimal decision from a range of alternative decisions. The study presented in this paper proposes an approach which implements context-matching using hedge algebra integrated with Kansei evaluation. The proposed approach is designed to enable quantification of qualitative factors for linguistic variables while accommodating decision-makers' preferences and sensibilities (constraint satisfaction) in decision-making. Experimental results demonstrate that our proposed approach achieves a significant improvement in performance accuracy. In this paper our proposed approach uses the healthcare domain as a use-case; however, we argue that the posited approach will potentially generalize to other domains and systems where knowledge-based decision support is a principal requirement.
Keywords: kansei engineering, context, context-matching, decision-support, hedge algebra, personalization, uncertainty (ID#: 15-6642)
URL:  http://doi.acm.org/10.1145/2676585.2676598


Yossi Azar, Seny Kamara, Ishai Menache, Mariana Raykova, Bruce Shepard; “Co-Location-Resistant Clouds,” CCSW ’14, Proceedings of the 6th edition of the ACM Workshop on Cloud Computing Security, November 2014, Pages 9-20. doi:10.1145/2664168.2664179
Abstract: We consider the problem of designing multi-tenant public infrastructure clouds resistant to cross-VM attacks without relying on single-tenancy or on assumptions about the cloud's servers. In a cross-VM attack (which has been demonstrated recently in Amazon EC2) an adversary launches malicious virtual machines (VMs) that perform side-channel attacks against co-located VMs in order to recover their contents. We propose a formal model in which to design and analyze secure VM placement algorithms, which are online vector bin packing algorithms that simultaneously satisfy certain optimization constraints and notions of security. We introduce and formalize several notions of security, establishing formal connections between them. We also introduce a new notion of efficiency for online bin packing algorithms that better captures their cost in the setting of cloud computing. Finally, we propose a secure placement algorithm that achieves our strong notions of security when used with a new cryptographic mechanism we refer to as a shared deployment scheme.
Keywords: bin packing, cloud computing, co-location attacks, co-location resistance, cross-vm attacks, cryptography, isolation (ID#: 15-6643)
URL:  http://doi.acm.org/10.1145/2664168.2664179


Kim Ramchen, Brent Waters; “Fully Secure and Fast Signing from Obfuscation,” CCS ’14, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 659-673. doi:10.1145/2660267.2660306
Abstract: In this work we explore new techniques for building short signatures from obfuscation. Our goals are twofold. First, we would like to achieve short signatures with adaptive security proofs. Second, we would like to build signatures with fast signing, ideally significantly faster than comparable signatures that are not based on obfuscation. The goal here is to create an “imbalanced” scheme where signing is fast at the expense of slower verification. We develop new methods for achieving short and fully secure obfuscation-derived signatures. Our base signature scheme is built from punctured programming and makes a novel use of the “prefix technique” to guess a signature. We find that our initial scheme has slower performance than comparable algorithms (e.g., EC-DSA); the underlying reason is that the PRG is called roughly ℓ² times for security parameter ℓ. To address this issue we construct a more efficient scheme by adapting the Goldreich-Goldwasser-Micali [16] construction to form the basis for a new puncturable PRF. This puncturable PRF accepts variable-length inputs and has the property that evaluations on all prefixes of a message can be efficiently pipelined. Calls to the puncturable PRF by the signing algorithm therefore make fewer invocations of the underlying PRG, resulting in reduced signing costs. We evaluate our puncturable PRF based signature schemes using a variety of cryptographic candidates for the underlying PRG. We show that the resulting performance on message signing is competitive with that of widely deployed signature schemes.
Keywords: adaptive security, digital signature scheme, obfuscation, punctured programming (ID#: 15-6644)
URL: http://doi.acm.org/10.1145/2660267.2660306


Florian Hahn, Florian Kerschbaum; “Searchable Encryption with Secure and Efficient Updates,” CCS ’14, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 310-320. doi:10.1145/2660267.2660297
Abstract: Searchable (symmetric) encryption allows encryption while still enabling search for keywords. Its immediate application is cloud storage, where a client outsources its files while the (cloud) service provider should search and selectively retrieve them. Searchable encryption is an active area of research and a number of schemes with different efficiency and security characteristics have been proposed in the literature. Any scheme for practical adoption should be efficient (i.e., have sub-linear search time), dynamic (i.e., allow updates), and semantically secure to the greatest possible extent. Unfortunately, efficient, dynamic searchable encryption schemes suffer from various drawbacks. Either they deteriorate from semantic security to the security of deterministic encryption under updates, they require storing information on the client and for deleted files and keywords, or they have very large index sizes. All of this is a problem, since we can expect the majority of data to be later added or changed. Since these schemes are also less efficient than deterministic encryption, they are currently an unfavorable choice for encryption in the cloud. In this paper we present the first searchable encryption scheme whose updates leak no more information than the access pattern, that still has asymptotically optimal search time, a linear, very small, and asymptotically optimal index size, and can be implemented without storage on the client (except the key). Our construction is based on the novel idea of learning the index for efficient access from the access pattern itself. Furthermore, we implement our system and show that it is highly efficient for cloud storage.
Keywords: dynamic searchable encryption, searchable encryption, secure index, update (ID#: 15-6645)
URL: http://doi.acm.org/10.1145/2660267.2660297
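To make the searchable-encryption setting concrete, here is a minimal, hypothetical sketch of a basic PRF-based inverted index, the textbook building block the field starts from; it is not the paper's update-learning scheme, and the document ids are left unencrypted for brevity.

```python
import hashlib
import hmac
import os

def prf(key: bytes, msg: bytes) -> bytes:
    """Pseudorandom function: HMAC-SHA256 of the keyword under the client key."""
    return hmac.new(key, msg, hashlib.sha256).digest()

# Build phase (client side): each keyword is replaced by a PRF token,
# so the server stores the index without learning the keywords.
key = os.urandom(32)
docs = {1: {"cloud", "storage"}, 2: {"cloud", "crypto"}}
index = {}
for doc_id, words in docs.items():
    for w in words:
        index.setdefault(prf(key, w.encode()), set()).add(doc_id)

# Search phase (server side): the client sends only the token; the server
# learns which index entry was touched (the access pattern), not the keyword.
def search(token: bytes) -> set:
    return index.get(token, set())
```

For example, `search(prf(key, b"cloud"))` returns the ids of both documents containing "cloud", while a token for an absent keyword returns the empty set.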


Chongxi Bao, Ankur Srivastava; “A Secure Algorithm for Task Scheduling against Side-channel Attacks,” TrustED ’14, Proceedings of the 4th International Workshop on Trustworthy Embedded Devices, November 2014, Pages 3-12. doi:10.1145/2666141.2666142
Abstract: The problem of ordering task executions has been well studied under power, performance, and thermal constraints. However, it has been pursued less under security concerns. We have observed that different orders of task executions have different side-channel information leakage, thus having different security levels. In this paper, we first model the behavior of the attacker and then propose a secure algorithm for ordering aperiodic tasks that have soft deadlines. Our algorithm can keep a good balance between side-channel information leakage and total lateness. Experimental results show that the attacker could make 38.65% more errors inferring the state of the chip through side-channel analysis if tasks are scheduled using our algorithm as compared to using algorithms without security considerations (like the EDF algorithm).
Keywords: embedded systems, hardware security, side-channel attacks, task scheduling (ID#: 15-6646)
URL: http://doi.acm.org/10.1145/2666141.2666142


Joshua Cazalas, J. Todd McDonald, Todd R. Andel, Natalia Stakhanova; “Probing the Limits of Virtualized Software Protection,” PPREW-4, Proceedings of the 4th Program Protection and Reverse Engineering Workshop, December 2014, Article No. 5. doi:10.1145/2689702.2689707
Abstract: Virtualization is becoming a prominent field of research not only in distributed systems, but also in software protection and obfuscation. Software virtualization has given rise to advanced techniques that may provide intellectual property protection and anti-cloning resilience. We present results of an empirical study that answers whether integrity of execution can be preserved for process-level virtualization protection schemes in the face of adversarial analysis. Our particular approach considers exploits that target the virtual execution environment itself and how it interacts with the underlying host operating system and hardware. We give initial results that indicate such protection mechanisms may be vulnerable at the level where the virtualized code interacts with the underlying operating system. The resolution of whether such attacks can undermine security will help create better detection and analysis methods for malware that also employ software virtualization. Our findings help frame research for additional mitigation techniques using hardware-based integration or hybrid virtualization techniques that can better defend legitimate uses of virtualized software protection.
Keywords: Software protection, obfuscation, process-level virtualization, tamper resistance, virtualized code (ID#: 15-6647)
URL:  http://doi.acm.org/10.1145/2689702.2689707


Anil Kurmus, Robby Zippel; “A Tale of Two Kernels: Towards Ending Kernel Hardening Wars with Split Kernel,” CCS ’14, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1366-1377. doi:10.1145/2660267.2660331
Abstract: Software security practitioners are often torn between choosing performance or security. In particular, OS kernels are sensitive to the smallest performance regressions. This makes it difficult to develop innovative kernel hardening mechanisms: they may inevitably incur some run-time performance overhead. Here, we propose building each kernel function with and without hardening, within a single split kernel. In particular, this allows trusted processes to be run under unmodified kernel code, while system calls of untrusted processes are directed to the hardened kernel code. We show such trusted processes run with no overhead when compared to an unmodified kernel. This allows deferring the decision of making use of hardening to the run-time. This means kernel distributors, system administrators and users can selectively enable hardening according to their needs: we give examples of such cases. Although this approach cannot be directly applied to arbitrary kernel hardening mechanisms, we show cases where it can. Finally, our implementation in the Linux kernel requires few changes to the kernel sources and no application source changes. Thus, it is both maintainable and easy to use.
Keywords: build system, kernel hardening, os security, performance (ID#: 15-6648)
URL:  http://doi.acm.org/10.1145/2660267.2660331


Tamas K. Lengyel, Steve Maresca, Bryan D. Payne, George D. Webster, Sebastian Vogl, Aggelos Kiayias; “Scalability, Fidelity and Stealth in the DRAKVUF Dynamic Malware Analysis System,” ACSAC ’14, Proceedings of the 30th Annual Computer Security Applications Conference. December, 2014, Pages 386-395. doi:10.1145/2664243.2664252
Abstract: Malware is one of the biggest security threats on the Internet today and deploying effective defensive solutions requires the rapid analysis of a continuously increasing number of malware samples. With the proliferation of metamorphic malware the analysis is further complicated as the efficacy of signature-based static analysis systems is greatly reduced. While dynamic malware analysis is an effective alternative, the approach faces significant challenges as the ever increasing number of samples requiring analysis places a burden on hardware resources. At the same time modern malware can both detect the monitoring environment and hide in unmonitored corners of the system.  In this paper we present DRAKVUF, a novel dynamic malware analysis system designed to address these challenges by building on the latest hardware virtualization extensions and the Xen hypervisor. We present a technique for improving stealth by initiating the execution of malware samples without leaving any trace in the analysis machine. We also present novel techniques to eliminate blind-spots created by kernel-mode rootkits by extending the scope of monitoring to include kernel internal functions, and to monitor file-system accesses through the kernel's heap allocations. With extensive tests performed on recent malware samples we show that DRAKVUF achieves significant improvements in conserving hardware resources while providing a stealthy, in-depth view into the behavior of modern malware.
Keywords: dynamic malware analysis, virtual machine introspection (ID#: 15-6649)
URL: http://doi.acm.org/10.1145/2664243.2664252




Information Flow Analysis and Security, 2014

 

 


One key to computer security is the notion of information flow.  It occurs either explicitly or implicitly in a system. The works cited here cover a range of issues and approaches.  All were presented in 2014.



Vance, A., “Flow Based Analysis of Advanced Persistent Threats Detecting Targeted Attacks in Cloud Computing,” Problems of Infocommunications Science and Technology (PIC S&T), 2014 First International Scientific-Practical Conference, pp. 173-176, 14-17 Oct. 2014. doi:10.1109/INFOCOMMST.2014.6992342
Abstract: Cloud computing provides industry, government, and academic users convenient and cost-effective access to distributed services and shared data via the Internet. Due to its distribution of diverse users and aggregation of immense data, cloud computing has increasingly been the focus of targeted attacks. Meta-analysis of industry studies and retrospective research involving cloud service providers reveal that cloud computing is demonstrably vulnerable to a particular type of targeted attack, Advanced Persistent Threats (APTs). APTs have proven to be difficult to detect and defend against in cloud based infocommunication systems. The prevalent use of polymorphic malware and encrypted covert communication channels makes it difficult for existing packet inspecting and signature based security technologies such as firewalls, intrusion detection sensors, and anti-virus systems to detect APTs. In this paper, we examine the application of an alternative security approach which applies an algorithm derived from flow based monitoring to successfully detect APTs. Results indicate that statistical modeling of APT communications can successfully develop deterministic characteristics for detection, a more effective and efficient way to protect against APTs.
Keywords: cloud computing; security of data; statistical analysis; APT; Internet; advanced persistent threats; cloud based infocommunication systems; flow based analysis; flow based monitoring; packet inspection; signature based security technologies; statistical modeling; targeted attack detection; Cloud computing; Computer security; Logic gates; Telecommunication traffic; Vectors; Advanced Persistent Threats; Cloud Computing; Cyber Security; Flow Based Analysis; Threat Detection (ID#: 15-6650)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6992342&isnumber=6992271
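To illustrate the general idea behind flow-based monitoring (not the paper's specific algorithm), the following hypothetical sketch flags flows whose byte counts deviate from a statistical baseline of normal traffic; the baseline values and the 3-sigma threshold are illustrative assumptions.

```python
from statistics import mean, stdev

# Baseline: bytes-per-flow observed during a period of normal operation
# (illustrative values only).
baseline = [1200, 980, 1100, 1050, 990, 1150]
mu, sigma = mean(baseline), stdev(baseline)

def is_suspicious(flow_bytes: float, k: float = 3.0) -> bool:
    """Flag flows deviating more than k standard deviations from baseline,
    e.g. unusually large transfers that might indicate exfiltration, or
    unusually small, regular beacons from covert channels."""
    return abs(flow_bytes - mu) > k * sigma
```

A flow near the baseline, such as 1000 bytes, passes the check, while a 50000-byte flow is flagged; a real detector would model many flow features (duration, inter-arrival times, destinations), not byte counts alone.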


Lokhande, B.; Dhavale, S., “Overview of Information Flow Tracking Techniques Based on Taint Analysis for Android,” Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, vol., no., pp. 749-753, 5-7 March 2014. doi:10.1109/IndiaCom.2014.6828062
Abstract: Smartphones today are a ubiquitous source of sensitive information. Information leakage instances on smartphones are on the rise because of exponential growth in the smartphone market. Android is the most widely used operating system on smartphones. Many information flow tracking and information leakage detection techniques have been developed on the Android operating system. Taint analysis is a commonly used data flow analysis technique which tracks the flow of sensitive information and its leakage. This paper provides an overview of existing information flow tracking techniques based on taint analysis for Android applications. It is observed that static analysis techniques look at the complete program code and all possible paths of execution before the program runs, whereas dynamic analysis looks at the instructions executed during the program run in real time. We provide an in-depth analysis of both static and dynamic taint analysis approaches.
Keywords: Android (operating system); data flow analysis; smart phones; Android; Information leakage instances; data flow analysis technique; dynamic analysis; dynamic taint analysis approaches; exponential smartphone market growth; information flow tracking techniques; information leakage detection techniques; program code; program-run; static analysis techniques; static taint analysis approaches; Androids; Humanoid robots; Operating systems; Privacy; Real-time systems; Security; Smart phones; Android Operating System; Mobile Security; static and dynamic taint analysis (ID#: 15-6651)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828062&isnumber=6827395
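The taint-propagation mechanism common to the surveyed techniques can be sketched in a few lines: mark values produced by sensitive sources, propagate the mark through every operation that reads a marked value, and report when a marked value reaches a sink. This is a toy explicit-flow tracker, not any of the surveyed systems; the instruction format and the `getDeviceId` source are hypothetical.

```python
def propagate_taint(instructions, source_fns):
    """Minimal explicit-flow taint tracker over (dst, op, operands)
    instructions: results of taint-source calls are marked tainted,
    taint propagates through any operation reading a tainted value,
    and tainted values reaching a sink are reported as leaks."""
    tainted, leaks = set(), []
    for dst, op, operands in instructions:
        if op == "sink":
            leaks.extend(v for v in operands if v in tainted)
        elif op == "call" and operands[0] in source_fns:
            tainted.add(dst)
        elif any(v in tainted for v in operands):
            tainted.add(dst)
        else:
            tainted.discard(dst)  # variable overwritten with clean data
    return leaks

# hypothetical mini-program: a device ID flows into a network send
prog = [
    ("imei", "call",   ["getDeviceId"]),   # taint source
    ("msg",  "concat", ["imei", "'hi'"]),  # taint propagates
    ("_",    "sink",   ["msg"]),           # e.g. a socket write
]
leaks = propagate_taint(prog, source_fns={"getDeviceId"})
```

Dynamic analyzers apply this rule to the instructions actually executed; static analyzers apply it over all paths, which is why the two differ in coverage and precision as the paper discusses.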


Zhifei Chen; Lin Chen; Baowen Xu, “Hybrid Information Flow Analysis for Python Bytecode,” Web Information System and Application Conference (WISA), 2014 11th, vol. no., pp. 95, 100, 12-14 Sept. 2014. doi:10.1109/WISA.2014.26
Abstract: Python is widely used to create and manage complex, database-driven websites. However, due to dynamic features such as dynamic typing of variables, Python programs pose a serious security risk to web applications. Most security vulnerabilities result from the fact that unsafe data input reaches security-sensitive operations. To address this problem, information flow analysis for Python programs is proposed to enforce this property. Information flow can capture the fact that a particular value affects another value in the program. In this paper, we present a novel approach for analyzing information flow in Python bytecode, which is a low-level language and is more widely distributed. Our approach performs a hybrid of static and dynamic control/data flow analysis. Static analysis is used to study implicit flow, while dynamic analysis efficiently tracks execution information and determines definition-use pairs. To the best of our knowledge, it is the first such analysis for Python bytecode.
Keywords: authoring languages; data flow analysis; security of data; Python bytecode; Python programs; dynamic analysis; hybrid information flow analysis; low-level language; security risk; static analysis; Buildings; Educational institutions; Loading; Performance analysis; Runtime; Security; Upper bound; Python; information flow; security vulnerabilities; web applications (ID#: 15-6652)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7057995&isnumber=7057968


Haddadi, F.; Morgan, J.; Filho, E.G.; Zincir-Heywood, A.N., “Botnet Behaviour Analysis Using IP Flows: With HTTP Filters Using Classifiers,” Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, vol. no., pp. 7,12, 13-16 May 2014. doi:10.1109/WAINA.2014.19
Abstract: Botnets are one of the most destructive threats against cyber security. Recently, the HTTP protocol has frequently been utilized by botnets as the Command and Communication (C&C) protocol. In this work, we aim to detect HTTP-based botnet activity through botnet behaviour analysis via a machine learning approach. To achieve this, we employ flow-based network traffic utilizing NetFlow (via Softflowd). The proposed botnet analysis system is implemented by employing two different machine learning algorithms, C4.5 and Naive Bayes. Our results show that the C4.5 learning algorithm based classifier achieved very promising performance in detecting HTTP-based botnet activity.
Keywords: Bayes methods; IP networks; computer network security; hypermedia; learning (artificial intelligence); telecommunication traffic; transport protocols; C&C protocol; C4.5 learning algorithm based classifier; HTTP filters; HTTP protocol; IP flows; NetFlow; Softflowd; botnet behaviour analysis; command and communication protocol; cyber security; destructive threats; flow-based network traffic; machine learning algorithms; machine learning approach; naive Bayes algorithm; Classification algorithms; Complexity theory; Decision trees; Feature extraction; IP networks; Payloads; Protocols; botnet detection; machine learning based analysis; traffic IP-flow analysis (ID#: 15-6653)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844605&isnumber=6844560
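A much-simplified stand-in for the paper's flow classifiers (it evaluates C4.5 and Naive Bayes over NetFlow-derived features) is a categorical Naive Bayes over bucketed flow attributes. The feature buckets, labels, and training flows below are invented for illustration; they are not the paper's data set.

```python
from collections import Counter, defaultdict
from math import log

def train_nb(samples):
    """Train a categorical Naive Bayes classifier with Laplace
    smoothing over (feature-tuple, label) samples; returns a
    classify(features) -> label function."""
    prior = Counter(label for _, label in samples)
    cond = defaultdict(Counter)
    for feats, label in samples:
        for i, v in enumerate(feats):
            cond[(label, i)][v] += 1

    def classify(feats):
        def score(label):
            s = log(prior[label])
            for i, v in enumerate(feats):
                c = cond[(label, i)]
                s += log((c[v] + 1) / (sum(c.values()) + 2))  # Laplace smoothing
            return s
        return max(prior, key=score)
    return classify

# toy flow features: (packets-per-flow bucket, duration bucket, protocol class)
train = [
    (("low",  "short", "irc"),  "bot"),
    (("low",  "short", "http"), "bot"),
    (("high", "long",  "http"), "benign"),
    (("high", "long",  "dns"),  "benign"),
]
classify = train_nb(train)
```

The paper's point about flow truncation maps onto this setup directly: the feature buckets are computed from only the first N packets of each flow, trading a little accuracy for much earlier detection.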


Rezvani, M.; Ignjatovic, A.; Bertino, E.; Jha, S., “Provenance-Aware Security Risk Analysis for Hosts and Network Flows,” Network Operations and Management Symposium (NOMS), 2014 IEEE, vol. no., pp. 1, 8, 5-9 May 2014. doi:10.1109/NOMS.2014.6838250
Abstract: Detection of high risk network flows and high risk hosts is becoming ever more important and more challenging. In order to selectively apply deep packet inspection (DPI) one has to isolate in real time high risk network activities within a huge number of monitored network flows. To help address this problem, we propose an iterative methodology for a simultaneous assessment of risk scores for both hosts and network flows. The proposed approach measures the risk scores of hosts and flows in an interdependent manner; thus, the risk score of a flow influences the risk score of its source and destination hosts, and also the risk score of a host is evaluated by taking into account the risk scores of flows initiated by or terminated at the host. Our experimental results show that such an approach is not only effective in detecting high risk hosts and flows but, when deployed in high throughput networks, is also more efficient than PageRank based algorithms.
Keywords: computer network security; risk analysis; deep packet inspection; high risk hosts; high risk network flows; provenance aware security risk analysis; risk score; Computational modeling; Educational institutions; Iterative methods; Monitoring; Ports (Computers); Risk management; Security (ID#: 15-6654)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838250&isnumber=6838210
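The interdependent host/flow scoring described in the abstract can be sketched as a small fixed-point iteration. The update rule below is an assumed formulation, not the authors' exact method: each flow's risk is its base alert weight scaled by its endpoints' risk, each host accumulates the risk of its flows, and host scores are normalized every round.

```python
def risk_scores(flows, iters=20):
    """Iteratively co-compute host and flow risk: a flow inherits
    risk from its endpoints, and a host inherits risk from the flows
    it participates in. flows is a list of (src, dst, base_weight)."""
    hosts = {h for s, d, _ in flows for h in (s, d)}
    h_risk = {h: 1.0 for h in hosts}
    for _ in range(iters):
        # flow risk = base weight scaled by average endpoint risk
        f_risk = [w * (h_risk[s] + h_risk[d]) / 2 for (s, d, w) in flows]
        new_h = {h: 0.0 for h in hosts}
        for (s, d, _), r in zip(flows, f_risk):
            new_h[s] += r
            new_h[d] += r
        norm = max(new_h.values()) or 1.0   # normalize so scores stay bounded
        h_risk = {h: v / norm for h, v in new_h.items()}
    return h_risk

# hypothetical topology: host C participates in the riskiest flows
flows = [("A", "B", 0.1), ("A", "C", 0.9), ("C", "D", 0.8)]
scores = risk_scores(flows)
```

As in the paper's intuition, the mutual reinforcement concentrates risk on hosts that repeatedly touch suspicious flows, without inspecting packet payloads.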


Wenmin Xiao; Jianhua Sun; Hao Chen; Xianghua Xu, “Preventing Client Side XSS with Rewrite Based Dynamic Information Flow,” Parallel Architectures, Algorithms and Programming (PAAP), 2014 Sixth International Symposium on, vol. no., pp. 238, 243, 13-15 July 2014. doi:10.1109/PAAP.2014.10
Abstract: This paper presents the design and implementation of an information flow tracking framework based on code rewrite to prevent sensitive information leaks in browsers, combining the ideas of taint and information flow analysis. Our system has two main processes. First, it abstracts the semantic of JavaScript code and converts it to a general form of intermediate representation on the basis of JavaScript abstract syntax tree. Second, the abstract intermediate representation is implemented as a special taint engine to analyze tainted information flow. Our approach can ensure fine-grained isolation for both confidentiality and integrity of information. We have implemented a proof-of-concept prototype, named JSTFlow, and have deployed it as a browser proxy to rewrite web applications at runtime. The experimental results show that JSTFlow can guarantee the security of sensitive data and detect XSS attacks with about 3x performance overhead. Because it does not involve any modifications to the target system, our system is readily deployable in practice.
Keywords: Internet; Java; data flow analysis; online front-ends; security of data; JSTFlow; JavaScript abstract syntax tree; JavaScript code; Web applications; XSS attacks; abstract intermediate representation; browser proxy; browsers; client side XSS; code rewrite; fine-grained isolation; information flow tracking framework; performance overhead; rewrite based dynamic information flow; sensitive information leaks; taint engine; tainted information flow; Abstracts; Browsers; Data models; Engines; Security; Semantics; Syntactics; JavaScript; cross-site scripting; information flow analysis; information security; taint model (ID#: 15-6655)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916471&isnumber=6916413


Ki-Jin Eom; Choong-Hyun Choi; Joon-Young Paik; Eun-Sun Cho, “An Efficient Static Taint-Analysis Detecting Exploitable-Points on ARM Binaries,” Reliable Distributed Systems (SRDS), 2014 IEEE 33rd International Symposium on, vol. no., pp. 345, 346, 6-9 Oct. 2014. doi:10.1109/SRDS.2014.66
Abstract: This paper aims to differentiate benign vulnerabilities from those used by cyber-attacks, based on STA (Static Taint Analysis). To achieve this goal, the proposed STA determines if a crash is from severe vulnerabilities, after analyzing related exploitable-points in ARM binaries. We envision that the proposed analysis would reduce the complexity of analysis by making use of CPA (Constant Propagation Analysis) and runtime information of crash points.
Keywords: program diagnostics; security of data; ARM binaries; CPA; STA; benign vulnerabilities; constant propagation analysis; cyber-attacks; exploitable-points detection; runtime information; static taint-analysis; Reliability; ARM binary; IDA Pro plug-in; crash point; data flow analysis; exploitable; reverse engineering; taint Analysis (ID#: 15-6656)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6983415&isnumber=6983362


Alam, S.; Horspool, R.N.; Traore, I., “MARD: A Framework for Metamorphic Malware Analysis and Real-Time Detection,” Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, vol. no., pp. 480, 489, 13-16 May 2014. doi:10.1109/AINA.2014.59
Abstract: Because of the financial and other gains attached to the growing malware industry, there is a need to automate the process of malware analysis and provide real-time malware detection. To hide a malware, obfuscation techniques are used. One such technique is metamorphism encoding that mutates the dynamic binary code and changes the opcode with every run to avoid detection. This makes malware difficult to detect in real-time and generally requires a behavioral signature for detection. In this paper we present a new framework called MARD for Metamorphic Malware Analysis and Real-Time Detection, to protect the end points, which are often the last defense against metamorphic malware. MARD provides: (1) automation (2) platform independence (3) optimizations for real-time performance and (4) modularity. We also present a comparison of MARD with other such recent efforts. Experimental evaluation of MARD achieves a detection rate of 99.6% and a false positive rate of 4%.
Keywords: binary codes; digital signatures; encoding; invasive software; real-time systems; MARD; behavioral signature; dynamic binary code; malware analysis process automation; malware industry; metamorphic malware analysis and real-time detection; metamorphism encoding; obfuscation techniques; opcode; Malware; Optimization; Pattern matching; Postal services; Real-time systems; Runtime; Software; Automation; Control Flow Analysis; End Point Security; Malware Analysis and Detection; Metamorphism (ID#: 15-6657)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838703&isnumber=6838626


Buiras, P.; Stefan, D.; Russo, A., “On Dynamic Flow-Sensitive Floating-Label Systems,” Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, vol. no., pp. 65, 79, 19-22 July 2014. doi:10.1109/CSF.2014.13
Abstract: Flow-sensitive analysis for information-flow control (IFC) allows data structures to have mutable security labels, i.e., labels that can change over the course of the computation. This feature is often used to boost the permissiveness of the IFC monitor, by rejecting fewer programs, and to reduce the burden of explicit label annotations. However, when added naively, in a purely dynamic setting, mutable labels can expose a high bandwidth covert channel. In this work, we present an extension for LIO, a language-based floating-label system, that safely handles flow-sensitive references. The key insight to safely manipulating the label of a reference is to not only consider the label on the data stored in the reference, i.e., the reference label, but also the label on the reference label itself. Taking this into consideration, we provide an upgrade primitive that can be used to change the label of a reference in a safe manner. To eliminate the burden of determining when a reference should be upgraded, we additionally provide a mechanism for automatic upgrades. Our approach naturally extends to a concurrent setting, not previously considered by dynamic flow-sensitive systems. For both our sequential and concurrent calculi, we prove non-interference by embedding the flow-sensitive system into the flow-insensitive LIO calculus, a surprising result on its own.
Keywords: data structures; security of data; IFC; LIO language; concurrent calculus; data structures; dynamic flow-sensitive floating-label systems; flow-sensitive analysis; flow-sensitive reference handling; information flow control; security labels; sequential calculus; Calculus; Context; Monitoring; Security; Semantics; Standards; Syntactics; Flow-sensitivity analysis; Haskell; concurrency; dynamic monitors; floating-label systems (ID#: 15-6658)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957103&isnumber=6957090


Jinxin Ma; Guowei Dong; Puhan Zhang; Tao Guo, “SymWalker: Symbolic Execution in Routines of Binary Code,” Computational Intelligence and Security (CIS), 2014 Tenth International Conference on, vol. no., pp. 694, 698, 15-16 Nov. 2014. doi:10.1109/CIS.2014.16
Abstract: Detecting vulnerabilities in binary code is one of the most difficult problems due to the lack of type information and symbols. We propose a novel tool to perform symbolic execution inside the routines of binary code, providing easy static analysis for vulnerability detection. Compared with existing systems, our tool has four properties: first, it can work on binary code without source code; second, it employs the VEX language for program analysis, thus having no side effects; third, it can deliver high coverage by statically executing on control flow graphs of disassembly code; fourth, two security property rules are summarized to detect the corresponding vulnerabilities, on the basis of which a convenient interface is provided for developers to detect vulnerabilities such as buffer overflow, improper memory access, etc. Experimental results on real software binary files show that our tool can efficiently detect different types of vulnerabilities.
Keywords: binary codes; flow graphs; program diagnostics; programming languages; symbol manipulation; SymWalker; VEX language; binary code routines; binary code vulnerability detection; control flow graphs; disassembly codes; program analysis; security property rules; software binary files; source codes; static analysis; symbolic execution; Binary codes; Computer bugs; Computer languages; Computers; Registers; Security; Software; Symbolic execution; control flow analysis; security property; vulnerabilities (ID#: 15-6659)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7016986&isnumber=7016831
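One of the "security property rules" the abstract mentions, a copy whose length bound exceeds the destination buffer's size, can be illustrated with a toy trace walker. The instruction shapes below are hypothetical, and this concrete check deliberately omits the symbolic-execution and VEX machinery the actual tool uses.

```python
def check_buffer_rule(trace):
    """Walk a routine's instruction trace, record buffer allocations,
    and report any copy whose length exceeds the destination's
    allocated size (a possible buffer overflow).
    Instructions: ("alloc", buf, size) or ("copy", dst, length)."""
    sizes, findings = {}, []
    for idx, ins in enumerate(trace):
        if ins[0] == "alloc":
            sizes[ins[1]] = ins[2]
        elif ins[0] == "copy":
            _, dst, length = ins
            if dst in sizes and length > sizes[dst]:
                findings.append((idx, dst))  # (instruction index, buffer)
    return findings

# hypothetical routine: a 16-byte buffer receives a 64-byte copy
trace = [("alloc", "buf", 16), ("copy", "buf", 64), ("copy", "buf", 8)]
bugs = check_buffer_rule(trace)
```

In the symbolic setting, `length` would be a symbolic expression constrained along each control-flow path rather than a concrete value, but the rule being checked is the same.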


Junhyoung Kim; TaeGuen Kim; Eul Gyu Im, “Survey of Dynamic Taint Analysis,” Network Infrastructure and Digital Content (IC-NIDC), 2014 4th IEEE International Conference on, vol. no., pp. 269, 272, 19-21 Sept. 2014. doi:10.1109/ICNIDC.2014.7000307
Abstract: Dynamic taint analysis (DTA) analyzes execution paths that an attacker may use to exploit a system. It is a method to analyze executable files by tracing information flow without source code. DTA marks certain inputs to a program as tainted, and then propagates taint to values computed from tainted inputs. Due to the increased popularity of dynamic taint analysis, there have been a few recent research approaches to provide a generalized tainting infrastructure. In this paper, we introduce and analyze several approaches to dynamic taint analysis. Lam and Chiueh's approach proposed a method that instruments code to perform taint marking and propagation. DYTAN considers three dimensions: taint sources, propagation policies, and taint sinks; these dimensions make DYTAN a more general framework for dynamic taint analysis. DTA++ extends vanilla dynamic taint analysis by propagating additional taints along targeted control dependencies. Control dependencies cause the results of taint analysis to have decreased accuracy. To improve accuracy, DTA++ showed that data transformations containing implicit flows should propagate taint properly to avoid under-tainting.
Keywords: data flow analysis; security of data; system monitoring; DTA++; DYTAN; attacker; control dependency; data transformation; dynamic taint analysis; executable files; execution paths; generalized tainting infrastructure; information flow tracing; propagation policies; taint marking; taint propagation; taint sink; taint source; Accuracy; Computer security; Instruments; Performance analysis; Software; Testing; dynamic taint analysis (ID#: 15-6660)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7000307&isnumber=7000253


Siyuan Jiang; Santelices, R.; Haipeng Cai; Grechanik, M., “How Accurate Is Dynamic Program Slicing? An Empirical Approach to Compute Accuracy Bounds,” Software Security and Reliability-Companion (SERE-C), 2014 IEEE Eighth International Conference on, vol. no., pp. 3, 4, June 30 2014–July 2 2014. doi:10.1109/SERE-C.2014.14
Abstract: Dynamic program slicing attempts to find runtime dependencies among statements to support security, reliability, and quality tasks such as information-flow analysis, testing, and debugging. However, it is not known how accurately dynamic slices identify statements that really affect each other. We propose a new approach to estimate the accuracy of dynamic slices. We use this approach to obtain bounds on the accuracy of multiple dynamic slices in Java software. Early results suggest that dynamic slices suffer from some imprecision and, more critically, can have a low recall whose upper bound we estimate to be 60% on average.
Keywords: Java; data flow analysis; program debugging; program slicing; program testing; Java software; dynamic program slicing; information-flow analysis; quality tasks; reliability; runtime dependencies; security; software debugging; software testing; Accuracy; Reliability; Runtime; Security; Semantics; Software; Upper bound; dynamic slicing; program slicing; semantic dependence; sensitivity analysis (ID#: 15-6661)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901632&isnumber=6901618


Yin XiaoHong, “The Research on Data Flow Technology in Computer Network Security Monitoring,” Advanced Research and Technology in Industry Applications (WARTIA), 2014 IEEE Workshop on, vol. no., pp. 787, 789, 29-30 Sept. 2014. doi:10.1109/WARTIA.2014.6976389
Abstract: With the rapid development of computer technology and the increasingly wide application of the Internet, the Internet plays an ever more important role in people's lives. At the same time, all kinds of network security events emerge endlessly and seriously threaten the application and development of the Internet. Network monitoring is therefore increasingly significant for maintaining normal and efficient network operation, protecting key facilities, ensuring information system security, and related safety goals. How to realize effective network transmission and efficient online analysis of the huge volume of distributed network security monitoring data, so as to provide further support for a variety of applications, has become a major challenge in the fields of network security and data processing.
Keywords: Internet; computer network security; data flow analysis; query processing; Internet; computer network security monitoring; data flow technology; distributed network security monitoring; network transmission; Communication networks; Data models; Data processing; Distributed databases; Monitoring; Real-time systems; Security; Cost Efficient Processing; Distributed Data Stream; Multi-query Optimization; Network Security Monitoring; Stream Cube (ID#: 15-6662)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6976389&isnumber=6976172


Li Feng; McMillin, B., “Quantification of Information Flow in a Smart Grid,” Computer Software and Applications Conference Workshops (COMPSACW), 2014 IEEE 38th International, vol. no., pp. 140,145, 21-25 July 2014. doi:10.1109/COMPSACW.2014.27
Abstract: The key to computer security is the notion of information flow. Information flow occurs either explicitly or implicitly in a system. In cyber-physical systems (CPSs), complicated interactions occur frequently between computational components and physical components. Thus, detecting and quantifying information flow in these systems is more difficult than it is in purely cyber systems. In CPSs, failures and attacks come from the physical infrastructure, from the cyber side of data management and communication protocols, or from a combination of both. As the physical infrastructure is inherently observable, aggregated physical observations can lead to unintended cyber information leakage. The computational portion of a CPS is driven by algorithms. Within algorithmic theory, the online problem considers input that arrives one item at a time and deals with extracting the algorithmic solution through an advice tape without knowing some parts of the input. In this paper, a smart grid CPS is examined from an information flow perspective in which physical values constitute an advice tape. As such, system confidentiality is violated through cyber-to-physical information flow. An approach is generalized to quantify the information flow in a CPS.
Keywords: data flow analysis; power engineering computing; security of data; smart power grids; computer security; cyber-physical systems; cyber-to-physical information flow; information flow quantification; smart grid CPS; system confidentiality; Algorithm design and analysis; Entropy; Load management; Observers; Security; Smart grids; Uncertainty; Advice Tape; Information Flow; Online Problem; Quantification (ID#: 15-6663)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903119&isnumber=6903069
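The kind of quantification the abstract describes can be illustrated with mutual information: how many bits an observable physical value reveals about a cyber secret. The load values below are invented toy data, and this Shannon-entropy formulation is a generic stand-in for the paper's specific advice-tape analysis.

```python
from math import log2
from collections import Counter

def entropy(values):
    """Shannon entropy (in bits) of the empirical distribution."""
    n = len(values)
    return -sum(c / n * log2(c / n) for c in Counter(values).values())

def leakage(secrets, observations):
    """Mutual information I(S;O) = H(S) + H(O) - H(S,O): bits that an
    aggregated physical observation reveals about a cyber secret."""
    joint = list(zip(secrets, observations))
    return entropy(secrets) + entropy(observations) - entropy(joint)

# toy smart-grid example: a secret 1-bit switch state vs. observed load
secret = [0, 1, 0, 1]
obs_perfect = [10, 20, 10, 20]   # load mirrors the secret: full leakage
obs_blind = [15, 15, 15, 15]     # constant load: reveals nothing
```

Here the perfectly correlated observation leaks the full 1 bit of the secret, while the constant observation leaks 0 bits, which matches the intuition that observable physical side effects bound cyber confidentiality.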


Lovat, E.; Kelbert, F., “Structure Matters — A New Approach for Data Flow Tracking,” Security and Privacy Workshops (SPW), 2014 IEEE, vol. no., pp. 39, 43, 17-18 May 2014. doi:10.1109/SPW.2014.15
Abstract: Usage control (UC) is concerned with how data may or may not be used after initial access has been granted. UC requirements are expressed in terms of data (e.g. a picture, a song) which exist within a system in the form of different technical representations (containers, e.g. files, memory locations, windows). A model combining UC enforcement with data flow tracking across containers has been proposed in the literature, but it exhibits a high false-positive detection rate. In this paper we propose a refined approach for data flow tracking that mitigates this over-approximation problem by leveraging information about the inherent structure of the data being tracked. We propose a formal model and show some exemplary instantiations.
Keywords: data flow analysis; data flow computing; UC enforcement; containers; data access; data flow tracking; false positive detection rate; formal model; information leveraging; inherent data structure; over-approximation problem mitigation; technical representations; usage control; Containers; Data models; Discrete Fourier transforms; Operating systems; Postal services; Security; Semantics; data structure (ID#: 15-6664)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957282&isnumber=6957265


Hossen, K.; Groz, R.; Oriat, C.; Richier, J.-L., “Automatic Model Inference of Web Applications for Security Testing,” Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, vol. no., pp. 22, 23, March 31 2014–April 4 2014. doi:10.1109/ICSTW.2014.47
Abstract: In the Internet of services (IoS), web applications are the most common way to provide resources to the users. The complexity of these applications has grown with the number of different development techniques and technologies used. Model-based testing (MBT) has proved its efficiency in software testing, but retrieving the corresponding model of an application is still a complex task. In this paper, we propose an automatic and vulnerability-driven model inference approach to model the relevant aspects of a web application by combining deep web crawling and model inference based on input sequences.
Keywords: Internet; data flow analysis; Inference mechanisms; program testing; security of data; Internet of services; IoS; MBT; Web applications; automatic model inference approach; deep Web crawling; input sequences; model-based testing; security testing; software testing; vulnerability-driven model inference approach; Automata; Conferences; Inference algorithms; Machine learning algorithms; Modeling; Security; Testing; Control Flow Inference; Data-Flow Inference; Reverse-Engineering; Security; Web Application (ID#: 15-6665)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825633&isnumber=6825623


Tsigkanos, C.; Pasquale, L.; Menghi, C.; Ghezzi, C.; Nuseibeh, B., “Engineering Topology Aware Adaptive Security: Preventing Requirements Violations at Runtime,” Requirements Engineering Conference (RE), 2014 IEEE 22nd International, vol. no., pp. 203, 212, 25-29 Aug. 2014. doi:10.1109/RE.2014.6912262
Abstract: Adaptive security systems aim to protect critical assets in the face of changes in their operational environment. We have argued that incorporating an explicit representation of the environment's topology enables reasoning on the location of assets being protected and the proximity of potentially harmful agents. This paper proposes to engineer topology aware adaptive security systems by identifying violations of security requirements that may be caused by topological changes, and selecting a set of security controls that prevent such violations. Our approach focuses on physical topologies; it maintains at runtime a live representation of the topology which is updated when assets or agents move, or when the structure of the physical space is altered. When the topology changes, we look ahead at a subset of the future system states. These states are reachable when the agents move within the physical space. If security requirements can be violated in future system states, a configuration of security controls is proactively applied to prevent the system from reaching those states. Thus, the system continuously adapts to topological stimuli, while maintaining requirements satisfaction. Security requirements are formally expressed using a propositional temporal logic, encoding spatial properties in Computation Tree Logic (CTL). The Ambient Calculus is used to represent the topology of the operational environment — including location of assets and agents — as well as to identify future system states that are reachable from the current one. The approach is demonstrated and evaluated using a substantive example concerned with physical access control.
Keywords: authorisation; data flow analysis; formal specification; temporal logic; access control; adaptive security systems; ambient calculus; computation tree logic; encoding spatial properties; potentially harmful agents; propositional temporal logic; requirements violation prevention; security controls; security requirements; topology aware adaptive security engineering; Aerospace electronics; Buildings; Calculus; Runtime; Security; Servers; Topology (ID#: 15-6666)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912262&isnumber=6912234


Zhang Puhan; Wu Jianxiong; Wang Xin; Wu Zehui, “Decrypted Data Detection Algorithm Based on Dynamic Dataflow Analysis,” Computer, Information and Telecommunication Systems (CITS), 2014 International Conference on, vol. no., pp. 1,4, 7-9 July 2014. doi:10.1109/CITS.2014.6878965
Abstract: Cryptographic algorithm detection has received a lot of attention recently, whereas methods to detect decrypted data require further research. A decrypted-memory detection method using dynamic dataflow analysis is proposed in this paper. Based on the intuition that decrypted data is generated in the cryptographic function and has unique features, and by analyzing the parameter sets of the cryptographic function, we propose a model based on the input and output of the cryptographic function. Experimental results demonstrate that our approach can effectively detect decrypted memory.
Keywords: cryptography; data flow analysis; cryptographic algorithm detection; decrypted data; decrypted memory detection method; dynamic dataflow analysis; Algorithm design and analysis; Encryption; Heuristic algorithms; Software; Software algorithms; Cryptographic; Dataflow analysis; Decrypted memory; Taint analysis (ID#: 15-6667)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6878965&isnumber=6878950


Pena, E.H.M.; Barbon, S.; Rodrigues, J.J.P.C.; Lemes Proenca Junior, M., “Anomaly Detection Using Digital Signature of Network Segment with Adaptive ARIMA Model and Paraconsistent Logic,” Computers and Communication (ISCC), 2014 IEEE Symposium on, vol. no., pp. 1, 6, 23-26 June 2014. doi:10.1109/ISCC.2014.6912503
Abstract: Detecting anomalies accurately in network traffic behavior is essential for a variety of network management and security tasks. This paper presents an anomaly detection approach employing Digital Signature of Network Segment using Flow Analysis (DSNSF), generated with an ARIMA model. Also, a functional algorithm based on a non-classical logic called Paraconsistent Logic is proposed aiming to avoid high false alarm rates. The key idea of the proposed approach is to characterize the normal behavior of network traffic and then identify the traffic pattern behaviors that might harm network services. Experimental results on a real network demonstrate the effectiveness of the proposed approach. The results are promising, showing that the flow analysis performed is able to detect anomalous traffic with precision, sensitivity and good performance.
Keywords: autoregressive moving average processes; digital signatures; DSNSF; adaptive ARIMA model; anomaly detection; digital signature of network segment using flow analysis; network management; network traffic behavior; paraconsistent logic; traffic patterns behavior; Analytical models; Autoregressive processes; Correlation; Data models; Digital signatures; Equations; Mathematical model (ID#: 15-6668)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912503&isnumber=6912451


Camacho, J.; Macia-Fernandez, G.; Diaz-Verdejo, J.; Garcia-Teodoro, P., “Tackling the Big Data 4 Vs for Anomaly Detection,” Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, vol. no., pp. 500, 505, April 27 2014–May 2 2014. doi:10.1109/INFCOMW.2014.6849282
Abstract: In this paper, a framework for anomaly detection and forensics in Big Data is introduced. The framework tackles the Big Data 4 Vs: Variety, Veracity, Volume and Velocity. The varied nature of the data sources is treated by transforming the typically unstructured data into a highly dimensional and structured data set. To overcome both the uncertainty (low veracity) and high dimension introduced, a latent variable method, in particular Principal Component Analysis (PCA), is applied. PCA is well known to present outstanding capabilities to extract information from highly dimensional data sets. However, PCA is limited to small, though highly multivariate, data sets. To handle this limitation, a kernel computation of PCA is employed. This avoids computational problems due to the size (number of observations) of the data sets and allows parallelism. Also, hierarchical models are proposed if dimensionality is extreme. Finally, to handle high velocity in analyzing time series data flows, the Exponentially Weighted Moving Average (EWMA) approach is employed. All these steps are discussed in the paper, and the VAST 2012 mini challenge 2 is used for illustration.
Keywords: Big Data; digital forensics; firewalls; moving average processes; principal component analysis; time series; Big Data 4 Vs; EWMA approach; PCA; anomaly detection; computational problems; data sources; exponentially weighted moving average approach; forensics; hierarchical models; highly-dimensional structured data set; information extraction; kernel computation; latent variable method; parallelism; principal component analysis; time series data flow analysis; uncertainty problem; unstructured data transformation; variety; velocity; veracity; volume; Big data; Computational modeling; Conferences; Data privacy; Data visualization; Principal component analysis; Security (ID#: 15-6669)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849282&isnumber=6849127
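The velocity step described above, an EWMA monitor over streaming statistics, can be illustrated with a minimal control chart. The smoothing factor, control-limit width, and warm-up length below are illustrative choices, not values from the paper.

```python
class EWMADetector:
    """Minimal EWMA control chart for one streaming statistic."""

    def __init__(self, lam=0.2, limit=3.0, warmup=5):
        self.lam = lam        # weight given to the newest observation
        self.limit = limit    # alarm limit, in units of the running deviation
        self.warmup = warmup  # observations to ingest before alarming
        self.n = 0
        self.ewma = None      # smoothed level
        self.ewmd = None      # smoothed absolute deviation

    def update(self, x):
        """Feed one observation; return True if it breaches the control limit."""
        self.n += 1
        if self.ewma is None:
            self.ewma, self.ewmd = x, 1.0
            return False
        dev = abs(x - self.ewma)
        alarm = self.n > self.warmup and dev > self.limit * self.ewmd
        self.ewmd = self.lam * dev + (1 - self.lam) * self.ewmd
        self.ewma = self.lam * x + (1 - self.lam) * self.ewma
        return alarm

d = EWMADetector()
flags = [d.update(x) for x in [10, 11, 10, 12, 11, 10, 50]]
print(flags)   # only the final burst is flagged
```

Because each update touches only two running values, this kind of monitor keeps pace with high-velocity flows where batch recomputation would not.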


Stevanovic, M.; Pedersen, J.M., “An Efficient Flow-Based Botnet Detection Using Supervised Machine Learning,” Computing, Networking and Communications (ICNC), 2014 International Conference on, vol. no., pp. 797, 801, 3-6 Feb. 2014. doi:10.1109/ICCNC.2014.6785439
Abstract: Botnet detection represents one of the most crucial prerequisites of successful botnet neutralization. This paper explores how accurate and timely detection can be achieved by using supervised machine learning as the tool of inferring about malicious botnet traffic. In order to do so, the paper introduces a novel flow-based detection system that relies on supervised machine learning for identifying botnet network traffic. For use in the system we consider eight highly regarded machine learning algorithms, indicating the best performing one. Furthermore, the paper evaluates how much traffic needs to be observed per flow in order to capture the patterns of malicious traffic. The proposed system has been tested through a series of experiments using traffic traces originating from two well-known P2P botnets and diverse non-malicious applications. The results of experiments indicate that the system is able to accurately and timely detect botnet traffic using purely flow-based traffic analysis and supervised machine learning. Additionally, the results show that in order to achieve accurate detection, traffic flows need to be monitored for only a limited time period and number of packets per flow. This indicates a strong potential of using the proposed approach within a future on-line detection framework.
Keywords: computer network security; invasive software; learning (artificial intelligence); peer-to-peer computing; telecommunication traffic; P2P botnets; botnet neutralization; flow-based botnet detection; flow-based traffic analysis; malicious botnet network traffic identification; nonmalicious applications; packet flow; supervised machine learning; Accuracy; Bayes methods; Feature extraction; Protocols; Support vector machines; Training; Vegetation; Botnet; Botnet detection; Machine learning; Traffic analysis; Traffic classification (ID#: 15-6670)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785439&isnumber=6785290
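As a flavor of flow-based supervised classification, here is a toy nearest-centroid classifier over per-flow features. The paper actually compares eight algorithms (including SVMs and Bayesian methods) on real botnet traces; the feature choice and numbers below are invented for illustration only.

```python
def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def classify(flow, centroids):
    """Assign a flow to the class whose training centroid is nearest
    (squared Euclidean distance over per-flow features)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(flow, centroids[label]))

# Toy training flows: [packets observed, mean packet size, duration in s]
benign = [[40, 900, 12.0], [55, 1100, 20.0], [35, 850, 9.0]]
botnet = [[300, 90, 3.0], [280, 110, 2.5], [320, 80, 4.0]]
centroids = {"benign": centroid(benign), "botnet": centroid(botnet)}

print(classify([290, 100, 3.2], centroids))   # many small packets -> "botnet"
print(classify([45, 950, 14.0], centroids))   # few large packets  -> "benign"
```

The paper's finding that only a limited prefix of each flow needs monitoring corresponds here to computing the features over the first N packets rather than the whole flow.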


Cui Baojiang; Long Baolian; Hou Tingting, “Reverse Analysis Method of Static XSS Defect Detection Technique Based on Database Query Language,” P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), 2014 Ninth International Conference on, vol. no., pp. 487, 491, 8-10 Nov. 2014. doi:10.1109/3PGCIC.2014.99
Abstract: Along with the wide use of web applications, XSS vulnerability has become one of the most common security problems and has caused many serious losses. In this paper, on the basis of a database query language technique, we put forward a static analysis method of XSS defect detection for Java web applications by analyzing data flow reversely. This method first converts the JSP file to a Servlet file, and then uses the mock test method to generate calls for all Java code automatically for comprehensive analysis. Starting from the methods where an XSS security defect may occur, we analyze the data flow reversely to detect XSS defects by judging whether they can be introduced by unfiltered user input. This reverse method effectively reduces the analysis tasks that are necessary in forward approaches. Experiments on an artificially constructed Java web project with XSS flaws and some open-source Java web projects proved that this method improved not only the efficiency of detection but also the detection accuracy for XSS defects.
Keywords: Internet; Java; query languages; query processing; security of data; JSP file; Java Web application; Servlet file; XSS vulnerability; data flow reverse analysis method; database query language; mock test method; static XSS defect detection technique; Accuracy; Browsers; Context; Databases; Educational institutions; Security; XSS defect; reverse analysis; static analysis; web application (ID#: 15-6671)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024633&isnumber=7024297


Xin Xie; Fenlin Liu; Bin Lu; Fei Xiang, “Mixed Obfuscation of Overlapping Instruction and Self-Modify Code Based on Hyper-Chaotic Opaque Predicates,” Computational Intelligence and Security (CIS), 2014 Tenth International Conference on, vol., no., pp. 524, 528, 15-16 Nov. 2014. doi:10.1109/CIS.2014.45
Abstract: Static disassembly is used to analyze program control flow, which is the key process of reverse analysis. Aiming at the problem that attackers routinely use static disassembly to analyze control transfer instructions and the control flow graph, a mixed obfuscation of overlapping instruction and self-modify code based on hyper-chaotic opaque predicates is proposed; jump offsets in overlapping instructions and data offsets in self-modify code are constructed with opaque predicates. Control transfer instructions are modified into control-transfer-unrelated ones by combining the characteristics of overlapping instruction and self-modify code. Experiments and analysis show that the control flow graph can be obfuscated by the mixed obfuscation, due to the difficulty of hyper-chaotic opaque predicates for attackers to analyze.
Keywords: program control structures; safety-critical software; software engineering; code obfuscation; control flow graph; control transfer instructions; hyper-chaotic opaque predicates; program control flow analyze; reverse analysis; self-modify code; Chaos; Flow graphs; Resistance; Resists; Software; Watermarking; code obfuscation; hyper-chaotic opaque predicate; overlapping instruction; self-modify code (ID#: 15-6672)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7016951&isnumber=7016831


Ippoliti, D.; Xiaobo Zhou, “Online Adaptive Anomaly Detection for Augmented Network Flows,” Modelling, Analysis & Simulation of Computer and Telecommunication Systems (MASCOTS), 2014 IEEE 22nd International Symposium on, vol., no., pp. 433, 442, 9-11 Sept. 2014. doi:10.1109/MASCOTS.2014.60
Abstract: Traditional network anomaly detection involves developing models that rely on packet inspection. Increasing network speeds and use of encrypted protocols make per-packet inspection unsuited for today's networks. One method of overcoming this obstacle is flow based analysis. Many existing approaches are special purpose, i.e., limited to detecting specific behavior. Also, the data reduction inherent in identifying anomalous flows hinders alert correlation. In this paper we propose a dynamic anomaly detection approach for augmented flows. We sketch network state during flow creation, enabling general-purpose threat detection. We design and develop a support vector machine based adaptive anomaly detection and correlation mechanism capable of aggregating alerts without a-priori alert classification and evolving models online. We develop a confidence forwarding mechanism identifying a small percentage of predictions for additional processing. We show the effectiveness of our methods on both enterprise and backbone traces. Experimental results demonstrate the ability to maintain high accuracy without the need for offline training.
Keywords: computer network security; support vector machines; alert aggregation; alert correlation; anomalous flow identification; augmented flows; augmented network flows; backbone traces; confidence forwarding mechanism; data reduction; dynamic anomaly detection approach; enterprise traces; flow based analysis; flow creation; general purpose threat detection; network anomaly detection; network state; online adaptive anomaly detection; packet inspection; support vector machine based adaptive anomaly detection mechanism; support vector machine based adaptive correlation mechanism; Adaptation models; Correlation; Detectors; Inspection; Support vector machines; Training; Vectors (ID#: 15-6673)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7033682&isnumber=7033621
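The confidence forwarding idea, accepting confident predictions immediately and routing borderline ones for additional processing, can be sketched as a simple triage on a detector score. The score range and threshold here are illustrative assumptions, not the paper's SVM decision values.

```python
def triage(score, threshold=0.15):
    """Route a detector score in [-1, 1]: confident predictions are
    accepted immediately, while low-confidence scores (near the decision
    boundary) are forwarded for additional processing."""
    if abs(score) < threshold:
        return "forward"        # borderline: send for deeper analysis
    return "anomalous" if score > 0 else "normal"

print(triage(0.90))    # clearly anomalous
print(triage(-0.50))   # clearly normal
print(triage(0.05))    # too close to call: forwarded
```

The point of the mechanism is that only a small fraction of flows pay the cost of the extra processing, which is what lets the system keep up with line-rate traffic.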


Sen, S.; Guha, S.; Datta, A.; Rajamani, S.K.; Tsai, J.; Wing, J.M., “Bootstrapping Privacy Compliance in Big Data Systems,” Security and Privacy (SP), 2014 IEEE Symposium on, vol. no., pp. 327, 342, 18-21 May 2014. doi:10.1109/SP.2014.28
Abstract: With the rapid increase in cloud services collecting and using user data to offer personalized experiences, ensuring that these services comply with their privacy policies has become a business imperative for building user trust. However, most compliance efforts in industry today rely on manual review processes and audits designed to safeguard user data, and therefore are resource intensive and lack coverage. In this paper, we present our experience building and operating a system to automate privacy policy compliance checking in Bing. Central to the design of the system are (a) Legalease, a language that allows specification of privacy policies that impose restrictions on how user data is handled, and (b) Grok, a data inventory for Map-Reduce-like big data systems that tracks how user data flows among programs. Grok maps code-level schema elements to data types in Legalease, in essence annotating existing programs with information flow types with minimal human input. Compliance checking is thus reduced to information flow analysis of Big Data systems. The system, bootstrapped by a small team, checks compliance daily of millions of lines of ever-changing source code written by several thousand developers.
Keywords: Big Data; Web services; cloud computing; computer bootstrapping; conformance testing; data privacy; parallel programming; search engines; source code (software); Bing; Grok data inventory; Legal ease language; Map-Reduce-like Big Data systems; automatic privacy policy compliance checking; business imperative privacy policies; cloud services; code-level schema element mapping; datatypes; information flow types; minimal human input; personalized user experiences; privacy compliance bootstrapping; privacy policy specification; program annotation; source code; user data handling; user trust; Advertising; Big data; Data privacy; IP networks; Lattices; Privacy; Semantics; big data; bing; compliance; information flow; policy; privacy; program analysis (ID#: 15-6674)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956573&isnumber=6956545
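In the spirit of Legalease and Grok, compliance checking reduces to scanning inferred data flows against policy clauses that deny certain data types at certain sinks. The policy, type names, and sinks below are invented for illustration; the real system infers labels over Map-Reduce jobs and uses a richer clause lattice.

```python
# Toy policy: each sink names the data types it is denied from consuming.
POLICY = {
    "advertising":     {"deny": {"IPAddress", "SearchQuery"}},
    "abuse-detection": {"deny": set()},
}

def check_flows(flows):
    """flows: list of (data_type, sink) pairs from the inferred flow graph.
    Returns the flows that violate a DENY clause."""
    return [(t, sink) for t, sink in flows
            if t in POLICY.get(sink, {"deny": set()})["deny"]]

flows = [("IPAddress", "advertising"),      # denied by policy
         ("ClickCount", "advertising"),     # allowed
         ("SearchQuery", "abuse-detection")]  # allowed at this sink
print(check_flows(flows))   # only the first flow is a violation
```

Annotating programs with types once, then checking mechanically on every build, is what lets such a system scale to millions of lines of changing code.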
 




Location Privacy—Authentication Approaches, 2014

 

 



Location-based services have proven popular both with end users and with distributed systems operators. The research presented here looks at protecting privacy on these systems using authentication-based methods. The work was published in 2014.



Yingjie Chen; Wei Wang; Qian Zhang, “Privacy-Preserving Location Authentication in WiFi with Fine-Grained Physical Layer Information,” Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 4827, 4832, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7037570
Abstract: The surging deployment of WiFi hotspots in public places drives the blossoming of location-based services (LBSs) available. A recent measurement reveals that a large portion of the reported locations are either forged or superfluous, which calls attention to location authentication. However, existing authentication approaches breach the user's location privacy, which is of wide concern to both individuals and governments. In this paper, we propose PriLA, a privacy-preserving location authentication protocol that facilitates location authentication without compromising the user's location privacy in WiFi networks. PriLA exploits physical layer information, namely carrier frequency offset (CFO) and multipath profile, from the user's frames. In particular, PriLA leverages CFO to secure wireless transmission between the mobile user and the access point (AP), and meanwhile authenticates the reported locations without leaking the exact location information, based on the coarse-grained location proximity extracted from the user's multipath profile. Existing privacy preservation techniques on upper layers can be applied on top of PriLA to enable various applications. We have implemented PriLA on the GNURadio/USRP platform and an off-the-shelf Intel 5300 NIC. The experimental results demonstrate the practicality of CFO injection and the accuracy of multipath profile based location authentication in a real-world environment.
Keywords: computer crime; computer network security; cryptographic protocols; mobile radio; wireless LAN; AP; CFO injection; GNUradio platform; LBS; PriLA; USRP platform; Wi-Fi hotspot; access point; carrier frequency offset; coarse-grained location proximity; fine-grained physical layer information; location forgery; location superfluousness; location-based service; mobile user location privacy; multipath profile; off-the-shelf Intel 5300 NIC; privacy preservation technique; privacy-preserving location authentication protocol; secure wireless transmission; Authentication; Encryption; IEEE 802.11 Standards; Mobile communication; OFDM; Privacy; Wireless communication (ID#: 15-6400)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7037570&isnumber=7036769


Hussain, M., “An Authentication Scheme to Protect the Location Privacy of Femtocell Users,” Computer Systems and Applications (AICCSA), 2014 IEEE/ACS 11th International Conference on, vol., no., pp. 652, 657, 10-13 Nov. 2014. doi:10.1109/AICCSA.2014.7073261
Abstract: Femtocells are small cellular base-stations, suitable for residential units or business offices. Femtocells are a cost-effective solution for areas where deploying traditional base-stations is costly. Femtocells inherit the security and privacy threats of GSM and UMTS networks, such as location privacy and tracking. These threats are even more severe, since the deployment of femtocells, which cover areas as small as an office, allows for an unprecedented tracking of mobile users' location. This paper presents an authentication scheme, which allows a mobile user to use an open femtocell, while making it hard for its mobile operator to know the exact location of that mobile user. The scheme complements the privacy protection of UMTS. Further, the scheme enables mobile operators to reward owners of open femtocells. The paper discusses the security of the presented scheme. The simulation of the authentication scheme shows the feasibility of our work.
Keywords: 3G mobile communication; data protection; femtocellular radio; mobility management (mobile radio); telecommunication security; GSM network security; UMTS network; authentication scheme; cellular base station; cost-effective solution; femtocell user; location privacy protection; mobile operator; mobile user location tracking; 3G mobile communication; Authentication; Cryptography; Femtocells; Privacy; femtocell security; femtocells; location privacy (ID#: 15-6401)
URL:  http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7073261&isnumber=7073167


Saracino, A.; Sgandurra, D.; Spagnuelo, D., “Addressing Privacy Issues in Location-Based Collaborative and Distributed Environments,” Collaboration Technologies and Systems (CTS), 2014 International Conference on, vol., no., pp. 166, 172, 19-23 May 2014. doi:10.1109/CTS.2014.6867560
Abstract: In the past few years collaborative environments have been growing fast thanks to the ubiquitousness of smartphones and to their rich features. These devices are nowadays very sophisticated, being able to receive GPS signals, communicate with other devices through the mobile network, and analyze several different kinds of data received from their sensors. In some collaborative environments, users need to access the correct geo-location, for example when they collaboratively contribute to build a collection of data about specific objects, such as for traffic news. On the other hand, sharing the exact location may imply violations of the user's privacy. In this paper we discuss the importance of the correct location in collaborative environments, address the problem of privacy for users, and show how current solutions, which aim to preserve the user's privacy, can interfere with the correct behavior of some applications. We also propose a novel approach that provides the correct location to the collaborative network only when this is needed, which preserves the user's privacy.
Keywords: data privacy; groupware; mobile computing; smart phones; GPS signal; Global Positioning System; collaborative environment; data collection; distributed environment; location-based collaborative environment; privacy issues; smart phones; user privacy; Authentication; Collaboration; Privacy; Sensors; Smart phones; Software; Android; location; mobile system; privacy (ID#: 15-6402)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6867560&isnumber=6867522


Hui-Feng Huang; Po-Kai Yu; Kuo-Ching Liu, “A Privacy and Authentication Protocol for Mobile RFID System,” Independent Computing (ISIC), 2014 IEEE International Symposium on, vol., no., pp. 1, 6, 9-12 Dec. 2014. doi:10.1109/INDCOMP.2014.7011754
Abstract: Since information communication via radio transmission can be easily eavesdropped, many radio frequency identification (RFID) security mechanisms for location privacy protection have been proposed recently. However, most previously proposed schemes do not conform to the EPC Class-1 GEN-2 standard for passive RFID tags, as they require the implementation of hash functions on the tags. In 2013, Doss et al. proposed mutual authentication for the tag, the reader, and the back-end server in the RFID system. Their scheme is the first quadratic-residue-based scheme to achieve compliance with the EPC Class-1 GEN-2 specification, but the security of the server-reader channel may not be guaranteed. Moreover, this article will show that the computational requirements and bandwidth consumption are quite demanding in Doss et al.'s scheme. To improve on Doss et al.'s protocol, this article proposes a new efficient RFID system in which both the tag-reader channel and the reader-server channel are insecure. The proposed method not only satisfies all the security requirements for the reader and the tag but also achieves compliance with the EPC Class-1 GEN-2 specifications. Moreover, the proposed scheme can be used in a large-scale RFID system.
Keywords: cryptographic protocols; data privacy; radiofrequency identification; Doss protocol; EPC Class-1 GEN-2 standard; authentication protocol; back-end server; bandwidth consumption; computational requirement; hash functions; mobile RFID system; mutual authentication; passive RFID tags; privacy protocol; server-reader channel; Authentication; Databases; Generators; Privacy; Radiofrequency identification; Servers; Location privacy Introduction; Mutual authentication; RFID (ID#: 15-6403)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011754&isnumber=7011735
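The quadratic-residue core that such schemes build on is Rabin-style: a passive tag can square a value modulo a public n = p*q cheaply (no hash function needed), while only the back-end server holding p and q can take square roots. A minimal sketch with toy parameters; real deployments use large primes, nonces, and the full mutual-authentication exchange.

```python
def ext_gcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def rabin_encrypt(m, n):
    """Tag side: a single modular squaring is cheap enough for a passive tag."""
    return (m * m) % n

def rabin_roots(c, p, q):
    """Server side: the four square roots of c mod p*q (p, q = 3 mod 4),
    computable only with knowledge of the factors."""
    mp = pow(c, (p + 1) // 4, p)     # square root mod p
    mq = pow(c, (q + 1) // 4, q)     # square root mod q
    _, yp, yq = ext_gcd(p, q)        # CRT coefficients: yp*p + yq*q = 1
    n = p * q
    r = (yp * p * mq + yq * q * mp) % n
    s = (yp * p * mq - yq * q * mp) % n
    return {r, n - r, s, (n - s) % n}

p, q = 7, 11                         # toy primes, both congruent to 3 mod 4
c = rabin_encrypt(9, p * q)          # tag hides the value 9
print(9 in rabin_roots(c, p, q))     # server recovers it among the 4 roots
```

The asymmetry (cheap squaring on the tag, factoring-hard inversion elsewhere) is what lets the server authenticate tags while eavesdroppers learn nothing from the squared value.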


Guo Yunchuan; Yin Lihua; Liu Licai; Fang Binxing, “Utility-Based Cooperative Decision in Cooperative Authentication,” INFOCOM, 2014 Proceedings IEEE, vol., no., pp. 1006, 1014, April 27 2014–May 2 2014. doi:10.1109/INFOCOM.2014.6848030
Abstract: In mobile networks, cooperative authentication is an efficient way to recognize false identities and messages. However, an attacker can track the location of cooperative mobile nodes by monitoring their communications. Moreover, mobile nodes consume their own resources when cooperating with other nodes in the process of authentication. These two factors cause selfish mobile nodes not to actively participate in authentication. In this paper, a bargaining-based game for cooperative authentication is proposed to help nodes decide whether to participate in authentication or not, and our strategy guarantees that mobile nodes participating in cooperative authentication can obtain the maximum utility, all at an acceptable cost. We obtain Nash equilibrium in static complete information games. To address the problem of nodes not knowing the utility of other nodes, incomplete information games for cooperative authentication are established. We also develop an algorithm based on incomplete information games to maximize every node's utility. The simulation results demonstrate that our strategy has the ability to guarantee authentication probability and increase the number of successful authentications.
Keywords: game theory; mobile ad hoc networks; probability; telecommunication security; MANET; Nash equilibrium; authentication probability; authentication process; cooperative authentication; cooperative mobile nodes; information games; mobile ad hoc network; mobile networks; mobile nodes; utility based cooperative decision; Bismuth; Computers; Conferences; High definition video; Human computer interaction; Cooperative authentication; games; location privacy (ID#: 15-6404)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848030&isnumber=6847911


Rongxing Lu; Xiaodong Lin; Zhiguo Shi; Jun Shao, “PLAM: A Privacy-Preserving Framework for Local-Area Mobile Social Networks,” INFOCOM, 2014 Proceedings IEEE, vol., no., pp. 763, 771, April 27 2014–May 2 2014. doi:10.1109/INFOCOM.2014.6848003
Abstract: In this paper, we propose a privacy-preserving framework, called PLAM, for local-area mobile social networks. The proposed PLAM framework employs a privacy-preserving request aggregation protocol with k-Anonymity and l-Diversity properties while without involving a trusted anonymizer server to keep user preference privacy when querying location-based service (LBS), and integrates unlinkable pseudo-ID technique to achieve user identity privacy, location privacy. Moreover, the proposed PLAM framework also introduces the privacy-preserving and verifiable polynomial computation to keep LBS provider's functions private while preventing the provider from cheating in computation. Detailed security analysis shows that the proposed PLAM framework can not only achieve desirable privacy requirements but also resist outside attacks on source authentication, data integrity and availability. In addition, extensive simulations are also conducted, and simulation results guide us on how to set proper thresholds for k-anonymity, l-diversity to make a tradeoff between the desirable user preference privacy level and the request delay in different scenarios.
Keywords: cryptography; data integrity; data privacy; local area networks; mobile computing; polynomials; protocols; social networking (online); trusted computing; LBS; LBS provider functions; PLAM framework; data availability; data integrity; k-anonymity; l-diversity; local-area mobile social networks; location privacy; location-based service; privacy-preserving request aggregation protocol; request delay; security analysis; source authentication; trusted anonymizer server; unlinkable pseudo-ID technique; user identity privacy; user preference privacy level; verifiable polynomial computation; Data privacy; Mobile communication; Mobile computing; Polynomials; Privacy; Protocols; Security; Privacy-preserving; location-based services; mobile social network; preference privacy (ID#: 15-6405)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848003&isnumber=6847911
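The aggregation guarantee PLAM targets can be stated as a simple predicate on a batch of LBS requests: k-anonymity requires at least k distinct requesters, and l-diversity at least l distinct preference values, so no single user or preference can be singled out. A minimal check; the field names and data are illustrative, and the real protocol enforces this cryptographically without a trusted anonymizer.

```python
def satisfies_k_l(requests, k, l):
    """requests: iterable of (user_id, preference) pairs forming one
    aggregated LBS query batch. Returns True if the batch provides both
    k-anonymity (distinct users) and l-diversity (distinct preferences)."""
    users = {u for u, _ in requests}
    prefs = {p for _, p in requests}
    return len(users) >= k and len(prefs) >= l

batch = [("u1", "cafe"), ("u2", "gas"), ("u3", "cafe"),
         ("u4", "atm"), ("u5", "gas")]
print(satisfies_k_l(batch, k=5, l=3))   # 5 users, 3 preferences: True
print(satisfies_k_l(batch, k=6, l=3))   # too few distinct users: False
```

In practice the thresholds k and l trade privacy against request delay, which is exactly the trade-off the paper's simulations explore.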


Sam, M.M.; Vijayashanthi, N.; Sundhari, A., “An Efficient Pseudonymous Generation Scheme with Privacy Preservation for Vehicular Communication,” Intelligent Computing Applications (ICICA), 2014 International Conference on, vol., no., pp. 109, 117, 6-7 March 2014. doi:10.1109/ICICA.2014.32
Abstract: Vehicular Ad-Hoc Network (VANET) communication has recently become an increasingly popular research topic in the area of wireless networking as well as the automotive industries. The goal of VANET research is to develop a vehicular communication system to enable quick and cost-efficient distribution of data for the benefit of passengers' safety and comfort. But location privacy in VANETs is still an imperative issue. To address it, a popular approach recommended in VANETs is that vehicles periodically change their pseudonyms when they broadcast safety messages. We use an effective strategy of pseudonym changing at a proper location (PCP), e.g., a road intersection when the traffic light turns red or a free parking lot near a shopping mall, to achieve provable location privacy. In addition, we use bilinear pairing for self-delegated key generation. The current threat model primarily considers that an adversary can track a vehicle; we utilize more characteristic factors for tracking and explore new location-privacy-enhanced techniques under such a stronger threat model.
 Keywords: telecommunication security; vehicular ad hoc networks; VANET communication; VANET research; bilinear pairing; effective pseudonym changing; location-privacy-enhanced techniques; privacy preservation; pseudonymous generation scheme; road intersection; self-delegated key generation; vehicular ad-hoc network; vehicular communication; vehicular communication system; wireless networking; Analytical models; Authentication; Privacy; Roads; Safety; Vehicles; Vehicular ad hoc networks; Group- Signature-Based (GSB); Pseudonym Changing at Proper Location (PCP); RoadSide Units (RSUs); Trusted Authority (TA) (ID#: 15-6406)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6965022&isnumber=6964987


Liu Licai; Yin Lihua; Guo Yunchuan; Fang Bingxing, “Bargaining-Based Dynamic Decision for Cooperative Authentication in MANETs,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 212, 220, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.32
Abstract: In MANETs, cooperative authentication, requiring the cooperation of neighbor nodes, is a significant authentication technique. However, when nodes participate in cooperation, their location may easily be tracked by misbehaving nodes; meanwhile, their resources will be consumed. These two factors lead selfish nodes to be reluctant to participate in cooperation and decrease the probability of correct authentication. To encourage nodes to take part in cooperation, we propose a bargaining-based dynamic game model for cooperative authentication to analyze the dynamic behaviors of nodes and help nodes decide whether to participate in cooperation or not. Further, to analyze the dynamic decision-making of nodes, we discuss two situations: complete information and incomplete information. Under complete information, Subgame Perfect Nash Equilibria are obtained to guide nodes in choosing the optimal strategy to maximize their utility. In reality, nodes often do not have good knowledge about others' utility (this case is often called incomplete information). To deal with this case, a Perfect Bayesian Nash Equilibrium is established to eliminate the implausible equilibria. Based on the model, we designed two algorithms for complete information and incomplete information, and the simulation results demonstrate that in our model nodes participating in cooperation will maximize their location privacy and minimize their resource consumption while ensuring the probability of correct authentication. Both algorithms can improve the success rate of cooperative authentication and extend the network lifetime to 160%-360.6%.
Keywords: cooperative communication; decision making; game theory; message authentication; mobile ad hoc networks; probability; telecommunication security; MANET; bargaining-based dynamic decision; bargaining-based dynamic game model; cooperative authentication; dynamic decision-making; location privacy; mobile ad hoc networks; network lifetime; perfect Bayesian Nash equilibrium; resources consumption; subgame perfect Nash equilibriums; Ad hoc networks; Authentication; Games; Mobile computing; Principal component analysis; Privacy; Vehicle dynamics; Cooperative Authentication; Dynamic Game; Incentive Strategy; MANET (ID#: 15-6407)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011253&isnumber=7011202


Kounelis, I.; Muftic, S.; Loschner, J., “Secure and Privacy-Enhanced E-Mail System Based on the Concept of Proxies,” Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on, vol., no., pp. 1405, 1410, 26-30 May 2014. doi:10.1109/MIPRO.2014.6859787
Abstract: Security and privacy on the Internet, and especially in e-mail, are becoming more and more important and crucial for the user. The requirements for the protection of e-mail include issues like tracking and privacy intrusions by hackers and commercial advertisers, intrusions by casual observers, and even spying by government agencies. With expanding e-mail use in the digital world, on the Internet and mobile, the quantity and sensitivity of personal information have also tremendously expanded. Therefore, protection of data and transactions and privacy of user information are key and of interest for many users. Based on such motives, in this paper we present the design and current implementation of our secure and privacy-enhanced e-mail system. The system provides protection of e-mails, privacy of locations from which the e-mail system is accessed, and authentication of legitimate users. Differently from existing standard approaches, which are based on adding security extensions to e-mail clients, our system is based on the concept of proxy servers that provide security and privacy of users and their e-mails. It uses all required standards: S/MIME for formatting of secure letters, strong cryptographic algorithms, PKI protocols and certificates. We already have a first implementation, and an instance of the system is very easy to install and to use.
Keywords: Internet; cryptographic protocols; data privacy; electronic mail; public key cryptography; Internet; PKI protocols; S-MIME; casual observers; commercial advertisers; cryptographic algorithms; digital world; government agencies; legitimate user authentication; locations privacy; privacy intrusions; privacy-enhanced e-mail system; proxy concept; secure letters; security extensions; tracking intrusions; user information privacy; Cryptography; Electronic mail; Postal services; Privacy; Servers; Standards; E-mail; PKI; Proxy Server; S/MIME; X.509 certificates (ID#: 15-6408)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859787&isnumber=6859515


Hongyang Li; Dan, G.; Nahrstedt, K., “Portunes: Privacy-Preserving Fast Authentication for Dynamic Electric Vehicle Charging,” Smart Grid Communications (SmartGridComm), 2014 IEEE International Conference on, vol., no., pp. 920, 925, 3-6 Nov. 2014. doi:10.1109/SmartGridComm.2014.7007766
Abstract: Dynamic contactless charging is an emerging technology for charging electric vehicles (EV) on the move. For efficient charging and for proper billing, dynamic charging requires secure communication between the charging infrastructure and the EVs that supports very frequent real-time message exchange for EV authentication. In this paper we propose Portunes, an authentication protocol for charging pads to authenticate an EV’s identity. Portunes uses pseudonyms to provide location privacy, allows EVs to roam between different charging sections and receive a single bill, and achieves fast authentication by relying on symmetric keys and on the spatio-temporal location of the EV. We have implemented Portunes on RaspberryPi Model B with 700 MHz CPU and 512 MB RAM. Portunes allows the EV to generate authentication information within 0.3 ms, and allows charging pads to verify the information within 0.5 ms. In comparison, ECDSA signature generation and verification take over 25 ms and over 40 ms respectively.
Keywords: battery powered vehicles; message authentication; power engineering computing; EV authentication; Portunes; RaspberryPi Model B; dynamic contactless charging; dynamic electric vehicle charging; privacy-preserving fast authentication; Authentication; Public key; Roads; Switches; Synchronization; Wireless communication (ID#: 15-6409)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7007766&isnumber=7007609
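The speedup Portunes reports comes from replacing per-message ECDSA signatures with symmetric-key operations. The sketch below is not the Portunes protocol itself; it only illustrates the general symmetric-MAC authentication pattern the abstract describes, with a hypothetical shared key and message layout:

```python
import hashlib
import hmac
import os
import time

# Hypothetical shared key between an EV and the charging infrastructure;
# the actual Portunes key-distribution scheme is not reproduced here.
shared_key = os.urandom(32)

def make_auth_token(ev_id: str, pad_id: str, timestamp: float) -> bytes:
    """EV side: MAC over identity plus spatio-temporal context."""
    msg = f"{ev_id}|{pad_id}|{timestamp:.3f}".encode()
    return hmac.new(shared_key, msg, hashlib.sha256).digest()

def verify_auth_token(ev_id: str, pad_id: str, timestamp: float,
                      token: bytes) -> bool:
    """Charging-pad side: recompute and compare in constant time."""
    expected = make_auth_token(ev_id, pad_id, timestamp)
    return hmac.compare_digest(expected, token)

t = time.time()
token = make_auth_token("EV-42", "pad-7", t)
assert verify_auth_token("EV-42", "pad-7", t, token)
assert not verify_auth_token("EV-42", "pad-8", t, token)  # wrong pad rejected
```

A single HMAC computation is microseconds on commodity hardware, which is consistent with the sub-millisecond figures the paper reports for its symmetric-key design versus tens of milliseconds for ECDSA.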


Li Li; Jun Pang; Yang Liu; Jun Sun; Jin Song Dong, “Symbolic Analysis of an Electric Vehicle Charging Protocol,” Engineering of Complex Computer Systems (ICECCS), 2014 19th International Conference on, vol., no., pp. 11, 18, 4-7 Aug. 2014. doi:10.1109/ICECCS.2014.11
Abstract: In this paper, we describe our analysis of a recently proposed electric vehicle charging protocol. The protocol builds on complicated cryptographic primitives such as commitments, zero-knowledge proofs, BBS+ signatures, etc. Moreover, interesting properties such as secrecy, authentication, anonymity, and location privacy are claimed for this protocol. It thus presents a challenge for formal verification, as existing tools for security protocol analysis lack support for all the required features. In our analysis, we employ and combine the strengths of two state-of-the-art symbolic verifiers, Tamarin and ProVerif, to check all important properties of the protocol.
Keywords: cryptographic protocols; electric vehicles; electrical engineering computing; formal verification; electric vehicle charging protocol; security protocol analysis; symbolic analysis; Authentication; Cryptography; Educational institutions; Electric vehicles; Privacy; Protocols; anonymity location; authentication; privacy; secrecy; symbolic verification (ID#: 15-6410)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6923113&isnumber=6923102


Khatkar, M.; Phogat, N.; Kumar, B., “Reliable Data Transmission In Anonymous Location Aided Routing in MANET By Preventing Replay Attack,” Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2014 3rd International Conference on, vol., no., pp. 1, 6, 8-10 Oct. 2014. doi:10.1109/ICRITO.2014.7014731
Abstract: Privacy and security are major issues in MANET, especially when used in sensitive areas. Secure routing protocols have been developed and proposed by researchers to provide security and privacy at various levels. The ALARM protocol (Anonymous Location Aided Routing in MANET) provides both privacy and security features, including confidentiality, authentication and authorization. Location-based routing in MANET rests on several assumptions: the location of mobile nodes is known (using GPS), the time clocks of mobile nodes are loosely synchronized, nodes are mobile, and nodes have a uniform transmission range. The current work reviews the ALARM protocol and identifies some of the security problems in MANET. Further, the work suggests a mechanism to prevent malicious activity (replay attacks) in MANET using a monitoring method.
Keywords: data privacy; mobile ad hoc networks; routing protocols; synchronisation; telecommunication network reliability; telecommunication security; ALARM protocol; GPS; MANET; anonymous location aided routing protocol; data transmission reliability; malicious activity prevention; privacy feature; replay attack prevention; security feature; time clock synchronization; Authentication; Mobile ad hoc networks; Monitoring; Protocols; Routing; Synchronization; Alarm Protocol; Monitoring; Prevention; Replay attack (ID#: 15-6411)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7014731&isnumber=7014644
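The abstract does not specify how its monitoring method detects replays; as a general illustration only, a receiver can reject replayed messages by caching nonces inside a freshness window. The class below is a minimal sketch under that assumption (the class name, window size, and message fields are invented for illustration):

```python
import time

class ReplayGuard:
    """Reject messages that reuse a nonce or fall outside a freshness window."""

    def __init__(self, window_seconds: float = 30.0):
        self.window = window_seconds
        self.seen = {}  # nonce -> time the nonce was first accepted

    def accept(self, nonce: str, sent_at: float, now: float = None) -> bool:
        now = time.time() if now is None else now
        # Drop expired nonces so the cache stays bounded.
        self.seen = {n: t for n, t in self.seen.items()
                     if now - t < self.window}
        if abs(now - sent_at) > self.window:  # stale or future-dated message
            return False
        if nonce in self.seen:                # nonce reuse: a replay
            return False
        self.seen[nonce] = now
        return True

guard = ReplayGuard(window_seconds=30.0)
assert guard.accept("n1", sent_at=100.0, now=101.0)       # fresh, accepted
assert not guard.accept("n1", sent_at=100.0, now=102.0)   # replay rejected
assert not guard.accept("n2", sent_at=10.0, now=102.0)    # too old, rejected
```

The freshness check relies on the loosely synchronized clocks that the ALARM assumptions above already require.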


Mustafa, M.A.; Ning Zhang; Kalogridis, G.; Zhong Fan, “Roaming Electric Vehicle Charging and Billing: An Anonymous Multi-User Protocol,” Smart Grid Communications (SmartGridComm), 2014 IEEE International Conference on, vol., no., pp. 939, 945, 3-6 Nov. 2014. doi:10.1109/SmartGridComm.2014.7007769
Abstract: In this paper, we propose a secure roaming electric vehicle (EV) charging protocol that helps preserve users’ privacy. During a charging session, a roaming EV user uses a pseudonym of the EV (known only to the user’s contracted supplier) which is anonymously signed by the user’s private key. This protocol protects the user’s identity privacy from other suppliers as well as the user’s privacy of location from its own supplier. Further, it allows the user’s contracted supplier to authenticate the EV and the user. Using a two-factor authentication approach, multiuser EV charging is supported, and different legitimate EV users (e.g. family members) can be held accountable for their charging sessions. With each charging session, the EV uses a different pseudonym, which prevents adversaries from linking the different charging sessions of the EV. On an application level, our protocol supports fair user billing, i.e. each user pays only for his/her own energy consumption, and an open EV marketplace in which EV users can safely choose among different remote host suppliers.
Keywords: cryptographic protocols; data privacy; electric vehicles; power consumption; private key cryptography; secondary cells; anonymous multiuser protocol; charging protocol; charging sessions; contracted supplier; electric vehicle billing; electric vehicle charging; energy consumption; fair user billing; multiuser EV charging; private key protocol; remote host suppliers; secure roaming electric vehicle; two-factor authentication; user identity privacy; Conferences; Electricity; Privacy; Protocols; Public key; Smart grids (ID#: 15-6412)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7007769&isnumber=7007609
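The unlinkability property described above rests on using a fresh pseudonym per charging session that only the contracted supplier can re-derive. A minimal sketch of that one idea follows; the hash-based derivation is an assumption for illustration, and the paper's anonymous-signature machinery is not reproduced:

```python
import hashlib
import os

# Hypothetical long-term secret shared only by the EV and its contracted
# supplier; how such a secret is provisioned is outside this sketch.
ev_secret = os.urandom(32)

def session_pseudonym(secret: bytes, session_counter: int) -> str:
    """Derive a fresh per-session pseudonym. Without the secret, two
    pseudonyms cannot be linked to the same EV; with it, the contracted
    supplier can re-derive and recognize them for billing."""
    data = secret + session_counter.to_bytes(8, "big")
    return hashlib.sha256(data).hexdigest()

p1 = session_pseudonym(ev_secret, 1)
p2 = session_pseudonym(ev_secret, 2)
assert p1 != p2                                # sessions look unrelated
assert p1 == session_pseudonym(ev_secret, 1)   # supplier can re-derive
```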


Sathyasundary, P.; Chandrasekar, R., “Privacy and Integrity in Spatial Queries by Using Voronoi Neighbors,” Advanced Communication Control and Computing Technologies (ICACCCT), 2014 International Conference on, vol., no., pp. 1226, 1230, 8-10 May 2014. doi:10.1109/ICACCCT.2014.7019294
Abstract: Advances in network technologies and continuous growth of the Internet have triggered a new trend towards outsourcing data management needs to Service Providers. Outsourcing spatial databases to third-party Service Providers has attracted much attention from individual and business data owners. With the popularity of mobile devices, it provides immediate and reliable location-based information to smart phones. Since this spatial information is delivered to real-world users such as mobile users, ensuring spatial integrity in OSDB becomes critical. To overcome this problem, the Voronoi Neighbor Authentication (VN-Auth) technique is used, which utilizes the Voronoi diagram to prove the integrity of the query result for the kNN query. Spatial query processing in a spatial database attempts to extract specific geometric relations among spatial objects. The Service Provider verifies correctness and completeness of the user query through the neighborhood information of the Voronoi diagram. However, privacy of the user location is not achieved by this technique alone. To avoid this drawback, it is proposed to apply a privacy-aware LBS technique under the Voronoi Neighbor concept.
Keywords: computational geometry; data integrity; data privacy; mobile computing; outsourcing; query processing; smart phones; visual databases; Internet; OSDB; VN auth-technique; Voronoi diagram; Voronoi neighbor authentication technique; kNN query; mobile devices; neighborhood information; network technologies; outsourcing data management; outsourcing spatial database; privacy aware LBS technique; smart phones; spatial integrity; spatial queries; spatial query processing; Airports; Educational institutions; Mobile communication; Privacy; Rail transportation; Spatial databases; Authentication and Privacy; Location Anonymizer; Service Providers; Spatial Outsourcing Database; Voronoi Neighbor (ID#: 15-6413)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019294&isnumber=7019129


Miao He; Kuan Zhang; Shen, X.S., “PMQC: A Privacy-Preserving Multi-Quality Charging Scheme in V2G Network,” Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 675, 680, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7036885
Abstract: Multi-quality charging, which provides electric vehicles (EVs) with multiple levels of charging services, including quality-guaranteed service (QGS) and best effort service (BES), can guarantee the charging service quality for qualified EVs in a vehicle-to-grid (V2G) network. To perform multi-quality charging, an evaluation of the EV’s attributes is necessary to determine which level of charging service can be offered to this EV. However, the EV owner’s privacy, such as real identity, lifestyle, location, and sensitive information in the attributes, may be disclosed during the evaluation and authentication. In this paper, we propose a privacy-preserving multi-quality charging (PMQC) scheme in the V2G network to evaluate the EV’s attributes, authenticate its service eligibility and generate its bill without revealing the EV’s private information. Specifically, we propose an evaluation mechanism on the EV’s attributes to determine its charging service quality. With attribute-based encryption, PMQC can prevent the EV’s attributes from being disclosed to other entities during the evaluation. In addition, PMQC can authenticate the EV without revealing its real identity. Security analysis demonstrates that the EV’s privacy mentioned above can be preserved by PMQC. Performance evaluation results show that PMQC achieves higher efficiency in authentication compared with other schemes in terms of computation overhead.
Keywords: battery powered vehicles; computer network security; cryptography; data privacy; power engineering computing; secondary cells; PMQC; V2G network; attribute based encryption; best effort service; charging service quality; electric vehicle attribute; electric vehicle charging; multiquality charging; privacy preserving charging; quality guaranteed service; service eligibility authentication; vehicle-to-grid network; Authentication; Batteries; Electricity; Information systems; Privacy; Public key; Smart grid; V2G network; authentication; privacy-preservation (ID#: 15-6415)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036885&isnumber=7036769


Qin Zhang; Lazos, L., “Collusion-Resistant Query Anonymization for Location-Based Services,” Communications (ICC), 2014 IEEE International Conference on, vol., no., pp. 768, 774, 10-14 June 2014. doi:10.1109/ICC.2014.6883412
Abstract: We address the problem of anonymizing user queries when accessing location-based services. We design a novel location and query anonymization protocol called MAZE that preserves the user privacy without relying on trusted parties. MAZE guarantees the user’s anonymity and privacy in a decentralized manner using P2P groups. Compared to prior works, MAZE enables individual user authentication for the purpose of implementing a pay-per-use or membership subscription model and is resistant to collusion of the P2P users. We extend MAZE to L-MAZE, a multi-stage protocol that is resistant to collusion of the P2P users with the LBS, at the expense of higher communication overhead.
Keywords: data privacy; mobility management (mobile radio); peer-to-peer computing; protocols; query processing; MAZE; P2P groups; collusion resistant query anonymization; location based services; multistage protocol; query anonymization protocol; user privacy; user queries; Authentication; Cryptography; Information systems; Mobile radio mobility management; Peer-to-peer computing; Privacy; Protocols (ID#: 15-6416)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883412&isnumber=6883277


Tianyu Zhao; Chang Chen; Lingbo Wei; Mengke Yu, “An Anonymous Payment System to Protect the Privacy of Electric Vehicles,” Wireless Communications and Signal Processing (WCSP), 2014 Sixth International Conference on, vol., no., pp. 1, 6, 23-25 Oct. 2014. doi:10.1109/WCSP.2014.6992208
Abstract: An electric vehicle is an automobile powered by electrical energy stored in batteries. Due to the frequent recharging, vehicles need to be connected to the recharging infrastructure while they are parked. This may disclose drivers’ privacy, such as their location, which drivers may want to keep secret. In this paper, we propose a scheme to enhance the privacy of drivers using an anonymous credential technique and the Trusted Platform Module (TPM). We use the anonymous credential technique to achieve the anonymity of vehicles such that drivers can anonymously and unlinkably recharge their vehicles. We add some attributes to the credential, such as the type of battery in the vehicle, in case the prices of different batteries differ. We use the TPM to omit a blacklist, so that the company that offers the recharging service (Energy Provider Company, EPC) does not need to conduct double-spending detection.
Keywords: battery powered vehicles; cryptography; data privacy; driver information systems; financial management; secondary cells; trusted computing; EPC; Energy Provider Company; TPM; anonymous credential technique; anonymous payment system; automobile; battery; double spending detection; driver privacy; electric vehicles; electrical energy; privacy protection; recharging infrastructure; recharging service; trusted platform module; Authentication; Batteries; Privacy; Protocols; Registers; Servers; Vehicles (ID#: 15-6417)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6992208&isnumber=6992003


Sadikin, M.F.; Kyas, M., “Security and Privacy Protocol for Emerging Smart RFID Applications,” Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2014 15th IEEE/ACIS International Conference on, vol., no., pp. 1, 7, June 30 2014–July 2 2014. doi:10.1109/SNPD.2014.6888694
Abstract: The rise of smart RFID technology (i.e. sensor integration into RFID systems) has introduced various advantages in the context of location awareness applications, ranging from low-cost implementation and maintenance to the flexibility to support large-scale systems. Nevertheless, the use of such technology introduces tremendous security and privacy issues (e.g. unauthorized tracking, information leakage, cloning attacks, data manipulation, collision attacks, replay attacks, Denial-of-Service, etc.). On the other hand, the constrained nature of RFID applications makes security enforcement more complicated. This paper presents IMAKA-Tate: Identity protection, Mutual Authentication and Key Agreement using the Tate pairing of an identity-based encryption method. It is designed to tackle the various challenges of constrained RFID applications by applying a light-weight cryptographic method with advanced-level 128-bit security protection. Indeed, our proposed solution protects the RFID system from various threats and preserves privacy by performing encryption early, including of the identity, even before authentication starts.
Keywords: data privacy; protocols; radiofrequency identification; telecommunication security; Denial-of-Service; RFID system; cloning attack; collision attack; data manipulation; identity based encryption method; identity protection; information leakage; key agreement; large-scale system; lightweight cryptographic method; location awareness applications; mutual authentication; privacy protocol; replay attack; security protection; security protocol; sensor integration; smart RFID applications; unauthorized tracking; Authentication; Cryptography; Payloads; Privacy; Protocols; Radiofrequency identification; Mutual Authentication; Privacy Preserving; Smart RFID Security (ID#: 15-6418)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6888694&isnumber=6888665


Raheem, A.; Lasebae, A.; Loo, J., “A Secure Authentication Protocol for IP-Based Wireless Sensor Communications Using the Location/ID Split Protocol (LISP),” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 840, 845, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.135
Abstract: The future of the Internet of Things (IoT) involves a huge number of node devices, such as wireless sensors, that can communicate in a machine-to-machine pattern, where devices will be globally addressed and identified. As the number of connected devices increases, the burden on the network infrastructure and the size of the routing tables grow, straining the efficiency of current routing protocols in the Internet backbone. Recently, an IETF working group, along with a research group at Cisco, has been working on the Locator/ID Separation Protocol, a routing architecture that provides new semantics for IP addressing in order to simplify routing operations and improve scalability in the future Internet, including the IoT. In light of this, the paper proposes an efficient security authentication and key exchange scheme suited for the Internet of Things, based on the Locator/ID Separation Protocol. The proposed method meets practicability, simplicity, and strong notions of security. The protocol is verified using AVISPA (Automated Validation of Internet Security Protocols and Applications), a push-button tool for the automated validation of security protocols, and the results show that it does not have any security flaws.
Keywords: Internet; cryptographic protocols; routing protocols; transport protocols; AVISPA; IP addressing; Internet backbone; Internet of Things; IoT; LISP; automated validation Internet security protocols and applications; key exchange scheme; location-ID split protocol; locator-ID separation protocol; machine-to-machine pattern; network infrastructure burden; push button tool; routing protocols; routing tables; security authentication; wireless sensors; Authentication; Internet; Peer-to-peer computing; Routing protocols; Wireless sensor networks; Internet of Things; Sensors; LISP; Validation of Internet (ID#: 15-6419)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011335&isnumber=7011202
 

Rugamer, A.; Stahl, M.; Lukcin, I.; Rohmer, G., “Privacy Protected Localization and Authentication of Georeferenced Measurements Using Galileo PRS,” Position, Location and Navigation Symposium – PLANS 2014, 2014 IEEE/ION, vol., no., pp. 478, 486, 5-8 May 2014. doi:10.1109/PLANS.2014.6851406
Abstract: This paper describes two methods by which ordinary users can benefit from privacy-protected localization and geo-referenced measurement authentication using the Galileo public regulated service (PRS). The user does not need to care about any security-related PRS-receiver issues, and his localization privacy is inherently protected. A raw data snapshot, containing only Galileo PRS data, is combined with an artifact to be authenticated and forwarded to a PRS-enabled agency server. All PRS and security-related functions are implemented on this server, located in a secured place. The server uses cross-correlation and snapshot positioning methods to authenticate or obtain position information from the raw data. The described methods do not provide any direct PRS information, such as PRS position or time, to the ordinary user; only the specific user request is answered. After outlining the architecture of possible implementations, limits and applications of the idea are discussed, and possible attacks on the methods are described with mitigation measures. The paper concludes with a comparison to the state of the art and other publications and projects in this field of GNSS authentication.
Keywords: Global Positioning System; data privacy; telecommunication security; GNSS authentication; Galileo PRS data; Galileo public regulated service; PRS enabled agency server; cross-correlation; georeferenced measurement authentication; privacy protected localization; raw data snapshot; security related PRS-receiver issue; snapshot positioning methods; Authentication; Europe; Privacy; Servers; Time measurement; PRS; Satellite navigation systems; Snapshot positioning (ID#: 15-6420)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6851406&isnumber=6851348
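Snapshot positioning of the kind described works by cross-correlating a short raw-signal capture against a known spreading code on the server; the correlation peak recovers the code delay. A toy illustration with a random ±1 code follows (this is not an actual Galileo PRS code, and real receivers also search over Doppler, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)  # toy PRN-like spreading code
true_delay = 137                            # unknown to the "server" a priori
received = np.roll(code, true_delay) + 0.5 * rng.standard_normal(1023)

# Circular cross-correlation over all candidate delays: the peak index
# recovers the code delay, the raw observable snapshot positioning uses.
corr = np.array([np.dot(received, np.roll(code, d)) for d in range(1023)])
assert int(np.argmax(corr)) == true_delay
```

Because only the raw snapshot leaves the user's device, all code knowledge and correlation work stay on the server, which is the privacy argument the paper makes.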
 




Location Privacy—Cloaking-Based Approaches, 2014

 

 
SoS Logo

Location Privacy—Cloaking-Based Approaches

2014


Location-based services have proven popular both with end users and with distributed systems operators. The research presented here looks at protecting privacy on these systems using cloaking-based methods. The work was published in 2014.



Zheng Jiangyu; Tan Xiaobin; Cliff, Z.; Niu Yukun; Zhu Jin, “A Cloaking-Based Approach to Protect Location Privacy in Location-Based Services,” Control Conference (CCC), 2014 33rd Chinese, vol., no., pp. 5459, 5464, 28-30 July 2014. doi:10.1109/ChiCC.2014.6895872
Abstract: With the widespread use of mobile devices, location-based service (LBS) applications have become increasingly popular, which introduces a new security challenge: protecting the user’s location privacy. On one hand, a user wants to report a location as far away as possible from his real location to protect his location privacy. On the other hand, in order to obtain high quality of service (QoS), users are required to report their locations as accurately as possible. To achieve the desired tradeoff between the privacy requirement and the QoS requirement, we propose a novel approach based on a cloaking technique. We also discuss the disadvantages of the traditional general system model and propose an improved model. The basic idea of our approach is to select a sub-area from the generated cloaking area as the user’s reported location. The sub-area may not contain the user’s real location, which prevents an adversary from performing attacks with side information. Specifically, by defining an objective function with a novel location privacy metric and a QoS metric, we are able to convert the privacy issue into an optimization problem. Then, the location privacy metric and QoS metric are given. To reduce the complexity of the optimization, a heuristic algorithm is proposed. Through privacy-preserving analysis and comparison with related work [8], we demonstrate the effectiveness and efficiency of our approach.
Keywords: data protection; invisibility cloaks; mobility management (mobile radio); optimisation; quality of service; smart phones; telecommunication security; QoS metric; cloaking-based approach; heuristic algorithm; location privacy metric; location-based services; mobile devices; optimization problem; privacy preserving analysis; privacy requirement; security; user location privacy protection; Complexity theory; Heuristic algorithms; Measurement; Optimization; Privacy; Quality of service; Servers; Cloaking Area; Location Privacy; Location-based Services; k-anonymity (ID#: 15-6420)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895872&isnumber=6895198
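The paper above refines classic cloaking by then selecting a sub-area of the generated region; the sketch below shows only the underlying k-anonymity step it starts from: growing a square around the user until it covers at least k users, then reporting the region instead of the exact coordinate. Function names, the grid step, and bounds are arbitrary illustration choices:

```python
import random

def cloak(user_xy, others, k=5, step=0.5, max_half=50.0):
    """Grow a square region centered on the user until it contains at
    least k users (the user plus k-1 others). Returns (x0, y0, x1, y1),
    or None if the local user density is too low to cloak."""
    x, y = user_xy
    half = step
    while half <= max_half:
        inside = sum(1 for (ox, oy) in others
                     if abs(ox - x) <= half and abs(oy - y) <= half)
        if inside + 1 >= k:
            return (x - half, y - half, x + half, y + half)
        half += step
    return None

random.seed(1)
crowd = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]
region = cloak((5.0, 5.0), crowd, k=10)
assert region is not None
x0, y0, x1, y1 = region
assert x0 <= 5.0 <= x1 and y0 <= 5.0 <= y1  # real location lies inside
```

Note the final assertion is exactly the weakness the paper targets: because the real location always lies inside the reported region, an adversary with side information can narrow the search, motivating the sub-area selection step.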


Jagdale, B.N.; Bakal, J.W., “Emerging Trends of Location Privacy Techniques in Location Aided Applications,” Contemporary Computing and Informatics (IC3I), 2014 International Conference on, vol., no., pp. 1002, 1006, 27-29 Nov. 2014. doi:10.1109/IC3I.2014.7019826
Abstract: While mobile services are maturing in the mobile computing world, location services have started mushrooming into everyday work, making a good impact for humans as well as all moving resources. With this development, a serious threat to location privacy has emerged and is becoming an inevitable part of this kind of service. Present systems do not have a concrete answer to location privacy because of the absence of robust technology, poor governance and business interests. Modern approaches need to be practiced, including cryptographic, collaborative, distributed and internationally legal governance protocols. There are dozens of cloaking methods, such as dummy users, k-anonymity, false location queries, dummy queries, cryptography protocols, etc.; however, no commercial LBS system guarantees location privacy. In this paper, we have studied the drawbacks of existing techniques and proposed either modifications or novel methods to protect location privacy. Moreover, we suggest analysing privacy strength, energy consumption, accuracy of services, and overhead costs such as computing and communication cost.
Keywords: cryptographic protocols; data protection; mobile computing; LBS systems; cloaking methods; collaborative protocol; communication cost analysis; computing cost analysis; cryptography protocol; distributed protocol; energy consumption, analyse; internationally legal governance protocol; location aided applications; location privacy protection; location privacy techniques; location privacy threat; location services; mobile services; overhead cost analysis; privacy strength analysis; service accuracy analysis; Computer architecture; Data privacy; Mobile communication; Mobile computing; Privacy; Protocols; Servers; Location Privacy; Security; distributed (ID#: 15-6421)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019826&isnumber=7019573


Xun Yi; Paulet, R.; Bertino, E.; Varadharajan, V., “Practical k Nearest Neighbor Queries with Location Privacy,” Data Engineering (ICDE), 2014 IEEE 30th International Conference on, vol., no., pp. 640, 651, March 31 2014–April 4 2014. doi:10.1109/ICDE.2014.6816688
Abstract: In mobile communication, spatial queries pose a serious threat to user location privacy because the location of a query may reveal sensitive information about the mobile user. In this paper, we study k nearest neighbor (kNN) queries where the mobile user queries the location-based service (LBS) provider about k nearest points of interest (POIs) on the basis of his current location. We propose a solution for the mobile user to preserve his location privacy in kNN queries. The proposed solution is built on the Paillier public-key cryptosystem and can provide both location privacy and data privacy. In particular, our solution allows the mobile user to retrieve one type of POIs, for example, k nearest car parks, without revealing to the LBS provider what type of points is retrieved. For a cloaking region with n×n cells and m types of points, the total communication complexity for the mobile user to retrieve a type of k nearest POIs is O(n+m) while the computation complexities of the mobile user and the LBS provider are O(n + m) and O(n2m), respectively. Compared with existing solutions for kNN queries with location privacy, our solutions are more efficient. Experiments have shown that our solutions are practical for kNN queries.
Keywords: communication complexity; data privacy; mobility management (mobile radio); pattern recognition; public key cryptography; query processing; LBS querying; Paillier public-key cryptosystem; cloaking region; computation complexities; data privacy; k nearest POIs retrieval; k nearest car parks; k nearest points of interest; kNN queries; location privacy preservation; location-based service provider querying; mobile communication; mobile user; practical k nearest neighbor queries; spatial queries; total communication complexity; user location privacy; Data privacy; Databases; Games; Middleware; Mobile communication; Privacy; Protocols (ID#: 15-6422)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816688&isnumber=6816620
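The kNN protocol above builds on the Paillier cryptosystem, whose key property is that multiplying ciphertexts adds the underlying plaintexts, letting the LBS provider compute on encrypted queries. Below is a toy implementation with insecurely small primes, for illustration only; a real deployment would use 2048-bit-plus moduli and a vetted library:

```python
import math
import random

# Toy Paillier parameters (tiny primes -- NOT secure, illustration only).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                                  # standard simple choice of g
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                       # precomputed decryption factor

def encrypt(m: int) -> int:
    """c = g^m * r^n mod n^2, with r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c = (encrypt(17) * encrypt(25)) % n2
assert decrypt(c) == 42
```

The homomorphic step shown in the last two lines is what allows an LBS provider to combine encrypted values without ever seeing the plaintext query location.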


Bidi Ying; Makrakis, D., “Protecting Location Privacy with Clustering Anonymization in Vehicular Networks,” Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, vol., no., pp. 305, 310, April 27 2014–May 2 2014. doi:10.1109/INFCOMW.2014.6849249
Abstract: Location privacy is an important issue in location-based services. A large number of location cloaking algorithms have been proposed for protecting the location privacy of users. However, these algorithms cannot be used in vehicular networks due to constrained vehicular mobility. In this paper, we propose a new method named Protecting Location Privacy with Clustering Anonymization (PLPCA) for location-based services in vehicular networks. The PLPCA algorithm starts by transforming a road network into an edge-cluster graph in order to conceal road information and traffic information, and then provides a cloaking algorithm based on k-anonymity and l-diversity as privacy metrics to further conceal a target vehicle's location. Simulation analysis shows our PLPCA has good performance, e.g., in the strength of hiding road information and traffic information.
Keywords: data privacy; graph theory; mobility management (mobile radio); pattern clustering; telecommunication security; vehicular ad hoc networks; PLPCA algorithm; edge-cluster graph; k-anonymity; l-diversity; location based service; location cloaking algorithm; protecting location privacy with clustering anonymization; road information hiding; road network transforming; traffic information hiding; vehicular ad hoc network; vehicular mobility; Clustering algorithms; Conferences; Privacy; Roads; Social network services; Vehicle dynamics; Vehicles; cluster; location privacy; location-based services; vehicular networks (ID#: 15-6423)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849249&isnumber=6849127


Jagdale, B.N.; Bakal, J.W., “Synergetic Cloaking Technique in Wireless Network for Location Privacy,” Industrial and Information Systems (ICIIS), 2014 9th International Conference on, vol., no., pp. 1, 6, 15-17 Dec. 2014. doi:10.1109/ICIINFS.2014.7036480
Abstract: Mobile users access location services from a location-based server. While doing so, the user’s privacy is at risk: the server has access to all details about the user, for example, the recently visited places and the type of information he accesses. We present a synergetic technique to safeguard the location privacy of users accessing location-based services via mobile devices. Mobile devices have the capability to form ad-hoc networks to hide a user’s identity and position. The user who requires the service is the query originator, and the one who requests the service on behalf of the query originator is the query sender. The query originator selects the query sender with equal probability, which leads to anonymity in the network. The location revealed to the location service provider is a rectangle instead of an exact coordinate. In this paper we have simulated the mobile network and shown results for cloaking area sizes and performance against variation in the density of users.
Keywords: data privacy; mobile ad hoc networks; mobility management (mobile radio); probability; telecommunication security; telecommunication services; ad-hoc networks; cloaking area sizes; location based server; location privacy; location service provider; location-based services; mobile devices; mobile network; mobile users; query originator; query sender; synergetic cloaking technique; user privacy; wireless network; Ad hoc networks; Cryptography; Databases; Educational institutions; Mobile communication; Privacy; Servers; Cloaking; Collaboration; Location Privacy; Mobile Networks; Performance (ID#: 15-6424)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036480&isnumber=7036459


Yujia Zhu; Lidong Zhai, “Location Privacy in Buildings: A 3-Dimensional K-Anonymity Model,” Mobile Ad-hoc and Sensor Networks (MSN), 2014 10th International Conference on, vol., no., pp. 195, 200, 19-21 Dec. 2014. doi:10.1109/MSN.2014.33
Abstract: Privacy protection has recently received considerable attention in location-based services. In this paper, we show that most of the existing k-anonymity location cloaking algorithms consider only two-dimensional locations and cannot effectively prevent location-dependent attacks when users’ locations have height information. Therefore, adopting three-dimensional location information, we propose a new clique-based cloaking algorithm, called 3dCliqueCloak, to defend against location leaks in indoor environments. The main idea is to expand the MBV (minimum bounding volume) to a three-dimensional space, so that a user who initiates a location service can find a k-anonymity cloaking set in the three-dimensional space. The efficiency and effectiveness of the proposed 3dCliqueCloak algorithm are validated by a series of carefully designed experiments.
Keywords: data privacy; indoor radio; mobile computing; solid modelling; telecommunication computing; telecommunication security; 3D K-anonymity model; 3D location information; 3D space; 3d CliqueCloak algorithm; MBV; clique-based cloaking algorithm; indoor environment; k-anonymity location cloaking algorithms; location privacy protection; location-based services; minimum bounding volume; Engines; Floors; Layout; Measurement; Mobile communication; Privacy; indoor localization; k-anonymity; location privacy; location-based services (ID#: 15-6425)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7051770&isnumber=7051734
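The paper's central step, expanding the minimum bounding volume (MBV) to three dimensions so that the cloaking set also covers height, reduces in its simplest form to an axis-aligned bounding box over 3-D points (a hedged sketch of the MBV concept, not the 3dCliqueCloak algorithm itself):

```python
def minimum_bounding_volume(points):
    # points: iterable of (x, y, z) positions of the k users in the
    # anonymity set. Returns the axis-aligned box
    # ((x_min, y_min, z_min), (x_max, y_max, z_max)) enclosing all of
    # them, including the height dimension that 2-D cloaking ignores.
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```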


Corser, G.; Huirong Fu; Tao Shu; D’Errico, P.; Ma, W.; Supeng Leng; Ye Zhu, “Privacy-by-Decoy: Protecting Location Privacy Against Collusion and Deanonymization in Vehicular Location Based Services,” Intelligent Vehicles Symposium Proceedings, 2014 IEEE, vol., no., pp. 1030, 1036, 8-11 June 2014. doi:10.1109/IVS.2014.6856595
Abstract: Wireless networks which would connect vehicles via the Internet to a location based service, LBS, also would expose vehicles to online surveillance. In circumstances when spatial cloaking is not effective, such as when continuous precise location is required, LBSs may be designed so that users relay dummy queries through other vehicles to camouflage true locations. This paper introduces PARROTS, Position Altered Requests Relayed Over Time and Space, a privacy protocol which protects LBS users’ location information from LBS administrators even (1) when the LBS requires continuous precise location data in a vehicular ad hoc network, (2) when LBS administrators collude with administrators of vehicular wireless access points (a.k.a. roadside units, or RSUs), and (3) when precise location data might be deanonymized using map databases linking vehicle positions with vehicle owners’ home/work addresses and geographic coordinates. Defense against deanonymization requires concealment of endpoints, the effectiveness of which depends on the density of LBS users and the endpoint protection zone size. Simulations using realistic vehicle traffic mobility models varying endpoint protection zone sizes measure improvements in privacy protection.
Keywords: data privacy; radio access networks; telecommunication security; vehicular ad hoc networks; Internet; LBSs; PARROTS; VANET; dummy queries; endpoint concealment; endpoint protection zone size; location data deanonymization; location privacy protection; map databases; online surveillance; position altered requests relayed over time and space; privacy protocol; privacy-by-decoy; vehicle traffic mobility models; vehicular ad hoc network; vehicular location based services; vehicular wireless access points; wireless networks; Computational modeling; Cryptography; Mathematical model; Measurement; Privacy; Surveillance; Vehicles (ID#: 15-6426)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6856595&isnumber=6856370


Jagdale, B.N.; Bakal, J.W., “Myself: Local Perturbation for Location Privacy in LBS Applications,” Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on, vol., no., pp. 1981, 1985, 24-27 Sept. 2014. doi:10.1109/ICACCI.2014.6968641
Abstract: Location security in current location-based services (LBS) is under threat because mobile users must report their actual location to the LBS provider in order to get their desired POIs (points of interest). We consider location privacy techniques that work using obfuscation operators and provide different information services using different cloaking techniques, without any trusted components other than the client's mobile device. These random-category techniques blur the accurate user location (i.e., a point with coordinates) and replace it with a well-shaped cloaked region (e.g., circle, rectangle, pentagon). We recommend an approach in which, instead of exchanging cloaking data with peers, the user queries the LBS directly. We present several techniques: the first provides different privacy levels using obfuscation operators, the second generates query regions of different shapes, and the third demonstrates regional cloaking; two further new ideas are also presented. We show the effectiveness and performance of these techniques.
Keywords: data privacy; mobile computing; query processing; LBS applications; POI; cloaking techniques; information services; local perturbation; location privacy; location security; location-based services; mobile users; obfuscation operators; query processing; random category; regional cloaking; Cities and towns; Clocks; Context; Europe; Mobile communication; Robustness; Shape; Cloaking; Location Privacy; Location-Based Services; Security (ID#: 15-6427)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6968641&isnumber=6968191


Xu Zhang; Gyoung-Bae Kim; Hae-Young Bae, “An Adaptive Spatial Cloaking Method for Privacy Protection in Location-Based Service,” Information and Communication Technology Convergence (ICTC), 2014 International Conference on, vol., no., pp. 480, 485, 22-24 Oct. 2014. doi:10.1109/ICTC.2014.6983186
Abstract: Location privacy has been a serious concern for mobile users of location-based services. However, existing cloaking methods suffer from computation and communication costs due to large cloaking areas. In this paper, we propose an adaptive spatial cloaking method based on semantic locations to protect users' privacy. The cloaking region is generated in an asymmetric way and can obtain a reasonable cloaking size. Evaluation shows that the proposed method achieves good efficiency and scalability by reducing computation and communication overhead.
Keywords: data protection; invisibility cloaks; mobile radio; adaptive spatial cloaking method; communication cost; communication overhead; computation cost; location privacy; location-based service; user privacy protection; Computer architecture; Indexes; Mobile radio mobility management; Privacy; Semantics; Servers; Trajectory; Location Privacy; Location-based Service; Spatial Cloaking (ID#: 15-6428)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6983186&isnumber=6983064


Dhawale, S.P.; Raut, A.R., “Analysis of Location Monitoring Techniques with Privacy Preservation in WSN,” Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, vol., no., pp. 649, 653, 7-9 April 2014. doi:10.1109/CSNT.2014.136
Abstract: The use of location surveillance systems is increasing day by day, and the range of systems providing such services, like GPS and PDAs, is growing in parallel. As a result, we can obtain the precise location of a monitored object; at the same time, however, the essential issue of privacy is neglected. A WSN chiefly consists of spatially distributed autonomous sensors that monitor physical or environmental conditions and cooperatively pass their data through the wireless network to a main location. The main challenges in wireless sensor networks are heterogeneity, distributed processing, low-bandwidth communication, large-scale coordination, and secure location monitoring. A variety of applications have been developed on the basis of wireless sensor networks, such as navigation, habitat monitoring, and object detection and tracking. Location monitoring systems are used to detect human activities and provide monitoring services, but with weak privacy. This paper gives a comparative analysis of location monitoring and privacy-providing schemes. Problems in previous work include reporting wrong locations, providing precise locations without privacy, and providing privacy and location only for static data. The method proposed in this paper is more reliable in overcoming these problems.
Keywords: Global Positioning System; data privacy; surveillance; telecommunication security; wireless sensor networks; GPS; PDA; WSN; distributed processing; environmental condition; habitat monitoring; human activity detection; location surveillance system; low bandwidth communication; navigation; object detection; object tracking; privacy preservation; secured location monitoring technique; spatially distributed autonomous sensor; wireless sensor network; Data privacy; Monitoring; Peer-to-peer computing; Privacy; Sensors; Servers; Wireless sensor networks; Anonymization; Wireless sensor network; cloaking; location monitoring; privacy preserving (ID#: 15-6429)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821478&isnumber=6821334


Niu, B.; Qinghua Li; Xiaoyan Zhu; Guohong Cao; Hui Li, “Achieving K-Anonymity in Privacy-Aware Location-Based Services,” INFOCOM, 2014 Proceedings IEEE, vol., no., pp. 754, 762, April 27 2014-May 2 2014. doi:10.1109/INFOCOM.2014.6848002
Abstract: Location-Based Service (LBS) has become a vital part of our daily life. While enjoying the convenience provided by LBS, users may lose privacy since the untrusted LBS server has all the information about users in LBS and it may track them in various ways or release their personal data to third parties. To address the privacy issue, we propose a Dummy-Location Selection (DLS) algorithm to achieve k-anonymity for users in LBS. Different from existing approaches, the DLS algorithm carefully selects dummy locations considering that side information may be exploited by adversaries. We first choose these dummy locations based on the entropy metric, and then propose an enhanced-DLS algorithm, to make sure that the selected dummy locations are spread as far as possible. Evaluation results show that the proposed DLS algorithm can significantly improve the privacy level in terms of entropy. The enhanced-DLS algorithm can enlarge the cloaking region while keeping similar privacy level as the DLS algorithm.
Keywords: data privacy; mobile computing; DLS algorithm; cloaking region; dummy-location selection algorithm; entropy metric; k-anonymity; privacy-aware location-based services; untrusted LBS server; user information; Algorithm design and analysis; Computers; Conferences; Entropy; Measurement; Privacy; Servers (ID#: 15-6430)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848002&isnumber=6847911
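The entropy criterion the DLS algorithm builds on can be illustrated with a small sketch: dummies are preferred when their historical query probabilities are close to the real cell's, so the k reported cells look near-uniform to an adversary holding that side information. This is our simplified greedy reading of the idea, not the authors' exact algorithm:

```python
import math

def entropy(probs):
    # Shannon entropy of a (renormalized) probability list; higher means
    # the adversary is less sure which reported cell is the real one.
    total = sum(probs)
    return -sum(p / total * math.log2(p / total) for p in probs if p > 0)

def select_dummies(real_cell, candidates, query_prob, k):
    # Greedy sketch: rank candidate cells by how close their historical
    # query probability is to the real cell's, then keep the k-1 closest
    # as dummies alongside the real cell.
    ranked = sorted(candidates,
                    key=lambda c: abs(query_prob[c] - query_prob[real_cell]))
    return [real_cell] + ranked[:k - 1]
```

The enhanced-DLS variant in the paper additionally spreads the chosen cells apart to enlarge the cloaking region; that spatial term is omitted here.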


Ben Niu; Zhengyan Zhang; Xiaoqing Li; Hui Li, “Privacy-Area Aware Dummy Generation Algorithms for Location-Based Services,” Communications (ICC), 2014 IEEE International Conference on, vol., no., pp. 957, 962, 10-14 June 2014. doi:10.1109/ICC.2014.6883443
Abstract: Location-Based Services (LBSs) have been one of the most popular activities in our daily life. Users can send queries to the LBS server easily to learn their surroundings. However, these location-related queries may result in serious privacy concerns since the un-trusted LBS server has all the information about users and may track them in various ways. In this paper, we propose two dummy-based solutions to achieve k-anonymity for privacy-area aware users in LBSs with considering that side information may be exploited by adversaries. We first choose some candidates based on a virtual circle or grid method, then blur these candidates into the final positions of dummy locations based on the entropy-based privacy metric. Security analysis and evaluation results indicate that the V-circle solution can significantly improve the privacy anonymity level. The V-grid solution can further enlarge the cloaking region while keeping similar privacy level.
Keywords: data privacy; query processing; ubiquitous computing; V-circle solution; cloaking region; entropy-based privacy metric; k-anonymity; location-based services; location-related queries; privacy-area aware dummy generation algorithm; privacy-area aware users; security analysis; Entropy; Information systems; Mobile communication; Privacy; Resistance; Security; Servers (ID#: 15-6431)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883443&isnumber=6883277


Ben Niu; Qinghua Li; Xiaoyan Zhu; Hui Li, “A Fine-Grained Spatial Cloaking Scheme for Privacy-Aware Users in Location-Based Services,” Computer Communication and Networks (ICCCN), 2014 23rd International Conference on, vol., no., pp. 1, 8, 4-7 Aug. 2014. doi:10.1109/ICCCN.2014.6911813
Abstract: In Location-Based Services (LBSs), mobile users submit location-related queries to the untrusted LBS server to get service. However, such queries increasingly raise privacy concerns among mobile users. To address this problem, we propose FGcloak, a novel fine-grained spatial cloaking scheme for privacy-aware mobile users in LBSs. Based on a novel use of a modified Hilbert curve in a particular area, our scheme effectively guarantees k-anonymity and at the same time provides a larger cloaking region. It also uses a parameter σ that lets users exercise fine-grained control over the system overhead based on the resource constraints of mobile devices. Security analysis and empirical evaluation results verify the effectiveness and efficiency of our scheme.
Keywords: data privacy; mobile computing; FGcloak; Hilbert Curve; LBS; fine grained spatial cloaking scheme; fine-grained control; location based services; novel fine-grained spatial cloaking scheme; privacy aware mobile users; privacy aware users; security analysis; Algorithm design and analysis; Control systems; Mobile communication; Privacy; Security; Servers; Standards (ID#: 15-6432)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911813&isnumber=6911704
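The property FGcloak relies on, that a Hilbert curve maps 2-D grid cells to a 1-D index while largely preserving locality, so an index interval containing k users yields a compact cloaking region, can be illustrated with the standard (unmodified) Hilbert mapping; the paper's modified curve and its σ parameter are not reproduced here:

```python
def hilbert_index(order, x, y):
    # Map grid cell (x, y) to its 1-D position on a Hilbert curve of the
    # given order (the grid has 2**order cells per side). Nearby indices
    # correspond to spatially nearby cells.
    n = 2 ** order
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:
            if rx == 1:
                # Reflect, then swap, to put the sub-quadrant curve into
                # standard orientation for the next iteration.
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d
```

A cloaking scheme in this style sorts users by `hilbert_index` and reports the index interval spanning the k users around the querier, which maps back to a compact 2-D region.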


Sato, F., “User Location Anonymization Based on Secret Computation,” Broadband and Wireless Computing, Communication and Applications (BWCCA), 2014 Ninth International Conference on, vol., no., pp. 410, 415, 8-10 Nov. 2014. doi:10.1109/BWCCA.2014.96
Abstract: Recently, highly accurate positioning devices have enabled various types of location-based services (LBS). Since location information may reveal private information, preserving location privacy has become a significant issue in LBS. Many techniques for securing location privacy have been proposed, for instance silent periods, dummy nodes, and cloaking regions. However, many of these do not address information leakage on the servers. In this paper, we propose a user location management method based on a secure computation algorithm that protects against information leakage from the location management servers. The algorithm is based on multi-party computation, and its computational complexity is not high. We evaluate the proposed scheme in comparison with a method based on homomorphic cryptography.
Keywords: cryptography; data privacy; cloaking-region; computation complexity; dummy node; homomorphic cryptographic method; location based services; location privacy; secure computation algorithm; silent period; user location anonymization; user location management method; Encryption; Mobile communication; Privacy; Quality of service; Registers; Servers; location anonymization; location based services; location privacy (ID#: 15-6433)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7016106&isnumber=7015998
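The multi-party-computation idea here, keeping a location secret from any single location management server, can be illustrated with additive secret sharing, one of the simplest such schemes (our illustrative stand-in; the paper's actual protocol is more involved):

```python
import random

MOD = 2 ** 32  # coordinates encoded as integers modulo MOD

def share(value, n_parties):
    # Split a coordinate into n additive shares. Any n-1 shares are
    # uniformly random and reveal nothing about the value; only the sum
    # of all n shares (mod MOD) reconstructs it.
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    # Combine all shares to recover the original coordinate.
    return sum(shares) % MOD
```

Each server would hold one share, so no single compromised server leaks the user's position, which is the leakage property the abstract targets.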


Yong Wang; Long-ping He; Jing Peng; Jie Hou; Yun Xia, “A Context-Dependent Privacy Preserving Framework In Road Networks,” Communications (ICC), 2014 IEEE International Conference on, vol., no., pp. 628, 633, 10-14 June 2014. doi:10.1109/ICC.2014.6883389
Abstract: The prevalence of Location Based Services (LBSs) increases personal privacy concerns due to the untrustworthy service providers. We demonstrate a context-dependent privacy preserving framework for users whose movements are confined by the underlying road networks. Both the location privacy and query privacy in continuous queries are preserved as they are closely related. For continuous query services, different positions on a user’s trajectory may have different privacy sensitivities. In addition, privacy is about users’ feelings and varies among them. Hence, a Policy Service (PS) is introduced to generate context-dependent privacy strategies according to user-defined privacy profiles. Meanwhile, a semi-honest Anonymizing Service (AS) is employed to generate prediction-based cloaks with history information for users while satisfying their privacy strategies. The PS and AS interact with each other in the way to ensure neither of them can obtain both the location information and the query contents. The simulated results show the effectiveness of our framework in the view of privacy preserving and system performance.
Keywords: cloud computing; data privacy; query processing; LBS; context-dependent privacy preserving framework; continuous query services; history information; location based services; location privacy; policy service; prediction-based cloaks; privacy sensitivity; query privacy; semihonest anonymizing service; underlying road networks; user trajectory; user-defined privacy profiles; Privacy; Quality of service; Resistance; Roads; Security; Sensitivity; Trajectory; Location Based Services (LBSs); continuous query service; privacy preserving; road networks (ID#: 15-6434)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883389&isnumber=6883277


Pandit, A.; Polina, P.; Kumar, A.; Bin Xie, “CAPPA: Context Aware Privacy Protecting Advertising—An Extension to CLOPRO Framework,” Services Computing (SCC), 2014 IEEE International Conference on, vol., no., pp. 805, 812, June 27 2014–July 2 2014. doi:10.1109/SCC.2014.109
Abstract: The advent of 4G networks and IPv6, and the increased number of subscribers to them, has triggered many free applications that are easy to install on smart mobile devices, a primary computing device for many. The free application market is sustainable because the revenue model for most of these service providers works by profiling users and pushing advertisements to them. This poses a serious threat to users' privacy. Most existing solutions starve developers of their revenue by falsifying or altering users' information. In this paper, we attempt to bridge this gap by extending our integrated Context Cloaking Privacy Protection framework (CLOPRO), which achieves identity privacy, location privacy, and query privacy, without depriving the service provider of the sustainable revenue generated through Context Aware Privacy Preserving Advertising (CAPPA). The CLOPRO framework has been shown to provide privacy to the user while using location-based services. In this paper we demonstrate how this framework can be extended to deliver advertisements and coupons based on users' interests, specified at the time of registration, and the current context of the user, without revealing these details to the service provider. The original service requests of registered users are modified by the CLOPRO framework using concepts of clustering and abstraction, and the results are filtered to deliver the relevant information to the user. Since the advertisements received are relevant to the user, the click rate is likely to increase, ensuring increased revenue for the service provider. The proposed framework has O(n) complexity.
Keywords: advertising data processing; data privacy; information services; mobile computing; pattern clustering;4G networks; CAPPA framework; CLOPRO framework; IPV6;Internet protocol; O(n) complexity; abstraction concept; clustering concept; context aware privacy protecting advertising; context cloaking privacy protection framework; fourth-generation networks; free application markets; identity privacy; location based services; location privacy; query privacy; service provider; smart mobile devices; user privacy; Advertising; Context; Context-aware services; Mobile communication; Mobile handsets; Privacy; Servers; Abstraction; Anonymization; Clustering; Context Aware Advertising; Context Cloaking; Location Based Services; Privacy Protection
(ID#: 15-6435)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6930611&isnumber=6930500


Chunhui Piao; Xiaoyan Li; Xiao Pan, “Research on the User Privacy Protection Method in Mobile Commerce,” e-Business Engineering (ICEBE), 2014 IEEE 11th International Conference on, vol., no., pp. 177, 184, 5-7 Nov. 2014. doi:10.1109/ICEBE.2014.39
Abstract: The wide application of mobile commerce has brought great convenience to people's work and lives; however, the risk of privacy disclosure has been receiving more and more attention from academia and industry. In this paper, after an analysis of the privacy concerns in mobile commerce, the privacy-preserving technologies commonly used in mobile environments are discussed. A privacy-preserving operation model for a mobile commerce alliance providing location-based services is established. Aiming to prevent the sensitive homogeneity attack, an anonymity model for sensitive information is defined formally. Based on the anonymity model, a new cloaking algorithm named EMDASS is described in detail; its basic idea is exchanging and merging users. This algorithm can be used to protect a mobile user's location, identifier, and other sensitive information on road networks. Finally, the availability of the proposed privacy-preserving algorithm is illustrated by an example.
Keywords: data privacy; mobile commerce; risk management; EMDASS; anonymity model; cloaking algorithm; location-based services; mobile commerce alliance; mobile environments; mobile user location protection; privacy disclosure risk; privacy preserving algorithm; privacy preserving operation model; privacy preserving technologies; road networks; sensitive homogeneity attack; sensitive information; user privacy protection method; Business; Mobile computing; Mobile radio mobility management; Privacy; Roads; Sensitivity; Mobile commerce; Mobile commerce alliance; Privacy preserving algorithm; Sensitive information
(ID#: 15-6436)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6982077&isnumber=6982037


Pournajaf, L.; Li Xiong; Sunderam, V.; Goryczka, S., “Spatial Task Assignment for Crowd Sensing with Cloaked Locations,” Mobile Data Management (MDM), 2014 IEEE 15th International Conference on, vol. 1, no., pp. 73, 82, 14-18 July 2014. doi:10.1109/MDM.2014.15
Abstract: Distributed mobile crowd sensing is becoming a valuable paradigm, enabling a variety of novel applications built on mobile networks and smart devices. However, this trend brings several challenges, including the need for crowd sourcing platforms to manage interactions between applications and the crowd (participants or workers). One of the key functions of such platforms is spatial task assignment which assigns sensing tasks to participants based on their locations. Task assignment becomes critical when participants are hesitant to share their locations due to privacy concerns. In this paper, we examine the problem of spatial task assignment in crowd sensing when participants utilize spatial cloaking to obfuscate their locations. We investigate methods for assigning sensing tasks to participants, efficiently managing location uncertainty and resource constraints. We propose a novel two-stage optimization approach which consists of global optimization using cloaked locations followed by a local optimization using participants’ precise locations without breaching privacy. Experimental results using both synthetic and real data show that our methods achieve high sensing coverage with low cost using cloaked locations.
Keywords: computational complexity; distributed sensors; mobile computing; optimisation; NP-hard problem; cloaked locations; crowd sourcing platforms; distributed mobile crowd sensing; global optimization approach; local optimization approach; location uncertainty management; mobile networks; privacy concerns; resource constraints; smart devices; spatial cloaking; spatial task assignment; two-stage optimization approach; Estimation; Mobile communication; Optimization; Privacy; Sensors; Servers; Uncertainty (ID#: 15-6437)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916906&isnumber=6916883
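The two-stage structure described above, a global match on cloaked locations followed by a local refinement on precise ones, can be sketched with a toy greedy version (illustrative only; the paper formulates both stages as optimization problems):

```python
def dist(a, b):
    # Euclidean distance between two (x, y) points.
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def assign_tasks(tasks, cloaked_center, precise_loc):
    # tasks: {task_id: (x, y)}; cloaked_center and precise_loc map
    # worker_id -> (x, y). The server sees only cloaked_center.
    assignment = {}
    for t, t_loc in tasks.items():
        # Stage 1 (global, server side): shortlist workers by the
        # centers of their cloaked regions.
        shortlist = sorted(cloaked_center,
                           key=lambda w: dist(t_loc, cloaked_center[w]))[:2]
        # Stage 2 (local, worker side): refine within the shortlist using
        # precise locations, which in the paper's setting never reach
        # the server.
        assignment[t] = min(shortlist, key=lambda w: dist(t_loc, precise_loc[w]))
    return assignment
```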


Sangeetha, S.; Dhanabal, S.; Kaliappan, V.K., “Optimization of K-NN Query Processing in Road Networks Using Frequent Query Retrieval Table,” Computing and Communication Technologies (WCCCT), 2014 World Congress on, vol., no., pp. 228, 230, Feb. 27 2014–March 1 2014. doi:10.1109/WCCCT.2014.22
Abstract: Location-based services have been widely used to guide users with real-time information. Efficient query processing while preserving the privacy of the user is a key challenge in these applications. There has been much research on anonymity in spatial networks that generates a cloaking region in the road network, with a k-NN algorithm used to process queries in that region. Continuously re-processing the same query when it is issued frequently is an issue. In this paper, a novel Frequent Query Retrieval Table (FQRT) is proposed to increase the efficiency of query processing in the k-NN algorithm. The FQRT maintains the results of frequently occurring queries so they can be retrieved when the same query is issued again in the cloaking region. The proposed FQRT algorithm reduces query processing time and network expansion cost.
Keywords: data privacy; mobile computing; optimisation; query processing; FQRT; K-NN query processing optimization; cloaking region; frequent query retrieval table; location based services; network expansion cost; road networks; spatial network; user guidance; user privacy preservation; Algorithm design and analysis; Mobile communication; Privacy; Query processing; Roads; Servers; Time factors; Query Processing; Road Networks; k-NN Queries (ID#: 15-6438)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755146&isnumber=6755083
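The caching idea behind the FQRT can be sketched as a memo table keyed by cloaking region and k, where the expensive network-expansion k-NN search runs only on a miss (class and method names are ours, not the paper's):

```python
class FrequentQueryRetrievalTable:
    def __init__(self, knn_search):
        # knn_search(region, k) is the costly k-NN search over the road
        # network; it is invoked only when the result is not yet cached.
        self.knn_search = knn_search
        self.table = {}

    def query(self, region, k):
        # Serve repeated (region, k) queries from the table.
        key = (region, k)
        if key not in self.table:
            self.table[key] = self.knn_search(region, k)
        return self.table[key]
```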
 


 


MANET Security and Privacy, 2014

 



Security and privacy are important research issues for mobile ad hoc networks (MANETs). The studies cited here were conducted and presented in 2014 and were recovered on June 24, 2015.



Srihari Babu, D.V.; Reddy, P.C., “Secure Policy Agreement for Privacy Routing in Wireless Communication System,” Control, Instrumentation, Communication and Computational Technologies (ICCICCT), 2014 International Conference on, vol., no., pp. 739, 744, 10-11 July 2014. doi:10.1109/ICCICCT.2014.6993057
Abstract: Security and privacy are major issues that put the successful operation of wireless communication systems at risk in ad hoc and sensor networks. Message confidentiality can be assured through message or content encryption, but source location privacy is much more difficult to address. A number of schemes and policies have been proposed to protect privacy in wireless networks; many security schemes are offered, but none of them provides complete security for both data packets and control packets. This paper proposes a secure policy agreement approach for open-privacy routing in wireless communication, using a location-centric communication model to achieve efficient security and privacy against both internal and external adversary pretenders. To evaluate the proposal, we analyze its security and privacy and compare its performance to alternative techniques. Simulation results show an improvement: the proposed policy is more efficient and offers better privacy than prior work.
Keywords: ad hoc networks; cryptography; data privacy; telecommunication network routing; wireless channels; wireless sensor networks; complete security property; content encryption; control packets; data packets; external adversary pretenders; internal adversary pretenders; location-centric communication; message confidentiality; message encryption; open-privacy routing; secure policy agreement; sensor networks; source location privacy; successful operation employment; wireless communication system; Mobile ad hoc networks; Privacy; Public key; Routing; Routing protocols; MANET; Privacy Routing; Secure policy; Wireless Communication (ID#: 15-6181)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6993057&isnumber=6992918


Khatkar, M.; Phogat, N.; Kumar, B., “Reliable Data Transmission in Anonymous Location Aided Routing in MANET by Preventing Replay Attack,” Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2014 3rd International Conference on, vol., no., pp. 1, 6, 8-10 Oct. 2014. doi:10.1109/ICRITO.2014.7014731
Abstract: Privacy and security are major issues in MANETs, especially when they are used in sensitive areas. Secure routing protocols have been developed and proposed by researchers to provide security and privacy at various levels. The ALARM protocol (Anonymous Location-Aided Routing in MANETs) provides both privacy and security features, including confidentiality, authentication, and authorization. Location-based routing in MANETs rests on some assumptions: the locations of mobile nodes are known (using GPS), the clocks of mobile nodes are loosely synchronized, and nodes are mobile with a uniform transmission range. The current work reviews the ALARM protocol and identifies some of the security problems in MANETs. It further suggests a mechanism to prevent malicious activity (the replay attack) in MANETs using a monitoring method.
Keywords: data privacy; mobile ad hoc networks; routing protocols; synchronisation; telecommunication network reliability; telecommunication security; ALARM protocol; GPS; MANET; anonymous location aided routing protocol; data transmission reliability; malicious activity prevention; privacy feature; replay attack prevention; security feature; time clock synchronization; Authentication; Mobile ad hoc networks; Monitoring; Protocols; Routing; Synchronization; Alarm Protocol; MANET; Monitoring; Prevention; Replay attack (ID#: 15-6182)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7014731&isnumber=7014644


Devi, E.A.; Chitra, K., “Security Based Energy Efficient Routing Protocol for Ad Hoc Network,” Control, Instrumentation, Communication and Computational Technologies (ICCICCT), 2014 International Conference on, vol., no., pp. 1522, 1526, 10-11 July 2014. doi:10.1109/ICCICCT.2014.6992982
Abstract: Ad hoc networks play an important role in critical scenarios such as military services, law enforcement, and emergency rescue operations. Such applications require security and privacy from the underlying routing protocol. Because an ad hoc network is infrastructure-less and resource-limited, it is very important to propose a secure, energy-efficient routing protocol. To provide one, a Privacy Protecting Secure and Energy Efficient Routing Protocol (PPSEER) is proposed. In this protocol, network nodes are first classified based on their energy level. Encryption is then performed based on a group signature and includes additional secret parameters, such as a secret key and the maximum transmission power, known only to the sender and recipient nodes. The advantage of the proposed routing protocol is that it increases the privacy of the message while maintaining the energy efficiency of the nodes.
Keywords: ad hoc networks; cryptographic protocols; energy conservation; routing protocols; telecommunication power management; telecommunication security; ad hoc network; encryption; group signature; maximum transmission power; network node; privacy protecting secure; recipient node; secret key; secure parameter; security based energy efficient routing protocol; sender node; underlying routing protocol; Ad hoc networks; Energy efficiency; Privacy; Routing; Routing protocols; Security; AdHoc; Group Signature; Manet; PPSEER; PRISM (ID#: 15-6183)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6992982&isnumber=6992918


Chasaki, D., “Identifying Malicious Behavior in MANET Through Data Path Information,” Computing, Networking and Communications (ICNC), 2014 International Conference on, vol., no., pp. 567, 572, 3-6 Feb. 2014. doi:10.1109/ICCNC.2014.6785398
Abstract: Mobile Ad-hoc Networks are increasingly deployed in military networks as well as special kinds of civil law enforcement and emergency operation domains. Compared to wired and other types of wireless networks, MANETs are particularly vulnerable to a wide range of attacks and require high security and privacy guarantees due to their critical mission. Research efforts have focused on developing secure routing protocols for MANETs but very little attention has been given to the data plane and the information we can extract about the actual communication links. Wireless networks that require high levels of security may use data path information to validate routing information. In this paper, we develop a scheme that allows us to track and validate mobile node connectivity in order to identify potential malicious behavior. We propose a novel algorithm to accomplish connectivity tracking based on a space-efficient Bloom filter data structure and the use of aggregate signatures. We present simulation results on a real network trace that show the effectiveness of our design.
Keywords: data structures; mobile ad hoc networks; routing protocols; telecommunication computing; telecommunication security; MANET; aggregate signatures; civil law enforcement; communication links; connectivity tracking; data path information; data plane; emergency operation domain; malicious behavior identification; military networks; mobile ad-hoc networks; mobile node connectivity; potential malicious behavior; privacy guarantee; real network trace; routing information; routing protocol security; security guarantee; space-efficient bloom filter data structure; wireless networks; Ad hoc networks; Arrays; Network topology; Peer-to-peer computing; Security; Topology (ID#: 15-6184)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785398&isnumber=6785290
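The space-efficient Bloom filter that Chasaki's abstract names as the basis for connectivity tracking can be illustrated with a minimal sketch. This is only the generic building block, not the paper's actual parameters or its aggregate-signature machinery; the link representation and the `is_suspicious` helper are hypothetical.

```python
import hashlib

class BloomFilter:
    """Space-efficient set-membership structure: false positives are
    possible, false negatives are not."""

    def __init__(self, size_bits=1024, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = [False] * size_bits

    def _indexes(self, item):
        # Derive k bit positions from salted SHA-256 digests of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

# Illustrative use: record links actually observed on the data path, then
# flag routing claims about links that were never observed.
observed_links = BloomFilter()
observed_links.add(("A", "B"))
observed_links.add(("B", "C"))

def is_suspicious(claimed_link):
    return claimed_link not in observed_links
```

The point of the structure in this setting is that a node can summarize many observed links in a fixed, small bit array, at the cost of a tunable false-positive rate.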


Abu Bakar, A.; Ghapar, A.A.; Ismail, R., “Access Control and Privacy in MANET Emergency Environment,” Computer and Information Sciences (ICCOINS), 2014 International Conference on, vol., no., pp. 1, 6, 3-5 June 2014. doi:10.1109/ICCOINS.2014.6868389
Abstract: Mobile ad hoc networks (MANETs) cultivate a new research trend in today's computing. Unique features such as scalability, fault tolerance, and autonomy enable a network to be set up with or without any trusted authority. This makes MANETs suitable for emergency and rescue operations. During an emergency situation, data needs to be shared with the rescuers. However, some personal data cannot be shared with all rescuers. Thus, the privacy and security of data become a main concern here. This paper addresses these issues with a combination of an access control mechanism and a privacy policy to ensure that the privacy and security of personal data are protected accordingly.
Keywords: authorisation; data privacy; mobile ad hoc networks; telecommunication security; MANET emergency environment; access control; autonomous system; data privacy; mobile ad hoc network; security of data; trusted authority; Access control; Authentication; Data privacy; Hospitals; Mobile ad hoc networks; Privacy; Access Control; Emergency; MANET (ID#: 15-6185)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868389&isnumber=6868339


Liu Licai; Yin Lihua; Guo Yunchuan; Fang Bingxing, “Bargaining-Based Dynamic Decision for Cooperative Authentication in MANETs,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 212, 220, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.32
Abstract: In MANETs, cooperative authentication, which requires the cooperation of neighbor nodes, is a significant authentication technique. However, when nodes participate in cooperation, their location may easily be tracked by misbehaving nodes; meanwhile, their resources are consumed. These two factors make selfish nodes reluctant to participate in cooperation and decrease the probability of correct authentication. To encourage nodes to take part in cooperation, we propose a bargaining-based dynamic game model for cooperative authentication to analyze the dynamic behaviors of nodes and help nodes decide whether or not to participate in cooperation. Further, to analyze the dynamic decision-making of nodes, we discuss two situations: complete information and incomplete information. Under complete information, Subgame Perfect Nash Equilibria are obtained to guide each node in choosing the optimal strategy that maximizes its utility. In reality, nodes often do not have good knowledge of others' utility (this case is often called incomplete information). To deal with this case, a Perfect Bayesian Nash Equilibrium is established to eliminate the implausible equilibria. Based on the model, we designed two algorithms, for complete information and incomplete information, and the simulation results demonstrate that in our model, nodes participating in cooperation maximize their location privacy and minimize their resource consumption while ensuring the probability of correct authentication. Both algorithms can improve the success rate of cooperative authentication and extend the network lifetime to 160%–360.6%.
Keywords: cooperative communication; decision making; game theory; message authentication; mobile ad hoc networks; probability; telecommunication security; MANET; bargaining-based dynamic decision; bargaining-based dynamic game model; cooperative authentication; dynamic decision-making; location privacy; mobile ad hoc networks; network lifetime; perfect Bayesian Nash equilibrium; resources consumption; subgame perfect Nash equilibriums; Ad hoc networks; Authentication; Games; Mobile computing; Principal component analysis; Privacy; Vehicle dynamics; Cooperative Authentication; Dynamic Game; Incentive Strategy (ID#: 15-6186)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011253&isnumber=7011202


Chongxian Guo; Huaqiang Xu; Lei Ju; Zhiping Jia; Jihai Xu, “A High-Performance Distributed Certificate Revocation Scheme for Mobile Ad Hoc Networks,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 156, 163, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.136
Abstract: Mobile ad hoc networks (MANETs) are wireless networks with a wide range of applications due to their dynamic topologies and ease of deployment. However, such networks are also more vulnerable to attacks than traditional wireless networks. Certificate revocation is an effective mechanism for providing network security services. Existing schemes are not well suited for MANETs because they incur high overhead or low accuracy in certificate revocation. Therefore, we propose a high-performance distributed certificate revocation scheme in which the certificates of malicious nodes are revoked quickly and accurately. Certificate revocation is the collaborative result of multiple accusations. To dilute damage to the network, a single accusation is enough to limit the accusation function of the accused node. To enhance the accuracy of certificate revocation, our scheme requires nodes to accept only those accusations in which the trust level of the accuser node is not less than that of the accused node. To guarantee rapidity, we restore the accusation functions of falsely accused nodes after revoking the certificates of all malicious nodes that ever accused them. Moreover, we design a mechanism to reward nodes that ever accused those malicious nodes; in return, accusations made by them accelerate the certificate revocation processes of other malicious nodes. Simulation results demonstrate the effectiveness and efficiency of our scheme in certificate revocation. In addition, our scheme achieves a great improvement by limiting only the accusation functions of malicious nodes.
Keywords: mobile ad hoc networks; telecommunication security; MANET; high-performance distributed certificate revocation scheme; malicious nodes; mobile ad hoc networks; Accuracy; Communication networks; Educational institutions; Mobile ad hoc networks; Mobile computing; Security; accusation function; certificate revocation; mobile ad hoc networks (MANETs); trust (ID#: 15-6187)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011246&isnumber=7011202
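The accusation-acceptance rule the abstract describes, where an accusation counts only when the accuser's trust level is not below the accused's, can be sketched as follows. All names, the trust representation, and the revocation threshold are illustrative assumptions, not the paper's actual design.

```python
def accept_accusation(trust, accuser, accused):
    """Accept an accusation only when the accuser's trust level is at
    least the accused node's (the gating rule from the abstract)."""
    return trust[accuser] >= trust[accused]

def revoke(trust, accusations, threshold=2):
    """Revoke a node's certificate once it accumulates enough accepted
    accusations. `accusations` is a list of (accuser, accused) pairs;
    the threshold of 2 is an assumed parameter for illustration."""
    counts = {}
    for accuser, accused in accusations:
        if accept_accusation(trust, accuser, accused):
            counts[accused] = counts.get(accused, 0) + 1
    return {node for node, n in counts.items() if n >= threshold}
```

Note how the gating rule blunts retaliation: a low-trust node accusing a high-trust accuser simply has its accusation ignored.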


Hui Xia; Jia Yu; Zhi-Yong Zhang; Xiang-Guo Cheng; Zhen-Kuan Pan, “Trust-Enhanced Multicast Routing Protocol Based on Node’s Behavior Assessment for MANETs,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 473, 480, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.60
Abstract: A mobile ad hoc network (MANET) is a self-configuring network of mobile nodes connected by wireless links without fixed infrastructure, originally designed for a cooperative environment. However, MANETs are subject to a variety of attacks by malicious nodes, in particular attacks on packet routing. Compared with traditional cryptosystem-based security mechanisms, a trust-enhanced routing protocol can provide a better quality of service. In this study, we abstract a basic decentralized, effective trust inference model based on node behavior assessment, where each peer assigns a trust value to a set of peers of interest. In this model, we introduce a 'voting' mechanism to access the recommending experience (or ratings), in order to reduce the cost of the algorithm design and the system overhead. Then, combined with this trust model, a novel trust-enhanced multicast routing protocol (TeMR) is proposed. This new protocol introduces a group-shared tree strategy, which establishes more efficient multicast routes, since it uses a 'trust' factor to improve the efficiency and robustness of the forwarding tree. Moreover, it provides a flexible and feasible approach to routing decision making with trust constraints and malicious node detection. Experiments have been conducted to evaluate the effectiveness of the proposed protocol.
Keywords: decision making; mobile ad hoc networks; multicast protocols; quality of service; radio links; routing protocols; MANET; cryptosystem; decision making; group-shared tree strategy; malicious node detection; malicious nodes; mobile ad hoc network; mobile nodes; packet routing; quality of service; trust constraint; trust inference model; trust-enhanced multicast routing protocol; wireless links; Ad hoc networks; Mobile computing; Monitoring; Peer-to-peer computing; Routing; Routing protocols; Ad Hoc Network; Malicious Node; Routing Decision Making; Trust Constraint; Trust Model; Trust-enhanced Multicast Routing (ID#: 15-6188)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011284&isnumber=7011202


Bijon, K.Z.; Haque, M.M.; Hasan, R., “A Trust Based Information Sharing Model (TRUISM) in MANET in the Presence of Uncertainty,” Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, vol., no., pp. 347, 354, 23-24 July 2014. doi:10.1109/PST.2014.6890959
Abstract: In the absence of centralized trusted authorities (CTAs), security is one of the foremost concerns in mobile ad hoc networks (MANETs), as the network is open to attacks and unreliability in the presence of malicious nodes (devices). With increasing demand for interactions among nodes, trust-based information sharing needs more stringent rules to ensure security in this pervasive computing scenario. In this paper, we present a novel multi-hop recommendation-based trust management scheme (TRUISM). We adapt the well-known Dempster-Shafer theory, which can efficiently combine recommendations from multiple devices in the presence of unreliable and malicious recommendations. A novel recommendation-routing protocol named 'buffering on-the-fly' has been introduced to reduce recommendation traffic by storing trust values in intermediate nodes. TRUISM also provides a flexible behavioral model for trust computation in which a node can prioritize recommendations based on its requirements. Evaluation results show that our model not only performs well in the presence of contradictory recommendations but also ensures faster and more scalable trust-based information sharing by reducing the overall packet flow in the system.
Keywords: inference mechanisms; mobile ad hoc networks; trusted computing; ubiquitous computing; uncertainty handling; CTA; Dempster-Shafer theory; MANET; TRUISM model; buffering on-the-fly protocol; centralized trusted authorities; malicious nodes; mobile adhoc network; multihop recommendation based trust management scheme; pervasive computing; trust based Information sharing model; trust computation; trust values; Aging; Computational modeling; Information management; Mathematical model; Mobile ad hoc networks; Reliability; Security; Dempster-Shafer; MANET; Recommendation; Trust (ID#: 15-6189)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890959&isnumber=6890911
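The Dempster-Shafer combination that TRUISM adapts can be illustrated with the standard rule on a two-element frame {trust, distrust}. This is the textbook rule only; the example recommendation values and variable names are assumptions, and TRUISM's actual adaptation (buffering, prioritization) is not shown.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.
    Each mass assignment maps frozenset hypotheses to probability mass;
    mass on conflicting (disjoint) hypotheses is normalized away."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

T, D = frozenset({"trust"}), frozenset({"distrust"})
U = T | D  # uncertainty: mass committed to the whole frame

# Two (hypothetical) recommendations about the same node.
rec1 = {T: 0.6, D: 0.1, U: 0.3}
rec2 = {T: 0.5, D: 0.2, U: 0.3}
fused = combine(rec1, rec2)  # fused[T] ≈ 0.759
```

Because two independent sources lean toward "trust", the fused belief in trust exceeds either source's alone, which is the behavior that makes the rule attractive for combining recommendations.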


Guo Yunchuan; Yin Lihua; Liu Licai; Fang Binxing, “Utility-Based Cooperative Decision in Cooperative Authentication,” INFOCOM, 2014 Proceedings IEEE, vol., no., pp. 1006, 1014, April 27 2014–May 2 2014. doi:10.1109/INFOCOM.2014.6848030
Abstract: In mobile networks, cooperative authentication is an efficient way to recognize false identities and messages. However, an attacker can track the location of cooperative mobile nodes by monitoring their communications. Moreover, mobile nodes consume their own resources when cooperating with other nodes in the process of authentication. These two factors cause selfish mobile nodes not to actively participate in authentication. In this paper, a bargaining-based game for cooperative authentication is proposed to help nodes decide whether to participate in authentication or not, and our strategy guarantees that mobile nodes participating in cooperative authentication can obtain the maximum utility, all at an acceptable cost. We obtain Nash equilibrium in static complete information games. To address the problem of nodes not knowing the utility of other nodes, incomplete information games for cooperative authentication are established. We also develop an algorithm based on incomplete information games to maximize every node’s utility. The simulation results demonstrate that our strategy has the ability to guarantee authentication probability and increase the number of successful authentications.
Keywords: game theory; mobile ad hoc networks; probability; telecommunication security; MANET; Nash equilibrium; authentication probability; authentication process; cooperative authentication; cooperative mobile nodes; information games; mobile ad hoc network; mobile networks; mobile nodes; utility based cooperative decision; Bismuth; Computers; Conferences; High definition video; Human computer interaction; Cooperative authentication; games; location privacy (ID#: 15-6190)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848030&isnumber=6847911


Bhati, B.S.; Venkataram, P., “Data Privacy Preserving Scheme in MANETs,” Internet Security (WorldCIS), 2014 World Congress on, vol., no., pp. 22, 23, 8-10 Dec. 2014. doi:10.1109/WorldCIS.2014.7028159
Abstract: Data privacy is one of the challenging issues in mobile ad hoc networks (MANETs), which are deployed in hostile environments to transfer sensitive data through multi-hop routing. The undesired disclosure of data can result in a breach of data privacy and can be used in launching several attacks. Many works have achieved data privacy using approaches such as data transformation and data perturbation, but these approaches introduce high computational overheads and delays in a MANET. To minimize the computation involved in preserving data privacy, we have proposed a computational-intelligence-based data privacy scheme. The scheme uses a data anonymization approach, where rough set theory is used to determine the data attributes to be anonymized. Dynamically changing multiple routes are established between a sender and a receiver by selecting more than one trusted 1-hop neighbor node for data transfer in each routing step. Anonymity of the receiver is also discussed. The work has been simulated on different network sizes with several data transfers. The results are quite encouraging.
Keywords: data privacy; mobile ad hoc networks; rough set theory; security of data; telecommunication network routing; telecommunication security; MANET; computation minimization; computational intelligence; computational overheads; data anonymization approach; data attributes; data perturbation; data privacy preserving scheme; data transfers; data transformation; delays; mobile adhoc networks; multihop routing; receiver anonymity; rough set theory; Artificial neural networks; Bandwidth; Batteries; Mobile ad hoc networks; Mobile computing; Anonymity; Data Attributes; Data Privacy; Mobile Adhoc Network; Rough Sets (ID#: 15-6191)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028159&isnumber=7027983


Doumiati, S.; Al Choikani, T.; Artail, H., “LPS for LBS: Location-Privacy Scheme for Location-Based Services,” Computer Systems and Applications (AICCSA), 2014 IEEE/ACS 11th International Conference on, vol., no., pp.449, 456, 10-13 Nov. 2014. doi:10.1109/AICCSA.2014.7073233
Abstract: A Vehicular Ad-hoc Network (VANET) is a type of Mobile Ad-hoc Network (MANET) that provides communications between nearby vehicles on the one hand, and between vehicles and fixed roadside infrastructure on the other. VANETs are used not only for road safety and driving comfort but also for infotainment. An application area expected to benefit greatly from this technology is the Location Based Service (LBS): a service that helps users find nearby places. However, this application raises a privacy issue, since the service can profile users and track their physical location. Therefore, to successfully deploy LBS, user privacy is one of the major challenges that must be addressed. In this paper, we propose a location privacy protection scheme to encourage drivers to use this service without any risk of being tracked. Our system was implemented using the NS2 network simulator and found to achieve high values of anonymity.
Keywords: data privacy; mobility management (mobile radio); telecommunication security; vehicular ad hoc networks; LBS; LPS; MANET; NS2 network simulator; VANET; driving comfort; infotainment; location privacy protection scheme; location-based services; mobile ad-hoc network; physical location tracking; road safety; vehicular ad-hoc network; Privacy; Public key; Roads; Vehicles; Vehicular ad hoc networks; Location Based Service (LBS); Vehicular ad-hoc networks (VANET); anonymity; attacks; privacy (ID#: 15-6192)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7073233&isnumber=7073167
 




PKI Trust Models, 2014

 

 


The Public Key Infrastructure (PKI) is designed to ensure the security of electronic transactions and the exchange of sensitive information through cryptographic keys and certificates. Several PKI trust models are proposed in the literature to model trust relationship and trust propagation. The research cited here looks at several of those models, particularly in the area of ad hoc networks. The research was presented in 2014.



Jain, A.; Khare, G.; Rajan, A.; Manjhi, N.; Pathy, D.; Rawat, A., “Implementation Issues and Challenges with PKI Infrastructure and Its Integration with In-House Developed IT Applications,” IT in Business, Industry and Government (CSIBIG), 2014 Conference on, vol., no., pp. 1, 5, 8-9 March 2014. doi:10.1109/CSIBIG.2014.7056939
Abstract: The rapid deployment of e-governance applications emphasizes the need for security and authentication. Many emerging technologies are being developed to fulfil security requirements. The major concern in e-governance transactions is the need to replace the hand-written signature with an 'online' signature. Further, since web-enabled applications are prone to various types of security breaches, a discussion of robust and authenticated e-governance transactions is incomplete without considering 'security' as a prominent aspect of 'online signatures'. An e-signature may be considered a type of electronic authentication that can be achieved by means of different technologies. Today there is a wide range of technologies, products, and solutions for securing the electronic infrastructure of any organization. The level of security implemented should be commensurate with the complexity of the organizational data and applications in use. To operate critical web-enabled applications, organizations need the high-level, certificate-based security provided by a Public Key Infrastructure (PKI). PKI protects applications that demand the highest level of security, web-services-based business process automation, digital form signing, and electronic commerce. PKI is a continually evolving security process in government and e-commerce. It is the most appropriate security mechanism for securing data, identifying users, and establishing a chain of trust to secure electronic infrastructure. PKI integrates digital identities and signatures to present an end-to-end trust model. This paper discusses the issues and challenges associated with setting up an in-house certifying authority and integrating PKI functionality into in-house developed IT applications in our organization.
Keywords: Web services; digital signatures; public key cryptography; PKI infrastructure; Web services based business process automation; digital form signing; e-governance applications; e-signature; electronic authentication; electronic commerce; hand-written signature; in-house developed IT applications; online signatures; public key infrastructure; security breaches; security requirements; web enabled applications; Authorization; Browsers; Cryptography; Databases; Organizations; Reliability; Servers; Digital Signature Certificate; Digital Signing; Oracle Certifying Authority; PKI; Workflow (ID#: 15-6154)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056939&isnumber=7056912


Serna, J.; Morales, R.; Medina, M.; Luna, J., “Trustworthy Communications in Vehicular Ad Hoc NETworks,” Internet of Things (WF-IoT), 2014 IEEE World Forum on, vol., no., pp. 247, 252, 6-8 March 2014. doi:10.1109/WF-IoT.2014.6803167
Abstract: Vehicular Ad-Hoc NETworks (VANETs), a pillar of the Internet of Vehicles, aim to improve road safety by preventing and reducing traffic accidents. While VANETs offer a great variety of safety and infotainment applications, there remain a number of security and privacy challenges, such as user profiling and vehicle tracking, which must be addressed. This paper contributes a framework to address security and privacy issues in VANETs. The proposed framework consists of i) an inter-domain authentication system able to provide a near-real-time certificate status service, ii) a mechanism to quantitatively evaluate the trust level of a CA and establish an on-the-fly interoperability relationship, and iii) a privacy-enhancing model that addresses privacy in terms of linkability.
Keywords: intelligent transportation systems; road safety; road vehicles; telecommunication security; trusted computing; vehicular ad hoc networks; Internet of vehicles; VANET; intelligent transportation systems; interdomain authentication system; road safety; traffic accidents; trustworthy communications; user profiling; vehicle tracking; vehicular ad hoc networks; Authentication; Internet; Privacy; Protocols; Vehicles; Vehicular ad hoc networks; Anonymity; PKI; Privacy; Security; VANETs (ID#: 15-6166)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803167&isnumber=6803102


Wagan, A.A.; Low Tang Jung, “Security Framework for Low Latency VANET Applications,” Computer and Information Sciences (ICCOINS), 2014 International Conference on, vol., no., pp.1, 6, 3-5 June 2014. doi:10.1109/ICCOINS.2014.6868395
Abstract: A Vehicular Ad hoc Network (VANET) is a communication network for vehicles on the highway. At present, VANET technology faces security challenges, and it is essential for a VANET to implement security measures according to the requirements of its safety applications. Many academic researchers have suggested solutions to counter security attacks and have proposed models to strengthen security characteristics. The currently most suitable security scheme for VANETs is the Elliptic Curve Digital Signature Algorithm (ECDSA). However, ECDSA carries a high computational cost and is therefore considered inappropriate for low-latency safety applications. In this study, a security framework is proposed to solve the above issues; the proposed framework utilizes both traditional cryptographic schemes, asymmetric (PKI) and symmetric. The asymmetric scheme is used to securely exchange keys and perform authentication, and the symmetric scheme is used for low-latency safety applications (especially time-critical ones). The suggested framework not only reduces latency but also enhances the security characteristics by establishing trust between the vehicles involved.
Keywords: digital signatures; public key cryptography; telecommunication security; vehicular ad hoc networks; ECDSA; VANET technology; asymmetric PKI; cryptographic schemes; elliptic curve digital signature algorithm; low latency VANET applications; public key infrastructure; safety application requirements; security attacks; security characteristics; security framework; security measure; symmetric PKI; vehicular ad hoc network; Cryptography; Protocols; Road transportation; Safety; Vehicles; Vehicular ad hoc networks; Asymmetric and Symmetric Cryptography; Latency; TPM; VANET (ID#: 15-6167)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868395&isnumber=6868339
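The symmetric leg of such a hybrid design, fast per-message authentication once a session key has been agreed, can be as simple as an HMAC over each safety beacon. This sketch shows only that leg; the asymmetric PKI handshake that would actually establish `session_key` is omitted, and the beacon format is invented for illustration.

```python
import hashlib
import hmac
import os

# Assumed: this session key was already agreed via the (slow) asymmetric
# PKI handshake. Each subsequent safety message is then authenticated
# with a fast symmetric MAC instead of a per-message ECDSA signature.
session_key = os.urandom(32)

def tag_message(key: bytes, payload: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message payload."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_message(key: bytes, payload: bytes, tag: bytes) -> bool:
    """Constant-time check that the tag matches the payload."""
    return hmac.compare_digest(tag_message(key, payload), tag)

# Hypothetical beacon from one vehicle to another.
beacon = b"speed=62;heading=184;ts=1718000000"
tag = tag_message(session_key, beacon)
```

A receiver holding the same session key calls `verify_message(session_key, beacon, tag)`; a single HMAC is orders of magnitude cheaper than an ECDSA verification, which is the latency argument the abstract makes.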


Vilhan, P.; Hudec, L., “Cluster Glue — Improving Service Reachability in PKI Enabled MANET,” Computer Modelling and Simulation (UKSim), 2014 UKSim-AMSS 16th International Conference on, vol., no., pp. 494, 499, 26-28 March 2014. doi:10.1109/UKSim.2014.31
Abstract: This paper presents a revision of our concept for improving public key infrastructure (PKI) deployability and service reachability in mobile ad hoc networks routed by B.A.T.M.A.N. Advanced. We have extended the B.A.T.M.A.N. Advanced routing protocol with authentication and authorization of routing updates based on X.509 certificates. Furthermore, we have defined several levels of node trustworthiness and of interoperability between trusted authorities in the network. To mitigate the extra load caused by certificate renewal, we have identified the critical factors affecting it and designed a formula for the optimal number of cross-certificates issued by a trusted authority. To further improve service reachability in highly mobile networks during the earlier stages of PKI deployment, we have designed the Cluster Glue. The Cluster Glue helps connect groups of nodes from different parts of the network that own certificates issued by the same authority. Thanks to these modifications, we are able to mitigate various security risks and provide a more secure route for messages transmitted through the network. Preliminary results were verified by simulations.
Keywords: authorisation; mobile ad hoc networks; public key cryptography; routing protocols; telecommunication security; B.A.T.M.A.N. Advanced routing protocol; PKI enabled MANET; X.509 certificates; authentication; authorization; cluster glue; cross certificates; mobile ad hoc networks; public key infrastructure; security risks; service reachability; trusted authority; Mobile ad hoc networks; Mobile communication; Peer-to-peer computing; Routing; Routing protocols; Security; BAT-MAN; Cluster Glue; MANET; PKI; RSA; ad-hoc; public key infrastructure; routing; security (ID#: 15-6168)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7046116&isnumber=7045636


El Uahhabi, Z.; El Bakkali, H., “A Comparative Study of PKI Trust Models,” Next Generation Networks and Services (NGNS), 2014 Fifth International Conference on, vol., no., pp. 255, 261, 28-30 May 2014. doi:10.1109/NGNS.2014.6990261
Abstract: Public Key Infrastructure (PKI) is a security technology designed to ensure the security of electronic transactions and the exchange of sensitive information through cryptographic keys and certificates. Several PKI trust models are proposed in the literature to model trust relationship and trust propagation. In this paper, we present different PKI trust models architectures. We then analyze and compare some proposed PKI trust models for e-services applications.
Keywords: Internet; computer network security; electronic data interchange; public key cryptography; trusted computing; Internet security; PKI trust model; cryptographic key; e-services application; electronic transaction security; information exchange security; public key infrastructure; trust propagation; Adaptation models; Analytical models; Bridges; Certification; Privacy; Public key; PKI; e-health; e-services; trust model (ID#: 15-6169)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6990261&isnumber=6990210


Binod Vaidya, Dimitrios Makrakis, Hussein Mouftah; “Effective Public Key Infrastructure for Vehicle-to-Grid Network,” DIVANet ’14, Proceedings of the Fourth ACM International Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications, September 2014, Pages 95-101. doi:10.1145/2656346.2656348
Abstract: The growth of electric vehicle (EV) technologies is likely to lead to a fundamental shift not only in the transportation sector but also in the existing electric power grid infrastructure. In a smart grid infrastructure, a vehicle-to-grid (V2G) network can be formed in which participating EVs store energy and supply it back to the power grid when required. To properly deploy a V2G network, a charging infrastructure comprising entities such as the charging facility, clearinghouse, and energy provider has to be constructed, so the use of a public key infrastructure (PKI) is indispensable for provisioning a security solution in a V2G network. The ISO/IEC 15118 standard incorporates an X.509 PKI solution for V2G networks. However, as a traditional X.509-based PKI for V2G networks has several shortcomings, we have proposed an effective PKI for a V2G network built on elliptic curve cryptography and a self-certified public key technique with implicit certificates, which reduces certificate size and certificate verification time. We show that the proposed solution outperforms the existing solution.
Keywords: ECC, ISO/IEC 15118, PKI, X.509, implicit certificate, smart grid, vehicle-to-grid network (ID#: 15-6170)
URL:  http://doi.acm.org/10.1145/2656346.2656348


Jingwei Huang, David M. Nicol; “Evidence-Based Trust Reasoning,” HotSoS ’14, Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 17. doi:10.1145/2600176.2600193
Abstract: Trust is a necessary component in cybersecurity. It is a common task for a system to make a decision about whether or not to trust the credential of an entity from another domain, issued by a third party. Generally, in cyberspace, connected and interacting systems largely rely on each other with respect to security, privacy, and performance. In their interactions, one entity or system needs to trust others, and this “trust” frequently becomes a vulnerability of that system. Aiming to mitigate this vulnerability, we are developing a computational theory of trust as part of our efforts towards a Science of Security. Previously, we developed a formal-semantics-based calculus of trust [3, 2], in which trust can be calculated based on a trustor’s direct observation of the trustee’s performance, or based on a trust network. In this paper, we construct a framework for trust reasoning based on observed evidence. We take privacy in cloud computing as a driving application case [5].
Keywords: evidence-based trust, privacy, trust model (ID#: 15-6171)
URL:  http://doi.acm.org/10.1145/2600176.2600193
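The trust-network calculation mentioned in the abstract can be illustrated with a toy propagation rule (the values and combination rules below are hypothetical, not the authors' formal-semantics-based calculus):

```python
# Toy trust propagation: trust along a chain is the product of the
# pairwise trust values; trust over parallel paths is combined with a
# noisy-OR rule. All numbers are illustrative only.

def chain_trust(path):
    """Trust along a single recommender chain A -> B -> ... -> trustee."""
    t = 1.0
    for hop in path:
        t *= hop
    return t

def combine_paths(path_trusts):
    """Noisy-OR combination of independent parallel paths."""
    distrust = 1.0
    for t in path_trusts:
        distrust *= (1.0 - t)
    return 1.0 - distrust

# Two independent recommendation paths to the same trustee.
p1 = chain_trust([0.9, 0.8])
p2 = chain_trust([0.7, 0.6])
print(round(combine_paths([p1, p2]), 4))  # 0.8376
```

A longer chain yields lower trust under this rule, while additional independent paths raise it, which matches the intuition behind evidence-based trust networks.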


Adam Bates, Joe Pletcher, Tyler Nichols, Braden Hollembaek, Kevin R.B. Butler; “Forced Perspectives: Evaluating an SSL Trust Enhancement at Scale,” IMC ’14, Proceedings of the 2014 Conference on Internet Measurement Conference, November 2014, Pages 503-510. doi:10.1145/2663716.2663759
Abstract: The certificate authority (CA) PKI system has been used for decades as a means of providing domain identity verification services throughout the Internet, but a growing body of evidence suggests that our trust in this system is misplaced. A recently proposed CA alternative, Convergence, extends the Network Perspectives system of multi-path probing to perform certificate verification. Unfortunately, adoption of Convergence and other SSL/TLS trust enhancements has been slow, in part because it is unknown how these systems perform against large workloads and realistic conditions. In this work we ask the question “What if all certificates were validated with Convergence?” We perform a case study of deploying Convergence under realistic workloads with a university-wide trace of real-world HTTPS activity. By synthesizing Convergence requests, we effectively force perspectives-based verification on an entire university in simulation. We demonstrate that through local and server caching, a single Convergence deployment can meet the requirements of millions of SSL flows while imposing under 0.1% network overhead and requiring as little as 108 ms to validate a certificate, making Convergence a worthwhile candidate for further deployment and adoption.
Keywords: https, public-key certificates, ssl, tls (ID#: 15-6172)
URL:  http://doi.acm.org/10.1145/2663716.2663759
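A minimal sketch of the perspectives-based verification and local caching the authors measure (the notary views and fingerprints here are stand-ins, not real Convergence notary responses):

```python
# Sketch of multi-path (notary) certificate verification with a local
# cache, in the spirit of Convergence/Network Perspectives. The real
# system queries remote notaries over the network; here notary views
# are plain dictionaries of hostname -> observed fingerprint.
from collections import Counter

cache = {}  # hostname -> fingerprint already accepted

def verify(hostname, seen_fp, notary_views, quorum=2):
    """Accept seen_fp if cached, or if at least `quorum` notaries
    observed the same fingerprint from their vantage points."""
    if cache.get(hostname) == seen_fp:
        return True  # cache hit: no notary traffic needed
    votes = Counter(view[hostname] for view in notary_views
                    if hostname in view)
    if votes[seen_fp] >= quorum:
        cache[hostname] = seen_fp
        return True
    return False

notaries = [{"example.org": "ab:cd"}, {"example.org": "ab:cd"},
            {"example.org": "ff:00"}]  # one notary sees a different cert
print(verify("example.org", "ab:cd", notaries))  # quorum reached
print(verify("example.org", "ab:cd", notaries))  # served from cache
```

The cache is what makes the scheme scale in the paper's measurements: repeated flows to the same host never touch the notaries again.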


David Basin, Cas Cremers, Tiffany Hyun-Jin Kim, Adrian Perrig, Ralf Sasse, Pawel Szalachowski; “ARPKI: Attack Resilient Public-Key Infrastructure,” CCS ’14, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 382-393. doi:10.1145/2660267.2660298
Abstract: We present ARPKI, a public-key infrastructure that ensures that certificate-related operations, such as certificate issuance, update, revocation, and validation, are transparent and accountable. ARPKI is the first such infrastructure that systematically takes into account requirements identified by previous research. Moreover, ARPKI is co-designed with a formal model, and we verify its core security property using the Tamarin prover. We present a proof-of-concept implementation providing all features required for deployment. ARPKI efficiently handles the certification process with low overhead and without incurring additional latency to TLS. ARPKI offers extremely strong security guarantees, where compromising n-1 trusted signing and verifying entities is insufficient to launch an impersonation attack. Moreover, it deters misbehavior as all its operations are publicly visible.
Keywords: attack resilience, certificate validation, formal validation, public log servers, public-key infrastructure, tls, TLS (ID#: 15-6173)
URL: http://doi.acm.org/10.1145/2660267.2660298


Pawel Szalachowski, Stephanos Matsumoto, Adrian Perrig; “PoliCert: Secure and Flexible TLS Certificate Management,” CCS ’14, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 406-417. doi:10.1145/2660267.2660355
Abstract: The recently proposed concept of publicly verifiable logs is a promising approach for mitigating security issues and threats of the current Public-Key Infrastructure (PKI). Although much progress has been made towards a more secure infrastructure, the currently proposed approaches still suffer from security vulnerabilities, inefficiency, or incremental deployment challenges. In this paper we propose PoliCert, a comprehensive log-based and domain-oriented architecture that enhances the security of PKI by offering: a) stronger authentication of a domain’s public keys, b) comprehensive and clean mechanisms for certificate management, and c) an incentivised incremental deployment plan. Surprisingly, our approach has proved fruitful in addressing other seemingly unrelated problems such as TLS-related error handling and client/server misconfiguration.
Keywords: certificate validation, public log servers, public-key certificate, public-key infrastructure, security policy, ssl, tls, TLS (ID#: 15-6174)
URL: http://doi.acm.org/10.1145/2660267.2660355


Qingji Zheng, Wei Zhu, Jiafeng Zhu, Xinwen Zhang; “Improved Anonymous Proxy Re-Encryption with CCA Security,” ASIA CCS ’14, Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 249-258.  doi:10.1145/2590296.2590322
Abstract: Outsourcing private data and heavy computation tasks to the cloud may lead to privacy breaches, as attackers (e.g., malicious outsiders or cloud administrators) may correlate any relevant information to uncover information of interest to them. Therefore, how to preserve cloud users’ privacy has been a top concern when adopting cloud solutions. In this paper, we investigate the identity privacy problem for proxy re-encryption, which allows any third party (e.g., a cloud) to re-encrypt ciphertexts in order to delegate the decryption right from one user to another. The relevant identity information, e.g., whose ciphertext was re-encrypted to the ciphertext under whose public key, may leak because re-encryption keys and ciphertexts (before and after re-encryption) are known to the third party. We review prior anonymity (identity privacy) notions, and find that these notions are either impractical or too weak. To address this problem thoroughly, we rigorously define an anonymity notion that not only embraces the prior anonymity notions but also captures the anonymity requirement necessary for practical applications. In addition, we propose a new and efficient proxy re-encryption scheme. The scheme satisfies the proposed anonymity notion under the Squared Decisional Bilinear Diffie-Hellman assumption and achieves security against chosen ciphertext attack under the Decisional Bilinear Diffie-Hellman assumption in the random oracle model. To the best of our knowledge, it is the first proxy re-encryption scheme attaining both chosen-ciphertext security and anonymity simultaneously. We implement a prototype based on the proposed proxy re-encryption scheme, and the performance study shows that it is efficient.
Keywords: anonymity, chosen-ciphertext security, outsourced computation, proxy re-encryption (ID#: 15-6175)
URL: http://doi.acm.org/10.1145/2590296.2590322
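The re-encryption-key mechanics described above can be sketched with a toy BBS98-style scheme over a small prime-order group (parameters and keys are illustrative; the paper's construction is pairing-based and additionally achieves anonymity and CCA security, which this sketch does not):

```python
# Toy BBS98-style proxy re-encryption in the order-11 subgroup of
# Z_23* generated by g = 2. A proxy holding rk = b/a mod q can turn a
# ciphertext for A into one for B without learning the plaintext.
import random

p, q, g = 23, 11, 2          # g has order q in Z_p*

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)                      # (secret, public) key

def encrypt(m, pk):
    r = random.randrange(1, q)
    return (m * pow(g, r, p) % p, pow(pk, r, p))  # (m*g^r, g^{sk*r})

def rekey(sk_a, sk_b):
    return sk_b * pow(sk_a, -1, q) % q            # b/a mod q

def reencrypt(ct, rk):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))                   # g^{ar*(b/a)} = g^{br}

def decrypt(ct, sk):
    c1, c2 = ct
    gr = pow(c2, pow(sk, -1, q), p)               # recover g^r
    return c1 * pow(gr, -1, p) % p

sk_a, pk_a = keygen()
sk_b, pk_b = keygen()
m = 9                                             # any element of Z_p*
ct_b = reencrypt(encrypt(m, pk_a), rekey(sk_a, sk_b))
print(decrypt(ct_b, sk_b) == m)                   # True
```

Note the privacy leak the paper targets: the proxy sees rk and both ciphertexts, which in naive schemes like this one can reveal who delegated to whom.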


Haya Shulman; “Pretty Bad Privacy: Pitfalls of DNS Encryption,” WPES ’14, Proceedings of the 13th Workshop on Privacy in the Electronic Society, November 2014, Pages 191-200. doi:10.1145/2665943.2665959
Abstract: As awareness of Domain Name System (DNS) privacy increases, a number of mechanisms for encrypting DNS packets have been proposed. We study the prominent defences, focusing on the privacy guarantees, interoperability with the DNS infrastructure, and the efficiency overhead. In particular: We explore dependencies in DNS and show techniques that exploit side channel leaks, due to transitive trust, allowing an attacker to infer information about the target domain in an encrypted DNS packet. We examine common DNS server configurations and show that the proposals are expected to encounter deployment obstacles with (at least) 38% of the 50K-top Alexa domains and (at least) 12% of the top-level domains (TLDs), and will disrupt DNS functionality and availability for clients. We show that due to non-interoperability with caches, the proposals for end-to-end encryption may impose a prohibitive traffic overhead on the name servers. Our work indicates that further study may be required to adjust the proposals to live up to their security guarantees, and to make them suitable for the common server configurations in the DNS infrastructure. Our study is based on collection and analysis of the DNS traffic of the 50K-top Alexa domains and 568 TLDs.
Keywords: dns, dns caching, dns encryption, dns infrastructure, dns privacy, dns security, side channel attacks, transitive trust dependencies (ID#: 15-6176)
URL:  http://doi.acm.org/10.1145/2665943.2665959


Ethan Heilman, Danny Cooper, Leonid Reyzin, Sharon Goldberg; “From the Consent of the Routed: Improving the Transparency of the RPKI,” SIGCOMM ’14, Proceedings of the 2014 ACM Conference on SIGCOMM, August 2014, Pages 51-62. doi:10.1145/2619239.2626293
Abstract: The Resource Public Key Infrastructure (RPKI) is a new infrastructure that prevents some of the most devastating attacks on interdomain routing. However, the security benefits provided by the RPKI are accomplished via an architecture that empowers centralized authorities to unilaterally revoke any IP prefixes under their control. We propose mechanisms to improve the transparency of the RPKI, in order to mitigate the risk that it will be used for IP address takedowns. First, we present tools that detect and visualize changes to the RPKI that can potentially take down an IP prefix. We use our tools to identify errors and revocations in the production RPKI. Next, we propose modifications to the RPKI's architecture to (1) require any revocation of IP address space to receive consent from all impacted parties, and (2) detect when misbehaving authorities fail to obtain consent. We present a security analysis of our architecture, and estimate its overhead using data-driven analysis.
Keywords: RPKI, public key infrastructures, security, transparency (ID#: 15-6177)
URL:  http://doi.acm.org/10.1145/2619239.2626293
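The RPKI function whose misuse the authors guard against — authorities controlling which announcements validate — rests on route-origin validation. A minimal sketch in the spirit of RFC 6811, with made-up ROA entries:

```python
# Sketch of RPKI route-origin validation: an announcement is "valid"
# if some ROA covers the prefix with a matching origin AS within the
# maxLength, "invalid" if it is covered only by non-matching ROAs,
# and "unknown" if no ROA covers it. ROA data below is hypothetical.
import ipaddress

roas = [  # (prefix, maxLength, authorized origin AS)
    (ipaddress.ip_network("203.0.113.0/24"), 24, 64500),
    (ipaddress.ip_network("198.51.100.0/22"), 24, 64501),
]

def validate(prefix, origin_as):
    prefix = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_as in roas:
        if prefix.subnet_of(roa_net):
            covered = True
            if origin_as == roa_as and prefix.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "unknown"

print(validate("203.0.113.0/24", 64500))   # valid
print(validate("203.0.113.0/24", 64666))   # invalid (wrong origin)
print(validate("192.0.2.0/24", 64500))     # unknown (no covering ROA)
```

Revoking a ROA flips covered routes from "valid" to "invalid" or "unknown", which is exactly the unilateral takedown power the paper's consent and transparency mechanisms are designed to constrain.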


Tiffany Hyun-Jin Kim, Cristina Basescu, Limin Jia, Soo Bum Lee, Yih-Chun Hu, Adrian Perrig; “Lightweight Source Authentication and Path Validation,” SIGCOMM ’14, Proceedings of the 2014 ACM Conference on SIGCOMM, August 2014, Pages 271-282. doi:10.1145/2619239.2626323
Abstract: In-network source authentication and path validation are fundamental primitives to construct higher-level security mechanisms such as DDoS mitigation, path compliance, packet attribution, or protection against flow redirection. Unfortunately, currently proposed solutions either fall short of addressing important security concerns or require a substantial amount of router overhead. In this paper, we propose lightweight, scalable, and secure protocols for shared key setup, source authentication, and path validation. Our prototype implementation demonstrates the efficiency and scalability of the protocols, especially for software-based implementations.
Keywords: path validation, retroactive key setup, source authentication (ID#: 15-6178)
URL: http://doi.acm.org/10.1145/2619239.2626323


Julian Horsch, Konstantin Böttinger, Michael Weiß, Sascha Wessel, Frederic Stumpf; “TrustID: Trustworthy Identities for Untrusted Mobile Devices,” CODASPY ’14, Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 281-288. doi:10.1145/2557547.2557593
Abstract: Identity theft has deep impacts in today's mobile ubiquitous environments. At the same time, digital identities are usually still protected by simple passwords or other insufficient security mechanisms. In this paper, we present the TrustID architecture and protocols to improve this situation. Our architecture utilizes a Secure Element (SE) to store multiple context-specific identities securely in a mobile device, e.g., a smartphone. We introduce protocols for securely deriving identities from a strong root identity into the SE inside the smartphone as well as for using the newly derived IDs. Both protocols do not require a trustworthy smartphone operating system or a Trusted Execution Environment. In order to achieve this, our concept includes a secure combined PIN entry mechanism for user authentication, which prevents attacks even on a malicious device. To show the feasibility of our approach, we implemented a prototype running on a Samsung Galaxy SIII smartphone utilizing a microSD card SE. The German identity card nPA is used as root identity to derive context-specific identities.
Keywords: android, combined pin entry, identity derivation, identity provider, mobile security, npa, secure element, smartphone (ID#: 15-6179)
URL: http://doi.acm.org/10.1145/2557547.2557593
 


Teklemariam Tsegay Tesfay, Jean-Pierre Hubaux, Jean-Yves Le Boudec, Philippe Oechslin; “Cyber-Secure Communication Architecture for Active Power Distribution Networks,” SAC ’14, Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 545-552.  doi:10.1145/2554850.2555082
Abstract: Active power distribution networks require sophisticated monitoring and control strategies for efficient energy management and automatic adaptive reconfiguration of the power infrastructure. Such requirements are realised by deploying a large number of various electronic automation and communication field devices, such as Phasor Measurement Units (PMUs) or Intelligent Electronic Devices (IEDs), and a reliable two-way communication infrastructure that facilitates the transfer of sensor data and control signals. In this paper, we perform a detailed threat analysis of a typical active distribution network’s automation system. We also propose mechanisms by which we can design a secure and reliable communication network for an active distribution network that is resilient to insider and outsider malicious attacks, natural disasters, and other unintended failures. The proposed security solution also guarantees that an attacker is not able to install a rogue field device by exploiting an emergency situation during islanding.
Keywords: PKI, active distribution network, authentication, islanding, smart grid, smart grid security, unauthorised access (ID#: 15-6180)
URL: http://doi.acm.org/10.1145/2554850.2555082




Router System Security, 2014

 

 
SoS Logo

Router System Security

2014


Routers are among the most ubiquitous electronic devices in use. Basic security from protocols and encryption can be readily achieved, but routing has many leaks. The articles cited here look at route leaks, stack protection, and mobile platforms using Tor, iOS, and Android OS, among other topics. They were published in 2014.



Siddiqui, M.S.; Montero, D.; Yannuzzi, M.; Serral-Gracia, R.; Masip-Bruin, X., “Diagnosis of Route Leaks Among Autonomous Systems in the Internet,” Smart Communications in Network Technologies (SaCoNeT), 2014 International Conference on, vol., no., pp. 1, 6, 18-20 June 2014. doi:10.1109/SaCoNeT.2014.6867765
Abstract: Border Gateway Protocol (BGP) is the de facto inter-domain routing protocol in the Internet. It was designed without an inherent security mechanism and hence is prone to a number of vulnerabilities which can cause large-scale disruption in the Internet. The route leak is one such inter-domain routing security problem, with the potential to cause wide-scale Internet service failure. Route leaks occur when Autonomous Systems violate export policies while exporting routes. As BGP security has been an active research area for over a decade now, several security strategies have been proposed, some of which advocated either complete replacement of BGP or the addition of new features to it, but they failed to achieve global acceptance. Even the most recent effort in this regard, led by the Secure Inter-Domain Routing (SIDR) working group (WG) of the IETF, fails to counter all the BGP anomalies, especially route leaks. In this paper we review the efforts to counter policy-related BGP problems and provide analytical insight into why they are ineffective. We argue for a new direction in future research on managing the broader security issues in inter-domain routing. In that light, we propose a naive approach for countering the route leak problem by analyzing the information available at hand, such as the RIB of the router. The main purpose of this paper is to position and highlight the autonomous, analytical approach to tackling policy-related BGP security issues.
Keywords: Internet; computer network security; routing protocols; BGP security issue; IETF; Internet autonomous systems; Secure InterDomain Routing working group; border gateway protocol; interdomain routing protocol; interdomain routing security problem; route leak diagnosis; security issues; IP networks; Radiation detectors; Routing; Routing protocols; Security (ID#: 15-6675)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6867765&isnumber=6867755
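A route leak, as described above, is usually formalized against the valley-free (Gao-Rexford) export rule. A minimal check under hypothetical AS relationships (the AS numbers and relationship table are invented for illustration):

```python
# Valley-free export check: a route learned from a customer may be
# exported to anyone; a route learned from a peer or a provider may be
# exported only to customers. An export violating this rule is a route
# leak. The relationship data below is hypothetical.

# rel[(a, b)] = relationship of b from a's point of view
rel = {
    (64500, 64501): "customer",   # 64501 is 64500's customer
    (64501, 64500): "provider",
    (64501, 64502): "peer",
    (64502, 64501): "peer",
    (64501, 64503): "customer",
    (64503, 64501): "provider",
}

def is_leak(asn, learned_from, export_to):
    """True if `asn` exporting a route learned from `learned_from`
    to `export_to` violates the valley-free rule."""
    learned_rel = rel[(asn, learned_from)]
    export_rel = rel[(asn, export_to)]
    if learned_rel == "customer":
        return False                    # customer routes go anywhere
    return export_rel != "customer"     # peer/provider routes: customers only

print(is_leak(64501, 64500, 64502))  # provider route -> peer: leak
print(is_leak(64501, 64502, 64503))  # peer route -> customer: fine
```

The practical difficulty the paper points to is that the relationship table is private business data, so a router must infer likely violations from information at hand, such as its RIB.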

 

Peng Wu; Wolf, T., “Stack Protection in Packet Processing Systems,” Computing, Networking and Communications (ICNC), 2014 International Conference on, vol., no., pp. 53, 57, 3-6 Feb. 2014. doi:10.1109/ICCNC.2014.6785304
Abstract: Network security is a critical aspect of Internet operations. Most network security research has focused on protecting end-systems from hacking and denial-of-service attacks. In our work, we address hacking attacks on the network infrastructure itself. In particular, we explore data plane stack smashing attacks that have been demonstrated successfully on network processor systems. We explore their use in the context of software routers that are implemented on top of general-purpose processors and operating systems. We discuss how such attacks can be adapted to these router systems and how stack protection mechanisms can be used as a defense. We show experimental results that demonstrate the effectiveness of these stack protection mechanisms.
Keywords: Internet; computer crime; computer network security; general purpose computers; operating systems (computers); packet switching; telecommunication network routing Internet; computer network security; denial of service attacks; end systems protection; general purpose processor; hacking attacks; network infrastructure; network processor systems; operating systems; packet processing system; router systems; smashing attacks; software routers; stack protection mechanism; Computer architecture; Information security; Linux; Operating systems; Protocols; attack; defense; network security; stack smashing (ID#: 15-6676)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785304&isnumber=6785290


Frantti, T.; Röning, J., “A Risk-Driven Security Analysis for a Bluetooth Low Energy Based Microdata Ecosystem,” Ubiquitous and Future Networks (ICUFN), 2014 Sixth International Conference on, vol., no., pp. 69, 74, 8-11 July 2014. doi:10.1109/ICUFN.2014.6876753
Abstract: This paper presents security requirements, risk survey, security objectives, and security controls of the Bluetooth Low Energy (BLE) based Catcher devices and the related Microdata Ecosystem of Ceruus company for a secure, energy efficient and scalable wireless content distribution. The system architecture was composed of the Mobile Cellular Network (MCN) based gateway/edge router device, such as Smart Phone, Catchers, and web based application servers. It was assumed that MCN based gateways communicate with application servers and surrounding Catcher devices. The analysis of the scenarios developed highlighted common aspects and led to security requirements, objectives, and controls that were used to define and develop the Catcher and MCN based router devices and guide the system architecture design of the Microdata Ecosystem.
Keywords: Bluetooth; cellular radio; computer network security; network servers; telecommunication network routing; BLE based catcher devices; Bluetooth low energy based microdata ecosystem; Ceruus company; MCN based gateway-edge router device; application servers; energy efficient wireless content distribution; mobile cellular network; risk-driven security analysis; wireless content distribution scalability; wireless content distribution security; Authentication; Ecosystems; Encryption; Logic gates; Protocols; Servers; Internet of Things; authentication; authorization; confidentiality; integrity; security; timeliness (ID#: 15-6677)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876753&isnumber=6876727


Wang Ming-Hao, “The Security Analysis and Attacks Detection of OSPF Routing Protocol,” Intelligent Computation Technology and Automation (ICICTA), 2014 7th International Conference on, vol., no., pp. 836, 839, 25-26 Oct. 2014. doi:10.1109/ICICTA.2014.200
Abstract: The widespread use of the Internet poses great challenges for information security. Routing protocols distribute network topology information among routers, which use it to find the best routes and forward network data. Without correct routing information, packet transmission is inefficient or incorrect, and the network may even be paralyzed. A secure routing protocol is therefore an important factor in ensuring network security. This paper emphasizes the security of the OSPF routing protocol. We first outline the background of OSPF technology and present an OSPF security analysis covering its authentication mechanism, reliable flooding mechanism, and hierarchical routing mechanism. The vulnerabilities of OSPF and methods for protecting against them are then introduced. In addition, we propose an attack detection system for the OSPF routing protocol, designed to detect attacks without affecting network operation. Lastly, directions for future research on the OSPF routing protocol are outlined.
Keywords: Internet; cryptographic protocols; routing protocols; telecommunication network topology; telecommunication security; Internet; OSPF routing protocol; OSPF technology; attacks detection system; authentication mechanism; flooding mechanism; hierarchical routing mechanism; information security; secure routing protocol; security analysis; topology information; Authentication; Cryptography; Floods; Routing; Routing protocols; Data Packet; Key; Link-state; OSPF; Routing Protocol (ID#: 15-6678)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7003663&isnumber=7003462


Ben Hadjy Youssef, N.; El Hadj Youssef, W.; Machhout, M.; Tourki, R.; Torki, K., “Instruction Set Extensions of AES Algorithms for 32-Bit Processors,” Security Technology (ICCST), 2014 International Carnahan Conference on, vol., no., pp. 1, 5, 13-16 Oct. 2014. doi:10.1109/CCST.2014.6986988
Abstract: Embedded processors are an integral part of many communications devices, such as mobile phones, secure access points to private networks, electronic commerce terminals, and smart cards. However, such devices often provide critical functions that could be sabotaged by malicious entities. Providing security for data exchange in embedded systems is therefore an important objective. This paper focuses on instruction set extensions for symmetric key algorithms. The main contribution of this work is the extension of the SPARC V8 LEON2 processor core with cryptographic Instruction Set Extensions. The cryptographic algorithm implemented is the Advanced Encryption Standard (AES). Our customized instructions offer a cryptographic solution for embedded devices in order to ensure communications security. Furthermore, as embedded systems are extremely resource-constrained in terms of computing capability, power, and memory area, these technological constraints are respected. Our extended LEON2 SPARC V8 core with cryptographic ISE is implemented using a Xilinx XC5VFX70t FPGA device and an ASIC CMOS 40 nm technology. The total area of the resulting chip is about 0.28 mm2, and it can achieve an operating frequency of 3.33 GHz. The estimated power consumption of the chip is 13.3 mW at 10 MHz. Hardware cost and power consumption evaluations are provided for different clock frequencies; the achieved results show that our circuit can be deployed in many security domains, such as embedded services routers, real-time multimedia applications, and smart cards.
Keywords: CMOS logic circuits; application specific integrated circuits; cryptography; electronic data interchange; embedded systems; field programmable gate arrays; instruction sets; microprocessor chips; 32-bit processors; AES algorithms; ASIC CMOS technology; SPARC V8 LEON2 processor core; Xilinx XC5VFX70t FPGA device; communication devices; cryptographic ISE; cryptographic instruction set extension; data exchange security; embedded devices; embedded processors; embedded services routers; malicious entities; operating frequency; power consumption evaluation; real-time multimedia applications; resource constrained devices; size 40 nm; smartcard; symmetric key algorithm; word length 32 bit; Encryption; Field programmable gate arrays; Hardware; Program processors; Registers; Standards; AES; Embedded processor; FPGA and ASIC implementation; LEON2; decryption; encryption (ID#: 15-6679)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6986988&isnumber=6986962


Owezarski, P., “Unsupervised Classification and Characterization of Honeypot Attacks,” Network and Service Management (CNSM), 2014 10th International Conference on, vol., no., pp. 10, 18, 17-21 Nov. 2014. doi:10.1109/CNSM.2014.7014136
Abstract: Monitoring communication networks and their traffic is of essential importance for estimating risk in the Internet, and therefore for designing suitable protection systems for computer networks. Network and traffic analysis can be done with measurement devices or honeypots. However, analyzing the huge amount of gathered data, and characterizing the anomalies and attacks contained in these traces, remain complex and time-consuming tasks, done by network and security experts using poorly automated tools, and are consequently slow and costly. In this paper, we present an unsupervised method for classification and characterization of security-related anomalies and attacks occurring in honeypots. This method, automated as far as possible, does not need an attack signature database, a learning phase, or labeled traffic. This corresponds to a major step towards autonomous security systems. This paper also shows how anomaly characterization results can be used to infer filtering rules that could serve to automatically configure network routers, switches, or firewalls.
Keywords: computer network security; pattern classification; telecommunication network routing; telecommunication traffic; unsupervised learning; Internet; autonomous security systems; communication network monitoring; computer network protection systems; firewalls; honeypot attacks; network routers; switches; traffic analysis; unsupervised anomaly characterization; Algorithm design and analysis; Clustering algorithms; Correlation; IP networks; Internet; Partitioning algorithms; Security; Anomaly classification; Honeypot attack detection (ID#: 15-6680)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7014136&isnumber=7014126
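The unsupervised pipeline described above — flag anomalies without signatures or labels, then derive filtering rules from their characteristics — can be sketched with simple robust statistics (the features, thresholds, and traffic values below are invented for illustration):

```python
# Sketch of unsupervised anomaly flagging followed by rule inference:
# no signature database, no training labels, just statistics over the
# observed traffic itself. Uses a median-absolute-deviation outlier
# test on per-source packet rates as a stand-in for real clustering.
from statistics import median

flows = [  # (src, dst_port, packets/sec) -- synthetic traffic
    ("10.0.0.1", 80, 12), ("10.0.0.2", 80, 9), ("10.0.0.3", 80, 14),
    ("10.0.0.4", 80, 11), ("203.0.113.9", 22, 900),  # brute-force-like
]

rates = [f[2] for f in flows]
med = median(rates)
mad = median(abs(r - med) for r in rates)  # median absolute deviation

def is_anomalous(rate, cutoff=5.0):
    return abs(rate - med) > cutoff * max(mad, 1)

# Characterize the anomalies and emit a filtering rule per offender,
# as the paper suggests for configuring routers or firewalls.
rules = [f"deny from {src} to port {port}"
         for src, port, rate in flows if is_anomalous(rate)]
print(rules)  # ['deny from 203.0.113.9 to port 22']
```

Robust statistics (median/MAD rather than mean/variance) matter here because the anomalies themselves would otherwise dominate the baseline they are compared against.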


Tsikoudis, N.; Papadogiannakis, A.; Markatos, E.P., “LEoNIDS: a Low-latency and Energy-efficient Network-level Intrusion Detection System,” Emerging Topics in Computing, IEEE Transactions on, vol., no. 99, pp.1,1, 05 December 2014. doi:10.1109/TETC.2014.2369958
Abstract: Over the past decade, the design and implementation of low-power systems has received significant attention. Starting with data centers and battery-operated mobile devices, it has recently branched out to core network devices such as routers. However, this emerging need for low-power system design has not been studied for security systems, which are becoming increasingly important today. Towards this direction, we aim to reduce the power consumption of Network-level Intrusion Detection Systems (NIDS), which are used to improve the secure operation of modern computer networks. Unfortunately, traditional approaches to low-power system design, such as frequency scaling, lead to a disproportionate increase in packet processing and queuing times. In this work, we show that this increase has a negative impact on detection latency and impedes a timely reaction. To address this issue, we present LEoNIDS: an architecture that resolves the energy-latency tradeoff by providing both low power consumption and low detection latency at the same time. The key idea is to identify the packets that are more likely to carry an attack and give them higher priority so as to achieve low attack detection latency. Our results indicate that LEoNIDS consumes power comparable to a state-of-the-art low-power design while, at the same time, achieving up to an order of magnitude faster attack detection.
Keywords: Computer architecture; Delays; Mobile handsets; Power demand; Program processors; Security; Time-frequency analysis; Energy-Efficient Systems; Intrusion Detection Systems; Low Latency; Low-Power design; Multi-Core Packet Processing; Network Security; Performance (ID#: 15-6681)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6977945&isnumber=6558478
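The paper's key idea — inspecting likely-attack packets first so that detection latency stays low even under power-saving frequency scaling — can be sketched with a priority queue (the suspicion heuristic below is a made-up placeholder, not LEoNIDS's actual classifier):

```python
# Sketch of priority-based packet scheduling in the spirit of LEoNIDS:
# packets that look more likely to carry an attack are inspected first,
# so detection latency stays low even when the processor runs slowly.
import heapq
from itertools import count

SUSPICIOUS_PORTS = {23, 445, 3389}  # illustrative heuristic only
tie = count()                       # FIFO tiebreaker for equal scores

def suspicion(pkt):
    score = 0
    if pkt["dst_port"] in SUSPICIOUS_PORTS:
        score += 2
    if len(pkt["payload"]) > 1000:
        score += 1
    return score

queue = []

def enqueue(pkt):
    # heapq is a min-heap, so negate the score for high-priority-first.
    heapq.heappush(queue, (-suspicion(pkt), next(tie), pkt))

enqueue({"dst_port": 80,   "payload": b"GET /"})
enqueue({"dst_port": 445,  "payload": b"x" * 1500})   # most suspicious
enqueue({"dst_port": 3389, "payload": b"login"})

order = [heapq.heappop(queue)[2]["dst_port"] for _ in range(len(queue))]
print(order)  # [445, 3389, 80]
```

Under frequency scaling all packets wait longer, but with this ordering the waiting is absorbed by the low-suspicion traffic, which is the tradeoff the architecture exploits.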


Renukuntla, S.S.B.; Rawat, S., “Optimization of Excerpt Query Process for Packet Attribution System,” Information Assurance and Security (IAS), 2014 10th International Conference on, vol., no., pp. 41, 46, 28-30 Nov. 2014. doi:10.1109/ISIAS.2014.7064618
Abstract: The Internet and its applications have grown enormously in the past decade. This growth has also exposed users to various security threats. Network forensic techniques can be used to trace back the source and the path of an attack, which can serve as legal evidence in a court of law. Packet attribution techniques like Source Path Isolation (SPIE), Block Bloom Filter (BBF), and Hierarchical Bloom Filter (HBF) have been proposed to store packet data in bloom filters at each router in the network. All the routers in the Autonomous System (AS) are queried for the presence of an excerpt in their bloom filters to trace back the source and path of an attack. Upon receiving the excerpt query, each router searches its bloom filters for the excerpt and sends the result to the network management system (NMS). The NMS receives the responses from the routers and determines the traceback path from the victim to the source of the attack. In this process, all the routers are engaged in searching their bloom filters, causing possible delays in actual routing tasks. This degrades network performance and may adversely affect the network's QoS. To address these potential performance issues, in this paper we propose query optimization techniques that greatly reduce the number of routers to be searched, without adversely affecting storage and processing requirements compared to existing attribution methods.
Keywords: Internet; computer network security; data structures; digital forensics; optimisation; quality of service; query processing; telecommunication network routing; AS; Internet security; NMS; QoS; autonomous system; bloom filters; excerpt query process optimization; network forensic technique; packet attribution system; packet data store; routing task; source traceback; Hafnium; IP networks; Excerpt Query; Hash-based traceback; Packet Attribution System; Payload Attribution System (ID#: 15-6682)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7064618&isnumber=7064614
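For readers unfamiliar with the data structure underlying these attribution schemes, the Bloom-filter membership test at the heart of excerpt queries can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation; the class name, hash construction, and parameters are invented for this example:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k salted hash functions over a fixed-size bit array."""
    def __init__(self, size=1024, k=3):
        self.size = size
        self.k = k
        self.bits = [False] * size

    def _indexes(self, data):
        # Derive k indexes from salted SHA-256 digests of the payload excerpt.
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + data).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, data):
        for idx in self._indexes(data):
            self.bits[idx] = True

    def might_contain(self, data):
        # False positives are possible; false negatives are not.
        return all(self.bits[idx] for idx in self._indexes(data))

# Each router stores excerpts of forwarded packets; a management station
# can then query routers for an attack excerpt to reconstruct the path.
router_filter = BloomFilter()
router_filter.add(b"malicious payload excerpt")
assert router_filter.might_contain(b"malicious payload excerpt")
```

The one-sided error (no false negatives) is what makes a "yes" answer from a router useful for traceback, while false positives are bounded by the filter size and hash count.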


Tennekoon, R.; Wijekoon, J.; Harahap, E.; Nishi, H.; Saito, E.; Katsura, S., “Per Hop Data Encryption Protocol for Transmission of Motion Control Data over Public Networks,” Advanced Motion Control (AMC), 2014 IEEE 13th International Workshop on, vol., no., pp. 128, 133, 14-16 March 2014. doi:10.1109/AMC.2014.6823269
Abstract: Bilateral controllers are a widely used and vital technology for performing remote operations and telesurgeries. The nature of the bilateral controller enables control of objects that are geographically far from the operation location. Therefore, the control data has to travel through public networks. As a result, to maintain the effectiveness and consistency of applications such as teleoperations and telesurgeries, faster data delivery and data integrity are essential. The Service-oriented Router (SoR) was introduced to maintain rich information on the Internet and to achieve maximum benefit from networks. In particular, the security, privacy and integrity of bilateral communication have not been discussed, in spite of the significance of the underlying skill information or personal vital information it carries. An SoR can analyze all packet or network stream transactions on its interfaces and store them in high-throughput databases. In this paper, we introduce a hop-by-hop routing protocol which provides hop-by-hop data encryption using functions of the SoR. This infrastructure can provide security, privacy and integrity by using these functions. Furthermore, we present an implementation of the proposed system in the ns-3 simulator; the test results show that in a given scenario, the protocol incurs a processing delay of only 46.32 μs per packet for the encryption and decryption processes.
Keywords: Internet; computer network security; control engineering computing; cryptographic protocols; data communication; data integrity; data privacy; force control; medical robotics; motion control; position control; routing protocols; surgery; telecontrol; telemedicine; telerobotics; SoR; bilateral communication; bilateral controller; control objects; data delivery; decryption process; hop-by-hop data encryption; hop-by-hop routing protocol; motion control data transmission; network stream transaction analysis; ns-3 simulator; operation location; packet analysis; per hop data encryption protocol; personal vital information; privacy; processing delay; public network; remote operation; security; service-oriented router; skill information; teleoperation; telesurgery; throughput database; Delays; Encryption; Haptic interfaces; Routing protocols; Surgery; Bilateral Controllers; Service-oriented Router; hop-by-hop routing; motion control over networks; ns-3 (ID#: 15-6683)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823269&isnumber=6823244
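The hop-by-hop pattern described in this abstract (each router decrypts with the upstream hop's key and re-encrypts with the downstream hop's key) can be sketched as follows. This is a minimal illustration using a toy XOR cipher standing in for real encryption; the function names and keys are invented, and nothing here reflects the SoR implementation:

```python
def xor_cipher(data, key):
    """Toy symmetric cipher (XOR) standing in for real per-hop encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def forward_hop(ciphertext, key_in, key_out):
    """A router decrypts with the key shared with the upstream hop,
    then re-encrypts with the key shared with the downstream hop."""
    plaintext = xor_cipher(ciphertext, key_in)   # per-hop decrypt
    return xor_cipher(plaintext, key_out)        # per-hop re-encrypt

# A motion-control sample traverses three hops, each with its own key.
keys = [b"k1", b"k2", b"k3"]
msg = b"position=42;force=7"
ct = xor_cipher(msg, keys[0])
for k_in, k_out in zip(keys, keys[1:]):
    ct = forward_hop(ct, k_in, k_out)
assert xor_cipher(ct, keys[-1]) == msg  # receiver decrypts with the last hop key
```

The design trade-off the paper measures is exactly the cost of that decrypt/re-encrypt step at every router, which is why the per-packet processing delay is the headline result.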


Jin Hai; Wang Hai-Lan, “Research of the Router Scheme for Virtual Storage,” Intelligent Computation Technology and Automation (ICICTA), 2014 7th International Conference on, vol., no., pp. 484, 487, 25-26 Oct. 2014. doi:10.1109/ICICTA.2014.123
Abstract: With the development of storage technology, the increase in storage quantity, and the diversity of storage forms, virtual storage has become the most adaptable technology in the current environment. It helps provide users with interactive procedures in heterogeneous environments, maintains operating system continuity, simplifies storage management, and reduces storage cost. Virtual storage centrally manages multiple storage devices and provides large capacity and high data transmission performance for users. It can be realized at three levels: virtual storage based on hosts, on networks, and on storage devices. Network-based virtual storage can be further divided into virtualization based on net devices, switches, and routers. This paper focuses on the router scheme for virtual storage, which has higher performance and better security compared with the other methods. Router-based virtualization integrates virtual modules into the routers, giving storage routers in the network both the exchange functions of switches and the protocol conversion functions of routers. It makes full use of current storage resources, protects user investment, and also allows users on the Ethernet to connect to the virtual storage pool, which can use different protocol channels at the same time. As a current research focus, virtual storage can be used in several fields, such as data mirroring, data replication, tape backup enhancement devices, real-time duplication, real-time data recovery, and application integration.
Keywords: data communication; local area networks; protocols; virtual storage; virtualisation; Ethernet; multiple storage devices; protocol channels; protocol conversion functions; storage cost reduction; storage management complexity; storage resources; storage router scheme; switches conversion functions; virtual storage; virtualization; Operating systems; Routing protocols; Servers; Storage area networks; Storage management; Virtualization; Data Storage; Router Scheme; Storage Devices; Storage Management; Virtual Storage (ID#: 15-6684)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7003586&isnumber=7003462


Ahmed, A.S.; Hassan, R.; Othman, N.E., “Security Threats for IPv6 Transition Strategies: A Review,” Engineering Technology and Technopreneuship (ICE2T), 2014 4th International Conference on, vol., no., pp. 83, 88, 27-29 Aug. 2014. doi:10.1109/ICE2T.2014.7006224
Abstract: There is a growing perception among communications experts that IPv6 and its associated protocols are set to soon replace the current IP version. This is somewhat interesting given that general adoption of IPv6 has been slow, which can perhaps be explained by short-term fixes to IPv4 addressing, including classless addressing and NAT. These short-term solutions aside, IPv4 is not capable of managing the growth of information systems, particularly the growth of internet technologies and services including cloud computing, mobile IP, IP telephony, and IP-capable mobile telephony, all of which necessitate the use of IPv6. There is, however, a realization that the transformation must be gradual and properly guided and managed. To this end, the Internet Engineering Task Force (IETF) has defined mechanisms to assist in the transition from IPv4 to IPv6: Dual Stack, Header Translation and Tunneling. The mechanisms employed in this transition consist of changes to protocol mechanisms affecting hosts and routers, addressing and deployment, and are designed to avoid mishaps and facilitate a smooth transition from IPv4 to IPv6. Given the inevitability of adopting IPv6, this paper focuses on a detailed examination of the transition techniques and their associated benefits and possible shortcomings. Furthermore, the security threats for each transition technique are reviewed.
Keywords: Internet; information systems; security of data; transport protocols; IETF; IP-capable mobile telephony; IPv4; IPv6 transition strategy; Internet engineering task force; NAT; classless addressing; cloud computing; dual stack; header translation; information system; internet technology; mobile IP; protocol mechanism; security threat; tunneling; Encapsulation; Firewalls (computing); IP networks; Internet; Protocols; Dual Stack; IPv4; IPv6; Translation; Tunneling (ID#: 15-6685)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7006224&isnumber=7006204


Guneysu, T.; Regazzoni, F.; Sasdrich, P.; Wojcik, M., “THOR — The Hardware Onion Router,” Field Programmable Logic and Applications (FPL), 2014 24th International Conference on, vol., no., pp. 1, 4, 2-4 Sept. 2014. doi:10.1109/FPL.2014.6927408
Abstract: Security and privacy of data traversing the Internet have always been a major concern for all users. In this context, The Onion Routing (Tor) is the most successful protocol to anonymize global Internet traffic and is widely deployed as software on many personal computers or servers. In this paper, we explore the potential of modern reconfigurable devices to efficiently realize the Tor protocol on embedded devices. In particular, this targets the acceleration of the complex cryptographic operations involved in the handshake of routing nodes and the data stream encryption. Our hardware-based implementation on the Xilinx Zynq platform outperforms previous embedded solutions by more than a factor of 9 with respect to the cryptographic handshake — ultimately enabling quite inexpensive but highly efficient routers. Hence, we consider our work as a further milestone towards the development and the dissemination of low-cost and high performance onion relays that hopefully ultimately leads again to a more private Internet.
Keywords: Internet; computer network security; cryptographic protocols; data privacy; embedded systems; routing protocols; system-on-chip; telecommunication traffic; SoC; THOR; Tor protocol; Xilinx Zynq platform; complex cryptographic operations; cryptographic handshake; data stream encryption; embedded devices; global Internet traffic; hardware onion router; hardware-based implementation; modern reconfigurable devices; onion routing protocol; routing nodes handshake; security; Computer architecture; Encryption; Hardware; Protocols; Relays; Software (ID#: 15-6686)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6927408&isnumber=6927322


Sapio, A.; Baldi, M.; Liao, Y.; Ranjan, G.; Risso, F.; Tongaonkar, A.; Torres, R.; Nucci, A., “MAPPER: A Mobile Application Personal Policy Enforcement Router for Enterprise Networks,” Software Defined Networks (EWSDN), 2014 Third European Workshop on, vol., no., pp. 131, 132, 1-3 Sept. 2014. doi:10.1109/EWSDN.2014.9
Abstract: MAPPER is a system for enforcing user-specific policies based on the availability of access nodes that support the capability to dynamically load and execute processing modules on the data path. This work leverages a network access node that, after authenticating a connecting user, loads a set of lightweight virtual machines that process traffic terminated on the user device to implement articulated user-specific access policies. Specifically, we demonstrate how a man-in-the-middle-proxy module, dynamically and opportunistically combined with a module capable of mobile application identification, can implement complex access policies. The man-in-the-middle-proxy module enables MAPPER policies to be applied to both clear and HTTPS traffic, while an intelligent traffic classification system provides support for policies based on over 250,000 mobile apps spanning both Android and iOS platforms.
Keywords: Android (operating system); business communication; iOS (operating system); mobile computing; security of data; telecommunication network routing; virtual machines; Android platform; MAPPER; articulated user specific access policies; complex access policies; enterprise networks; iOS platform; intelligent traffic classification system; lightweight virtual machines; mobile application personal policy enforcement router; network access node; user authentication; userspecific policies; Conferences; Europe; Mobile communication; Mobile computing; Monitoring; Smart phones; Virtual machining; apps; network; policy; virtualization (ID#: 15-6687)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984070&isnumber=6984033


Ganegedara, T.; Weirong Jiang; Prasanna, V.K., “A Scalable and Modular Architecture for High-Performance Packet Classification,” Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no.5, pp. 1135, 1144, May 2014. doi:10.1109/TPDS.2013.261
Abstract: Packet classification is widely used as a core function for various applications in network infrastructure. With increasing demands in throughput, performing wire-speed packet classification has become challenging. Also, the performance of today's packet classification solutions depends on the characteristics of rulesets. In this work, we propose a novel modular Bit-Vector (BV) based architecture to perform high-speed packet classification on Field Programmable Gate Array (FPGA). We introduce an algorithm named StrideBV and modularize the BV architecture to achieve better scalability than traditional BV methods. Further, we incorporate range search in our architecture to eliminate ruleset expansion caused by range-to-prefix conversion. The post place-and-route results of our implementation on a state-of-the-art FPGA show that the proposed architecture is able to operate at 100+ Gbps for minimum size packets while supporting large rulesets up to 28 K rules using only the on-chip memory resources. Our solution is ruleset-feature independent, i.e., the above performance can be guaranteed for any ruleset regardless of the composition of the ruleset.
Keywords: field programmable gate arrays; packet switching; FPGA; core function; field programmable gate array; high performance packet classification solutions; high speed packet classification; modular architecture; modular bit vector; network infrastructure; on-chip memory resources; range-to-prefix conversion; ruleset expansion; ruleset-feature independent; scalable architecture; wire speed packet classification; Arrays; Field programmable gate arrays; Hardware; Memory management; Pipelines; Throughput; Vectors; ASIC; FPGA; Packet classification; firewall; hardware architectures; network security; networking; router (ID#: 15-6688)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6627892&isnumber=6786006
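The core idea of bit-vector packet classification (each header field looks up a vector of candidate rules, and the vectors are ANDed across fields) can be sketched in software. This is an illustrative toy with invented rules and two string-valued fields, not StrideBV or the paper's hardware pipeline:

```python
def build_tables(rules):
    """Per-field lookup tables mapping a header value to a bit-vector of
    candidate rules. A '*' in a rule matches any value for that field."""
    nfields = len(rules[0])
    tables = []
    for f in range(nfields):
        exact = {}
        wild = 0
        for i, rule in enumerate(rules):
            bit = 1 << i
            if rule[f] == "*":
                wild |= bit
            else:
                exact[rule[f]] = exact.get(rule[f], 0) | bit
        tables.append((exact, wild))
    return tables

def classify(tables, packet):
    """AND the per-field vectors; the lowest set bit is the
    highest-priority matching rule (rules are listed by priority)."""
    vec = -1  # all ones
    for (exact, wild), value in zip(tables, packet):
        vec &= exact.get(value, 0) | wild
    if vec == 0:
        return None
    return (vec & -vec).bit_length() - 1  # index of lowest set bit

rules = [
    ("10.0.0.1", "80"),   # rule 0: exact source, port 80
    ("10.0.0.1", "*"),    # rule 1: exact source, any port
    ("*", "*"),           # rule 2: default
]
tables = build_tables(rules)
```

In hardware, the AND across fields is done in parallel over bit strides, which is why the approach scales with ruleset size independently of ruleset composition.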


Fadlallah, A., “Adaptive Probabilistic Packet Marking Scheme for IP Traceback,” Computer Applications and Information Systems (WCCAIS), 2014 World Congress on, vol., no., pp. 1, 5, 17-19 Jan. 2014. doi:10.1109/WCCAIS.2014.6916548
Abstract: IP Traceback is a fundamental mechanism in defending against cyber-attacks, in particular denial of service (DoS) attacks. Many schemes have been proposed in the literature; in particular, Probabilistic Packet Marking (PPM) schemes were at the center of researchers’ attention given their scalability and thus their ability to trace distributed attacks such as distributed denial of service (DDoS) attacks. A major issue in PPM-based schemes is the fixed marking probability, which reduces the probability of getting marked packets from routers far away from the victim, given that their marked packets have a higher probability of being re-marked by routers near the victim. This increases the number of packets required to reconstruct the attack path. In this paper, we propose a simple yet efficient solution for this issue by letting the routers adapt their marking probability based on the number of packets they have previously re-marked. We compare our scheme to the original PPM through extensive simulations. The results clearly show the improvement brought by our proposed marking scheme.
Keywords: IP networks; computer network security; probability; DDoS attacks; IP traceback; PPM schemes; PPM-based schemes; adaptive probabilistic packet marking scheme; cyber-attacks; distributed denial of service attacks; marking probability; Computers; Filtering theory; Internet; Probabilistic logic; Radiation detectors; Simulation; Denial of Service attacks; IP traceback; Probabilistic Packet Marking (ID#: 15-6689)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916548&isnumber=6916540
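The adaptive idea in this abstract (a router lowers its marking probability as it re-marks more packets, so marks from distant routers survive to the victim more often) can be simulated in a few lines. The decay rule, names, and parameters below are invented for illustration and are not the paper's exact scheme:

```python
import random

def adaptive_mark(packet, router_id, remarked_count, base_p=0.04):
    """Hypothetical adaptive marking rule: the more packets this router
    has already re-marked, the lower its marking probability becomes."""
    p = base_p / (1 + remarked_count)  # illustrative decay, not the paper's formula
    if random.random() < p:
        was_marked = packet["mark"] is not None
        packet["mark"] = router_id
        return was_marked  # True only if we overwrote (re-marked) an earlier mark
    return False

def traverse_path(path, n_packets, seed=0):
    """Send n_packets from the attacker through `path` to the victim;
    count whose mark each packet carries on arrival."""
    random.seed(seed)
    remarked = {r: 0 for r in path}
    received = {r: 0 for r in path}
    for _ in range(n_packets):
        pkt = {"mark": None}
        for r in path:
            if adaptive_mark(pkt, r, remarked[r]):
                remarked[r] += 1
        if pkt["mark"] is not None:
            received[pkt["mark"]] += 1
    return received
```

Comparing the resulting mark counts against a fixed-probability baseline is exactly the kind of simulation the paper uses to argue that fewer packets suffice to reconstruct the attack path.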


Liang Chen, “Secure Network Coding for Wireless Routing,” Communications (ICC), 2014 IEEE International Conference on, vol., no., pp. 1941, 1946, 10-14 June 2014. doi:10.1109/ICC.2014.6883607
Abstract: Networking today is considered secure because confidential messages are encrypted, under the assumption that adversaries in the network are computationally bounded. With traditional routing or network coding, routers know the contents of the packets they receive, so networking is no longer secure if there are eavesdroppers with infinite computational power at routers. Our concern is whether we can achieve stronger security at routers. This paper proposes secure network coding for wireless routing. Combining channel coding and network coding, this scheme can not only provide physical-layer security at wireless routers but also forward data error-free at a high rate. In the paper we prove this scheme can be applied to general networks for secure wireless routing.
Keywords: channel coding; telecommunication network routing; channel coding; forward data error-free; physical layer security; secure network coding; secure wireless routing; Communication system security; Network coding; Protocols; Relays; Routing; Security; Throughput; information-theoretic secrecy; network coding; network information theory; wireless routing (ID#: 15-6690)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883607&isnumber=6883277


Krishnan, R.; Krishnaswamy, D.; Mcdysan, D., “Behavioral Security Threat Detection Strategies for Data Center Switches and Routers,” Distributed Computing Systems Workshops (ICDCSW), 2014 IEEE 34th International Conference on, vol., no., pp.  82, 87, June 30 2014–July 3 2014. doi:10.1109/ICDCSW.2014.19
Abstract: Behavioral security threats such as Distributed Denial of Service (DDoS) attacks are an ongoing problem in large-scale Data Centers (DCs) and pose huge performance challenges to DC operators. Typically, a dedicated Firewall/DDoS appliance is needed for Layer 2-7 behavioral security threat detection and mitigation. This solution is cost-prohibitive for large-scale multi-tenant DCs with high-throughput performance needs. This paper examines various Layer 2-4 behavioral security threat detection methods and assists which are implementable in switches and routers at low cost. For DCs, this complements the overall behavioral security threat detection strategy and enables operators to offer tiered services. Extensions to emerging NFV and SDN scenarios are also discussed.
Keywords: computer centres; computer network security; DC; DDoS attack; behavioral security threat detection strategy; data center routers; data center switches; distributed denial-of-service attack; firewall; high throughput performance needs; software defined networking; Bandwidth; Computer crime; Home appliances; IP networks; Image edge detection; Servers; Data Center; Distributed Denial of Service; NFV; SDN; Security; Threat Detection (ID#: 15-6691)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6888844&isnumber=6888817


Abirami, R.; Premalatha, G., “Depletion of Vampire Attacks in Medium Access Control Level Using Interior Gateway Routing Protocol,” Information Communication and Embedded Systems (ICICES), 2014 International Conference on, vol., no., pp. 1,5, 27-28 Feb. 2014. doi:10.1109/ICICES.2014.7033801
Abstract: A wireless sensor network is a group of network nodes which collaborate with each other in a sophisticated fashion. It is built of nodes, from a few to several hundreds or even thousands, where each node is connected to one (or sometimes several) sensors. In a WSN, the second layer of the OSI reference model is the data link layer, which has a Medium Access Control sublayer. The choice of Medium Access Control (MAC) protocol has a direct bearing on the reliability and efficiency of network transmissions, due to errors and interference in wireless communications and to other challenges. MAC protocols are primarily responsible for regulating access to the shared medium. Many protocols have been developed to protect against DoS attacks, but complete protection is not possible. One such DoS attack is the vampire attack, which causes damage to the network: the security level is low and productivity is reduced, which can lead to environmental disasters and loss of information. Routing protocols play an important role in modern wireless communication networks. Hence we propose using the Interior Gateway Routing Protocol (IGRP), which routers use to exchange routing data within an autonomous system. In a WSN, routing protocols find the route between nodes and ensure consistent communication between the nodes in the network.
Keywords: access protocols; computer network security; routing protocols; wireless sensor networks; DOS attack; IGRP; MAC protocol; OSI reference layer; WSN routing protocols; data link layer; interior gateway routing protocol; medium access control level; medium access control protocol; network transmissions; routing data exchange; vampire attack depletion; wireless communication networks; wireless sensor network; Computer crime; Logic gates; Routing; Routing protocols; Sensors; Wireless sensor networks; DOS attack; IGRP; MAC; WSN (ID#: 15-6692)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7033801&isnumber=7033740


Fengjiao Li; Luyong Zhang; Dianjun Chen, “Vulnerability Mining of Cisco Router Based on Fuzzing,” Systems and Informatics (ICSAI), 2014 2nd International Conference on, vol., no., pp. 649, 653, 15-17 Nov. 2014. doi:10.1109/ICSAI.2014.7009366
Abstract: Router security analysis plays a vital role in maintaining network security. However, IOS, which runs in Cisco routers, has been shown to carry serious security risks, and in order to improve security, we need to conduct vulnerability mining on IOS. Currently, Fuzzing, a simple and effective automated testing technology, is widely used in vulnerability discovery. In this paper, we introduce a novel testing framework for Cisco routers. Based on this framework, we first generate test cases with the Semi-valid Fuzzing Test Cases Generator (SFTCG), which considerably improves test effectiveness and code coverage. After that, we develop a new Fuzzer based on SFTCG and then emulate a Cisco router in Dynamips, which makes it easy to interact with GDB or IDA Pro for debugging. In order to supervise the target, we employ a Monitor Module to check the status of the router regularly. Finally, through an experiment on the ICMP protocol in IOS, we find the released vulnerabilities of Ping of Death and Denial of Service, which demonstrates the effectiveness of our proposed Fuzzer.
Keywords: computer network security; routing protocols; transport protocols; Cisco router mining; Denial of Service; GDB; ICMP protocol; IDA; IOS; SFTCG; dynamip; internet control message protocol; monitor module; network security; router security risk analysis; semivalid fuzzing test case generator; target supervision; Communication networks; Debugging; Monitoring; Routing protocols; Security; Testing; Cisco IOS; Fuzzing; SFTCG; Vulnerability (ID#: 15-6693)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7009366&isnumber=7009247


Kekai Hu; Wolf, T.; Teixeira, T.; Tessier, R., “System-Level Security for Network Processors with Hardware Monitors,” Design Automation Conference (DAC), 2014 51st ACM/EDAC/IEEE, vol., no., pp. 1, 6, 1-5 June 2014. doi: (not provided)
Abstract: New attacks are emerging that target the Internet infrastructure. Modern routers use programmable network processors that may be exploited by merely sending suitably crafted data packets into a network. Hardware monitors that are co-located with processor cores can detect attacks that change processor behavior with high probability. In this paper, we present a solution to the problem of secure, dynamic installation of hardware monitoring graphs on these devices. We also address the problem of how to overcome the homogeneity of a network with many identical devices, where a successful attack, albeit possible only with small probability, may have devastating effects.
Keywords: computer network management; computer network security; cryptography; multiprocessing systems; Internet infrastructure; data packets; dynamic installation; hardware monitoring graphs; hardware monitors; modern routers; processor behavior; processor cores; programmable network processors; Hardware; Monitoring; Program processors; Prototypes; Public key (ID#: 15-6694)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881538&isnumber=6881325


Biswas, J.; Gupta, A.; Singh, D., “WADP: A Wormhole Attack Detection and Prevention Technique in MANET Using Modified AODV Routing Protocol,” Industrial and Information Systems (ICIIS), 2014 9th International Conference on, vol., no., pp. 1, 6, 15-17 Dec. 2014. doi:10.1109/ICIINFS.2014.7036535
Abstract: Mobile Ad hoc Networks (MANETs) are prone to a variety of attacks due to their unique characteristics, such as dynamic topology, an open wireless medium, absence of infrastructure, multi-hop operation, and resource constraints. A node in a MANET acts not only as an end terminal but as both router and client. Multi-hop communication thus occurs in MANETs, which makes it much more difficult to establish a secure path between the source and destination. The objective of this work is to overcome a special kind of attack, the wormhole attack, launched by at least two colluding nodes within a network. In this work, modifications have been made to the AODV routing protocol to detect and remove wormhole attacks in real-world MANETs. The wormhole attack detection and prevention algorithm, WADP, has been implemented in the modified AODV. Node authentication is also used to detect malicious nodes and remove the false-positive problem that may arise in the WADP algorithm. Node authentication not only removes false positives but also helps map the exact location of the wormhole, serving as a kind of double verification for wormhole attack detection. Simulation results support the approach.
Keywords: invasive software; mobile ad hoc networks; routing protocols; telecommunication network topology; telecommunication security; AODV routing protocol; MANET; WADP algorithm; dynamic topology; multihop communication; node authentication; open wireless medium; wormhole attack detection and prevention technique; Authentication; Delays; IP networks; Mobile ad hoc networks; Monitoring; Routing protocols; Synchronization; attack modes; modified AODV; wireless ad hoc network; wormhole nodes (ID#: 15-6695)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036535&isnumber=7036459


Shankar, S.S.; Lin PinXing; Herkersdorf, A., “Deep Packet Inspection in Residential Gateways and Routers: Issues and Challenges,” Integrated Circuits (ISIC), 2014 14th International Symposium on, vol., no., pp. 560, 563, 10-12 Dec. 2014. doi:10.1109/ISICIR.2014.7029481
Abstract: Several industry trends and new applications have brought the residential gateway router (RGR) to the center of the digital home, with direct connectivity to the service provider’s network. Increasing risks of network attacks have necessitated deep packet inspection in the network processor (NP) used by the RGR to match traffic at multi-gigabit throughput. Traditional deep packet inspection (DPI) implementations primarily focus on end hosts like servers and personal/handheld computers. Existing DPI signature matching techniques cannot be directly implemented in an RGR due to various issues and challenges pertaining to the processing capacity of the NP and its associated memory constraints. Four key factors are therefore proposed (regular expression support, gigabit throughput, scalability, and ease of signature updates) through which the best signature matching system could be designed for efficient DPI implementation in an RGR.
Keywords: computer network security; digital signatures; internetworking; telecommunication network routing; telecommunication traffic; DPI implementation; DPI signature matching techniques; NP processing capacity; RGR; deep-packet inspection; digital home; ease-of-signature update factor; gigabit throughput factor; memory constraints; network attack risks; network processor; network traffic; regular expression support factor; residential gateway router; scalability factor; service provider network; Algorithm design and analysis; Automata; Inspection; Memory management; Pattern matching; Software; Throughput; Deep Packet Inspection; Network Security; Regular Expressions; Residential Gateway and Router (ID#: 15-6696)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7029481&isnumber=7029433
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Software Assurance, 2014, Part 1

 

 
SoS Logo

Software Assurance, 2014

Part 1


Software assurance is an essential element in the development of scalable and composable systems.  For a complete system to be secure, each subassembly must be secure. The research work cited here was presented in 2014.



Konrad Iwanicki, Przemyslaw Horban, Piotr Glazar, Karol Strzelecki; “Bringing Modern Unit Testing Techniques to Sensornets,” ACM Transactions on Sensor Networks (TOSN), Volume 11, Issue 2, August 2014, Article No. 25. doi:10.1145/2629422
Abstract: Unit testing, an important facet of software quality assurance, is underappreciated by wireless sensor network (sensornet) developers. This is likely because our tools lag behind the rest of the computing field. As a remedy, we present a new framework that enables modern unit testing techniques in sensornets. Although the framework takes a holistic approach to unit testing, its novelty lies mainly in two aspects. First, to boost test development, it introduces embedded mock modules that automatically abstract out dependencies of tested code. Second, to automate test assessment, it provides embedded code coverage tools that identify untested control flow paths in the code. We demonstrate that in sensornets these features pose unique problems, solving which requires dedicated support from the compiler and operating system. However, the solutions have the potential to offer substantial benefits. In particular, they reduce the unit test development effort by a few factors compared to existing solutions. At the same time, they facilitate obtaining full code coverage, compared to merely 57–72% that can be achieved with integration tests. They also allow for intercepting and reporting many classes of runtime failures, thereby simplifying the diagnosis of software flaws. Finally, they enable fine-grained management of the quality of sensornet software.
Keywords: Unit testing, code coverage, embedded systems, mock objects, software quality assurance, wireless sensor networks (ID#: 15-6236)
URL:  http://doi.acm.org/10.1145/2629422
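The "embedded mock modules" this abstract describes serve the same role as mock objects in mainstream unit testing: they stand in for hardware-facing dependencies so a unit can be exercised on a development host. A minimal desktop analogue using Python's standard `unittest.mock` (the sensor driver, conversion formula, and function names are invented for illustration and have no relation to the framework in the paper):

```python
from unittest import mock

def read_temperature(sensor_driver):
    """Code under test: converts a raw ADC reading to degrees Celsius."""
    raw = sensor_driver.sample()
    return raw * 0.25 - 10.0

def test_read_temperature():
    # Mock the hardware-facing dependency so the unit runs without a sensor.
    driver = mock.Mock()
    driver.sample.return_value = 100
    assert read_temperature(driver) == 15.0   # 100 * 0.25 - 10.0
    driver.sample.assert_called_once()        # verify the interaction, too

test_read_temperature()
```

The paper's contribution is making this pattern work under sensornet constraints, where generating such mocks automatically requires compiler and operating system support.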


Peter C. Rigby, Daniel M. German, Laura Cowen, Margaret-Anne Storey; “Peer Review on Open-Source Software Projects: Parameters, Statistical Models, and Theory,” ACM Transactions on Software Engineering and Methodology (TOSEM) - Special Issue International Conference on Software Engineering (ICSE 2012) and Regular Papers, Volume 23, Issue 4, August 2014, Article No. 35. doi:10.1145/2594458
Abstract: Peer review is seen as an important quality-assurance mechanism in both industrial development and the open-source software (OSS) community. The techniques for performing inspections have been well studied in industry; in OSS development, software peer reviews are not as well understood.  To develop an empirical understanding of OSS peer review, we examine the review policies of 25 OSS projects and study the archival records of six large, mature, successful OSS projects. We extract a series of measures based on those used in traditional inspection experiments. We measure the frequency of review, the size of the contribution under review, the level of participation during review, the experience and expertise of the individuals involved in the review, the review interval, and the number of issues discussed during review. We create statistical models of review efficiency (the review interval) and effectiveness (the issues discussed during review) to determine which measures have the largest impact on review efficacy.  We find that OSS peer reviews are conducted asynchronously by empowered experts who focus on changes that are in their area of expertise. Reviewers provide timely, regular feedback on small changes. The descriptive statistics clearly show that OSS review is drastically different from traditional inspection.
Keywords: Peer review, inspection, mining software repositories, open-source software (ID#: 15-6237)
URL:  http://doi.acm.org/10.1145/2594458


Lucas Layman, Victor R. Basili, Marvin V. Zelkowitz; “A Methodology for Exposing Risk in Achieving Emergent System Properties,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 23, Issue 3, May 2014,  Article No. 22. doi:10.1145/2560048
Abstract: Determining whether systems achieve desired emergent properties, such as safety or reliability, requires an analysis of the system as a whole, often in later development stages when changes are difficult and costly to implement. In this article we propose the Process Risk Indicator (PRI) methodology for analyzing and evaluating emergent properties early in the development cycle. A fundamental assumption of system engineering is that risk mitigation processes reduce system risks, yet these processes may also be a source of risk: (1) processes may not be appropriate for achieving the desired emergent property; or (2) processes may not be followed appropriately. PRI analyzes development process artifacts (e.g., designs pertaining to reliability or safety analysis reports) to quantify process risks that may lead to higher system risk. We applied PRI to the hazard analysis processes of a network-centric, Department of Defense system-of-systems and two NASA spaceflight projects to assess the risk of not achieving one such emergent property, software safety, during the early stages of the development lifecycle. The PRI methodology was used to create measurement baselines for process indicators of software safety risk, to identify risks in the hazard analysis process, and to provide feedback to projects for reducing these risks.
Keywords: Process risk, risk measurement, software safety (ID#: 15-6238)
URL:  http://doi.acm.org/10.1145/2560048


Pingyu Zhang, Sebastian Elbaum; “Amplifying Tests to Validate Exception Handling Code: An Extended Study in the Mobile Application Domain,” ACM Transactions on Software Engineering and Methodology (TOSEM) - Special Issue International Conference on Software Engineering (ICSE 2012) and Regular Papers, Volume 23, Issue 4, August 2014, Article No. 32. doi:10.1145/2652483
Abstract: Validating code handling exceptional behavior is difficult, particularly when dealing with external resources that may be noisy and unreliable, as it requires (1) systematic exploration of the space of exceptions that may be thrown by the external resources, and (2) setup of the context to trigger specific patterns of exceptions. In this work, we first present a study quantifying the magnitude of the problem by inspecting the bug repositories of a set of popular applications in the increasingly relevant domain of Android mobile applications. The study revealed that 22% of the confirmed and fixed bugs have to do with poor exception handling code, and half of those correspond to interactions with external resources. We then present an approach that addresses this challenge by performing a systematic amplification of the program space explored by a test by manipulating the behavior of external resources. Each amplification attempts to expose a program’s exception handling constructs to new behavior by mocking an external resource so that it returns normally or throws an exception following a predefined set of patterns. Our assessment of the approach indicates that it can be fully automated, is powerful enough to detect 67% of the faults reported in the bug reports of this kind, is precise enough that 78% of the detected anomalies are fixed, and has great potential to assist developers.
Keywords: Test transformation, exception handling, mobile applications, test amplification, test case generation (ID#: 15-6239)
URL: http://doi.acm.org/10.1145/2652483
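The mocking strategy this abstract describes can be illustrated with a small sketch (Python; all names — `fetch_profile`, `PatternedClient`, `NetworkError` — are hypothetical, not from the paper): an external resource is replaced by a mock that replays a predefined pattern of normal returns and thrown exceptions, and the same scenario is re-run under each pattern to exercise the exception-handling paths.

```python
class NetworkError(Exception):
    """Stand-in for a failure raised by an external resource."""
    pass

def fetch_profile(client, user_id):
    """Code under test: must degrade gracefully when the network fails."""
    try:
        return client.get(f"/users/{user_id}")
    except NetworkError:
        return {"id": user_id, "name": "<offline>"}

class PatternedClient:
    """Mock resource replaying a pattern: 'ok' returns normally, 'err' raises."""
    def __init__(self, pattern):
        self.pattern = list(pattern)

    def get(self, path):
        step = self.pattern.pop(0)
        if step == "err":
            raise NetworkError(path)
        return {"id": path.rsplit("/", 1)[-1], "name": "alice"}

def amplify(patterns):
    """Re-run the same scenario under each exception pattern; collect outcomes."""
    outcomes = {}
    for pattern in patterns:
        profile = fetch_profile(PatternedClient(pattern), "42")
        outcomes[pattern] = profile["name"]
    return outcomes

outcomes = amplify([("ok",), ("err",)])
```

In the paper's setting, the patterns and mocked resources are derived automatically from the application under test; this sketch only shows the core idea of amplifying one test into several by varying the mocked resource's behavior.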


Salah Bouktif, Houari Sahraoui, Faheem Ahmed; “Predicting Stability of Open-Source Software Systems Using Combination of Bayesian Classifiers,” ACM Transactions on Management Information Systems (TMIS); Volume 5 Issue 1, April 2014,  Article No. 3. doi:10.1145/2555596
Abstract: The use of free and Open-Source Software (OSS) systems is gaining momentum. Organizations are now adopting OSS despite some reservations, particularly about quality. Stability of software is one of the main features in software quality management that needs to be understood and accurately predicted. It deals with the impact resulting from software changes and argues that stable components lead to a cost-effective software evolution. Changes are more common in OSS than in proprietary software, which makes OSS system evolution a rich context in which to study and predict stability. Our objective in this work is to build stability prediction models that are not only accurate but also interpretable, that is, able to explain the link between the architectural aspects of a software component and its stability behavior in the context of OSS. Therefore, we propose a new approach based on classifiers combination capable of preserving prediction interpretability. Our approach is classifier-structure dependent. Therefore, we propose a particular solution for combining Bayesian classifiers in order to derive a more accurate composite classifier that preserves interpretability. This solution is implemented using a genetic algorithm and applied in the context of an OSS large-scale system, namely the standard Java API. The empirical results show that our approach outperforms state-of-the-art approaches from both machine learning and software engineering.
Keywords: Bayesian classifiers, Software stability prediction, genetic algorithm (ID#: 15-6240)
URL:  http://doi.acm.org/10.1145/2555596


Yuming Zhou, Baowen Xu, Hareton Leung, Lin Chen; “An In-Depth Study of the Potentially Confounding Effect of Class Size in Fault Prediction,” ACM Transactions on Software Engineering and Methodology (TOSEM); Volume 23, Issue 1, February 2014, Article No. 10. doi:10.1145/2556777
Abstract: Background. The extent of the potentially confounding effect of class size in the fault prediction context is not clear, nor is the method to remove the potentially confounding effect, or the influence of this removal on the performance of fault-proneness prediction models. Objective. We aim to provide an in-depth understanding of the effect of class size on the true associations between object-oriented metrics and fault-proneness. Method. We first employ statistical methods to examine the extent of the potentially confounding effect of class size in the fault prediction context. After that, we propose a linear regression-based method to remove the potentially confounding effect. Finally, we empirically investigate whether this removal could improve the prediction performance of fault-proneness prediction models. Results. Based on open-source software systems, we found: (a) the confounding effect of class size on the associations between object-oriented metrics and fault-proneness in general exists; (b) the proposed linear regression-based method can effectively remove the confounding effect; and (c) after removing the confounding effect, the prediction performance of fault prediction models with respect to both ranking and classification can in general be significantly improved. Conclusion. We should remove the confounding effect of class size when building fault prediction models.
Keywords: Metrics, class size, confounding effect, fault, prediction (ID#: 15-6241)
URL:  http://doi.acm.org/10.1145/2556777
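The linear-regression-based removal the abstract mentions can be sketched as follows (a minimal pure-Python illustration with made-up numbers, assuming the idea reduces to regressing each object-oriented metric on class size and keeping the residual as the size-adjusted metric; this is not the authors' exact procedure):

```python
def linreg(x, y):
    """Ordinary least squares fit y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def remove_size_effect(metric, size):
    """Replace a metric with its residual after regressing on class size,
    so that the remaining variation is uncorrelated with size."""
    a, b = linreg(size, metric)
    return [m - (a + b * s) for m, s in zip(metric, size)]

size = [100, 200, 300, 400]     # class size, e.g. lines of code
metric = [12, 22, 30, 42]       # an OO metric correlated with size
adjusted = remove_size_effect(metric, size)
```

A fault-prediction model would then be trained on `adjusted` (plus class size itself, if desired) instead of the raw metric, so that any remaining association with fault-proneness is not merely a size effect.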


Lionel Briand, Davide Falessi, Shiva Nejati, Mehrdad Sabetzadeh, Tao Yue; “Traceability and SysML Design Slices to Support Safety Inspections: A Controlled Experiment,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 23, Issue 1, February 2014,  Article No. 9. doi:10.1145/2559978
Abstract: Certifying safety-critical software and ensuring its safety requires checking the conformance between safety requirements and design. Increasingly, the development of safety-critical software relies on modeling, and the System Modeling Language (SysML) is now commonly used in many industry sectors. Inspecting safety conformance by comparing design models against safety requirements requires safety inspectors to browse through large models and is consequently time consuming and error-prone. To address this, we have devised a mechanism to establish traceability between (functional) safety requirements and SysML design models to extract design slices (model fragments) that filter out irrelevant details but keep enough context information for the slices to be easy to inspect and understand. In this article, we report on a controlled experiment assessing the impact of the traceability and slicing mechanism on inspectors' conformance decisions and effort. Results show a significant decrease in effort and an increase in decisions’ correctness and level of certainty.
Keywords: Empirical software engineering, design, requirements specification, software and system safety, software/program verification (ID#: 15-6242)
URL: http://doi.acm.org/10.1145/2559978 


Yong Ge, Guofei Jiang, Min Ding, Hui Xiong; “Ranking Metric Anomaly in Invariant Networks,” ACM Transactions on Knowledge Discovery from Data (TKDD), Volume 8, Issue 2, June 2014, Article No. 8. doi:10.1145/2601436
Abstract: The management of large-scale distributed information systems relies on the effective use and modeling of monitoring data collected at various points in the distributed information systems. A traditional approach to model monitoring data is to discover invariant relationships among the monitoring data. Indeed, we can discover all invariant relationships among all pairs of monitoring data and generate invariant networks, where a node is a monitoring data source (metric) and a link indicates an invariant relationship between two monitoring data. Such an invariant network representation can help system experts to localize and diagnose the system faults by examining those broken invariant relationships and their related metrics, since system faults usually propagate among the monitoring data and eventually lead to some broken invariant relationships. However, at one time, there are usually a lot of broken links (invariant relationships) within an invariant network. Without proper guidance, it is difficult for system experts to manually inspect this large number of broken links. To this end, in this article, we propose the problem of ranking metrics according to the anomaly levels for a given invariant network, while this is a nontrivial task due to the uncertainties and the complex nature of invariant networks. Specifically, we propose two types of algorithms for ranking metric anomaly by link analysis in invariant networks. Along this line, we first define two measurements to quantify the anomaly level of each metric, and introduce the mRank algorithm. Also, we provide a weighted score mechanism and develop the gRank algorithm, which involves an iterative process to obtain a score to measure the anomaly levels. In addition, some extended algorithms based on mRank and gRank algorithms are developed by taking into account the probability of being broken as well as noisy links. 
Finally, we validate all the proposed algorithms on a large number of real-world and synthetic data sets to illustrate the effectiveness and efficiency of different algorithms.
Keywords: Metric anomaly ranking, invariant networks, link analysis (ID#: 15-6243)
URL:   http://doi.acm.org/10.1145/2601436
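A drastically simplified score in the spirit of this ranking problem (an assumed reduction for illustration, not the paper's mRank or gRank algorithms): rank each metric by the fraction of its invariant links that are observed broken, so metrics surrounded mostly by violated invariants rise to the top.

```python
def rank_anomalies(links, broken):
    """links: set of undirected (u, v) invariant pairs.
    broken: the subset of links observed violated.
    Returns metrics sorted by descending broken-link ratio."""
    degree, broken_deg = {}, {}
    for u, v in links:
        for node in (u, v):
            degree[node] = degree.get(node, 0) + 1
            broken_deg.setdefault(node, 0)
    for u, v in broken:
        broken_deg[u] += 1
        broken_deg[v] += 1
    scores = {n: broken_deg[n] / degree[n] for n in degree}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical monitoring metrics and invariants:
links = {("cpu", "latency"), ("cpu", "queue"),
         ("queue", "latency"), ("disk", "latency")}
broken = {("cpu", "latency"), ("cpu", "queue")}
ranking = rank_anomalies(links, broken)   # "cpu" ranks first
```

The paper's mRank and gRank algorithms refine this intuition with two complementary measurements, a weighted scoring mechanism, and an iterative propagation step that accounts for uncertainty and noisy links.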

 
Hwidong Na, Jong-Hyeok Lee; “Linguistic Analysis of Non-ITG Word Reordering Between Language Pairs with Different Word Order Typologies,” ACM Transactions on Asian Language Information Processing (TALIP), Volume 13, Issue 3, September 2014, Article No. 11. doi:10.1145/2644810
Abstract: The Inversion Transduction Grammar (ITG) constraints have been widely used for word reordering in machine translation studies. They are, however, so restricted that some types of word reordering cannot be handled properly. We analyze three corpora between SVO and SOV languages: Chinese-Korean, English-Japanese, and English-Korean. In our analysis, sentences that require non-ITG word reordering are manually categorized. We also report the results for two quantitative measures that reveal the significance of non-ITG word reordering. In conclusion, we suggest that ITG constraints are insufficient to deal with word reordering in real situations.
Keywords: Machine translation, corpus analysis, inversion transduction grammar (ID#: 15-6244)
URL:  http://doi.acm.org/10.1145/2644810


Klaas-Jan Stol, Paris Avgeriou, Muhammad Ali Babar, Yan Lucas, Brian Fitzgerald; “Key Factors for Adopting Inner Source,” ACM Transactions on Software Engineering and Methodology (TOSEM),
Volume 23 Issue 2, March 2014, Article No. 18. doi:10.1145/2533685
Abstract: A number of organizations have adopted Open Source Software (OSS) development practices to support or augment their software development processes, a phenomenon frequently referred to as Inner Source. However, the adoption of Inner Source is not a straightforward issue. Many organizations are struggling with the question of whether Inner Source is an appropriate approach to software development for them in the first place. This article presents a framework derived from the literature on Inner Source, which identifies nine important factors that need to be considered when implementing Inner Source. The framework can be used as a probing instrument to assess an organization on these nine factors so as to gain an understanding of whether or not Inner Source is suitable. We applied the framework in three case studies at Philips Healthcare, Neopost Technologies, and Rolls-Royce, which are all large organizations that have either adopted Inner Source or were planning to do so. Based on the results presented in this article, we outline directions for future research.
Keywords: Case study, framework, inner source, open-source development practices (ID#: 15-6245)
URL:  http://doi.acm.org/10.1145/2533685

 
M. Unterkalmsteiner, R. Feldt, T. Gorschek; “A Taxonomy for Requirements Engineering and Software Test Alignment,” ACM Transactions on Software Engineering and Methodology (TOSEM) , Volume 23, Issue 2, March 2014, Article No. 16. doi:10.1145/2523088
Abstract: Requirements engineering and software testing are mature areas and have seen a lot of research. Nevertheless, their interactions have been sparsely explored beyond the concept of traceability. To fill this gap, we propose a definition of requirements engineering and software test (REST) alignment, a taxonomy that characterizes the methods linking the respective areas, and a process to assess alignment. The taxonomy can support researchers to identify new opportunities for investigation, as well as practitioners to compare alignment methods and evaluate alignment, or lack thereof. We constructed the REST taxonomy by analyzing alignment methods published in literature, iteratively validating the emerging dimensions. The resulting concept of an information dyad characterizes the exchange of information required for any alignment to take place. We demonstrate use of the taxonomy by applying it on five in-depth cases and illustrate angles of analysis on a set of thirteen alignment methods. In addition, we developed an assessment framework (REST-bench), applied it in an industrial assessment, and showed that it, with a low effort, can identify opportunities to improve REST alignment. Although we expect that the taxonomy can be further refined, we believe that the information dyad is a valid and useful construct to understand alignment.
Keywords: Alignment, software process assessment, software testing, taxonomy (ID#: 15-6246)
URL:  http://doi.acm.org/10.1145/2523088


Federico Mari, Igor Melatti, Ivano Salvo, Enrico Tronci; “Model-Based Synthesis of Control Software from System-Level Formal Specifications,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 23, Issue 1, February 2014, Article No. 6. doi:10.1145/2559934
Abstract: Many embedded systems are indeed software-based control systems, that is, control systems whose controller consists of control software running on a microcontroller device. This motivates investigation on formal model-based design approaches for automatic synthesis of embedded systems control software. We present an algorithm, along with a tool QKS implementing it, that from a formal model (as a discrete-time linear hybrid system) of the controlled system (plant), implementation specifications (that is, number of bits in the Analog-to-Digital, AD, conversion) and system-level formal specifications (that is, safety and liveness requirements for the closed loop system) returns correct-by-construction control software that has a Worst-Case Execution Time (WCET) linear in the number of AD bits and meets the given specifications. We show feasibility of our approach by presenting experimental results on using it to synthesize control software for a buck DC-DC converter, a widely used mixed-mode analog circuit, and for the inverted pendulum.
Keywords: Hybrid systems, correct-by-construction control software synthesis, model-based design of control software (ID#: 15-6247)
URL:  http://doi.acm.org/10.1145/2559934


Dilan Sahin, Marouane Kessentini, Slim Bechikh, Kalyanmoy Deb; “Code-Smell Detection as a Bilevel Problem,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 24, Issue 1, September 2014, Article No. 6. doi:10.1145/2675067
Abstract: Code smells represent design situations that can affect the maintenance and evolution of software. They make the system difficult to evolve. Code smells are detected, in general, using quality metrics that represent some symptoms. However, the selection of suitable quality metrics is challenging due to the absence of consensus in identifying some code smells based on a set of symptoms and also the high calibration effort in determining manually the threshold value for each metric. In this article, we propose treating the generation of code-smell detection rules as a bilevel optimization problem. Bilevel optimization problems represent a class of challenging optimization problems, which contain two levels of optimization tasks. In these problems, only the optimal solutions to the lower-level problem become possible feasible candidates to the upper-level problem. In this sense, the code-smell detection problem can be treated as a bilevel optimization problem, but due to lack of suitable solution techniques, it has been attempted to be solved as a single-level optimization problem in the past. In our adaptation here, the upper-level problem generates a set of detection rules, a combination of quality metrics, which maximizes the coverage of the base of code-smell examples and artificial code smells generated by the lower level. The lower level maximizes the number of generated artificial code smells that cannot be detected by the rules produced by the upper level. The main advantage of our bilevel formulation is that the generation of detection rules is not limited to some code-smell examples identified manually by developers that are difficult to collect, but it allows the prediction of new code-smell behavior that is different from those of the base of examples. 
The statistical analysis of our experiments over 31 runs on nine open-source systems and one industrial project shows that seven types of code smells were detected with an average of more than 86% in terms of precision and recall. The results confirm the outperformance of our bilevel proposal compared to state-of-art code-smell detection techniques. The evaluation performed by software engineers also confirms the relevance of detected code smells to improve the quality of software systems.
Keywords: Search-based software engineering, code smells, software quality (ID#: 15-6248)
URL:  http://doi.acm.org/10.1145/2675067
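A toy sketch of the adversarial interplay between the two levels (heavily simplified and purely illustrative — the paper uses evolutionary search over rule sets of multiple quality metrics, not this single-threshold encoding): the upper level picks a detection threshold covering all smell examples seen so far, while the lower level proposes an artificial smell that the current rule misses, forcing the upper level to re-optimize.

```python
def lower_level(t, lo=0):
    """Adversary: return an artificial smell value the rule (metric >= t)
    misses, i.e. a value just below the threshold, if one exists."""
    candidate = t - 1
    return candidate if lo <= candidate < t else None

def upper_level(examples, rounds=5, clean_floor=40):
    """Alternating optimization: tighten the threshold until the adversary
    can only propose values that are genuinely clean code (below the floor)."""
    pool = list(examples)
    t = min(pool)                       # detect all known smell examples
    for _ in range(rounds):
        miss = lower_level(t)
        if miss is None or miss < clean_floor:
            break                       # adversary can no longer cheat
        pool.append(miss)               # treat the miss as a new example
        t = min(pool)                   # re-optimize the upper level
    return t

threshold = upper_level([70, 85, 90])   # metric values of known smells
```

The value of the bilevel formulation, as the abstract notes, is that detection rules generalize beyond the manually collected examples because the lower level keeps generating smell-like cases the current rules fail on.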

 
Eric Yuan, Naeem Esfahani, Sam Malek; “A Systematic Survey of Self-Protecting Software Systems,” ACM Transactions on Autonomous and Adaptive Systems (TAAS), Volume 8 Issue 4, January 2014, Article No. 17. doi:10.1145/2555611
Abstract: Self-protecting software systems are a class of autonomic systems capable of detecting and mitigating security threats at runtime. They are growing in importance, as the stovepipe static methods of securing software systems have been shown to be inadequate for the challenges posed by modern software systems. Self-protection, like other self-* properties, allows the system to adapt to the changing environment through autonomic means without much human intervention, and can thereby be responsive, agile, and cost effective. While existing research has made significant progress towards autonomic and adaptive security, gaps and challenges remain. This article presents a significant extension of our preliminary study in this area. In particular, unlike our preliminary study, here we have followed a systematic literature review process, which has broadened the scope of our study and strengthened the validity of our conclusions. By proposing and applying a comprehensive taxonomy to classify and characterize the state-of-the-art research in this area, we have identified key patterns, trends, and challenges in the existing approaches, revealing a number of opportunities that will shape the focus of future research efforts.
Keywords: Self-protection, adaptive security, autonomic computing, self-* properties, self-adaptive systems (ID#: 15-6249)
URL:  http://doi.acm.org/10.1145/2555611


Juan De Lara, Esther Guerra, Jesús Sánchez Cuadrado; “When and How to Use Multilevel Modelling,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 24 Issue 2, December 2014, Article No. 12. doi:10.1145/2685615
Abstract: Model-Driven Engineering (MDE) promotes models as the primary artefacts in the software development process, from which code for the final application is derived. Standard approaches to MDE (like those based on MOF or EMF) advocate a two-level metamodelling setting where Domain-Specific Modelling Languages (DSMLs) are defined through a metamodel that is instantiated to build models at the metalevel below.  Multilevel modelling (also called deep metamodelling) extends the standard approach to metamodelling by enabling modelling at an arbitrary number of metalevels, not necessarily two. Proposers of multilevel modelling claim this leads to simpler model descriptions in some situations, although its applicability has been scarcely evaluated. Thus, practitioners may find it difficult to discern when to use it and how to implement multilevel solutions in practice.  In this article, we discuss those situations where the use of multilevel modelling is beneficial, and identify recurring patterns and idioms. Moreover, in order to assess how often the identified patterns arise in practice, we have analysed a wide range of existing two-level DSMLs from different sources and domains, to detect when their elements could be rearranged in more than two metalevels. The results show this scenario is not uncommon and, in some application domains (like software architecture and enterprise/process modelling), even pervasive, with a high average number of pattern occurrences per metamodel.
Keywords: Model-driven engineering, domain-specific modelling languages, metamodelling, metamodelling patterns, multilevel modeling (ID#: 15-6250)
URL: http://doi.acm.org/10.1145/2685615


Gerwin Klein, June Andronick, Kevin Elphinstone, Toby Murray, Thomas Sewell, Rafal Kolanski, Gernot Heiser; “Comprehensive Formal Verification of an OS Microkernel,” ACM Transactions on Computer Systems (TOCS), Volume 32 Issue 1, February 2014, Article No. 2. doi:10.1145/2560537
Abstract: We present an in-depth coverage of the comprehensive machine-checked formal verification of seL4, a general-purpose operating system microkernel.  We discuss the kernel design we used to make its verification tractable. We then describe the functional correctness proof of the kernel’s C implementation and we cover further steps that transform this result into a comprehensive formal verification of the kernel: a formally verified IPC fastpath, a proof that the binary code of the kernel correctly implements the C semantics, a proof of correct access-control enforcement, a proof of information-flow noninterference, a sound worst-case execution time analysis of the binary, and an automatic initialiser for user-level systems that connects kernel-level access-control enforcement with reasoning about system behaviour. We summarise these results and show how they integrate to form a coherent overall analysis, backed by machine-checked, end-to-end theorems. The seL4 microkernel is currently not just the only general-purpose operating system kernel that is fully formally verified to this degree. It is also the only example of formal proof of this scale that is kept current as the requirements, design and implementation of the system evolve over almost a decade. We report on our experience in maintaining this evolving formally verified code base.
Keywords: Isabelle/HOL, L4, microkernel, operating systems, seL4 (ID#: 15-6251)
URL:  http://doi.acm.org/10.1145/2560537


Kai Pan, Xintao Wu, Tao Xie; “Guided Test Generation for Database Applications via Synthesized Database Interactions,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 23, Issue 2, March 2014, Article No. 12. doi:10.1145/2491529
Abstract: Testing database applications typically requires the generation of tests consisting of both program inputs and database states. Recently, a testing technique called Dynamic Symbolic Execution (DSE) has been proposed to reduce manual effort in test generation for software applications. However, applying DSE to generate tests for database applications faces various technical challenges. For example, the database application under test needs to physically connect to the associated database, which may not be available for various reasons. The program inputs whose values are used to form the executed queries are not treated symbolically, posing difficulties for generating valid database states or appropriate database states for achieving high coverage of query-result-manipulation code. To address these challenges, in this article, we propose an approach called SynDB that synthesizes new database interactions to replace the original ones from the database application under test. In this way, we bridge various constraints within a database application: query-construction constraints, query constraints, database schema constraints, and query-result-manipulation constraints. We then apply a state-of-the-art DSE engine called Pex for .NET from Microsoft Research to generate both program inputs and database states. The evaluation results show that tests generated by our approach can achieve higher code coverage than existing test generation approaches for database applications.
Keywords: Automatic test generation, database application testing, dynamic symbolic execution, synthesized database interactions (ID#: 15-6252)
URL:  http://doi.acm.org/10.1145/2491529


Akshay Dua, Nirupama Bulusu, Wu-Chang Feng, Wen Hu; “Combating Software and Sybil Attacks to Data Integrity in Crowd-Sourced Embedded Systems,” ACM Transactions on Embedded Computing Systems (TECS) - Special Issue on Risk and Trust in Embedded Critical Systems, Special Issue on Real-Time, Embedded and Cyber-Physical Systems, Special Issue on Virtual Prototyping of Parallel and Embedded Systems (ViPES), Volume 13 Issue 5s, November 2014, Article No. 154. doi:10.1145/2629338
Abstract: Crowd-sourced mobile embedded systems allow people to contribute sensor data, for critical applications, including transportation, emergency response and eHealth. Data integrity becomes imperative as malicious participants can launch software and Sybil attacks modifying the sensing platform and data. To address these attacks, we develop (1) a Trusted Sensing Peripheral (TSP) enabling collection of high-integrity raw or aggregated data, and participation in applications requiring additional modalities; and (2) a Secure Tasking and Aggregation Protocol (STAP) enabling aggregation of TSP trusted readings by untrusted intermediaries, while efficiently detecting fabricators. Evaluations demonstrate that TSP and STAP are practical and energy-efficient.
Keywords: Trust, critical systems, crowd-sourced sensing, data integrity, embedded systems, mobile computing, security (ID#: 15-6253)
URL: http://doi.acm.org/10.1145/2629338


Robert M. Hierons; “Combining Centralised and Distributed Testing,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 24 Issue 1, September 2014, Article No. 5. doi:10.1145/2661296
Abstract: Many systems interact with their environment at distributed interfaces (ports) and sometimes it is not possible to place synchronised local testers at the ports of the system under test (SUT). There are then two main approaches to testing: having independent local testers or a single centralised tester that interacts asynchronously with the SUT. The power of using independent testers has been captured using implementation relation dioco. In this article, we define implementation relation diococ for the centralised approach and prove that dioco and diococ are incomparable. This shows that the frameworks detect different types of faults and so we devise a hybrid framework and define an implementation relation diocos for this. We prove that the hybrid framework is more powerful than the distributed and centralised approaches. We then prove that the Oracle problem is NP-complete for diococ and diocos but can be solved in polynomial time if we place an upper bound on the number of ports. Finally, we consider the problem of deciding whether there is a test case that is guaranteed to force a finite state model into a particular state or to distinguish two states, proving that both problems are undecidable for the centralised and hybrid frameworks.
Keywords: Centralised testing, distributed testing, model-based testing (ID#: 15-6254)
URL: http://doi.acm.org/10.1145/2661296


Ming Xia, Yabo Dong, Wenyuan Xu, Xiangyang Li, Dongming Lu; “MC2: Multimode User-Centric Design of Wireless Sensor Networks for Long-Term Monitoring,” ACM Transactions on Sensor Networks (TOSN), Volume 10, Issue 3, April 2014, Article No. 52. doi:10.1145/2509856
Abstract: Real-world, long-running wireless sensor networks (WSNs) require intense user intervention in the development, hardware testing, deployment, and maintenance stages. A majority of network design is network centric and focuses primarily on network performance, for example, efficient sensing and reliable data delivery. Although several tools have been developed to assist debugging and fault diagnosis, the heavy burden that users face throughout the lifetime of WSNs has yet to be systematically examined. In this article, we propose a general Multimode user-CentriC (MC2) framework that can, with simple user inputs, adjust itself to assist user operation and thus reduce the users’ burden at various stages. In particular, we have identified utilities that are essential at each stage and grouped them into modes. In each mode, only the corresponding utilities will be loaded, and modes can be easily switched using the customized MC2 sensor platform. As such, we reduce the runtime interference between various utilities and simplify their development as well as their debugging. We validated our MC2 software and the sensor platform in a long-lived microclimate monitoring system deployed at a wildland heritage site, Mogao Grottoes. In our current system, 241 sensor nodes have been deployed in 57 caves, and the network has been running for over five years. Our experimental validation shows that the MC2 framework shortens the time for network deployment and maintenance, and makes network maintenance doable by field experts (in our case, historians).
Keywords: MC2 framework, Wireless sensor networks, user-centric design (ID#: 15-6255)
URL: http://doi.acm.org/10.1145/2509856  


Lihua Huang, Sulin Ba, Xianghua Lu; “Building Online Trust in a Culture of Confucianism: The Impact of Process Flexibility and Perceived Control,” ACM Transactions on Management Information Systems (TMIS), Volume 5, Issue 1, April 2014, Article No. 4.  doi:10.1145/2576756
Abstract: The success of e-commerce companies in a Confucian cultural context takes more than advanced IT and process design that have proven successful in Western countries. The example of eBay’s failure in China indicates that earning the trust of Chinese consumers is essential to success, yet the process of building that trust requires something different from that in the Western culture. This article attempts to build a theoretical model to explore the relationship between the Confucian culture and online trust. We introduce two new constructs, namely process flexibility and perceived control, as particularly important factors in online trust formation in the Chinese cultural context. A survey was conducted to test the proposed theoretical model. This study offers a new explanation for online trust formation in the Confucian context. The findings of this article can provide guidance for companies hoping to successfully navigate the Chinese online market in the future.
Keywords: Confucianism, culture, e-commerce, online market, perceived control, process flexibility, trust (ID#: 15-6256)
URL:  http://doi.acm.org/10.1145/2576756


Amit Zoran, Roy Shilkrot, Suranga Nanyakkara, Joseph Paradiso; “The Hybrid Artisans: A Case Study in Smart Tools,” ACM Transactions on Computer-Human Interaction (TOCHI), Volume 21, Issue 3, June 2014, Article No. 15.  doi:10.1145/2617570
Abstract: We present an approach to combining digital fabrication and craft, demonstrating a hybrid interaction paradigm where human and machine work in synergy. The FreeD is a hand-held digital milling device, monitored by a computer while preserving the maker’s freedom to manipulate the work in many creative ways. Relying on a pre-designed 3D model, the computer steps in only when the milling bit risks the object’s integrity, preventing damage by slowing down the spindle speed, while the rest of the time it allows complete gestural freedom. We present the technology and explore several interaction methodologies for carving. In addition, we present a user study that reveals how synergetic cooperation between human and machine preserves the expressiveness of manual practice. This quality of the hybrid territory evolves into design personalization. We conclude by discussing the creative potential of open-ended procedures within this hybrid interactive territory of manual smart tools and devices.
Keywords: (not provided) (ID#: 15-6257)
URL:  http://doi.acm.org/10.1145/2617570
 




Software Assurance, 2014, Part 2

SoS Logo

Software Assurance, 2014

Part 2


Software assurance is an essential element in the development of scalable and composable systems. For a complete system to be secure, each subassembly must be secure. The research work cited here was presented in 2014.



Luis Angel D. Bathen, Nikil D. Dutt; “Embedded RAIDs-on-Chip for Bus-Based Chip-Multiprocessors,” ACM Transactions on Embedded Computing Systems (TECS) — Regular Papers, Volume 13, Issue 4, November 2014, Article No. 83. doi:10.1145/2533316
Abstract: The dual effects of larger die sizes and technology scaling, combined with aggressive voltage scaling for power reduction, increase the error rates for on-chip memories. Traditional on-chip memory reliability techniques (e.g., ECC) incur significant power and performance overheads. In this article, we propose a low-power-and-performance-overhead Embedded RAID (E-RAID) strategy and present Embedded RAIDs-on-Chip (E-RoC), a distributed dynamically managed reliable memory subsystem for bus-based Chip-Multiprocessors. E-RoC achieves reliability through redundancy by optimizing RAID-like policies tuned for on-chip distributed memories. We achieve on-chip reliability of memories through the use of Distributed Dynamic ScratchPad Allocatable Memories (DSPAMs) and their allocation policies. We exploit aggressive voltage scaling to reduce power consumption overheads due to parallel DSPAM accesses, and rely on the E-RoC Manager to automatically handle any resulting voltage-scaling-induced errors. We demonstrate how E-RAIDs can further enhance the fault tolerance of traditional memory reliability approaches by designing E-RAID levels that exploit ECC. Finally, we demonstrate the power and flexibility of the E-RoC concept by showing the benefits of heterogeneous E-RAID levels that fit each application’s needs (fault tolerance, power/energy, performance). Our experimental results on CHStone/Mediabench II benchmarks show that our E-RAID levels converge to 100% error-free data rates much faster than traditional ECC approaches. Moreover, E-RAID levels that exploit ECC can guarantee 99.9% error-free data rates at ultra-low Vdd on average, whereas traditional ECC approaches were able to attain at most 99.1% error-free data rates.
We observe an average of 22% dynamic power consumption increase by using traditional ECC approaches with respect to the baseline (non-voltage-scaled SPMs), whereas our E-RAID levels are able to save dynamic power consumption by an average of 27% (w.r.t. the same non-voltage-scaled SPMs baseline), while incurring worst-case 2% higher performance overheads than traditional ECC approaches. By voltage scaling the memories, we see that traditional ECC approaches are able to save static energy by 6.4% (average), whereas our E-RAID approaches achieve 23.4% static energy savings (average). Finally, we observe that mixing E-RAID levels allows us to further reduce the dynamic power consumption by up to 55.5% at the cost of an average 5.6% increase in execution time over traditional approaches.
Keywords: Information assurance, chip-multiprocessors, embedded systems, policy, scratchpad memory, security, virtualization (ID#: 15-6258)
URL: http://doi.acm.org/10.1145/2533316
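The general redundancy idea behind E-RoC's RAID-like policies can be illustrated with a much simpler mechanism than the authors' E-RAID levels: majority-voting each byte across three replica copies of a memory block, which masks any single corrupted replica. This Python sketch is purely illustrative and is not the paper's construction.

```python
from collections import Counter

def majority_read(replicas):
    """Return the bytewise majority across three replica copies.

    A single corrupted replica (e.g., a voltage-scaling-induced bit
    flip) is outvoted by the two intact copies, column by column.
    """
    out = bytearray()
    for col in zip(*replicas):           # one column of bytes per position
        out.append(Counter(col).most_common(1)[0][0])
    return bytes(out)

good = b"\x10\x20\x30"
bad  = b"\x10\xff\x30"                   # middle byte corrupted
assert majority_read([good, good, bad]) == good
```

The trade-off this models is the one the abstract describes: redundancy (here, 3x storage) buys error-free reads at aggressively scaled voltages where per-copy error rates rise.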


Haibo Zeng, Marco Di Natale, Qi Zhu; “Minimizing Stack and Communication Memory Usage in Real-Time Embedded Applications,” ACM Transactions on Embedded Computing Systems (TECS) - Special Issue on Risk and Trust in Embedded Critical Systems, Special Issue on Real-Time, Embedded and Cyber-Physical Systems, Special Issue on Virtual Prototyping of Parallel and Embedded Systems (ViPES), Volume 13, Issue 5s, November 2014, Article No. 149. doi:10.1145/2632160
Abstract: In the development of real-time embedded applications, especially those on systems-on-chip, an efficient use of RAM memory is as important as the effective scheduling of the computation resources. The protection of communication and state variables accessed by concurrent tasks must provide real-time schedulability guarantees while using the least amount of memory. Several schemes, including preemption thresholds, have been developed to improve schedulability and save stack space by selectively disabling preemption. However, the design synthesis problem is still open. In this article, we target the assignment of the scheduling parameters to minimize memory usage for systems of practical interest, including designs compliant with automotive standards. We propose algorithms either proven optimal or shown to improve on randomized optimization methods like simulated annealing.
Keywords: Preemption threshold scheduling, data synchronization mechanism, memory usage, stack requirement (ID#: 15-6259)
URL: http://doi.acm.org/10.1145/2632160 


Peng Li, Debin Gao, Michael K. Reiter; “StopWatch: A Cloud Architecture for Timing Channel Mitigation,” ACM Transactions on Information and System Security (TISSEC), Volume 17, Issue 2, November 2014, Article No. 8. doi:10.1145/2670940
Abstract: This article presents StopWatch, a system that defends against timing-based side-channel attacks that arise from coresidency of victims and attackers in infrastructure-as-a-service clouds. StopWatch triplicates each cloud-resident guest virtual machine (VM) and places replicas so that the three replicas of a guest VM are coresident with nonoverlapping sets of (replicas of) other VMs. StopWatch uses the timing of I/O events at a VM’s replicas collectively to determine the timings observed by each one or by an external observer, so that observable timing behaviors are similarly likely in the absence of any other individual, coresident VMs. We detail the design and implementation of StopWatch in Xen, evaluate the factors that influence its performance, demonstrate its advantages relative to alternative defenses against timing side channels with commodity hardware, and address the problem of placing VM replicas in a cloud under the constraints of StopWatch so as to still enable adequate cloud utilization.
Keywords: Timing channels, clouds, replication, side channels, virtualization (ID#: 15-6260)
URL: http://doi.acm.org/10.1145/2670940 


Wei Hu, Dejun Mu, Jason Oberg, Baolei Mao, Mohit Tiwari, Timothy Sherwood, Ryan Kastner; “Gate-Level Information Flow Tracking for Security Lattices,” ACM Transactions on Design Automation of Electronic Systems (TODAES), Volume 20, Issue 1, November 2014, Article No. 2. doi:10.1145/2676548
Abstract: High-assurance systems found in safety-critical infrastructures are facing steadily increasing cyber threats. These critical systems require rigorous guarantees in information flow security to prevent confidential information from leaking to an unclassified domain and the root of trust from being violated by an untrusted party. To enforce bit-tight information flow control, gate-level information flow tracking (GLIFT) has recently been proposed to precisely measure and manage all digital information flows in the underlying hardware, including implicit flows through hardware-specific timing channels. However, existing work in this realm either is restricted to two-level security labels or targets only two-input primitive gates and a few simple multilevel security lattices. This article provides a general way to expand the GLIFT method for multilevel security. Specifically, it formalizes tracking logic for an arbitrary Boolean gate under finite security lattices, presents a precise tracking logic generation method for eliminating false positives in GLIFT logic created in a constructive manner, and illustrates application scenarios of GLIFT for enforcing multilevel information flow security. Experimental results show various trade-offs in precision and performance of GLIFT logic created using different methods. The results also reveal the area and performance overheads that should be expected when expanding GLIFT for multilevel security.
Keywords: High-assurance system, formal method, gate-level information flow tracking, hardware security, multilevel security, security lattice (ID#: 15-6261)
URL:  http://doi.acm.org/10.1145/2676548 
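GLIFT attaches shadow "tracking logic" to each gate so a label propagates only when a tainted input can actually affect the output. The standard two-level (tainted/untainted) tracking logic for a two-input AND gate makes this concrete; the multilevel-lattice generalization in the article is more involved, and the function name below is illustrative only.

```python
def and_gate_glift(a, ta, b, tb):
    """Two-input AND gate with its GLIFT shadow logic (two-level labels).

    a, b  : data bits (0 or 1)
    ta, tb: taint bits (1 = tainted)

    The output is tainted only if a tainted input can influence it.
    An untainted 0 on one input forces the output to 0, so taint on
    the other input does not propagate in that case.
    """
    out = a & b
    t_out = (ta & b) | (tb & a) | (ta & tb)
    return out, t_out

# Untainted b == 0 forces the output: tainted 'a' cannot leak through.
print(and_gate_glift(1, 1, 0, 0))  # -> (0, 0)
# With b == 1, tainted 'a' determines the output, so taint propagates.
print(and_gate_glift(1, 1, 1, 0))  # -> (1, 1)
```

This is why GLIFT is more precise than naive label-ORing, which would (falsely) mark the first case as tainted.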


Huang-Ming Huang, Christopher Gill, Chenyang Lu; “Implementation and Evaluation of Mixed-Criticality Scheduling Approaches for Sporadic Tasks,” ACM Transactions on Embedded Computing Systems (TECS) - Special Issue on Real-Time and Embedded Technology and Applications, Domain-Specific Multicore Computing, Cross-Layer Dependable Embedded Systems, and Application of Concurrency to System Design (ACSD), Volume 13, Issue 4s, July 2014, Article No. 126. doi:10.1145/2584612
Abstract: Traditional fixed-priority scheduling analysis for periodic and sporadic task sets is based on the assumption that all tasks are equally critical to the correct operation of the system. Therefore, every task has to be schedulable under the chosen scheduling policy, and estimates of tasks’ worst-case execution times must be conservative in case a task runs longer than is usual. To address the significant underutilization of a system’s resources under normal operating conditions that can arise from these assumptions, several mixed-criticality scheduling approaches have been proposed. However, to date, there have been few quantitative comparisons of system schedulability or runtime overhead for the different approaches.  In this article, we present a side-by-side implementation and evaluation of the known mixed-criticality scheduling approaches, for periodic and sporadic mixed-criticality tasks on uniprocessor systems, under a mixed-criticality scheduling model that is common to all these approaches. To make a fair evaluation of mixed-criticality scheduling, we also address previously open issues and propose modifications to improve particular approaches. Our empirical evaluations demonstrate that user-space implementations of mechanisms to enforce different mixed-criticality scheduling approaches can be achieved atop Linux without kernel modification, with reasonably low (but in some cases nontrivial) overhead for mixed-criticality real-time task sets.
Keywords: Real-time systems, mixed-criticality scheduling (ID#: 15-6262)
URL:  http://doi.acm.org/10.1145/2584612 


Dionisio De Niz, Lutz Wrage, Anthony Rowe, Ragunathan (Raj) Rajkumar; “Utility-Based Resource Overbooking for Cyber-Physical Systems,” ACM Transactions on Embedded Computing Systems (TECS) - Special Issue on Risk and Trust in Embedded Critical Systems, Special Issue on Real-Time, Embedded and Cyber-Physical Systems, Special Issue on Virtual Prototyping of Parallel and Embedded Systems (ViPES), Volume 13, Issue 5s, November 2014, Article No. 162. doi:10.1145/2660497
Abstract: Traditional hard real-time scheduling algorithms require the use of worst-case execution times to guarantee that deadlines will be met. Unfortunately, many algorithms with parameters derived from sensing the physical world, such as visual recognition tasks, suffer large variations in execution time, leading to pessimistic overall utilization. In this article, we present ZS-QRAM, a scheduling approach that enables the use of flexible execution times and application-derived task utilities in order to maximize total system utility. In particular, we provide a detailed description of the algorithm, formal proofs of its temporal protection, and a detailed evaluation. Our evaluation uses Utility Degradation Resilience (UDR), showing that ZS-QRAM is able to obtain 4× as much UDR as ZSRM, a previous overbooking approach, and almost 2× as much UDR as Rate-Monotonic with Period Transformation (RM/TP). We then evaluate a Linux kernel module implementation of our scheduler on an Unmanned Air Vehicle (UAV) platform. We show that, by using our approach, we are able to keep the tasks that render the most utility by degrading lower-utility ones, even in the presence of highly dynamic execution times.
Keywords: Real-time scheduling, mixed-criticality systems, quality of service, unmanned aerial vehicles, utility functions (ID#: 15-6263)
URL:  http://doi.acm.org/10.1145/2660497 


William Enck, Peter Gilbert, Seungyeop Han, Vasant Tendulkar, Byung-Gon Chun, Landon P. Cox, Jaeyeon Jung, Patrick McDaniel, Anmol N. Sheth; “TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones,” ACM Transactions on Computer Systems (TOCS), Volume 32, Issue 2, June 2014,  Article No. 5. doi:10.1145/2619091
Abstract: Today’s smartphone operating systems frequently fail to provide users with visibility into how third-party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android’s virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU-bound microbenchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications in our 2010 study, we found that 20 of them potentially misused users’ private information; a similar fraction of the applications tested in our 2012 study did so as well. Monitoring the flow of privacy-sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.
Keywords: Information-flow tracking, mobile apps, privacy monitoring, smartphones (ID#: 15-6264)
URL: http://doi.acm.org/10.1145/2619091 
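The core mechanism, dynamic taint tracking, can be sketched at a far coarser granularity than TaintDroid's Dalvik-level implementation: labels attached at a source propagate through computation and are checked at a sink. Everything below (class and function names, the sample IMEI) is a hypothetical illustration, not TaintDroid's API.

```python
class Tainted:
    """Toy tainted value: carries a set of source labels with the data."""
    def __init__(self, value, labels=frozenset()):
        self.value = value
        self.labels = frozenset(labels)

    def __add__(self, other):
        # Taint propagates through computation: the result carries the
        # union of both operands' labels.
        if isinstance(other, Tainted):
            return Tainted(self.value + other.value, self.labels | other.labels)
        return Tainted(self.value + other, self.labels)

def network_send(data):
    """A taint sink: flag sensitive labels about to leave the device."""
    if isinstance(data, Tainted) and data.labels:
        return f"ALERT: leaking {sorted(data.labels)}"
    return "ok"

imei = Tainted("356938035643809", {"IMEI"})   # taint source
payload = imei + "&os=android"                # taint propagates
print(network_send(payload))                  # -> ALERT: leaking ['IMEI']
```

TaintDroid performs this propagation inside the VM interpreter at variable, method, message, and file granularity, which is what keeps the overhead low.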


Reinhard Schneider, Dip Goswami, Samarjit Chakraborty, Unmesh Bordoloi, Petru Eles, Zebo Peng; “Quantifying Notions of Extensibility in FlexRay Schedule Synthesis,” ACM Transactions on Design Automation of Electronic Systems (TODAES), Volume 19 Issue 4, August 2014, Article No. 32. doi:10.1145/2647954
Abstract: FlexRay has now become a well-established in-vehicle communication bus at most original equipment manufacturers (OEMs) such as BMW, Audi, and GM. Given the increasing cost of verification and the high degree of crosslinking between components in automotive architectures, an incremental design process is commonly followed. In order to incorporate FlexRay-based designs in such a process, the resulting schedules must be extensible, that is: (i) when messages are added in later iterations, they must preserve deadline guarantees of already scheduled messages, and (ii) they must accommodate as many new messages as possible without changes to existing schedules. Not only has extensible scheduling received little attention so far, but the traditional metrics used to quantify extensibility cannot be trivially adapted to FlexRay schedules, because they do not exploit specific properties of the FlexRay protocol. In this article we, for the first time, introduce new notions of extensibility for FlexRay that capture all the protocol-specific properties. In particular, we focus on the dynamic segment of FlexRay and we present a number of metrics to quantify extensible schedules. Based on the introduced metrics, we propose strategies to synthesize extensible schedules and compare the results of different scheduling algorithms. We demonstrate the applicability of the results with industrial-size case studies and also show that the proposed metrics may be visually represented, thereby allowing for easy interpretation.
Keywords: FlexRay, automotive, extensibility, schedule synthesis (ID#: 15-6265)
URL: http://doi.acm.org/10.1145/2647954 


Bin Ren, Todd Mytkowicz, Gagan Agrawal; “A Portable Optimization Engine for Accelerating Irregular Data-Traversal Applications on SIMD Architectures,” ACM Transactions on Architecture and Code Optimization (TACO), Volume 11, Issue 2, June 2014, Article No. 16. doi:10.1145/2632215
Abstract: Fine-grained data parallelism is increasingly common in the form of longer vectors integrated with mainstream processors (SSE, AVX) and various GPU architectures. This article develops support for exploiting such data parallelism for a class of nonnumeric, nongraphic applications, which perform computations while traversing many independent, irregular data structures. We address this problem by developing several novel techniques. First, for code generation, we develop an intermediate language for specifying such traversals, followed by a runtime scheduler that maps traversals to various SIMD units. Second, we observe that good data locality is crucial to sustained performance from SIMD architectures, whereas many applications that operate on irregular data structures (e.g., trees and graphs) have poor data locality. To address this challenge, we develop a set of data layout optimizations that improve spatial locality for applications that traverse many irregular data structures. Unlike prior data layout optimizations, our approach incorporates a notion of both interthread and intrathread spatial reuse into data layout. Finally, we enable performance portability (i.e., the ability to automatically optimize applications for different architectures) by accurately modeling the impact of inter- and intrathread locality on program performance. As a consequence, our model can predict which data layout optimization to use on a wide variety of SIMD architectures.  To demonstrate the efficacy of our approach and optimizations, we first show how they enable up to a 12X speedup on one SIMD architecture for a set of real-world applications. To demonstrate that our approach enables performance portability, we show how our model predicts the optimal layout for applications across a diverse set of three real-world SIMD architectures, which offers as much as 45% speedup over a suboptimal solution.
Keywords: Irregular data structure, SIMD, fine-grained parallelism (ID#: 15-6266)
URL:  http://doi.acm.org/10.1145/2632215 


Ghassan O. Karame, Aurélien Francillon, Victor Budilivschi, Srdjan Čapkun, Vedran Čapkun; “Microcomputations as Micropayments in Web-based Services,” ACM Transactions on Internet Technology (TOIT), Volume 13, Issue 3, May 2014, Article No. 8. doi:10.1145/2611526
Abstract: In this article, we propose a new micropayment model for nonspecialized commodity web-services based on microcomputations. In our model, a user that wishes to access online content (offered by a website) does not need to register or pay to access the website; instead, he will accept to run microcomputations on behalf of the service provider in exchange for access to the content. These microcomputations can, for example, support ongoing computing projects that have clear social benefits (e.g., projects relating to medical research) or can contribute towards commercial computing projects. We analyze the security and privacy of our proposal and we show that it preserves the privacy of users. We argue that this micropayment model is economically and technically viable and that it can be integrated in existing distributed computing frameworks (e.g., the BOINC platform). In this respect, we implement a prototype of a system based on our model and we deploy our prototype on Amazon Mechanical Turk to evaluate its performance and usability given a large number of users. Our results show that our proposed scheme does not affect the browsing experience of users and is likely to be used by a non-trivial proportion of users. Finally, we empirically show that our scheme incurs comparable bandwidth and CPU consumption to the resource usage incurred by online advertisements featured in popular websites.
Keywords: Distributed computing, Monetization, microcomputations, micropayments, privacy (ID#: 15-6267)
URL:  http://doi.acm.org/10.1145/2611526 


David Basin, Cas Cremers; “Know Your Enemy: Compromising Adversaries in Protocol Analysis,” ACM Transactions on Information and System Security (TISSEC), Volume 17, Issue 2, November 2014,  Article No. 7. doi:10.1145/2658996
Abstract: We present a symbolic framework, based on a modular operational semantics, for formalizing different notions of compromise relevant for the design and analysis of cryptographic protocols. The framework’s rules can be combined to specify different adversary capabilities, capturing different practically-relevant notions of key and state compromise. The resulting adversary models generalize the models currently used in different domains, such as security models for authenticated key exchange. We extend an existing security-protocol analysis tool, Scyther, with our adversary models. This extension systematically supports notions such as weak perfect forward secrecy, key compromise impersonation, and adversaries capable of state-reveal queries. Furthermore, we introduce the concept of a protocol-security hierarchy, which classifies the relative strength of protocols against different adversaries.  In case studies, we use Scyther to analyse protocols and automatically construct protocol-security hierarchies in the context of our adversary models. Our analysis confirms known results and uncovers new attacks. Additionally, our hierarchies refine and correct relationships between protocols previously reported in the cryptographic literature.
Keywords: Security protocols, adversary models, automated analysis, threat models (ID#: 15-6268)
URL:  http://doi.acm.org/10.1145/2658996 


Songqing Chen, Lei Liu, Xinyuan Wang, Xinwen Zhang, Zhao Zhang; “A Host-Based Approach for Unknown Fast-Spreading Worm Detection and Containment,” ACM Transactions on Autonomous and Adaptive Systems (TAAS) - Special Section on Best Papers from SEAMS 2012, Volume 8, Issue 4, January 2014, Article No. 21. doi:10.1145/2555615
Abstract: The fast-spreading worm, which immediately propagates itself after a successful infection, is becoming one of the most serious threats to today’s networked information systems. In this article, we present WormTerminator, a host-based solution for fast Internet worm detection and containment that uses virtual machine techniques and builds on this defining characteristic of fast worms. In WormTerminator, a virtual machine cloning the host OS runs in parallel to the host OS. Thus, the virtual machine has the same set of vulnerabilities as the host. Any outgoing traffic from the host is diverted through the virtual machine. If the outgoing traffic from the host is for fast worm propagation, the virtual machine should be infected and will exhibit the worm propagation pattern very quickly, because a fast-spreading worm will start to propagate as soon as it successfully infects a host. To prove the concept, we have implemented a prototype of WormTerminator and have examined its effectiveness against the real Internet worm Linux/Slapper. Our empirical results confirm that WormTerminator is able to completely contain worm propagation in real time without blocking any non-worm traffic. The major performance cost of WormTerminator is a one-time delay to the start of each outgoing normal connection for worm detection. To reduce the performance overhead, caching is utilized, through which WormTerminator will delay no more than 6% of normal outgoing traffic for such detection on average.
Keywords: WormTerminator, polymorphic worms, virtual machine, worm containment, zero-day worms (ID#: 15-6269)
URL:  http://doi.acm.org/10.1145/2555615 


Min Y. Mun, Donnie H. Kim, Katie Shilton, Deborah Estrin, Mark Hansen, Ramesh Govindan; “PDVLoc: A Personal Data Vault for Controlled Location Data Sharing,” ACM Transactions on Sensor Networks (TOSN), Volume 10, Issue 4, June 2014, Article No. 58. doi:10.1145/2523820
Abstract: Location-Based Mobile Service (LBMS) is one of the most popular smartphone services. LBMS enables people to more easily connect with each other and analyze the aspects of their lives. However, sharing location data can leak people’s privacy. We present PDVLoc, a controlled location data-sharing framework based on selectively sharing data through a Personal Data Vault (PDV). A PDV is a privacy architecture in which individuals retain ownership of their data. Data are routinely filtered before being shared with content-service providers, and users or data custodian services can participate in making controlled data-sharing decisions. Introducing PDVLoc gives users flexible and granular access control over their location data. We have implemented a prototype of PDVLoc and evaluated it using real location-sharing social networking applications, Google Latitude and Foursquare. Our user study of 19 participants over 20 days shows that most users find that PDVLoc is useful to manage and control their location data, and are willing to continue using PDVLoc.
Keywords: Location-based mobile service, personal data vault, privacy, selective sharing, system (ID#: 15-6270)
URL: http://doi.acm.org/10.1145/2523820 
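The vault's filtering idea can be sketched simply: a location fix is transformed according to a per-recipient policy before it is released to a content-service provider. The function names and policy strings below are hypothetical, not PDVLoc's actual API.

```python
def coarsen_location(lat, lon, decimals=2):
    """Reduce precision before sharing; 2 decimals is roughly 1 km."""
    return round(lat, decimals), round(lon, decimals)

def share(fix, policy):
    """Apply a per-recipient policy to a location fix before release."""
    if policy == "deny":
        return None                      # withhold entirely
    if policy == "coarse":
        return coarsen_location(*fix)    # degrade precision
    return fix                           # "exact": release as-is

home = (37.774929, -122.419416)
print(share(home, "coarse"))  # -> (37.77, -122.42)
```

The point of the vault architecture is that this filtering runs under the user's (or a custodian's) control, before any provider sees the data.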


Wolf-Bastian Pöttner, Hans Seidel, James Brown, Utz Roedig, Lars Wolf; “Constructing Schedules for Time-Critical Data Delivery in Wireless Sensor Networks,” ACM Transactions on Sensor Networks (TOSN), Volume 10, Issue 3, April 2014, Article No. 44. doi:10.1145/2494528
Abstract: Wireless sensor networks for industrial process monitoring and control require highly reliable and timely data delivery. To match performance requirements, specialised schedule-based medium access control (MAC) protocols are employed. In order to construct an efficient system, it is necessary to find a schedule that can support the given application requirements in terms of data delivery latency and reliability. Furthermore, additional requirements such as transmission power may have to be taken into account when constructing the schedule. In this article, we show how such a schedule can be constructed. We describe methods and tools to collect the data necessary as input for schedule calculation. Moreover, due to the high complexity of schedule calculation, we also introduce a heuristic. We evaluate the proposed methods in a real-world process automation and control application deployed in an oil refinery and further present a long-term experiment in an office environment. Additionally, we discuss a framework for schedule life-cycle management.
Keywords: Reliability, WSAN, WSN, schedule, scheduling, timeliness, wireless sensor and actor network, wireless sensor network (ID#: 15-6271)
URL: http://doi.acm.org/10.1145/2494528 


Michael Sirivianos, Kyungbaek Kim, Jian Wei Gan, Xiaowei Yang; “Leveraging Social Feedback to Verify Online Identity Claims,” ACM Transactions on the Web (TWEB), Volume 8, Issue 2, March 2014, Article No. 9. doi:10.1145/2543711
Abstract: Anonymity is one of the main virtues of the Internet, as it protects privacy and enables users to express opinions more freely. However, anonymity hinders the assessment of the veracity of assertions that online users make about their identity attributes, such as age or profession. We propose FaceTrust, a system that uses online social networks to provide lightweight identity credentials while preserving a user’s anonymity. FaceTrust employs a “game with a purpose” design to elicit the opinions of the friends of a user about the user’s self-claimed identity attributes, and uses attack-resistant trust inference to assign veracity scores to identity attribute assertions. FaceTrust provides credentials, which a user can use to corroborate his assertions. We evaluate our proposal using a live Facebook deployment and simulations on a crawled social graph. The results show that our veracity scores are strongly correlated with the ground truth, even when dishonest users make up a large fraction of the social network and employ the Sybil attack.
Keywords: (not provided) (ID#: 15-6272)
URL: http://doi.acm.org/10.1145/2543711  

 
Mingqiang Li, Patrick P. C. Lee; “STAIR Codes: A General Family of Erasure Codes for Tolerating Device and Sector Failures,” ACM Transactions on Storage (TOS) - Special Issue on Usenix Fast 2014, Volume 10, Issue 4, October 2014, Article No. 14. doi:10.1145/2658991
Abstract: Practical storage systems often adopt erasure codes to tolerate device failures and sector failures, both of which are prevalent in the field. However, traditional erasure codes employ device-level redundancy to protect against sector failures, and hence incur significant space overhead. Recent sector-disk (SD) codes are available only for limited configurations. By making a relaxed but practical assumption, we construct a general family of erasure codes called STAIR codes, which efficiently and provably tolerate both device and sector failures without any restriction on the size of a storage array and the numbers of tolerable device failures and sector failures. We propose the upstairs encoding and downstairs encoding methods, which provide complementary performance advantages for different configurations. We conduct extensive experiments on STAIR codes in terms of space saving, encoding/decoding speed, and update cost. We demonstrate that STAIR codes not only improve space efficiency over traditional erasure codes, but also provide better computational efficiency than SD codes based on our special code construction. Finally, we present analytical models that characterize the reliability of STAIR codes, and show that the support of a wider range of configurations by STAIR codes is critical for tolerating sector failure bursts discovered in the field.
Keywords: Erasure codes, device failures, reliability analysis, sector failures (ID#: 15-6273)
URL:  http://doi.acm.org/10.1145/2658991 
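The full STAIR construction is well beyond a short sketch, but the erasure-coding idea the abstract builds on can be illustrated with the simplest possible code: a single XOR parity chunk that tolerates any one device failure. This toy is purely illustrative (the paper's codes tolerate configurable combinations of device and sector failures):

```python
# Minimal XOR-parity erasure code: tolerates the loss of any ONE
# chunk in a stripe. STAIR codes generalize far beyond this toy.

def encode(data_chunks):
    """Append a parity chunk equal to the XOR of all data chunks."""
    parity = bytes(len(data_chunks[0]))
    for chunk in data_chunks:
        parity = bytes(a ^ b for a, b in zip(parity, chunk))
    return data_chunks + [parity]

def recover(chunks, lost_index):
    """Rebuild the chunk at lost_index by XOR-ing all survivors."""
    survivors = [c for i, c in enumerate(chunks) if i != lost_index]
    rebuilt = bytes(len(survivors[0]))
    for chunk in survivors:
        rebuilt = bytes(a ^ b for a, b in zip(rebuilt, chunk))
    return rebuilt

stripe = encode([b"AAAA", b"BBBB", b"CCCC"])
assert recover(stripe, 1) == b"BBBB"   # any single lost chunk comes back
```

A sector failure is the case where only part of one chunk is lost while another whole device is already down; that is the regime where single parity fails and codes such as SD and STAIR are needed.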


Chia-Heng Tu, Hui-Hsin Hsu, Jen-Hao Chen, Chun-Han Chen, Shih-Hao Hung; “Performance and Power Profiling for Emulated Android Systems,” ACM Transactions on Design Automation of Electronic Systems (TODAES), Volume 19, Issue 2, March 2014, Article No. 10. doi:10.1145/2566660
Abstract: Simulation is a common approach for assisting system design and optimization. For system-wide optimization, energy and computational resources are often the two most critical issues. Monitoring the energy state of each hardware component and measuring the time spent in each state is needed for accurate energy and performance prediction. For software optimization, it is important to profile the energy and the time consumed by each software construct in a realistic operating environment with a proper workload. However, the conventional approaches of simulation often fail to produce satisfying data. First, building a cycle-accurate simulation environment for a complex system, such as an Android smartphone, is difficult and can take a long time. Second, a slow simulation can significantly alter the behavior of multithreaded, I/O-intensive applications and can affect the accuracy of profiles. Third, existing software-based profilers generally do not work on simulators, which makes it difficult for performance analysis of complicated software, for example, Java applications executed by the Dalvik VM in an Android system.  To address these aforementioned problems, we proposed and prototyped a framework, called virtual performance analyzer (VPA). VPA takes advantage of an existing emulator or virtual machine monitor to reduce the complexity of building a simulator. VPA allows the user to selectively and incrementally integrate timing models and power models into the emulator with our carefully designed performance/power monitors, tracing facility, and profiling tools to evaluate and analyze the emulated system. The emulated system can perform at different levels of speed to help verify if the profile data are impacted by the emulation speed. Finally, VPA supports existing software-based profiles and enables non-intrusive tracing/profiling by minimizing the probe effect. 
Our experimental results show that the VPA framework allows users to quickly establish a performance/power evaluation environment and gather useful information to support system design and software optimization for Android smartphones.
Keywords: Android system emulation, full system emulation, performance profiling, performance tracing, power model, timing estimation (ID#: 15-6274)
URL:  http://doi.acm.org/10.1145/2566660 


Amit Chakrabarti, Graham Cormode, Andrew Mcgregor, Justin Thaler; “Annotations in Data Streams,” ACM Transactions on Algorithms (TALG), Volume 11, Issue 1, October 2014, Article No. 7. doi:10.1145/2636924
Abstract: The central goal of data stream algorithms is to process massive streams of data using sublinear storage space. Motivated by work in the database community on outsourcing database and data stream processing, we ask whether the space usage of such algorithms can be further reduced by enlisting a more powerful “helper” that can annotate the stream as it is read. We do not wish to blindly trust the helper, so we require that the algorithm be convinced of having computed a correct answer. We show upper bounds that achieve a nontrivial tradeoff between the amount of annotation used and the space required to verify it. We also prove lower bounds on such tradeoffs, often nearly matching the upper bounds, via notions related to Merlin-Arthur communication complexity. Our results cover the classic data stream problems of selection, frequency moments, and fundamental graph problems such as triangle-freeness and connectivity. Our work is also part of a growing trend—including recent studies of multipass streaming, read/write streams, and randomly ordered streams—of asking more complexity-theoretic questions about data stream processing. It is a recognition that, in addition to practical relevance, the data stream model raises many interesting theoretical questions in its own right.
Keywords: Interactive proof, annotations, data streams (ID#: 15-6275)
URL:  http://doi.acm.org/10.1145/2636924 
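One standard primitive behind such annotation schemes is a polynomial fingerprint, which lets a verifier check in constant space that an annotated stream (here, a sorted copy supplied by the untrusted "helper") is a permutation of the original. This is a toy of the general flavor, not the paper's protocol:

```python
import random

# Constant-space permutation check via polynomial fingerprints:
# prod(r - x) mod P agrees for two streams exactly when they are
# permutations of each other, except with probability <= n/P over r.

P = (1 << 61) - 1            # Mersenne prime modulus

def fingerprint(stream, r):
    """Evaluate prod(r - x) mod P one element at a time, in O(1) space."""
    fp = 1
    for x in stream:
        fp = fp * (r - x) % P
    return fp

original  = [7, 3, 3, 9, 1]
annotated = sorted(original)      # the untrusted helper's annotation

r = random.randrange(P)           # verifier's secret random point
assert fingerprint(original, r) == fingerprint(annotated, r)
```

Once the sorted copy is verified as a permutation, order-dependent queries such as selection become trivial to check, which is the kind of annotation/space tradeoff the paper quantifies.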


Andrew F. Tappenden, James Miller; “Automated Cookie Collection Testing,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 23, Issue 1, February 2014, Article No. 3. doi:10.1145/2559936
Abstract: Cookies are used by over 80% of Web applications utilizing dynamic Web application frameworks. Applications deploying cookies must be rigorously verified to ensure that the application is robust and secure. Given the intense time-to-market pressures faced by modern Web applications, testing strategies that are low cost and automatable are required. Automated Cookie Collection Testing (CCT) is presented, and is empirically demonstrated to be a low-cost and highly effective automated testing solution for modern Web applications. Automatable test oracles and evaluation metrics specifically designed for Web applications are presented, and are shown to be significant diagnostic tests. Automated CCT is shown to detect faults within five real-world Web applications. A case study of over 580 test results for a single application is presented demonstrating that automated CCT is an effective testing strategy. Moreover, CCT is found to detect security bugs in a Web application released into full production.
Keywords: Cookies, Web application testing, adaptive random testing, automated testing, software testing, test generation, test strategies (ID#: 15-6276)
URL:  http://doi.acm.org/10.1145/2559936 
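A hypothetical sketch of what cookie-mutation test generation in the spirit of CCT might look like (function and field names here are illustrative; the actual framework replays mutated requests against a live application and compares responses via automated oracles):

```python
# Illustrative cookie-mutation test-case generator: derive test
# cases from a collected cookie set by dropping or corrupting each
# cookie in turn. Names below are invented for this sketch.

def cookie_test_cases(cookies):
    """Yield (description, mutated-cookie-dict) pairs."""
    yield "baseline", dict(cookies)
    for name in cookies:
        dropped = {k: v for k, v in cookies.items() if k != name}
        yield f"drop {name}", dropped
        corrupted = dict(cookies)
        corrupted[name] = "X" * len(cookies[name])
        yield f"corrupt {name}", corrupted

session = {"sid": "abc123", "prefs": "dark"}
cases = list(cookie_test_cases(session))
# each mutated cookie set would be replayed against the application,
# and the response compared with the baseline by a test oracle
assert len(cases) == 1 + 2 * len(session)
```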
 




Taint Analysis, 2014

 

 

Taint analysis is an important method for analyzing software to determine possible paths for exploitation. As such, it relates to the problems of composability and metrics. The work cited here was published in 2014.
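The core mechanism can be sketched in a few lines (a toy tracker, not any of the tools cited below): each value carries a taint flag that is set at untrusted sources, propagated through operations, and checked at sensitive sinks.

```python
# Toy dynamic taint tracker, for illustration only.

class Tainted:
    def __init__(self, value, tainted=False):
        self.value, self.tainted = value, tainted

    def __add__(self, other):
        # propagation rule: a result is tainted if any operand was
        return Tainted(self.value + other.value,
                       self.tainted or other.tainted)

def sink(v):
    """A sensitive sink, e.g. an SQL query or a system() call."""
    if v.tainted:
        raise ValueError("tainted data reached a sensitive sink")
    return v.value

query = Tainted("SELECT name FROM users WHERE id=")
user_input = Tainted("0; DROP TABLE users", tainted=True)  # attacker-controlled

assert sink(query) == "SELECT name FROM users WHERE id="
try:
    sink(query + user_input)       # a possible exploitation path
    assert False
except ValueError:
    pass                           # the tainted flow was detected
```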



Yadegari, B.; Debray, S., “Bit-Level Taint Analysis,” Source Code Analysis and Manipulation (SCAM), 2014 IEEE 14th International Working Conference on, vol., no., pp. 255, 264, 28-29 Sept. 2014. doi:10.1109/SCAM.2014.43
Abstract: Taint analysis has a wide variety of applications in software analysis, making the precision of taint analysis an important consideration. Current taint analysis algorithms, including previous work on bit-precise taint analyses, suffer from shortcomings that can lead to significant loss of precision (under/over tainting) in some situations. This paper discusses these limitations of existing taint analysis algorithms, shows how they can lead to imprecise taint propagation, and proposes a generalization of current bit-level taint analysis techniques to address these problems and improve their precision. Experiments using a deobfuscation tool indicate that our enhanced taint analysis algorithm leads to significant improvements in the quality of deobfuscation.
Keywords: data flow analysis; bit-level taint analysis; bit-precise taint analysis; deobfuscation tool; software analysis; taint analysis algorithms; taint propagation; Algorithm design and analysis; Data handling; Heuristic algorithms; Performance analysis; Registers; Semantics; Standards; Program Understanding; Reverse Engineering; Taint Analysis (ID#: 15-)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975659&isnumber=6975619 


Junhyoung Kim; TaeGuen Kim; Eul Gyu Im, “Survey of Dynamic Taint Analysis,” Network Infrastructure and Digital Content (IC-NIDC), 2014 4th IEEE International Conference on, vol., no., pp. 269, 272, 19-21 Sept. 2014. doi:10.1109/ICNIDC.2014.7000307
Abstract: Dynamic taint analysis (DTA) analyzes execution paths that an attacker may use to exploit a system. It is a method to analyze executable files by tracing information flow without source code: DTA marks certain inputs to a program as tainted, and then propagates taint to values computed from tainted inputs. Due to the increased popularity of dynamic taint analysis, there have been several recent research approaches that provide a generalized tainting infrastructure. In this paper, we introduce and analyze some approaches to dynamic taint analysis. Lam and Chiueh proposed a method that instruments code to perform taint marking and propagation. DYTAN considers three dimensions: taint sources, propagation policies, and taint sinks; these dimensions make DYTAN a more general framework for dynamic taint analysis. DTA++ extends vanilla dynamic taint analysis by propagating additional taint along targeted control dependencies, since control dependencies decrease the accuracy of taint analysis. To improve accuracy, DTA++ showed that data transformations containing implicit flows should propagate taint properly to avoid under-tainting.
Keywords: data flow analysis; security of data; system monitoring; DTA++; DYTAN; attacker; control dependency; data transformation; dynamic taint analysis; executable files; execution paths; generalized tainting infrastructure; information flow tracing; propagation policies; taint marking; taint propagation; taint sink; taint source; Accuracy; Computer security; Instruments; Performance analysis; Software; Testing; dynamic taint analysis (ID#: 15-6636)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7000307&isnumber=7000253
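The under-tainting problem that DTA++ targets fits in a few lines: a value copied through a control dependency alone carries no explicit data flow, so a data-flow-only tracker misses the influence. An illustrative sketch (not DTA++ itself):

```python
# Why vanilla DTA under-taints: the return value below is fully
# determined by the tainted input, yet no tainted VALUE ever flows
# into it -- only a control dependency does.

def classify(secret_bit):          # secret_bit is the tainted input
    if secret_bit == 1:            # control dependency only
        leaked = 1                 # constant assignment: no data flow
    else:
        leaked = 0
    return leaked                  # equals secret_bit, but untainted

assert classify(1) == 1 and classify(0) == 0
# A data-flow-only tracker would mark `leaked` as clean; DTA++
# proposes propagating taint along such targeted control
# dependencies to close the gap.
```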


Jinxin Ma; Puhan Zhang; Guowei Dong; Shuai Shao; Jiangxiao Zhang, “TWalker: An Efficient Taint Analysis Tool,” Information Assurance and Security (IAS), 2014 10th International Conference on, vol., no., pp. 18, 22, 28-30 Nov. 2014. doi:10.1109/ISIAS.2014.7064628
Abstract: The taint analysis method is usually effective for vulnerability detection. Existing works mostly care about the accuracy of taint propagation without considering time cost. We propose a novel method to improve the efficiency of taint propagation with indices. Based on our method, we have implemented TWalker, an effective vulnerability detection tool that enables easy data flow analysis of real-world programs, providing faster taint analysis than existing works. TWalker has four properties: first, it works directly on programs without source code; second, it monitors the program's execution and records its necessary context; third, it delivers fine-grained taint analysis, providing fast taint propagation with indices; fourth, it detects vulnerabilities effectively based on two security property rules. We have evaluated TWalker with several real-world programs and compared it with a typical taint analysis tool. The experimental results show that our tool performs taint propagation much faster than the other tool and has better ability to detect vulnerabilities.
Keywords: data flow analysis; security of data; TWalker; efficient taint analysis tool; fine grained taint analysis; program execution; security property rules; taint propagation; vulnerabilities detection; Context; Indexes; Monitoring; Software; indices; security property; taint analysis; trace (ID#: 15-6637)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7064628&isnumber=7064614


Lokhande, B.; Dhavale, S., “Overview of Information Flow Tracking Techniques Based on Taint Analysis for Android,” Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, vol., no., pp. 749, 753, 5-7 March 2014. doi:10.1109/IndiaCom.2014.6828062
Abstract: Smartphones today are a ubiquitous source of sensitive information, and instances of information leakage on smartphones are on the rise because of exponential growth in the smartphone market. Android is the most widely used operating system on smartphones, and many information flow tracking and information leakage detection techniques have been developed for it. Taint analysis is a commonly used data flow analysis technique that tracks the flow of sensitive information and its leakage. This paper provides an overview of existing information flow tracking techniques based on taint analysis for Android applications. It is observed that static analysis techniques look at the complete program code and all possible execution paths before a run, whereas dynamic analysis looks at the instructions executed during a program run in real time. We provide an in-depth analysis of both static and dynamic taint analysis approaches.
Keywords: Android (operating system); data flow analysis; smart phones; Android; Information leakage instances; data flow analysis technique; dynamic analysis; dynamic taint analysis approaches; exponential smartphone market growth; information flow tracking techniques; information leakage detection techniques; program code; program-run; static analysis techniques; static taint analysis approaches; Androids; Humanoid robots; Operating systems; Privacy; Real-time systems; Security; Smart phones; Android Operating System; Mobile Security; static and dynamic taint analysis (ID#: 15-6638)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828062&isnumber=6827395


Gen Li; Ying Zhang; Shuang-xi Wang; Kai Lu, “Online Taint Propagation Analysis with Precise Pointer-to Analysis for Detecting Bugs in Binaries,” High Performance Computing and Communications, 2014 IEEE 6th Int’l Symposium on Cyberspace Safety and Security, 2014 IEEE 11th Int’l Conference on Embedded Software and Systems (HPCC,CSS,ICESS), 2014 IEEE International Conference on, vol., no., pp. 778, 784, 20-22 Aug. 2014. doi:10.1109/HPCC.2014.130
Abstract: The dynamic test generation approach is becoming increasingly popular for finding security vulnerabilities in software, and is applied to detect bugs in binaries. However, existing systems adopt offline symbolic analysis and execution, based on a program execution trace that includes the flow of executed instructions and the operand values, with all pointers or indirect memory accesses replaced by their execution values. This yields two fatal problems: first, all symbolic information about pointers or indirect memory accesses is missing; second, the symbolic information of other variables is inaccurate, especially for variables operated on with pointers. We propose an approach, online taint propagation analysis, for finding fatal bugs in pre-release binary software, and implement a systematic automatic dynamic test generation system, Hunter, for binary software testing. Our system implements accurate analysis by online taint propagation analysis and online byte-precise points-to analysis, thus finding online the unknown high-priority fatal bugs that must be fixed immediately at the pre-release stage. The effectiveness of the approach is validated by revealing many fatal bugs in both benchmarks and large real-world applications.
Keywords: program debugging; program testing; security of data; Hunter; automatic dynamic test generation system; binary software testing; bugs detection; dynamic test generation approach; online byte-precise point-to analysis; online taint propagation analysis; program execution trace; software security vulnerability; symbolic analysis; symbolic execution; Benchmark testing; Binary codes; Computer bugs; Layout; Security; Software; online byte-precise point-to analysis; symbolic taint analysis; taint-oriented online analysis (ID#: 15-6639)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056832&isnumber=7056577


Zhang Puhan; Wu Jianxiong; Wang Xin; Zehui Wu, “Program Crash Analysis Based on Taint Analysis,” P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), 2014 Ninth International Conference on, vol., no., pp. 492, 498, 8-10 Nov. 2014. doi:10.1109/3PGCIC.2014.100
Abstract: Software exception analysis can not only improve software stability before commercial release, but can also help prioritize subsequent patch updates. We propose a more practical software exception analysis approach based on taint analysis, from the viewpoint of whether an exception of the software can be exploited by an attacker. It first identifies the type of exception, then performs taint analysis on the trace between the program entry point and the exception point, recording taint information for memory and registers. It finally gives the result by integrating these recordings with analysis of the subsequent instructions. We implement this approach in our exception analysis framework, ExpTracer, and evaluate it with exploitable and unexploitable exceptions, which shows that our approach is more accurate in identifying exceptions compared with current tools.
Keywords: exception handling; program diagnostics; ExpTracer; exception point; exploitable exceptions; memory set; patch update priority optimization; program crash analysis; program entry point; registers; software exception analysis; software stability; subsequent instruction analysis; taint analysis; unexploitable exceptions; Algorithm design and analysis; Computer crashes; Instruments; Optimization; Personnel; Registers; Software; Software engineering; crash analysis; exception classification (ID#: 15-6640)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024634&isnumber=7024297


Ping Wang; Wun Jie Chao; Kuo-Ming Chao; Chi-Chun Lo, “Using Taint Analysis for Threat Risk of Cloud Applications,” e-Business Engineering (ICEBE), 2014 IEEE 11th International Conference on, vol., no., pp. 185, 190, 5-7 Nov. 2014. doi:10.1109/ICEBE.2014.40
Abstract: Most existing approaches to developing cloud applications using threat analysis involve program vulnerability analyses for identifying the security holes associated with malware attacks. New malware attacks can evade firewall-based detection by bypassing stack protection and by using Hypertext Transfer Protocol logging, kernel hacks, and library hack techniques against cloud applications. In performing threat analysis for unspecified malware attacks, software engineers can use a taint analysis technique for tracking information flows between attack sources (malware) and detecting vulnerabilities of targeted network applications. This paper proposes a threat risk analysis model incorporating an improved attack tree analysis scheme for solving the mobile security problem. In the model, Android programs perform taint checking to analyse the risks posed by suspicious applications. In probabilistic risk analysis, defence evaluation metrics are used for each attack path, assisting a defender to simulate the results of malware attacks and estimate the impact losses. Finally, a case of threat analysis of a typical cyber security attack is presented to demonstrate the proposed approach.
Keywords: Android (operating system); firewalls; hypermedia; invasive software; mobile computing; program diagnostics; risk analysis; trees (mathematics); Android programs; attack sources; cloud applications; cyber security attack; defence evaluation metrics; firewall-based detection; hypertext transfer protocol logging; improved attack tree analysis scheme; information flow tracking; kernel hacks; library hack techniques; malware attacks; mobile security problem; probabilistic risk analysis; program vulnerability analysis; security holes; software engineers; stack protection; taint analysis technique; taint checking; threat analysis; threat risk analysis model; Analytical models; Malware; Measurement; Probabilistic logic; Risk analysis; Software; Attack defence tree; Cyber attacks; Taint checking; Threat; analysis (ID#: 15-6641)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6982078&isnumber=6982037
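As a toy illustration of the attack tree evaluation that such a risk model builds on (node structure and probabilities below are made up for the sketch): OR nodes take the most likely child attack, while AND nodes require every step of a chain to succeed.

```python
# Toy attack-tree evaluation: OR = max of child success
# probabilities, AND = product. Illustrative only.

def success_prob(node):
    kind = node["kind"]
    if kind == "leaf":
        return node["p"]
    probs = [success_prob(c) for c in node["children"]]
    if kind == "OR":
        return max(probs)
    if kind == "AND":
        p = 1.0
        for q in probs:
            p *= q
        return p
    raise ValueError("unknown node kind: " + kind)

tree = {"kind": "OR", "children": [
    {"kind": "leaf", "p": 0.2},                 # e.g. bypass firewall
    {"kind": "AND", "children": [               # e.g. kernel-hack chain
        {"kind": "leaf", "p": 0.5},
        {"kind": "leaf", "p": 0.8},
    ]},
]}
assert success_prob(tree) == 0.4                # max(0.2, 0.5 * 0.8)
```

Defence evaluation then amounts to re-running the computation with the probabilities (or costs) that a countermeasure changes, and comparing expected losses.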


Meijian Li; Wang, Yongjun; Xie, Peidai; Zhijian Huang, “Reverse Analysis of Secure Communication Protocol Based on Taint Analysis,” Communications Security Conference (CSC 2014), 2014, vol., no., pp. 1, 8, 22-24 May 2014. doi:10.1049/cp.2014.0729
Abstract: To maintain communication confidentiality, security protocols are widely used in a growing number of network applications. Moreover, some malware even leverages such protocols to evade inspection by IDS. Most security protocols are designed and verified by formal methods; however, observation shows that protocol implementations commonly contain flaws or vulnerabilities. Therefore, research on reverse engineering of security protocols can play an important role in improving the security of network applications, especially by providing another way to fight malware. Nevertheless, previous protocol reverse engineering technologies, which are based on analysis of network traces, encounter great challenges when the network messages transmitted between different protocol principals are encrypted. This paper proposes a taint analysis based method that aims to infer the message format from dynamic execution of security protocol applications. The proposed approach is based on the observation that the process of message parsing in cryptographic protocol applications reveals rich information about the hierarchical structure and semantics of their messages. Hence, by observing library function calls and instruction execution in network programs, the proposed approach can derive a large amount of information about the protocol, such as the message format and protocol model, even when the communication is encrypted. Experiments show that the reverse analysis results not only accurately identify message fields, but also unveil the structure of encrypted message fields.
Keywords: Dynamic-Binary-Analysis; Protocol-Format; Protocol-Reverse-Engineering; Security-Protocol; Taint-Analysis (ID#: 15-6642)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6992222&isnumber=6919880


Ki-Jin Eom; Choong-Hyun Choi; Joon-Young Paik; Eun-Sun Cho, “An Efficient Static Taint-Analysis Detecting Exploitable-Points on ARM Binaries,” Reliable Distributed Systems (SRDS), 2014 IEEE 33rd International Symposium on, vol., no., pp. 345, 346, 6-9 Oct. 2014. doi:10.1109/SRDS.2014.66
Abstract: This paper aims to differentiate benign vulnerabilities from those used by cyber-attacks, based on STA (Static Taint Analysis). To achieve this goal, the proposed STA determines whether a crash stems from a severe vulnerability, after analyzing the related exploitable points in ARM binaries. We envision that the proposed analysis would reduce the complexity of analysis by making use of CPA (Constant Propagation Analysis) and runtime information of crash points.
Keywords: program diagnostics; security of data; ARM binaries; CPA; STA; benign vulnerabilities; constant propagation analysis; cyber-attacks; exploitable-points detection; runtime information; static taint-analysis; Reliability; ARM binary; IDA Pro plug-in; crash point; data flow analysis; exploitable; reverse engineering; taint Analysis (ID#: 15-6643)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6983415&isnumber=6983362
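Constant Propagation Analysis, which the authors combine with taint analysis, can be sketched over toy three-address statements (a minimal illustration, not the paper's ARM-binary analysis): any operand whose value is already known to be constant is folded, so fewer locations need expensive tracking later.

```python
# Toy constant propagation over (dest, operand, op, operand)
# statements. Operands are ints or variable names; anything that
# depends on a non-constant (e.g. run-time input) stays unresolved.

def propagate_constants(stmts):
    env = {}                                   # variable -> known constant
    def val(x):
        return env.get(x, x) if isinstance(x, str) else x
    for dest, a, op, b in stmts:
        a, b = val(a), val(b)
        if isinstance(a, int) and isinstance(b, int):
            env[dest] = a + b if op == "+" else a * b
    return env

env = propagate_constants([
    ("x", 2, "+", 3),          # x = 5
    ("y", "x", "*", 4),        # y = 20, since x is now known
    ("z", "y", "+", "input"),  # not constant: depends on run-time input
])
assert env == {"x": 5, "y": 20}
```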


Schutte, J.; Titze, D.; de Fuentes, J.M., “AppCaulk: Data Leak Prevention by Injecting Targeted Taint Tracking into Android Apps,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 370, 379, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.48
Abstract: As Android is entering the business domain, leaks of business-critical and personal information through apps become major threats. Due to the context-insensitive nature of the Android permission model, information flow policies cannot be enforced by on-board mechanisms. We therefore propose AppCaulk, an approach to harden any existing Android app by injecting a targeted dynamic taint analysis, which tracks and blocks unwanted information flows at runtime. Critical data flows are first discovered using a static taint analysis and the relevant data propagation paths are instrumented by a taint tracking code at register level. At runtime the dynamic taint analysis woven into the app detects and blocks data leaks as they are about to occur. In contrast to existing taint analysis approaches like Taint droid, AppCaulk does not require modification of the Android middleware and can thus be applied to any stock Android installation. In this paper, we explain the design of AppCaulk, describe the evaluation of its prototype, and compare its effectiveness with Taintdroid.
Keywords: Android (operating system); authorisation; middleware; Android apps; Android middleware; AppCaulk; Taintdroid; business domain; business-critical information leaks; context-insensitive Android permission model; critical data flows; data leak blockage; data leak detection; data leak prevention; data propagation paths; dynamic taint analysis; information flow blockage; information flow policies; information flow tracking; personal information leaks; register level; static taint analysis; stock Android installation; taint tracking code; targeted dynamic taint tracking analysis; Androids; Humanoid robots; Instruments; Middleware; Registers; Runtime; Smart phones; Android; information flow; instrumentation; taint analysis (ID#: 15-6644)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011272&isnumber=7011202


Jun Cai; Shangfei Yang; Jinquan Men; Jun He, “Automatic Software Vulnerability Detection Based on Guided Deep Fuzzing,” Software Engineering and Service Science (ICSESS), 2014 5th IEEE International Conference on, vol., no., pp. 231, 234, 27-29 June 2014. doi:10.1109/ICSESS.2014.6933551
Abstract: Software security has become a very important part of information security in recent years. Fuzzing has proven successful in finding software vulnerabilities, which are one major cause of information security incidents. However, the efficiency of traditional fuzz testing tools is usually very poor due to the blindness of test generation. In this paper, we present Sword, an automatic fuzzing system for software vulnerability detection, which combines fuzzing with symbolic execution and taint analysis techniques to tackle this problem. Sword first uses symbolic execution to collect program execution paths and their corresponding constraints, then uses taint analysis to check these paths; the most dangerous paths, which are most likely to lead to vulnerabilities, are then fuzzed more deeply. Thus, with the guidance of symbolic execution and taint analysis, Sword generates the test cases most likely to trigger potential vulnerabilities lying deep in applications.
Keywords: program diagnostics; program testing; security of data; Sword; automatic fuzzing system; automatic software vulnerability detection; guided deep fuzzing; information security; software security; symbolic execution; taint analysis technique; Databases; Engines; Information security; Monitoring; Software; Software testing; fuzzing; software vulnerability detection; taint analysis (ID#: 15-6645)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933551&isnumber=6933501
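The fuzzing half of such a system reduces to a plain mutation loop; a minimal sketch follows (the symbolic-execution and taint guidance that make Sword's fuzzing "deep" are not shown, and the toy target is invented for illustration):

```python
import random

# Minimal mutation fuzzer: mutate a seed input, run the target,
# collect inputs that raise. Illustrative only.

def mutate(seed, rng):
    data = bytearray(seed)
    data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(target, seed, rounds=1000, rng=None):
    rng = rng or random.Random(0)      # fixed seed for reproducibility
    crashes = []
    for _ in range(rounds):
        case = mutate(seed, rng)
        try:
            target(case)
        except Exception:
            crashes.append(case)
    return crashes

def toy_target(data):                  # "crashes" on a magic first byte
    if data[:1] == b"\xff":
        raise RuntimeError("boom")

crashes = fuzz(toy_target, b"hello")
assert all(c[:1] == b"\xff" for c in crashes)
```

Blind mutation like this rarely satisfies deep path constraints (e.g. a 4-byte magic value), which is exactly why Sword adds symbolic execution to solve the constraints and taint analysis to rank the paths worth fuzzing.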


Wei Lin; Jinlong Fei; Yuefei Zhu; Xiaolong Shi, “A Method of Multiple Encryption and Sectional Encryption Protocol Reverse Engineering,” Computational Intelligence and Security (CIS), 2014 Tenth International Conference on, vol., no., pp. 420, 424, 15-16 Nov. 2014. doi:10.1109/CIS.2014.114
Abstract: Research on reverse engineering of unknown network protocols is of great significance in many network security applications. Currently, most methods are limited to analyzing plain-text protocols, and the few methods that can partly analyze encrypted protocols are powerless against multiple encryption or sectional encryption protocols. This paper proposes a method of encrypted protocol reverse engineering based on dynamic taint analysis. The method uses Pin to record executed instructions, and then conducts off-line analysis of the data dependencies to build two taint propagation graphs, at the instruction and function levels, and recover the decryption process. The decrypted plaintext can be located using features of the decryption process, and the protocol format can then be parsed. Experiments show that the method accurately locates the decrypted protocol data of multiple encryption and sectional encryption protocols, and restores the original format.
Keywords: computer network security; cryptographic protocols; reverse engineering; Pin; data dependencies; decryption process feature; dynamic taint analysis; encryption protocol reverse engineering; executed instructions; function level; instruction level; network security applications; offline analysis; plain-text protocols; plaintext decryption process; sectional encryption protocol; taint propagation graphs; unknown network protocol reverse engineering; Encryption; Flow graphs; Memory management; Protocols; Reverse engineering; decryption process recovering Introduction; multiple encryption; sectional encryption (ID#: 15-6646)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7016930&isnumber=7016831


Rawat, S.; Mounier, L.; Potet, M.-L., “LiSTT: An Investigation into Unsound-Incomplete Yet Practical Result Yielding Static Taintflow Analysis,” Availability, Reliability and Security (ARES), 2014 Ninth International Conference on, vol., no., pp. 498, 505, 8-12 Sept. 2014. doi:10.1109/ARES.2014.74
Abstract: Vulnerability analysis is an important component of software assurance practices. One of its most challenging issues is to find software flaws that could be exploited by malicious users. A necessary condition is the existence of some tainted information flow between tainted input sources and vulnerable functions. Finding such a taint flow dynamically is an expensive and nondeterministic process. On the other hand, though static analysis may (theoretically) explore all tainted paths, scalability is an issue, especially in view of completeness and soundness. In this paper, we explore the possibility of making static analysis scalable by compromising its completeness and soundness properties while keeping it effective in detecting taint flows that lead to vulnerability exploitation. The technique is based on a combination of call graph slicing and data-flow analysis. A prototype tool has been developed, and we give experimental results showing that this approach is effective on large applications.
Keywords: data flow analysis; program testing; security of data; software fault tolerance; LiSTT; call graph slicing; complete properties; data-flow analysis; malicious users; security testing; software assurance practices; software flaws; soundness properties; static taintflow analysis; taint flows detection; tainted information flow; tainted input sources; tainted paths; vulnerability analysis; vulnerable functions; Binary codes; Complexity theory; Context; Scalability; Security; Software; Testing; Security testing (assurance); binary code; program chopping; static taint analysis (ID#: 15-6647)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980324&isnumber=6980232
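The call-graph-slicing step can be illustrated with a toy graph: keep only the functions that are both reachable from a tainted input source and able to reach a vulnerable sink, so the costlier data-flow analysis runs on a much smaller slice. The graph and function names below are hypothetical:

```python
from collections import deque

# Toy call-graph slice between a taint source and a vulnerable sink.

call_graph = {                      # caller -> callees (hypothetical)
    "main": ["read_input", "parse"],
    "read_input": ["parse"],
    "parse": ["copy_field"],
    "copy_field": ["strcpy"],       # strcpy: the vulnerable sink
    "log": [],
}

def reachable(graph, start):
    """All nodes reachable from start via BFS."""
    seen, work = {start}, deque([start])
    while work:
        for callee in graph.get(work.popleft(), []):
            if callee not in seen:
                seen.add(callee)
                work.append(callee)
    return seen

# build the reversed graph so "can reach the sink" is a forward search
nodes = set(call_graph) | set(sum(call_graph.values(), []))
inverse = {n: [f for f in call_graph if n in call_graph[f]] for n in nodes}

forward  = reachable(call_graph, "read_input")   # reachable FROM the source
backward = reachable(inverse, "strcpy")          # can REACH the sink
slice_ = forward & backward
assert "copy_field" in slice_ and "log" not in slice_
```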


Gupta, M.K.; Govil, M.C.; Singh, G., “A Context-Sensitive Approach for Precise Detection of Cross-Site Scripting Vulnerabilities,” Innovations in Information Technology (INNOVATIONS), 2014 10th International Conference on, vol., no., pp. 7, 12, 9-11 Nov. 2014. doi:10.1109/INNOVATIONS.2014.6987553
Abstract: Currently, dependence on web applications is increasing rapidly for social communication, health services, financial transactions, and many other purposes. Unfortunately, the presence of cross-site scripting vulnerabilities in these applications allows malicious users to steal sensitive information, install malware, and perform various malicious operations. Researchers have proposed various approaches and developed tools to detect XSS vulnerabilities in the source code of web applications. However, existing approaches and tools are not free from false positive and false negative results. In this paper, we propose a taint analysis and defensive-programming-based, HTML context-sensitive approach for precise detection of XSS vulnerabilities in the source code of PHP web applications. It also provides automatic suggestions to improve the vulnerable source code. Preliminary experiments and results on test subjects show that the proposed approach is more efficient than existing ones.
Keywords: Internet; hypermedia markup languages; invasive software; source code (software); Web application; XSS vulnerability; cross-site scripting vulnerability; defensive programming based HTML context-sensitive approach; financial transaction; health services; malicious operation; malicious user; malware; precise detection; sensitive information; social communication; source code; taint analysis; Browsers; Context; HTML; Security; Servers; Software; Standards; Cross-Site Scripting; Software Development Life Cycle; Taint Analysis; Vulnerability Detection; XSS Attacks (ID#: 15-6648)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6987553&isnumber=6985764
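The key idea of context sensitivity can be illustrated with a toy check: whether a sanitizer is adequate depends on the HTML context into which the tainted value is emitted. The context names and sanitizer tables below are hypothetical stand-ins, not the paper's actual rules.

```python
# Toy context-sensitive XSS check: a tainted output is flagged unless a
# sanitizer appropriate for its HTML context was applied.
SAFE_SANITIZERS = {
    "element_body": {"htmlspecialchars"},
    "attribute": {"htmlspecialchars_with_quotes"},
    "script": set(),  # no output encoder alone is sufficient in script context
}

def is_vulnerable(context, applied_sanitizers):
    """True if no context-appropriate sanitizer was applied to the tainted value."""
    return not (set(applied_sanitizers) & SAFE_SANITIZERS.get(context, set()))

print(is_vulnerable("element_body", ["htmlspecialchars"]))  # False
print(is_vulnerable("script", ["htmlspecialchars"]))        # True
```

A context-insensitive checker would accept `htmlspecialchars` everywhere, which is exactly the kind of false negative the paper targets.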


Short, A.; Feng Li, “Android Smartphone Third Party Advertising Library Data Leak Analysis,” Mobile Ad Hoc and Sensor Systems (MASS), 2014 IEEE 11th International Conference on, vol., no., pp. 749, 754, 28-30 Oct. 2014. doi:10.1109/MASS.2014.131
Abstract: Android has many security flaws that are being exploited by malicious developers. Common malware exhibits many different kinds of behaviors, from complicated root exploits to simple private data leakage via explicit permissions. The security features that have been put in place by the Android developers have proven to be insufficient and incapable of preventing malware from proliferating through official and unofficial repositories. Private data leakage has become a popular topic because of the sheer number of applications that request permissions to access a mobile device's private data and an influx of general privacy concerns amongst the Android community.
Keywords: authorisation; data privacy; invasive software; smart phones; Android smartphone third party advertising library data leak analysis; malware; mobile device private data access; official repositories; private data leakage; security flaws; unofficial repositories; Advertising; Fingerprint recognition; Libraries; Malware; Smart phones; Testing; taint analysis; data leaks; advertising libraries; malware evasion; Droidbox (ID#: 15-6649)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7035776&isnumber=7035647


Peng Li; Guodong Li; Gopalakrishnan, G., “Practical Symbolic Race Checking of GPU Programs,” High Performance Computing, Networking, Storage and Analysis, SC14: International Conference for, vol., no., pp. 179, 190, 16-21 Nov. 2014. doi:10.1109/SC.2014.20
Abstract: Even the careful GPU programmer can inadvertently introduce data races while writing and optimizing code. Currently available GPU race checking methods fall short either in terms of their formal guarantees, ease of use, or practicality. Existing symbolic methods: (1) do not fully support existing CUDA kernels, (2) may require user-specified assertions or invariants, (3) often require users to guess which inputs may be safely made concrete, (4) tend to explode in complexity when the number of threads is increased, and (5) explode in the face of thread-ID based decisions, especially in a loop. We present SESA, a new tool combining Symbolic Execution and Static Analysis to analyze C++ CUDA programs that overcomes all these limitations. SESA also scales well to handle non-trivial benchmarks such as Parboil and Lonestar, and is the only tool of its class that handles such practical examples. This paper presents SESA’s methodological innovations and practical results.
Keywords: C++ language; graphics processing units; parallel architectures; program diagnostics; C++ CUDA program; CUDA kernel; GPU program; Lonestar; Parboil; SESA; static analysis; symbolic execution; symbolic race checking; thread-ID based decision; Concrete; Graphics processing units; History; Indexes; Instruction sets; Kernel; Schedules; CUDA; Data Flow Analysis; Formal Verification; GPU; Parallelism; Symbolic Execution; Taint Analysis; Virtual Machine (ID#: 15-6650)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013002&isnumber=7012182


Gupta, M.K.; Govil, M.C.; Singh, G., “An Approach to Minimize False Positive in SQLI Vulnerabilities Detection Techniques Through Data Mining,” Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, vol., no., pp. 407, 410, 12-13 July 2014. doi:10.1109/ICSPCT.2014.6884962
Abstract: Dependence on web applications is increasing very rapidly in recent times for social communication, health care, financial transactions, and many other purposes. Unfortunately, the presence of security weaknesses in web applications allows malicious users to exploit various security vulnerabilities and causes their failure. Currently, SQL Injection (SQLI) attacks exploit the most dangerous security vulnerabilities in various popular web applications, e.g., eBay, Google, Facebook, and Twitter. Research on taint-based vulnerability detection has been quite intensive in the past decade. However, these techniques are not free from false positive and false negative results. In this paper, we propose an approach to minimize false positives in SQLI vulnerability detection techniques using data mining concepts. We have implemented a prototype tool for PHP and MySQL technologies and evaluated it on six real-world applications and NIST benchmarks. Our evaluation and comparison results show that the proposed technique detects SQLI vulnerabilities with a low percentage of false positives.
Keywords: Internet; SQL; data mining; security of data; social networking (online); software reliability; Facebook; Google; MySQL technology; PHP; SQL injection attack; SQLI vulnerability detection techniques; Twitter; data mining; eBay; false positive minimization; financial transaction; health problem; social communications; taint based vulnerability detection; Computers; Software; SQLI attack; SQLI vulnerability; false positive; input validation; sanitization; taint analysis (ID#: 15-6651)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884962&isnumber=6884878
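The false-positive problem in taint-based SQLI detection often comes down to sanitizer recognition: a flow should only be reported if tainted input reaches the query without passing through a known sanitizing function. The sketch below illustrates that filtering idea; the function names are hypothetical stand-ins for PHP sanitizers, not the paper's data-mining classifier.

```python
# Sketch of sanitizer-aware SQLI flow filtering: report a flow only if
# no recognized sanitizer appears on the path from source to query.
SANITIZERS = {"mysql_real_escape_string", "intval"}

def flags_sqli(flow):
    """flow is the ordered list of functions the input value passed through."""
    return not any(fn in SANITIZERS for fn in flow)

print(flags_sqli(["$_GET", "concat", "mysql_query"]))                    # True
print(flags_sqli(["$_GET", "mysql_real_escape_string", "mysql_query"]))  # False
```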


Zhang Puhan; Wu Jianxiong; Wang Xin; Wu Zehui, “Decrypted Data Detection Algorithm Based on Dynamic Dataflow Analysis,” Computer, Information and Telecommunication Systems (CITS), 2014 International Conference on, vol., no., pp. 1, 4, 7-9 July 2014. doi:10.1109/CITS.2014.6878965
Abstract: Cryptographic algorithm detection has received a great deal of attention recently, whereas methods to detect decrypted data require further research. A decrypted-memory detection method using dynamic dataflow analysis is proposed in this paper. Based on the intuition that decrypted data is generated in the cryptographic function and has unique features, and by analyzing the parameter sets of the cryptographic function, we propose a model based on the input and output of the cryptographic function. Experimental results demonstrate that our approach can effectively detect decrypted memory.
Keywords: cryptography; data flow analysis; cryptographic algorithm detection; decrypted data; decrypted memory detection method; dynamic dataflow analysis; Algorithm design and analysis; Encryption; Heuristic algorithms; Software; Software algorithms; Cryptographic; Dataflow analysis; Decrypted memory; Taint analysis (ID#: 15-6652)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6878965&isnumber=6878950
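One commonly exploited "unique feature of decrypted data" is statistical: plaintext emerging from a cryptographic function typically has far lower byte entropy than its ciphertext input. The sketch below shows that heuristic; the entropy threshold is an illustrative assumption, not a value from the paper.

```python
# Entropy heuristic for spotting decrypted buffers: plaintext output
# usually scores well below the near-8-bit entropy of ciphertext.
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_decrypted(out_buf: bytes, threshold: float = 6.0) -> bool:
    # threshold is a hypothetical cutoff chosen for illustration
    return byte_entropy(out_buf) < threshold

print(looks_decrypted(b"attack at dawn " * 16))  # True: low-entropy text
```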


Shao Shuai; Dong Guowei; Guo Tao; Yang Tianchang; Shi Chenjie, “Analysis on Password Protection in Android Applications,” P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), 2014 Ninth International Conference on, vol., no., pp. 504, 507, 8-10 Nov. 2014. doi:10.1109/3PGCIC.2014.102
Abstract: Although there has been much research on the leakage of sensitive data in Android applications, most of the existing research focuses on how to detect malware or adware that intentionally collects user privacy. There is not much research on analyzing the vulnerabilities of apps that may cause the leakage of private data. In this paper, we present a vulnerability analysis method which combines taint analysis and cryptography misuse detection. The four steps of this method are decompilation, taint analysis, API call recording, and cryptography misuse analysis, all of which except taint analysis can be executed by existing tools. We developed a prototype tool, PW Exam, to analyze how passwords are handled and whether an app is vulnerable to password leakage. Our experiment shows that a third of apps are vulnerable to leaking users' passwords.
Keywords: cryptography; data privacy; mobile computing; smart phones; API call record; Android applications; PW Exam; cryptography misuse analysis; cryptography misuse detection; decompile step; password leakage; password protection; taint analysis; user privacy; vulnerability analyzing method; Androids; Encryption; Humanoid robots; Privacy; Smart phones; Android apps; leakage; password; vulnerability (ID#: 15-6653)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024636&isnumber=7024297


Bo Wu; Mengjun Li; Bin Zhang; Quan Zhang; Chaojing Tang, “Directed Symbolic Execution for Binary Vulnerability Mining,” Electronics, Computer and Applications, 2014 IEEE Workshop on, vol., no., pp. 614, 617, 8-9 May 2014. doi:10.1109/IWECA.2014.6845694
Abstract: Despite more than two decades of independent, academic, and industry-related research, software vulnerabilities remain the main reason that the security of our systems is undermined. Taint analysis and symbolic execution are among the most promising approaches for vulnerability detection, but neither can solve the problem alone. In this paper, we combine taint analysis and symbolic execution for binary vulnerability mining and propose a method named directed symbolic execution. Our three-step approach first adopts dynamic taint analysis technology to identify safety-related data, then uses a symbolic execution system to execute the binary software while marking the safety-related data as symbols, and finally discovers vulnerabilities with our check-model. The evaluation shows that our method can be used to detect vulnerabilities in binary software more efficiently.
Keywords: data mining; program diagnostics; security of data; software reliability; binary software; binary vulnerability mining; check-model; directed symbolic execution method; dynamic taint analysis technology; safety-related data identification; software vulnerability detection; Context; Protocols; Software; Symbolic Execution; Vulnerability detection; Vulnerability model (ID#: 15-6654)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845694&isnumber=6845536


Mell, P.; Harang, R.E., “Using Network Tainting to Bound the Scope of Network Ingress Attacks,” Software Security and Reliability (SERE), 2014 Eighth International Conference on, vol., no., pp. 206, 215, June 30 2014–July 2 2014. doi:10.1109/SERE.2014.34
Abstract: This research describes a novel security metric, network taint, which is related to software taint analysis. We use it here to bound the possible malicious influence of a known compromised node through monitoring and evaluating network flows. The result is a dynamically changing defense-in-depth map that shows threat level indicators gleaned from monotonically decreasing threat chains. We augment this analysis with concepts from the complex networks research area in forming dynamically changing security perimeters and measuring the cardinality of the set of threatened nodes within them. In providing this, we hope to advance network incident response activities by providing a rapid automated initial triage service that can guide and prioritize investigative activities.
Keywords: network theory (graphs); security of data; defense-in-depth map; network flow evaluation; network flow monitoring; network incident response activities; network ingress attacks; network tainting metric; security metric; security perimeters; software taint analysis; threat level indicators; Algorithm design and analysis; Complex networks; Digital signal processing; Measurement; Monitoring; Security; Software; complex networks; network tainting; scale-free; security (ID#: 15-6655)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895431&isnumber=6895396
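The "monotonically decreasing threat chains" in the Mell and Harang paper can be pictured as a flooding computation: a known-compromised node starts at full threat, and each hop along an observed network flow carries a level reduced by a decay factor, with nodes below a floor left outside the security perimeter. The decay and floor values below are illustrative assumptions, not the paper's metric.

```python
# Network-tainting sketch: propagate a decaying threat level from a
# compromised node along observed flows, keeping each node's maximum.
def threat_levels(flows, compromised, decay=0.5, floor=0.1):
    levels = {compromised: 1.0}
    frontier = [compromised]
    while frontier:
        nxt = []
        for node in frontier:
            for dst in flows.get(node, ()):
                level = levels[node] * decay
                if level >= floor and level > levels.get(dst, 0.0):
                    levels[dst] = level
                    nxt.append(dst)
        frontier = nxt
    return levels

flows = {"A": ["B", "C"], "B": ["D"], "D": ["E"]}   # hypothetical flow graph
print(threat_levels(flows, "A"))
```

The set of keys in the result is the dynamically changing perimeter; its cardinality is the "set of threatened nodes" the paper measures.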


Chenxiong Qian; Xiapu Luo; Yuru Shao; Chan, A.T.S., “On Tracking Information Flows through JNI in Android Applications,” Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, vol., no., pp. 180, 191, 23-26 June 2014. doi:10.1109/DSN.2014.30
Abstract: Android provides native development kit through JNI for developing high-performance applications (or simply apps). Although recent years have witnessed a considerable increase in the number of apps employing native libraries, only a few systems can examine them. However, none of them scrutinizes the interactions through JNI in them. In this paper, we conduct a systematic study on tracking information flows through JNI in apps. More precisely, we first perform a large-scale examination on apps using JNI and report interesting observations. Then, we identify scenarios where information flows uncaught by existing systems can result in information leakage. Based on these insights, we propose and implement NDroid, an efficient dynamic taint analysis system for checking information flows through JNI. The evaluation through real apps shows NDroid can effectively identify information leaks through JNI with low performance overheads.
Keywords: Android (operating system); Java; Android applications; JNI; Java Native Interface; NDroid systems; high-performance applications; information flow tracking; Androids; Context; Engines; Games; Humanoid robots; Java; Libraries (ID#: 15-6656)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903578&isnumber=6903544


Zhongyuan Qin; Yuqing Xu; Yuxing Di; Qunfang Zhang; Jie Huang, “Android Malware Detection Based on Permission and Behavior Analysis,” Cyberspace Technology (CCT 2014), International Conference on, vol., no., pp. 1, 4, 8-10 Nov. 2014. doi:10.1049/cp.2014.1352
Abstract: The development of the mobile Internet and application stores accelerates the spread of malicious applications on smartphones, especially on the Android platform. In this paper, we propose an integrated Android malware detection scheme combining permission and behavior analysis. For APK files that have already been analyzed, their MD5 values are extracted as signatures for detection. For APK files that have not been analyzed, detection is carried out based on permission and behavior analysis. Behavior analysis comprises taint propagation analysis and semantic analysis. Experimental results show that this system can successfully detect privacy-stealing and malicious-deduction malware.
Keywords: Internet; data privacy; invasive software; mobile computing; smart phones; APK files; MD5 value extraction; application store; behavior analysis; integrated Android malware detection scheme; mobile Internet development; permission analysis; propagation analysis; semantic analysis; smart phones; Android; behavior analysis; malware; permission (ID#: 15-6657)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106851&isnumber=7085695 
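The two-stage flow described above (MD5 signature match first, permission scoring as the fallback) can be sketched briefly. The signature set, permission weights, and threshold below are hypothetical illustrations, not values from the paper.

```python
# Two-stage classification sketch: known APKs by MD5 digest, unknown
# ones by a weighted score over requested permissions.
import hashlib

KNOWN_MALWARE_MD5 = {"44d88612fea8a8f36de82e1278abb02f"}   # hypothetical signature DB
RISKY_PERMISSIONS = {"SEND_SMS": 3, "READ_CONTACTS": 2, "INTERNET": 1}

def classify(apk_bytes: bytes, permissions, threshold=4):
    digest = hashlib.md5(apk_bytes).hexdigest()
    if digest in KNOWN_MALWARE_MD5:
        return "malware (signature)"
    score = sum(RISKY_PERMISSIONS.get(p, 0) for p in permissions)
    return "suspicious (behavior)" if score >= threshold else "benign"

print(classify(b"sample-apk-bytes", ["SEND_SMS", "READ_CONTACTS"]))  # suspicious (behavior)
```

In the paper, the behavioral stage is taint propagation plus semantic analysis rather than a permission score; the score here only stands in for that fallback path.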


Wenmin Xiao; Jianhua Sun; Hao Chen; Xianghua Xu, “Preventing Client Side XSS with Rewrite Based Dynamic Information Flow,” Parallel Architectures, Algorithms and Programming (PAAP), 2014 Sixth International Symposium on, vol., no., pp. 238, 243, 13-15 July 2014. doi:10.1109/PAAP.2014.10
Abstract: This paper presents the design and implementation of an information flow tracking framework based on code rewrite to prevent sensitive information leaks in browsers, combining the ideas of taint and information flow analysis. Our system has two main processes. First, it abstracts the semantic of JavaScript code and converts it to a general form of intermediate representation on the basis of JavaScript abstract syntax tree. Second, the abstract intermediate representation is implemented as a special taint engine to analyze tainted information flow. Our approach can ensure fine-grained isolation for both confidentiality and integrity of information. We have implemented a proof-of-concept prototype, named JSTFlow, and have deployed it as a browser proxy to rewrite web applications at runtime. The experiment results show that JSTFlow can guarantee the security of sensitive data and detect XSS attacks with about 3x performance overhead. Because it does not involve any modifications to the target system, our system is readily deployable in practice.
Keywords: Internet; Java; data flow analysis; online front-ends; security of data; JSTFlow; JavaScript abstract syntax tree; JavaScript code; Web applications; XSS attacks; abstract intermediate representation; browser proxy; browsers; client side XSS; code rewrite; fine-grained isolation; information flow tracking framework; performance overhead; rewrite based dynamic information flow; sensitive information leaks; taint engine; tainted information flow; Abstracts; Browsers; Data models; Engines; Security; Semantics; Syntactics; JavaScript; cross-site scripting; information flow analysis; information security; taint model (ID#: 15-6658)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916471&isnumber=6916413
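JSTFlow's rewrite-based tracking instruments JavaScript at runtime; the underlying idea, wrapping values in a taint cell that survives string operations so a sink can refuse tainted data, can be sketched in a few lines. Python stands in for the rewritten JavaScript here, and all names are hypothetical.

```python
# Sticky-taint sketch: a tainted string stays tainted through a
# rewritten concatenation, and a guarded sink rejects it.
class Tainted(str):
    """Marker subclass: a string that originated from an untrusted source."""

def taint(s):
    return Tainted(s)

def concat(a, b):
    # rewritten '+' operator: taint propagates to the result
    out = str(a) + str(b)
    return Tainted(out) if isinstance(a, Tainted) or isinstance(b, Tainted) else out

def send_to_server(value):
    # network sink guarded by the taint check
    if isinstance(value, Tainted):
        raise ValueError("blocked: tainted data reached a network sink")
    return "sent"

cookie = taint("session=abc123")
msg = concat("leak:", cookie)
print(type(msg).__name__)   # Tainted
```

The real system performs this rewriting on a JavaScript intermediate representation inside a proxy, which is what makes it deployable without browser modifications.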
 




Zero-Day Exploits, Part 1


Zero-Day Exploits

Part 1


Zero-day exploits are a major research challenge in cybersecurity. Recent work on this subject has been conducted globally. The works cited here were presented in 2014 and early 2015.



Bazzi, A.; Onozato, Y., “Preventing Attacks in Real-Time through the Use of a Dummy Server,” Autonomous Decentralized Systems (ISADS), 2015 IEEE Twelfth International Symposium on, vol., no., pp. 236, 241, 25-27 March 2015. doi:10.1109/ISADS.2015.36
Abstract: Zero-day exploits against servers pose one of the most challenging problems faced by system and security administrators. Current solutions rely mainly on signature databases of known attacks and are not efficient at detecting new attacks not covered by their attack signature database. We propose using a dummy server, i.e., a mirror of the server to be protected but without the real data. Consequently, any incoming network packet is first tested against the dummy server, and once it is ensured that the packet is benign, it is delivered to the real server. This would prevent all types of attacks, including those based on zero-day exploits, from reaching the protected server.
Keywords: program debugging; security of data; attack signature database; dummy server; network packet; zero-day exploits; Databases; IP networks; Intrusion detection; Routing protocols; Servers; Software (ID#: 15-6193)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098265&isnumber=7098213
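The dummy-server filter reduces to a simple control flow: replay each packet against the mirror first, and forward to the real server only if the mirror survives. The sketch below illustrates that flow; the `probe`/`deliver` callables and the crash condition are hypothetical stand-ins for executing the packet on the mirror.

```python
# Dummy-server filtering sketch: a packet reaches the real server only
# if replaying it against the data-free mirror raises no fault.
def filter_packet(packet, probe, deliver):
    try:
        probe(packet)          # run against the dummy server first
    except Exception:
        return "dropped"       # mirror misbehaved: treat the packet as an attack
    return deliver(packet)

def mirror(pkt):
    # toy stand-in for the mirror crashing on exploit input (NOP sled)
    if b"\x90\x90" in pkt:
        raise RuntimeError("mirror crashed")

print(filter_packet(b"GET / HTTP/1.1", mirror, lambda p: "delivered"))   # delivered
print(filter_packet(b"\x90\x90shellcode", mirror, lambda p: "delivered"))  # dropped
```

Because the decision depends on observed behavior rather than a signature match, the same check covers exploits never seen before.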


Mayo, Jackson R.; Armstrong, Robert C.; Hulette, Geoffrey C., “Digital System Robustness via Design Constraints: The Lesson of Formal Methods,” Systems Conference (SysCon), 2015 9th Annual IEEE International, vol., no., pp. 109, 114, 13-16 April 2015. doi:10.1109/SYSCON.2015.7116737
Abstract: Current programming languages and programming models make it easy to create software and hardware systems that fulfill an intended function but also leave such systems open to unintended function and vulnerabilities. Software engineering and code hygiene may make systems incrementally safer, but do not produce the wholesale change necessary for secure systems from the outset. Yet there exists an approach with impressive results: We cite recent examples showing that formal methods, coupled with formally informed digital design, have produced objectively more robust code even beyond the properties directly proven. Though discovery of zero-day vulnerabilities is almost always a surprise and powerful tools like semantic fuzzers can cover a larger search space of vulnerabilities than a developer can conceive of, formal models seem to produce robustness of a higher qualitative order than traditionally developed digital systems. Because the claim is necessarily a qualitative one, we illustrate similar results with an idealized programming language in the form of Boolean networks where we have control of parameters related to stability and adaptability. We argue that verifiability with formal methods is an instance of broader design constraints that promote robustness. We draw analogies to real-world programming models and languages that can be mathematically reasoned about in contrast to ones that are essentially undecidable.
Keywords: Computational modeling; Computer languages; Digital systems; Hardware; Programming; Robustness; Software; Digital design; complex systems; formal methods; programming models; robustness; security (ID#: 15-6194)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116737&isnumber=7116715 


Kaur, R.; Singh, M., “Efficient Hybrid Technique for Detecting Zero-Day Polymorphic Worms,” Advance Computing Conference (IACC), 2014 IEEE International, vol., no., pp. 95, 100, 21-22 Feb. 2014. doi:10.1109/IAdCC.2014.6779301
Abstract: This paper presents an efficient technique for detecting zero-day polymorphic worms with almost zero false positives. Zero-day polymorphic worms not only exploit unknown vulnerabilities but also change their own representations on each new infection or encrypt their payloads using a different key per infection. Thus, there are many variations in the signatures for the same worm, making fingerprinting very difficult. With their ability to rapidly propagate, these worms increasingly threaten the Internet hosts and services. If these zero-day worms are not detected and contained at right time, they can potentially disable the Internet or can wreak serious havoc. So the detection of Zero-day polymorphic worms is of paramount importance.
Keywords: Internet; cryptography; digital signatures; invasive software; Internet hosts; encryption; fingerprinting; hybrid technique; signatures; unknown vulnerabilities; zero false positives; zero-day polymorphic worm detection; Algorithm design and analysis; Grippers; Internet; Malware; Payloads; Registers; Sensors; Zero-day attack; hybrid system; intrusion detection; polymorphic worm (ID#: 15-6195)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779301&isnumber=6779283 


Holm, H., “Signature Based Intrusion Detection for Zero-Day Attacks: (Not) A Closed Chapter?,” System Sciences (HICSS), 2014 47th Hawaii International Conference on, vol., no., pp. 4895, 4904, 6-9 Jan. 2014. doi:10.1109/HICSS.2014.600
Abstract: A frequent claim that has not been validated is that signature-based network intrusion detection systems (SNIDS) cannot detect zero-day attacks. This paper studies this property by testing 356 severe attacks on the SNIDS Snort, configured with an old official rule set. Of these attacks, 183 are zero-days to the rule set and 173 are theoretically known to it. The results from the study show that Snort clearly is able to detect zero-days (a mean of 17% detection). The detection rate is, however, greater overall for theoretically known attacks (a mean of 54% detection). The paper then investigates how the zero-days are detected, how prone the corresponding signatures are to false alarms, and how easily they can be evaded. Analyses of these aspects suggest that a conservative estimate of zero-day detection by Snort is 8.2%.
Keywords: computer network security; digital signatures; SNIDS; false alarm; signature based network intrusion detection; zero day attacks; zero day detection; Computer architecture; Payloads; Ports (Computers); Reliability; Servers; Software; Testing; Computer security; NIDS; code injection; exploits (ID#: 15-6196)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759203&isnumber=6758592 


Javed, A.; Akhlaq, M., “On the Approach of Static Feature Extraction in Trojans to Combat against Zero-Day Threats,” IT Convergence and Security (ICITCS), 2014 International Conference on, vol., no., pp. 1, 5, 28-30 Oct. 2014. doi:10.1109/ICITCS.2014.7021794
Abstract: Over the past few years, the greatest challenge faced by cyberspace has been to combat cyber threats in the shape of malware attacks. Of these, Trojans stand out as the most common choice due to their deceptive and alluring properties. Most modern, sophisticated malware is polymorphic in nature; thus signature- and heuristics-based techniques are becoming ineffective against zero-day threats. By and large, Trojans and their numerous variants have common static features which are always present in such malware. By exploiting this analogy, a set of features is determined by analyzing known samples, which can be effectively applied to combat zero-day attacks launched by means of unknown malicious code.
Keywords: feature extraction; invasive software; Trojan; cyber space; malicious codes; malware attacks; signature-heuristics based techniques; static feature extraction; zero-day threats; Electronic mail; Feature extraction; Grippers; Software; Trojan horses (ID#: 15-6197)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7021794&isnumber=7021698 


Shahzad, K.; Woodhead, S., “Towards Automated Distributed Containment of Zero-Day Network Worms,” Computing, Communication and Networking Technologies (ICCCNT), 2014 International Conference on, vol., no., pp. 1, 7, 11-13 July 2014. doi:10.1109/ICCCNT.2014.6963119
Abstract: Worms are a serious potential threat to computer network security. The high potential speed of propagation of worms and their ability to self-replicate make them highly infectious. Zero-day worms represent a particularly challenging class of such malware, with the cost of a single worm outbreak estimated to be as high as US $2.6 Billion. In this paper, we present a distributed automated worm detection and containment scheme that is based on the correlation of Domain Name System (DNS) queries and the destination IP address of outgoing TCP SYN and UDP datagrams leaving the network boundary. The proposed countermeasure scheme also utilizes cooperation between different communicating scheme members using a custom protocol, which we term Friends. The absence of a DNS lookup action prior to an outgoing TCP SYN or UDP datagram to a new destination IP addresses is used as a behavioral signature for a rate limiting mechanism while the Friends protocol spreads reports of the event to potentially vulnerable uninfected peer networks within the scheme. To our knowledge, this is the first implementation of such a scheme. We conducted empirical experiments across six class C networks by using a Slammer-like pseudo-worm to evaluate the performance of the proposed scheme. The results show a significant reduction in the worm infection, when the countermeasure scheme is invoked.
Keywords: computer network security; digital signatures; invasive software; protocols; DNS queries; Friends protocol; Slammer-like pseudoworm; TCP SYN datagrams; UDP datagrams; automated distributed containment; behavioral signature; communicating scheme members; computer network security; countermeasure scheme; custom protocol; destination IP address; distributed automated worm detection; domain name system queries; malware; network boundary; rate limiting mechanism; six class C networks; worm infection reduction; zero-day network worms; Grippers; IP networks; Internet; Limiting; Malware; Routing protocols; countermeasure; malware; network worm; rate limiting (ID#: 15-6198)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6963119&isnumber=6962988 
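The behavioral signature at the heart of the scheme above, an outgoing TCP SYN or UDP datagram to a destination IP with no preceding DNS lookup for that address, can be sketched as a small stateful correlator. The event tuples below are simplified stand-ins for captured traffic, not the scheme's actual packet handling.

```python
# DNS-correlation sketch: flag connections to IPs that were never the
# answer of a prior DNS lookup, a worm-like scanning behavior.
class DnsCorrelator:
    def __init__(self):
        self.resolved = set()   # IPs seen in DNS answers

    def observe(self, event):
        kind, ip = event
        if kind == "dns_answer":
            self.resolved.add(ip)
            return "ok"
        if kind == "syn" and ip not in self.resolved:
            return "rate-limit"   # no prior lookup: suspicious scan
        return "ok"

c = DnsCorrelator()
print(c.observe(("dns_answer", "93.184.216.34")))  # ok
print(c.observe(("syn", "93.184.216.34")))         # ok
print(c.observe(("syn", "10.9.8.7")))              # rate-limit
```

In the full scheme, a rate-limit event also triggers a Friends-protocol report to peer networks; that cooperative spread is omitted here.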


Zolotukhin, M.; Hamalainen, T., “Detection of Zero-Day Malware Based on the Analysis of Opcode Sequences,” Consumer Communications and Networking Conference (CCNC), 2014 IEEE 11th, vol., no., pp. 386, 391, 10-13 Jan. 2014. doi:10.1109/CCNC.2014.6866599
Abstract: Today, rapid growth in the amount of malicious software is causing a serious global security threat. Unfortunately, widespread signature-based malware detection mechanisms are not able to deal with constantly appearing new types of malware and variants of existing ones, until an instance of this malware has damaged several computers or networks. In this research, we apply an anomaly detection approach which can cope with the problem of new malware detection. First, executable files are analyzed in order to extract operation code sequences and then n-gram models are employed to discover essential features from these sequences. A clustering algorithm based on the iterative usage of support vector machines and support vector data descriptions is applied to analyze feature vectors obtained and to build a benign software behavior model. Finally, this model is used to detect malicious executables within new files. The scheme proposed allows one to detect malware unseen previously. The simulation results presented show that the method results in a higher accuracy rate than that of the existing analogues.
Keywords: invasive software; iterative methods; pattern clustering; support vector machines; anomaly detection approach; benign software behavior model; clustering algorithm; global security threat; iterative usage; malicious software; n-gram models; opcode sequences analysis; operation code sequences; support vector data descriptions; support vector machines; widespread signature-based malware detection mechanism; zero-day malware detection; Feature extraction; Malware; Software; Software algorithms; Support vector machines; Training; Vectors (ID#: 15-6199)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866599&isnumber=6866537 
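The feature-extraction step described above, sliding a window over a disassembled opcode sequence to count n-grams, is straightforward to sketch. The opcode names below are illustrative; the paper feeds such counts into SVM-based clustering, which is not reproduced here.

```python
# Opcode n-gram extraction sketch: count overlapping n-grams over an
# opcode sequence to build a feature vector for anomaly detection.
from collections import Counter

def opcode_ngrams(opcodes, n=2):
    return Counter(tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1))

seq = ["push", "mov", "call", "mov", "call", "ret"]   # hypothetical disassembly
print(opcode_ngrams(seq))
```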


Shahzad, K.; Woodhead, S., “A Pseudo-Worm Daemon (PWD) for Empirical Analysis of Zero-Day Network Worms and Countermeasure Testing,” Computing, Communication and Networking Technologies (ICCCNT), 2014 International Conference on, vol., no., pp. 1, 6, 11-13 July 2014. doi:10.1109/ICCCNT.2014.6963124
Abstract: The cyber-epidemiological analysis of computer worms has emerged as a key area of research in the field of cyber security. In order to understand the epidemiology of computer worms, a network daemon is required to empirically observe their infection and propagation behavior. The same facility can also be employed in testing candidate worm countermeasures. In this paper, we present the architecture and design of a Pseudo-Worm Daemon (PWD), which is designed to perform true random scanning and hit-list worm-like functionality. The PWD is implemented as a proof-of-concept in the C programming language. The PWD is platform independent and can be deployed on any host in an enterprise network. The novelty of this worm daemon includes its UDP-based propagation, a user-configurable random scanning pool, the ability to contain a user-defined hit-list, authentication before infecting susceptible hosts, and efficient logging of the time of infection. Furthermore, this paper presents experimentation and analysis of a Pseudo-Witty worm by employing the PWD with real Witty worm outbreak attributes. The results obtained by the Pseudo-Witty worm outbreak are quite comparable to the real Witty worm outbreak and are further quantified by using the Susceptible-Infected (SI) model.
Keywords: C language; invasive software; program testing; C programming language; PWD; UDP based propagation; computer worms; cyber epidemiological analysis; cyber security; enterprise network; hit-list worm like functionality; pseudo-witty worm outbreak; pseudo-worm daemon; random scanning functionality; susceptible infected model; user-configurable random scanning pool; worm countermeasure testing; worm infection behavior; worm propagation behavior; zero-day network worms; Computational modeling; Computer worms; Grippers; IP networks; Mathematical model; Servers; Silicon; cyber; hit-list; scanning; witty; worm (ID#: 15-6200)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6963124&isnumber=6962988 
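The Susceptible-Infected (SI) model the authors use to quantify the outbreak is the standard logistic epidemic model. As a rough illustration only — the population size, infection rate, and step count below are hypothetical values for demonstration, not figures from the paper — a discrete-time SI simulation can be sketched as:

```python
def si_model(n_hosts, beta, i0, steps, dt=1.0):
    """Discrete-time Susceptible-Infected (SI) epidemic model.

    n_hosts: total population of susceptible hosts
    beta:    pairwise infection rate
    i0:      initially infected hosts
    Returns the infected count at each time step (dI/dt = beta*I*S/N).
    """
    infected = float(i0)
    trace = [infected]
    for _ in range(steps):
        susceptible = n_hosts - infected
        # Logistic growth toward n_hosts, capped so it never overshoots.
        infected = min(infected + dt * beta * infected * susceptible / n_hosts,
                       n_hosts)
        trace.append(infected)
    return trace

# Hypothetical outbreak: 12,000 vulnerable hosts, one initial infection.
curve = si_model(n_hosts=12000, beta=0.8, i0=1, steps=50)
```

The curve grows exponentially while the susceptible pool is large, then saturates — the characteristic S-shape against which an observed pseudo-worm outbreak can be compared.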


Asif, M.K.; Al-Harthi, Y.S., “Intrusion Detection System Using Honey Token Based Encrypted Pointers to Mitigate Cyber Threats for Critical Infrastructure Networks,” Systems, Man and Cybernetics (SMC), 2014 IEEE International Conference on, vol., no., pp. 1266, 1270, 5-8 Oct. 2014. doi:10.1109/SMC.2014.6974088
Abstract: Recent advancements in cyberspace impose a greater threat to the security of critical infrastructure than ever before. The scale of damage that could be done to these infrastructures by well-planned cyber-attacks is enormous. Most of the research work done on the security of these critical infrastructures focuses on conventional security measures. In this paper, we design an Intrusion Detection System (IDS) based on the novel approach of Honey Token based Encrypted Pointers to protect critical infrastructure networks from cyber-attacks, particularly from zero-day cyber threats. These honey tokens inside the frame serve as a trap for the attacker. All nodes operating within the working domain of the critical infrastructure network are divided into four different pools. This division is based on their computational power and level of vulnerability. These pools are provided with different levels of security measures within the network. The IDS uses a different number of Honey Tokens (HT) per frame for each pool. Moreover, every pool uses a different encryption scheme (AES-128, 192, or 256). We use a critical infrastructure network of 64 nodes for our simulations and analyze the performance of the IDS in terms of True Positive and False Negative alarms. Finally, we test this IDS through Network Penetration Testing (NPT), accomplished by putting the 64-node critical infrastructure network directly under zero-day cyber-attacks and then analyzing the behavior of the IDS under such realistic conditions. The IDS is designed in such a way that it not only detects intrusions but also recovers the entire zero-day attack using a reverse engineering approach.
Keywords: computer network security; critical infrastructures; cryptography; reverse engineering; IDS; NPT; critical infrastructure networks; cyber-attacks; encryption schemes; false negative alarms; honey token based encrypted pointers; intrusion detection system; network penetration testing; reverse engineering approach; true positive alarms; zero day cyber threats; Databases; Encryption; Generators; Intrusion detection; Protocols; Critical Infrastructure Networks; Cyber Security; Cyber Space; Cyber Threats; Cyber Warfare; DNP3; Distributed Sensor Networks; Encrypted Pointers; Honey Token; Industrial Communication Protocol; Industrial Networks; Information Infrastructure; Information Security; Intelligence Infrastructure; Intrusion Detection System; SCADA Command and Control System; Zero Day Attacks (ID#: 15-6201)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6974088&isnumber=6973862


Tokhtabayev, A.G.; Aimyshev, B.; Seitkulov, Y., “tBox: A System to Protect a ‘Bad’ User from Targeted and User-Oriented Attacks,” Application of Information and Communication Technologies (AICT), 2014 IEEE 8th International Conference on, vol., no., pp. 1, 6, 15-17 Oct. 2014. doi:10.1109/ICAICT.2014.7035913
Abstract: We introduce the tBox system, which enables protection from targeted and user-oriented attacks. Such attacks rely on user mistakes, such as misinterpreting or ignoring security alerts, which lead to the proliferation of malicious objects inside the trusted perimeter of cyber-security systems (e.g., the exclusion list of an AV). These attacks include strategic web compromise, spear phishing, insider threats and social network malware. Moreover, targeted attacks often deliver zero-day malware that is made difficult to detect, e.g., due to distributed malicious payloads. The tBox system allows for protecting even a "bad" user who does not cooperate with security products. To accomplish this, tBox seamlessly transfers user activity with vulnerable applications into a specific virtual environment that provides three key factors: user-activity isolation, behavior self-monitoring and security inheritance for user-carried objects. To provide self-monitoring, our team developed a novel technology for deep dynamic analysis of system-wide behavior, which allows for run-time recognition of malicious functionalities, including obfuscated and distributed ones. We evaluate the tBox prototype with a corpus of real malware families. Results show high efficiency of tBox in detecting and blocking malware while imposing low system overhead.
Keywords: Internet; invasive software; behavior self-monitoring; cyber-security systems; distributed malicious payload; insider threat; security alerts; security inheritance; social network malware; spear phishing; strategic Web compromise; tBox system; targeted attacks; user-activity isolation; user-oriented attacks; zero-day malware; Browsers; Containers; Engines; Malware; Payloads; Software; Attacks on a User; Distributed malware; Targeted attacks; Threat isolation; Zero-day malware (ID#: 15-6202)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7035913&isnumber=7035893


Mirza, N.A.S.; Abbas, H.; Khan, F.A.; Al Muhtadi, J., “Anticipating Advanced Persistent Threat (APT) Countermeasures Using Collaborative Security Mechanisms,” Biometrics and Security Technologies (ISBAST), 2014 International Symposium on, vol., no., pp. 129, 132, 26-27 Aug. 2014. doi:10.1109/ISBAST.2014.7013108
Abstract: Information and communication security has gained significant importance due to its widespread use and the increased sophistication and complexity of its deployment. On the other hand, ever more sophisticated and stealthy techniques are being practiced by intruder groups to penetrate and exploit technology and evade attack detection. One such treacherous threat to all critical assets of an organization is the Advanced Persistent Threat (APT). Since an APT attack vector is not previously known, it can harm an organization's assets before a patch for the security flaw is released or available. This paper presents a preliminary research effort to counter APT or zero-day attacks at an early stage by detecting malware. An open-source version of Security Information and Event Management (SIEM) is used to detect a denial of service attack launched through the remote desktop service. The framework presented in this paper also demonstrates the efficiency of the technique, and it can be enhanced with more sophisticated mechanisms for APT attack detection.
Keywords: computational complexity; invasive software; public domain software; APT attack detection; APT attack vector; SIEM; advanced persistent threat countermeasures; collaborative security mechanisms; deployment complexity; information and communication security; malwares; open source version; organization assets; remote desktop service; security information and event management; stealthy techniques; zero day attacks; Intrusion detection; Kernel; Malware; Monitoring; Neural networks; Organizations; Advanced Persistent Threat; Security Information and Event Management; Zero Day Exploits (ID#: 15-6203)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013108&isnumber=7013076 


Pandey, S.K.; Mehtre, B.M., “Performance of Malware Detection Tools: A Comparison,” Advanced Communication Control and Computing Technologies (ICACCCT), 2014 International Conference on, vol., no., pp. 1811, 1817, 8-10 May 2014. doi:10.1109/ICACCCT.2014.7019422
Abstract: Malware is a big threat to the modern computer world. There are many tools and techniques for detecting malware, such as intrusion detection systems, firewalls and virus scans, but malicious executables like unseen zero-day malware remain a major challenge. In this paper, we present a performance comparison of existing tools and techniques for malware detection. In order to gauge the performance of malware detection tools, we created a virtual malware analysis lab using VirtualBox. We took 17 of the most commonly known malware detection tools and 29 malware samples as the data set for our comparison. We tested and analyzed the performance of the malware detection tools on the basis of several parameters, which are also shown graphically. It is found that the top three tools (based on certain parameters and the given data set) are Regshot, Process Monitor and Process Explorer.
Keywords: computer viruses; firewalls; Regshot; firewalls; intrusion detection system; malicious executables; malware detection tools; process explorer; process monitor; unseen zero day malwares; virtual box; virtual malware analysis lab; virus scans; Cryptography; Firewalls (computing); Grippers; Immune system; Pattern matching; Trojan horses; Cyber Defense; Intrusion Detection System; Malicious executables; Malware; Malware Analysis; Zero Day Malwares (ID#: 15-6204)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019422&isnumber=7019129 


Pandey, S.K.; Mehtre, B.M., “A Lifecycle Based Approach for Malware Analysis,” Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, vol., no., pp. 767, 771, 7-9 April 2014. doi:10.1109/CSNT.2014.161
Abstract: Most detection approaches, such as signature-based, anomaly-based and specification-based approaches, are not able to analyze and detect all types of malware. The signature-based approach has one major drawback: it cannot detect zero-day attacks. The fundamental limitation of the anomaly-based approach is its high false alarm rate, and specification-based detection often has difficulty specifying completely and accurately the entire set of valid behaviors a malware should exhibit. Modern malware developers try to avoid detection by using several techniques, such as polymorphism, metamorphism and various hiding techniques. In order to overcome these issues, we propose a new approach for malware analysis and detection that consists of the following twelve stages: Inbound Scan, Inbound Attack, Spontaneous Attack, Client-Side Exploit, Egg Download, Device Infection, Local Reconnaissance, Network Surveillance & Communications, Peer Coordination, Attack Preparation, and Malicious Outbound Propagation. All these stages integrate together as an interrelated process in our proposed approach. This approach addresses the limitations of all three detection approaches by monitoring the behavioral activity of malware at each and every stage of its life cycle and finally reporting on the maliciousness of the files or software.
Keywords: invasive software; anomaly based approach; attack preparation; client-side exploit; device infection; egg download; hiding techniques; inbound attack; inbound scan; lifecycle based approach; local reconnaissance; malicious outbound propagation; malware analysis; network surveillance; peer coordination; signature-based approach; specification-based detection; spontaneous attack; Computers; Educational institutions; Malware; Monitoring; Reconnaissance; Metamorphic; Polymorphic; Reconnaissance; Signature based; Zero day attack (ID#: 15-6205)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821503&isnumber=6821334 


Kumar, S.; Rama Krishna, C.; Aggarwal, N.; Sehgal, R.; Chamotra, S., “Malicious Data Classification Using Structural Information and Behavioral Specifications in Executables,” Engineering and Computational Sciences (RAECS), 2014 Recent Advances in, vol., no., pp. 1, 6, 6-8 March 2014. doi:10.1109/RAECS.2014.6799525
Abstract: With the rise of the underground Internet economy, automated malicious programs, popularly known as malware, have become a major threat to computers and information systems connected to the Internet. Properties such as self-healing, self-hiding and the ability to deceive security devices make this software hard to detect and mitigate. Therefore, the detection and mitigation of such malicious software is a major challenge for researchers and security personnel. Conventional systems for the detection and mitigation of such threats are mostly signature-based. A major drawback of such systems is their inability to detect malware samples for which no signature is available in their signature database; such malware is known as zero-day malware. Moreover, more and more malware writers use obfuscation technologies such as polymorphism, metamorphism, packing and encryption to avoid being detected by antivirus software. Therefore, traditional signature-based detection is neither effective nor efficient for the detection of zero-day malware. Hence, to improve the effectiveness and efficiency of malware detection, we use a classification method based on structural information and behavioral specifications. In this paper we use both static and dynamic analysis approaches. In static analysis, we extract the features of an executable file, followed by classification. In dynamic analysis, we take traces of executable files using NtTrace within a controlled environment. Experimental results indicate that our proposed algorithm is effective in extracting the malicious behavior of executables and can also be used to detect malware variants.
Keywords: Internet; invasive software; pattern classification; program diagnostics; NtTrace; antivirus; automated malicious programs; behavioral specifications; dynamic analysis; executable file; information systems; malicious behavior extraction; malicious data classification; malicious software detection; malicious software mitigation; malware detection system effectiveness improvement; malware detection system efficiency improvement; malwares; obfuscation technology; security devices; signature database; signature-based detection system; static analysis; structural information; threat detection; threat mitigation; underground Internet economy; zero-day malware detection; Algorithm design and analysis; Classification algorithms; Feature extraction; Internet; Malware; Software; Syntactics; behavioral specifications; classification algorithms; dynamic analysis; malware detection; static analysis; system calls (ID#: 15-6206)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799525&isnumber=6799496 


Sigholm, J.; Larsson, E., “Determining the Utility of Cyber Vulnerability Implantation: The Heartbleed Bug as a Cyber Operation,” Military Communications Conference (MILCOM), 2014 IEEE, vol., no., pp. 110, 116, 6-8 Oct. 2014. doi:10.1109/MILCOM.2014.25
Abstract: Flaws in computer software or hardware that are as yet unknown to the public, known as zero-day vulnerabilities, are an increasingly sought-after resource by actors conducting cyber operations. While the objective pursued is commonly defensive, as in protecting one's own systems and networks, cyber operations may also involve exploiting identified vulnerabilities for intelligence collection or to produce military effects. The weaponizing and stockpiling of such vulnerabilities by various actors, or even their intentional implantation into cyberspace infrastructure, is a trend that currently resembles an arms race. An open question is how to measure the utility that access to these exploitable vulnerabilities provides for military purposes, and how to contrast and compare this with the possible adverse societal consequences that withholding disclosure of them may result in, such as loss of privacy or impeded freedom of the press. This paper presents a case study focusing on the Heartbleed bug, used as a tool in an offensive cyber operation. We introduce a model to estimate the adoption rate of an implanted flaw in OpenSSL, derived by fitting collected real-world data. Our calculations show that reaching a global adoption of at least 50% would take approximately three years from the time of release, given that the vulnerability remains undiscovered, while surpassing 75% adoption would take an estimated four years. The paper concludes that while exploiting zero-day vulnerabilities may indeed be of significant military utility, such operations take time. They may also incur non-negligible risks of collateral damage and other societal costs.
Keywords: program debugging; security of data; OpenSSL; collateral damage; computer software; cyber vulnerability implantation; cyberspace infrastructure; global adoption; heartbleed bug; identified vulnerabilities; intelligence collection; intentional implantation; military effects; military utility; offensive cyber operation; societal costs; sought-after resource; zero-day vulnerabilities; Fitting; Heart rate variability; Military aircraft; Predictive models; Security; Servers; Software; computer network operations; cyber operations; exploitation; intelligence; vulnerabilities (ID#: 15-6207)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956746&isnumber=6956719 
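The paper's adoption-rate estimate comes from fitting a model to real-world scan data. As a hedged illustration of the general shape of such an estimate — the logistic form and every parameter below are assumptions chosen for demonstration, not the paper's fitted model — one can compute the time for an implanted flaw to reach a given deployment fraction:

```python
import math

def adoption_fraction(t, r, t_mid):
    """Logistic adoption curve: fraction of deployments running the
    flawed release at time t (years); t_mid is the midpoint, r the
    steepness of the rollout."""
    return 1.0 / (1.0 + math.exp(-r * (t - t_mid)))

def years_to_reach(p, r, t_mid):
    """Invert the logistic to find when adoption first reaches fraction p."""
    return t_mid - math.log(1.0 / p - 1.0) / r

# Hypothetical parameters: midpoint at 3 years, moderate rollout speed.
r, t_mid = 1.5, 3.0
t50 = years_to_reach(0.50, r, t_mid)   # time to 50% adoption
t75 = years_to_reach(0.75, r, t_mid)   # time to 75% adoption
```

With these made-up parameters, 50% adoption falls at the curve's midpoint and 75% takes noticeably longer — the same qualitative conclusion the paper draws about such operations taking years.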


Uppal, D.; Sinha, R.; Mehra, V.; Jain, V., “Malware Detection and Classification Based on Extraction of API Sequences,” Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on, vol., no., pp. 2337, 2342, 24-27 Sept. 2014. doi:10.1109/ICACCI.2014.6968547
Abstract: With the substantial growth of the IT sector in the 21st century, the need for system security has become inevitable. While developments in the IT sector have innumerable advantages, attacks on websites and computer systems are also increasing relatively. One such attack is the zero-day malware attack, which poses a great challenge for security testers. Malware pen testers can use bypass techniques like compression, code obfuscation and encryption to easily deceive present-day antivirus scanners. This paper elucidates a novel malware identification approach based on extracting unique aspects of API sequences. The proposed feature selection method, based on N-grams and odds ratio selection, captures unique and distinct API sequences from the extracted API calls, thereby increasing classification accuracy. Next, a model is built by classification algorithms using active machine learning techniques to categorize malicious and benign files.
Keywords: application program interfaces; invasive software; learning (artificial intelligence); pattern classification; API sequences extraction; IT sector; N grams; Websites; active machine learning techniques; antivirus scanners; benign files; bypass techniques; code obfuscation; computer systems; encryption techniques; malicious files; malware classification; malware detection; malware pen testers; odds ratio selection; security testers; zero day malware attack; Accuracy; Classification algorithms; Feature extraction; Machine learning algorithms; Malware; Software; API call gram; API sequence (ID#: 15-6208)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6968547&isnumber=6968191 
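The two building blocks named in this abstract — N-grams over API call sequences and odds-ratio feature selection — are standard techniques and can be sketched generically. The API names, smoothing constant, and sample traces below are hypothetical illustrations, not data or code from the paper:

```python
def api_ngrams(calls, n=3):
    """Sliding-window n-grams over an API call sequence."""
    return [tuple(calls[i:i + n]) for i in range(len(calls) - n + 1)]

def odds_ratio(feature, malware_docs, benign_docs, eps=0.5):
    """Odds ratio of a feature appearing in malware vs. benign samples.
    eps is additive smoothing so empty cells don't divide by zero."""
    M, B = len(malware_docs), len(benign_docs)
    m = sum(feature in d for d in malware_docs)
    b = sum(feature in d for d in benign_docs)
    return ((m + eps) / (M - m + eps)) / ((b + eps) / (B - b + eps))

# Hypothetical traces: a classic code-injection call pattern vs. benign I/O.
malware = [
    set(api_ngrams(["VirtualAlloc", "WriteProcessMemory",
                    "CreateRemoteThread", "CloseHandle"])),
    set(api_ngrams(["OpenProcess", "VirtualAlloc",
                    "WriteProcessMemory", "CreateRemoteThread"])),
]
benign = [
    set(api_ngrams(["CreateFile", "ReadFile", "CloseHandle", "ExitProcess"])),
    set(api_ngrams(["CreateFile", "WriteFile", "CloseHandle", "ExitProcess"])),
]

injector = ("VirtualAlloc", "WriteProcessMemory", "CreateRemoteThread")
score = odds_ratio(injector, malware, benign)
```

Features with a high odds ratio (appearing far more often in malware than in benign samples) are kept for the classifier; those near 1.0 carry little discriminative signal.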


Kotenko, I.; Doynikova, E., “Security Evaluation for Cyber Situational Awareness,” High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, vol., no., pp. 1197, 1204, 20-22 Aug. 2014. doi:10.1109/HPCC.2014.196
Abstract: The paper considers techniques for the measurement and calculation of security metrics, taking into account attack graphs and service dependencies. The techniques are based on several assessment levels (topological, attack graph, attacker, event and system levels) and important aspects (zero-day attacks, cost-efficiency characteristics). They allow understanding of the current security situation, including defining the vulnerable characteristics and weaknesses of the system under protection, dangerous events, current and possible cyber attack parameters, attacker intentions, integral cyber situation metrics and necessary countermeasures.
Keywords: firewalls; attack countermeasures; attack graph level; attack graphs; attacker intentions; attacker level; cost-efficiency characteristics; cyber attack parameters; cyber situational awareness; dangerous events; event level; integral cyber situation metrics; security evaluation; security metric calculation; security metric measurement; service dependencies; system level; system weaknesses; topological assessment level; vulnerable characteristics; zero-day attacks; Business; Conferences; High performance computing; Integrated circuits; Measurement; Probabilistic logic; Security; network security; risk assessment; security metrics (ID#: 15-6209)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056895&isnumber=7056577 


Markel, Z.; Bilzor, M., “Building a Machine Learning Classifier for Malware Detection,” Anti-malware Testing Research (WATeR), 2014 Second Workshop on, vol., no., pp. 1, 4, 23-23 Oct. 2014. doi:10.1109/WATeR.2014.7015757
Abstract: Current signature-based antivirus software is ineffective against many modern malicious software threats. Machine learning methods can be used to create more effective antimalware software, capable of detecting even zero-day attacks. Some studies have investigated the plausibility of applying machine learning to malware detection, primarily using features from n-grams of an executable file's byte code. We propose an approach that primarily learns from metadata, mostly contained in the headers of executable files, specifically the Windows Portable Executable 32-bit (PE32) file format. Our experiments indicate that executable file metadata is highly discriminative between malware and benign software. We also employ various machine learning methods, finding that Decision Tree classifiers outperform Logistic Regression and Naive Bayes in this setting. We analyze various features of the PE32 header and identify those most suitable for machine learning classifiers. Finally, we evaluate changes in classifier performance when the malware prevalence (fraction of malware versus benign software) is varied.
Keywords: decision trees; invasive software; learning (artificial intelligence); pattern classification; regression analysis; Windows Portable Executable file format; antimalware software; decision tree classifiers; logistic regression; machine learning classifier; malicious software threat; malware detection; malware prevalence; meta data; naive Bayes; signature-based antivirus software; zero-day attacks; Databases; Decision trees; Feature extraction; Logistics; Malware; Software; Training (ID#: 15-6210)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7015757&isnumber=7015747 


Ziyu Wang; Jiahai Yang; Fuliang Li, “An On-Line Anomaly Detection Method Based on a New Stationary Metric — Entropy-Ratio,” Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, vol., no., pp. 90, 97, 24-26 Sept. 2014. doi:10.1109/TrustCom.2014.16
Abstract: Anomaly detection has been a hot topic in recent years due to its capability of detecting zero-day attacks. In this paper, we propose a new metric called Entropy-Ratio and validate that it is stationary. Making use of this observation, we combine the Least Mean Square algorithm and the Forward Linear Predictor to propose a new on-line detector called the LMS-FLP detector. Using two synthetic data sets, CEGI-6IX and CERNET2, we validate that the LMS-FLP detector is very effective in detecting both anomalies involving many small IP flows and anomalies involving a few large IP flows.
Keywords: IP networks; computer network security; entropy; least mean squares methods; CEGI-6IX synthetic data set; CERNET2 synthetic data set; forward linear predictor; large-IP flows; least mean square algorithm; online LMS-FLP detector; online anomaly detection method; small-IP flows; stationary entropy-ratio metric; zero-day attack detection capability; Detectors; Educational institutions; Entropy; Equations; IP networks; Mathematical model; Vectors; Entropy-Ratio; Forward Linear Predictor; Least Mean Square; anomaly detection (ID#: 15-6211)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011238&isnumber=7011202 
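The detector's core idea — predict the next value of a stationary metric with an LMS-adapted forward linear predictor and alarm on large prediction error — can be sketched generically. This is a simplified stand-in for the paper's LMS-FLP detector; the filter order, step size, threshold, and the toy series below are all assumptions for illustration:

```python
def lms_predictor(series, order=3, mu=0.05, threshold=1.0):
    """Forward linear predictor adapted on-line with the LMS rule.
    Returns the indices whose prediction error exceeds `threshold`."""
    w = [0.0] * order          # predictor weights, adapted per sample
    alarms = []
    for t in range(order, len(series)):
        window = series[t - order:t]
        pred = sum(wi * xi for wi, xi in zip(w, window))
        err = series[t] - pred
        if abs(err) > threshold:
            alarms.append(t)
        # LMS update: w <- w + mu * err * x
        w = [wi + mu * err * xi for wi, xi in zip(w, window)]
    return alarms

# A stationary metric series with one injected spike (a crude "attack"):
series = [1.0] * 60
series[45] = 4.0
alarms = lms_predictor(series)
```

Because the metric is stationary, the predictor converges and its residual stays small during normal operation; the injected spike produces a large residual exactly where the anomaly occurs.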


Rivers, A.T.; Vouk, M.A.; Williams, L.A., “On Coverage-Based Attack Profiles,” Software Security and Reliability-Companion (SERE-C), 2014 IEEE Eighth International Conference on,  vol., no., pp. 5, 6, June 30 2014–July 2 2014.  doi:10.1109/SERE-C.2014.15
Abstract: Automated cyber attacks tend to be schedule- and resource-limited. The primary progress metric is often “coverage” of pre-determined “known” vulnerabilities that may not have been patched, along with possible zero-day exploits (if such exist). We present and discuss a hypergeometric process model that describes such attack patterns. We used web request signatures from the logs of a production web server to assess the applicability of the model.
Keywords: Internet; security of data; Web request signatures; attack patterns; coverage-based attack profiles; cyber attacks; hypergeometric process model; production Web server; zero-day exploits; Computational modeling; Equations; IP networks; Mathematical model; Software; Software reliability; Testing; attack; coverage; models; profile; security (ID#: 15-6212)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901633&isnumber=6901618 
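The hypergeometric setting behind such a coverage model — an attacker sampling known vulnerabilities without replacement, hoping to hit the unpatched ones — has a simple closed form. The functions and numbers below are a generic illustration of that setting, not the paper's model or data:

```python
from math import comb

def p_hit(N, V, k):
    """Probability that probing k of N known vulnerabilities without
    replacement covers at least one of the V that remain unpatched
    (hypergeometric 'at least one success')."""
    if k > N - V:              # too many probes to avoid every unpatched one
        return 1.0
    return 1.0 - comb(N - V, k) / comb(N, k)

def expected_coverage(N, V, k):
    """Expected number of unpatched vulnerabilities hit after k probes."""
    return k * V / N

# Illustrative numbers: 100 known vulnerabilities, 5 unpatched, 30 probes.
p = p_hit(N=100, V=5, k=30)
```

Even a modest probe budget covers an unpatched flaw with high probability here (p is roughly 0.84), which is why coverage, rather than sophistication, is often the attacker's effective progress metric.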


Trikalinou, A.; Bourbakis, N., “AMYNA: A Security Generator Framework,” Information, Intelligence, Systems and Applications, IISA 2014, The 5th International Conference on, vol., no., pp. 404, 409, 7-9 July 2014. doi:10.1109/IISA.2014.6878840
Abstract: Security has always been an important concern in computer systems. In this paper we focus on zero-day, memory-based attacks, among the top three most dangerous attacks according to the MITRE ranking, and propose AMYNA, a novel security generator framework/model that can automatically create personalized optimum security solutions. Motivated by the most prevalent security methods, which target a limited set of attacks but do so efficiently and effectively, we present the idea and architecture of AMYNA, which can automatically combine security methods represented in a high-level model in order to produce a security solution with elevated security coverage.
Keywords: security of data; AMYNA; MITRE ranking; computer systems; elevated security coverage; high-level model; memory-based attacks; personalized optimum security solutions; security generator framework; zero-day attacks; Computational modeling; Computer architecture; Libraries; Load modeling; Numerical models; Real-time systems; Security; buffer overflow; control-flow hijacking; dynamic information flow tainting (DIFT); host security (ID#: 15-6213)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6878840&isnumber=6878713 
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Zero-Day Exploits, Part 2

 

 
SoS Logo

Zero–Day Exploits

Part 2


Zero-day exploits are a major research challenge in cybersecurity. Recent work on this subject has been conducted globally. The works cited here were presented in 2014 and early 2015.



Adebayo, O.S.; AbdulAziz, N., “An Intelligence Based Model for the Prevention of Advanced Cyber-Attacks,” Information and Communication Technology for The Muslim World (ICT4M), 2014 The 5th International Conference on, vol., no., pp. 1, 5, 17-18 Nov. 2014. doi:10.1109/ICT4M.2014.7020648
Abstract: The trend and motive of cyber-attacks have gone beyond traditional damage to information stealing for political and economic gain. With the recent APT (Advanced Persistent Threat), which comprises zero-day malware, polymorphic malware, and blended threats, the task of protecting vital infrastructure is becoming increasingly difficult. This paper proposes an intelligence-based technique that combines traditional signature-based detection with next-generation detection. The proposed model consists of a virtual execution environment, a detection module, and a prevention module. The virtual execution environment is designed to analyze and execute a suspected file containing malware, while the other modules inspect, detect, and prevent malware execution based on the intelligence gathered in the central management system (CMS). The model builds on next-generation malware detection, creating threat intelligence for the prevention of future occurrences, and takes into consideration the lapses and benefits of existing detectors.
Keywords: digital signatures; invasive software; APT; advance persistent threat; advanced cyber-attack prevention; blended threat; central management system; economic gain; future occurrence prevention; information stealing; intelligence based model; malware execution detection; malware execution inspection; malware execution prevention; next generation malware detection; political gain; polymorphic malware; signature based detection; suspected file analysis; suspected file execution; virtual execution environment; zero-day malware; Decision support systems; APT; Advanced Persistent Threat; Cyber Attacks; Next Generation Threat; Next-Generation Security (ID#: 15-6214)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7020648&isnumber=7020577 


Min Zheng; Mingshen Sun; Lui, J.C.S., “DroidTrace: A Ptrace Based Android Dynamic Analysis System with Forward Execution Capability,” Wireless Communications and Mobile Computing Conference (IWCMC), 2014 International, vol., no., pp. 128, 133, 4-8 Aug. 2014. doi:10.1109/IWCMC.2014.6906344
Abstract: Android, being an open source smartphone operating system, enjoys a large community of developers who create new mobile services and applications. However, it also attracts malware writers who exploit Android devices in order to distribute malicious apps in the wild. In fact, Android malware is becoming more sophisticated, using advanced “dynamic loading” techniques like Java reflection or native code execution to bypass security detection. To detect dynamic loading, one has to use dynamic analysis. Currently, only a handful of Android dynamic analysis tools are available, and they all have shortcomings in detecting dynamic loading. The aim of this paper is to design and implement a dynamic analysis system which allows analysts to perform systematic analysis of dynamic payloads with malicious behaviors. We propose “DroidTrace”, a ptrace based dynamic analysis system with forward execution capability. Our system uses ptrace to monitor selected system calls of the target process running the dynamic payloads, and classifies the payloads' behaviors through the system call sequence, e.g., behaviors such as file access, network connection, inter-process communication and even privilege escalation. Also, DroidTrace performs “physical modification” to trigger different dynamic loading behaviors within an app. Using DroidTrace, we carry out a large scale analysis of 36,170 dynamic payloads in 50,000 apps and 294 malware samples in 10 families (four of them zero-day) with various dynamic loading behaviors.
Keywords: Android (operating system); Java; invasive software; mobile computing; program diagnostics; public domain software; Android malware; DroidTrace; Java reflection; dynamic loading detection; dynamic payload analysis; file access; forward execution capability; interprocess communication; malicious apps; malicious behaviors; mobile applications; mobile services; native code execution; network connection; open source smartphone operating system; physical modification; privilege escalation; ptrace based Android dynamic analysis system; security detection; system call monitoring; Androids; Humanoid robots; Java; Loading; Malware; Monitoring; Payloads (ID#: 15-6215)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906344&isnumber=6906315 
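The system-call-to-behavior classification that DroidTrace performs can be illustrated with a small sketch. Everything here is hypothetical: the mapping and trace are invented for illustration, and the real system obtains the syscall sequence by attaching to the target process with ptrace.

```python
# Illustrative sketch (not DroidTrace itself): classify a traced
# system-call sequence into coarse behavior categories, assuming a
# hypothetical mapping from syscalls to behavior labels.

BEHAVIOR_MAP = {
    "open": "file access",
    "read": "file access",
    "write": "file access",
    "connect": "network connection",
    "sendto": "network connection",
    "ioctl": "inter-process communication",
    "setuid": "privilege escalation",
}

def classify_payload(syscalls):
    """Return the set of behavior labels triggered by a syscall trace."""
    return {BEHAVIOR_MAP[s] for s in syscalls if s in BEHAVIOR_MAP}

# Hypothetical trace of a dynamic payload.
trace = ["open", "read", "connect", "sendto", "setuid"]
behaviors = classify_payload(trace)
```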


Shin-Ying Huang; Yennun Huang; Suri, N., “Event Pattern Discovery on IDS Traces of Cloud Services,” Big Data and Cloud Computing (BdCloud), 2014 IEEE Fourth International Conference on, vol., no., pp. 25, 32, 3-5 Dec. 2014. doi:10.1109/BDCloud.2014.92
Abstract: The value of Intrusion Detection System (IDS) traces lies in being able to meaningfully parse the complex data patterns appearing therein, based on the pre-defined intrusion ‘detection’ rule sets. As IDS traces monitor large groups of servers and large amounts of network data spanning a variety of patterns, efficient analytical approaches are needed to address this big heterogeneous data analysis problem. We believe that using unsupervised learning methods can help classify data in ways that allow analysts to uncover meaningful insights and extract the value of the collected data more precisely and efficiently. This study demonstrates how the technique of growing hierarchical self-organizing maps (GHSOM) can be utilized to facilitate efficient event data analysis. For the collected IDS traces, GHSOM is used to cluster data and reveal the geometric distances between clusters in a topological space such that the attack signatures for each cluster can be easily identified. Experimental results from real-world IDS traces show that our proposed approach can efficiently discover several critical attack patterns and significantly reduce the size of the IDS trace log that needs to be further analyzed. The proposed approach can help internet security administrators/analysts conduct network forensics analysis, discover suspicious attack sources, and set up recovery processes to prevent previously unknown security threats such as zero-day attacks.
Keywords: cloud computing; data analysis; digital signatures; pattern classification; pattern clustering; self-organising feature maps; unsupervised learning; GHSOM; IDS traces; Internet security administrators; Internet security analysts; analytical approach; attack signatures; cloud services; cluster geometric distances; complex data pattern parsing; critical attack patterns; data classification; data clustering; event data analysis; event pattern discovery; growing hierarchical self-organizing maps; heterogeneous data analysis problem; intrusion detection rule sets; intrusion detection system; network forensics analysis; recovery process; suspicious attack source discovery; topological space; unsupervised learning methods; Correlation; Data mining; IP networks; Intrusion detection; Ports (Computers);Telecommunication traffic; forensic analysis; growing hierarchical self-organizing map; internet security (ID#: 15-6216)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7034762&isnumber=7034739 
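The clustering idea can be illustrated with a deliberately tiny, flat self-organizing map (GHSOM itself grows a hierarchy of maps; the data and sizes below are made up):

```python
# Toy sketch of the self-organizing-map idea underlying GHSOM (flat, not
# hierarchical): map alert feature vectors onto a few prototype units so
# that similar alerts land on the same unit.

def train_som(data, n_units=4, epochs=50, lr=0.5):
    units = [list(p) for p in data[:n_units]]   # seed units from the data
    for _ in range(epochs):
        for x in data:
            # best-matching unit = nearest prototype
            bmu = min(range(len(units)),
                      key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
            # pull the winner toward the sample
            units[bmu] = [u + lr * (v - u) for u, v in zip(units[bmu], x)]
        lr *= 0.95
    return units

def assign(units, x):
    return min(range(len(units)),
               key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))

# Two synthetic "attack pattern" clusters in a 2-D feature space.
data = [(0.10, 0.20), (0.15, 0.10), (0.90, 0.95), (0.85, 0.90)]
units = train_som(data)
```

New alerts that fall on the same unit share an attack pattern, which is what lets an analyst inspect one representative per cluster instead of the whole trace log.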


Almaatouq, A.; Alabdulkareem, A.; Nouh, M.; Alsaleh, M.; Alarifi, A.; Sanchez, A.; Alfaris, A.; Williams, J., “A Malicious Activity Detection System Utilizing Predictive Modeling in Complex Environments,” Consumer Communications and Networking Conference (CCNC), 2014 IEEE 11th, vol., no., pp. 371, 379, 10-13 Jan. 2014. doi:10.1109/CCNC.2014.6866597
Abstract: Complex enterprise environments consist of globally distributed infrastructure with a variety of applications and a large number of activities occurring on a daily basis. This increases the attack surface and narrows the view of ongoing intrinsic dynamics. Thus, many malicious activities can persist under the radar of conventional detection mechanisms long enough to achieve critical mass for full-fledged cyber attacks. Many typical detection approaches are signature-based and thus are expected to fail in the face of zero-day attacks. In this paper, we present the building blocks for developing a Malicious Activity Detection System (MADS). MADS employs predictive modeling techniques for the detection of malicious activities. Unlike traditional detection mechanisms, MADS includes the detection of both network-based intrusions and malicious user behaviors. The system utilizes a simulator to produce a holistic replication of activities, both benign and malicious, flowing within a given complex IT environment. We validate the performance and accuracy of the simulator through a case study of a Fortune 500 company, comparing the results of the simulated infrastructure against the physical one in terms of resource consumption (i.e., CPU utilization), the number of concurrent users, and response times. In addition, we evaluate the detection algorithms with varying hyper-parameters and compare the results.
Keywords: computer network security; complex environments; malicious activity detection system; predictive modeling; resource consumption; Analytical models; Data models; Data visualization; Databases; Engines; Predictive models; Security (ID#: 15-6217)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866597&isnumber=6866537 


Andreas Kuehn, Milton Mueller; “Shifts in the Cybersecurity Paradigm: Zero-Day Exploits, Discourse, and Emerging Institutions,” NSPW ’14, Proceedings of the 2014 Workshop on New Security Paradigms Workshop, September 2014, Pages 63-68. doi:10.1145/2683467.2683473
Abstract: This ongoing dissertation research examines the institutionalization of new cybersecurity norms and practices that are emerging from current controversies around markets for software vulnerabilities and exploits. A market has developed for the production and distribution of software exploits, with buyers sometimes paying over USD 100,000 for exploits and software vendors offering bounties for the disclosure of underlying vulnerabilities. Labeled a ‘digital arms race’ by some, it is generating a transnational debate about control and regulation of cyber capabilities, the role of secrecy and disclosure in cybersecurity, and the ethics of exploit production and use. The research takes a qualitative approach to theorize the emerging cybersecurity institutions. It shall provide insights into the technical, economic and institutional shifts in cybersecurity norms and practices. Analyzing the bug bounty programs run by Microsoft and Facebook as examples, the paper briefly discusses the role of institutions in facilitating software vulnerability markets. The paper summarizes the work presented at NSPW 2014; its findings are preliminary.
Keywords: cybersecurity, discourse, institutions, internet governance, software exploit, software vulnerability (ID#: 15-6218)
URL: http://doi.acm.org/10.1145/2683467.2683473 


Yasuyuki Tanaka, Atsuhiro Goto; “n-ROPdetector: Proposal of a Method to Detect the ROP Attack Code on the Network,” SafeConfig ’14, Proceedings of the 2014 Workshop on Cyber Security Analytics, Intelligence and Automation, November 2014, Pages 33-36. doi:10.1145/2665936.2665937
Abstract: Targeted attacks exploiting a zero-day vulnerability are serious threats for many organizations. One reason is that generally available attack tools are very powerful and easy to use for attackers. In this paper, we propose n-ROPdetector, which detects ROP (Return-Oriented Programming) attack code on the network side. ROP is a core technique used in zero-day attacks. The n-ROPdetector is a notable method for detecting ROP code efficiently on the network side rather than on the host machine side. To evaluate the n-ROPdetector and show its effectiveness, we used attack code samples from the attack tool Metasploit; the n-ROPdetector detected 84% of the ROP code samples in Metasploit.
Keywords: nids, return-oriented programming, zero-day attack (ID#: 15-6219)
URL:  http://doi.acm.org/10.1145/2665936.2665937  
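One common network-side heuristic for spotting ROP payloads (not necessarily the n-ROPdetector's exact method) is to look for runs of consecutive words that all point into a known non-ASLR module, as a gadget chain would. A minimal sketch, with a hypothetical module address range and word-aligned scanning for brevity:

```python
# Hedged sketch of a network-side ROP heuristic: flag a payload when it
# contains a run of consecutive 32-bit little-endian words that all fall
# inside a known module's address range, as a ROP gadget chain would.
import struct

MODULE_RANGE = (0x10001000, 0x10090000)  # hypothetical loaded-module range

def looks_like_rop(payload, min_chain=4):
    run = 0
    for off in range(0, len(payload) - 3, 4):  # word-aligned scan for brevity
        (word,) = struct.unpack_from("<I", payload, off)
        if MODULE_RANGE[0] <= word < MODULE_RANGE[1]:
            run += 1
            if run >= min_chain:
                return True
        else:
            run = 0
    return False

# A fake gadget chain (five in-module addresses) versus benign traffic.
chain = struct.pack("<5I", 0x10001234, 0x10002345, 0x10003456, 0x10004567, 0x10005678)
benign = b"GET /index.html HTTP/1.1\r\n\r\n" + b"\x00" * 20
```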


Yier Jin; “Embedded System Security in Smart Consumer Electronics,” TrustED ’14, Proceedings of the 4th International Workshop on Trustworthy Embedded Devices, November 2014, Pages 59-59. doi:10.1145/2666141.2673888
Abstract: Advances in manufacturing and emerging technologies in miniaturization and reduction of power consumption have proven to be a pivotal point in mankind’s progress. The once-advanced machines that occupied entire buildings and needed hundreds of engineers to operate are now shadowed by the smart cellular phones we carry in our pockets. With the advent of the Internet and the proliferation of wireless technologies, these devices are now extremely interconnected. Enter the nascent era of the Internet of Things (IoT) and wearable devices, where small embedded devices loaded with sensors collect information from their surroundings, process it, and relay it to remote locations for further analysis. Though seemingly harmless, these nascent technologies raise security and privacy concerns. In this talk, we pose the question of the possibility and effects of compromising one such device. Concentrating on the design flow of IoT devices, we discuss some common design practices and their implications for security and privacy. We present the Google Nest Learning Thermostat as an example of how these practices affect the resulting device and the potential consequences for user security and privacy. We then introduce design-flow security enhancement methods through which security is built into the device, a major difference from traditional practices that treat security as an add-on property implemented at the post-fabrication stage.
Keywords: hardware attack, hardware security, internet of things, secure boot, trusted design, zero-day attack (ID#: 15-6220)
URL:  http://doi.acm.org/10.1145/2666141.2673888 


Robert Gawlik, Thorsten Holz; “Towards Automated Integrity Protection of C++ Virtual Function Tables in Binary Programs,” ACSAC ’14, Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 396-405. doi:10.1145/2664243.2664249
Abstract: Web browsers are among the most used, complex, and popular software systems nowadays. They are prone to dangling pointers that result in use-after-free vulnerabilities, and exploiting these is the de facto way to attack them. From a technical point of view, an attacker uses a technique called vtable hijacking to exploit such bugs. More specifically, she crafts a bogus virtual table and lets a freed C++ object point to it in order to gain control over the program at virtual function call sites.  In this paper, we present a novel approach towards mitigating and detecting such attacks against C++ binary code. We propose a static binary analysis technique to extract virtual function call site information in an automated way. Leveraging this information, we instrument the given binary executable and add runtime policy enforcements to thwart the illegal usage of these call sites. We implemented the proposed techniques in a prototype called T-VIP and successfully hardened three versions of Microsoft's Internet Explorer and Mozilla Firefox. An evaluation with several zero-day exploits demonstrates that our method prevents all of them. Performance benchmarks at both the micro and macro level indicate that the overhead is reasonable at about 2.2%, only slightly higher than recent compiler-based approaches that address this problem.
Keywords: (not provided) (ID#: 15-6221)
URL:  http://doi.acm.org/10.1145/2664243.2664249 
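The runtime policy that such instrumentation enforces can be sketched conceptually (in Python rather than injected binary code; the vtable addresses and dispatch function are hypothetical): before any virtual call, the vtable pointer must belong to the set of legitimate vtables identified by the static analysis.

```python
# Conceptual sketch of a T-VIP-style runtime check (illustrative only):
# an indirect call through a "vtable pointer" is allowed only if that
# pointer is in the set of vtables found at analysis time.

KNOWN_VTABLES = {0x00401000, 0x00402000}  # hypothetical legitimate vtables

def checked_virtual_call(vtable_ptr, slot, dispatch):
    if vtable_ptr not in KNOWN_VTABLES:
        raise RuntimeError("vtable hijack attempt blocked")
    return dispatch(vtable_ptr, slot)

def dispatch(vtable_ptr, slot):          # stand-in for the real call site
    return ("call", hex(vtable_ptr), slot)

ok = checked_virtual_call(0x00401000, 2, dispatch)   # legitimate call
try:
    checked_virtual_call(0xDEADBEEF, 0, dispatch)    # bogus attacker vtable
    blocked = False
except RuntimeError:
    blocked = True
```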


Sean Whalen, Nathaniel Boggs, Salvatore J. Stolfo; “Model Aggregation for Distributed Content Anomaly Detection,” AISec ’14, Proceedings of the 2014 Workshop on Artificial Intelligent and Security Workshop, November 2014, Pages 61-71. doi:10.1145/2666652.2666660
Abstract: Cloud computing offers a scalable, low-cost, and resilient platform for critical applications. Securing these applications against attacks targeting unknown vulnerabilities is an unsolved challenge. Network anomaly detection addresses such zero-day attacks by modeling attributes of attack-free application traffic and raising alerts when new traffic deviates from this model. Content anomaly detection (CAD) is a variant of this approach that models the payloads of such traffic instead of higher level attributes. Zero-day attacks then appear as outliers to properly trained CAD sensors. In the past, CAD was unsuited to cloud environments due to the relative overhead of content inspection and the dynamic routing of content paths to geographically diverse sites. We challenge this notion and introduce new methods for efficiently aggregating content models to enable scalable CAD in dynamically-pathed environments such as the cloud. These methods eliminate the need to exchange raw content, drastically reduce network and CPU overhead, and offer varying levels of content privacy. We perform a comparative analysis of our methods using Random Forest, Logistic Regression, and Bloom Filter-based classifiers for operation in the cloud or other distributed settings such as wireless sensor networks. We find that content model aggregation offers statistically significant improvements over non-aggregate models with minimal overhead, and that distributed and non-distributed CAD have statistically indistinguishable performance. Thus, these methods enable the practical deployment of accurate CAD sensors in a distributed attack detection infrastructure.
Keywords: anomaly detection, machine learning, model aggregation (ID#: 15-6222)
URL:  http://doi.acm.org/10.1145/2666652.2666660 
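The Bloom-filter flavor of content modeling and its aggregation step can be sketched as follows (a toy model with made-up sizes, not the paper's classifiers); aggregation is just a bitwise OR of the filters, so no raw content needs to be exchanged:

```python
# Minimal sketch of a Bloom-filter content model: each site records the
# n-grams of its normal traffic as set bits; models from different sites
# are aggregated by OR-ing their bit arrays.
import hashlib

M = 1 << 12  # filter size in bits (toy value)

def _hashes(gram, k=3):
    for i in range(k):
        h = hashlib.sha256(bytes([i]) + gram).digest()
        yield int.from_bytes(h[:4], "big") % M

def train(payloads, n=3):
    bits = 0
    for p in payloads:
        for j in range(len(p) - n + 1):
            for h in _hashes(p[j:j + n]):
                bits |= 1 << h
    return bits

def anomaly_score(model, payload, n=3):
    """Fraction of the payload's n-grams never seen in training."""
    grams = [payload[j:j + n] for j in range(len(payload) - n + 1)]
    unseen = sum(1 for g in grams
                 if not all((model >> h) & 1 for h in _hashes(g)))
    return unseen / max(len(grams), 1)

site_a = train([b"GET /index.html", b"GET /about.html"])
site_b = train([b"POST /login"])
aggregate = site_a | site_b   # model aggregation: bitwise OR, no raw content
```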


Sun-il Kim, William Edmonds, Nnamdi Nwanze; “On GPU Accelerated Tuning for a Payload Anomaly-Based Network Intrusion Detection Scheme,” CISR ’14, Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 1-4. doi:10.1145/2602087.2602093
Abstract: In network intrusion detection, anomaly-based solutions complement signature-based solutions in mitigating zero-day attacks, but require extensive training and learning to effectively model what the normal pattern for a given system (or service) looks like. Though training typically happens off-line, where processing speed is less critical than in the detection stage (which occurs on-line in real time), continuous analysis and retuning may be attractive depending on the deployment scenario. The different types of computation required to automatically retune (or retrain) the system may result in resource competition with other important system tasks. Thus, a mechanism by which the retuning can take place without affecting the actual system workload is important. In this paper, we describe a layered, simple statistics-based anomaly detection algorithm with a parallel implementation of the training algorithm. We focus on the use of graphics processing units (GPUs) to allow cost-efficient implementation with minimal impact on CPU load so as to avoid affecting day-to-day server workloads. Our results show potential for significant performance improvements.
Keywords: intrusion detection, network security, parallel processing (ID#: 15-6223)
URL:  http://doi.acm.org/10.1145/2602087.2602093 
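A minimal statistics-based payload model of the kind such detectors build on can be sketched as follows (an illustrative byte-frequency profile in plain Python; the paper's GPU-parallel training is not shown):

```python
# Illustrative byte-frequency payload model: learn mean per-byte
# frequencies from normal traffic, then score new payloads by their
# distance from that profile. Training data is made up.

def byte_freq(payload):
    counts = [0] * 256
    for b in payload:
        counts[b] += 1
    n = max(len(payload), 1)
    return [c / n for c in counts]

def train(payloads):
    profiles = [byte_freq(p) for p in payloads]
    # model = mean frequency of each byte value over the training set
    return [sum(col) / len(profiles) for col in zip(*profiles)]

def score(model, payload):
    """L1 distance between the payload's byte histogram and the model."""
    f = byte_freq(payload)
    return sum(abs(a - b) for a, b in zip(model, f))

normal = [b"GET /index.html HTTP/1.1", b"GET /style.css HTTP/1.1"]
model = train(normal)
```

A payload of NOP bytes scores far from the HTTP profile, while another request-like payload scores close; a threshold on this score is the detection step.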


Roopak Venkatakrishnan, Mladen A. Vouk; “Diversity-Based Detection of Security Anomalies,” HotSoS ’14, Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 29. doi:10.1145/2600176.2600205
Abstract: Detecting and preventing attacks before they compromise a system can be done using acceptance testing, redundancy-based mechanisms, and external consistency checking such as external monitoring and watchdog processes. Diversity-based adjudication is a step towards an oracle that uses the knowable behavior of a healthy system. That approach, under the best circumstances, is able to detect even zero-day attacks. In this approach we use functionally equivalent but in some way diverse components, and we compare their output vectors and reactions for a given input vector. This paper discusses the practical relevance of this approach in the context of recent web-service attacks.
Keywords: attack detection, diversity, redundancy in security, web services (ID#: 15-6224)
URL: http://doi.acm.org/10.1145/2600176.2600205 
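The adjudication idea can be sketched with two functionally equivalent but diverse components (hypothetical sort implementations stand in for diverse replicas):

```python
# Sketch of diversity-based adjudication: run functionally equivalent
# but diverse implementations on the same input and flag a potential
# compromise when their outputs disagree.

def sort_a(xs):            # diverse implementation 1
    return sorted(xs)

def sort_b(xs):            # diverse implementation 2 (insertion sort)
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def compromised_sort(xs):  # simulated exploited replica
    return xs[:1] + sorted(xs[1:])

def adjudicate(replicas, x):
    """True when all replicas agree on the output for input x."""
    outputs = [r(x) for r in replicas]
    return all(o == outputs[0] for o in outputs)

healthy = adjudicate([sort_a, sort_b], [3, 1, 2])
alarm = not adjudicate([sort_a, compromised_sort], [3, 1, 2])
```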


Hao Zhang, Danfeng Daphne Yao, Naren Ramakrishnan; “Detection of Stealthy Malware Activities with Traffic Causality and Scalable Triggering Relation Discovery,” ASIA CCS ’14, Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 39-50. doi:10.1145/2590296.2590309
Abstract: Studies show that a significant portion of networked computers are infected with stealthy malware. Infection allows remote attackers to control, utilize, or spy on victim machines. Conventional signature-scan or counting-based techniques are limited, as they are unable to stop new zero-day exploits. We describe a traffic analysis method that can effectively detect malware activities on a host. Our new approach efficiently discovers the underlying triggering relations of a massive amount of network events. We use these triggering relations to reason about the occurrences of network events and to pinpoint stealthy malware activities. We define a new problem of triggering relation discovery of network events. Our solution is based on domain-knowledge-guided advanced learning algorithms. Our extensive experimental evaluation, involving more than 6 GB of traffic of various types, shows promising results on the accuracy of our triggering relation discovery.
Keywords: anomaly detection, network security, stealthy malware (ID#: 15-6225)
URL:  http://doi.acm.org/10.1145/2590296.2590309 
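A rule-based stand-in for triggering-relation discovery (the paper learns these relations; the events and the DNS-triggers-TCP rule below are invented for illustration):

```python
# Sketch of the triggering-relation idea: a DNS lookup "triggers" a
# later TCP connection from the same host to the resolved address;
# connections with no such parent event are candidates for stealthy,
# malware-initiated traffic. Events: (time, host, kind, detail).

events = [
    (1.0, "hostA", "dns", "example.com->93.184.216.34"),
    (1.2, "hostA", "tcp", "93.184.216.34"),
    (5.0, "hostA", "tcp", "203.0.113.7"),   # no preceding DNS lookup
]

def find_untriggered(events, window=2.0):
    resolved = []   # (time, host, ip) taken from DNS answers
    orphans = []
    for t, host, kind, detail in events:
        if kind == "dns":
            resolved.append((t, host, detail.split("->")[1]))
        elif kind == "tcp":
            triggered = any(host == h and ip == detail and 0 <= t - rt <= window
                            for rt, h, ip in resolved)
            if not triggered:
                orphans.append(detail)
    return orphans

orphans = find_untriggered(events)
```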


Yu Feng, Saswat Anand, Isil Dillig, Alex Aiken; “Apposcopy: Semantics-Based Detection of Android Malware Through Static Analysis,”  FSE 2014, Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, November 2014, Pages 576-587. doi:10.1145/2635868.2635869
Abstract: We present Apposcopy, a new semantics-based approach for identifying a prevalent class of Android malware that steals private user information. Apposcopy incorporates (i) a high-level language for specifying signatures that describe semantic characteristics of malware families and (ii) a static analysis for deciding if a given application matches a malware signature. The signature matching algorithm of Apposcopy uses a combination of static taint analysis and a new form of program representation called Inter-Component Call Graph to efficiently detect Android applications that have certain control- and data-flow properties. We have evaluated Apposcopy on a corpus of real-world Android applications and show that it can effectively and reliably pinpoint malicious applications that belong to certain malware families.
Keywords: Android, Inter-component Call Graph, Taint Analysis (ID#: 15-6226)
URL:  http://doi.acm.org/10.1145/2635868.2635869 


Steven Noel, Sushil Jajodia; “Metrics Suite for Network Attack Graph Analytics,” CISR ’14, Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 5-8. doi:10.1145/2602087.2602117
Abstract: We describe a suite of metrics for measuring network-wide cyber security risk based on a model of multi-step attack vulnerability (attack graphs). Our metrics are grouped into families, with family-level metrics combined into an overall metric for network vulnerability risk. The Victimization family measures risk in terms of key attributes of risk across all known network vulnerabilities. The Size family is an indication of the relative size of the attack graph. The Containment family measures risk in terms of minimizing vulnerability exposure across protection boundaries. The Topology family measures risk through graph theoretic properties (connectivity, cycles, and depth) of the attack graph. We display these metrics (at the individual, family, and overall levels) in interactive visualizations, showing multiple metrics trends over time.
Keywords: attack graphs, security metrics, topological vulnerability analysis (ID#: 15-6227)
URL:   http://doi.acm.org/10.1145/2602087.2602117 
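Two Topology-family style metrics, attack depth and reachability, can be computed on a toy attack graph (the graph and the simplified metric definitions are illustrative, not the paper's exact formulas):

```python
# Illustrative computation of topology-style metrics on a tiny attack
# graph. Edges mean "compromising u enables an exploit against v".

graph = {"web": ["app"], "app": ["db", "files"], "db": [], "files": []}

def attack_depth(graph, start):
    """Length of the longest exploit chain from an entry point (DAG assumed)."""
    def depth(v):
        return 1 + max((depth(w) for w in graph[v]), default=0)
    return depth(start)

def reachable(graph, start):
    """Hosts an attacker can reach from the entry point."""
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(graph[v])
    return seen

depth = attack_depth(graph, "web")          # longest chain: web -> app -> db
exposure = len(reachable(graph, "web"))     # number of reachable hosts
```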


Tsung Hsuan Ho, Daniel Dean, Xiaohui Gu, William Enck; “PREC: Practical Root Exploit Containment for Android Devices,” CODASPY ’14, Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 187-198. doi:10.1145/2557547.2557563
Abstract: Application markets such as the Google Play Store and the Apple App Store have become the de facto method of distributing software to mobile devices. While official markets dedicate significant resources to detecting malware, state-of-the-art malware detection can be easily circumvented using logic bombs or checks for an emulated environment. We present a Practical Root Exploit Containment (PREC) framework that protects users from such conditional malicious behavior. PREC can dynamically identify system calls from high-risk components (e.g., third-party native libraries) and execute those system calls within isolated threads. Hence, PREC can detect and stop root exploits with high accuracy while imposing low interference on benign applications. We have implemented PREC and evaluated our methodology on the 140 most popular benign applications and 10 root exploit malicious applications. Our results show that PREC can successfully detect and stop all the tested malware while reducing false alarm rates by more than one order of magnitude over traditional malware detection algorithms. PREC is lightweight, which makes it practical for runtime on-device root exploit detection and containment.
Keywords: android, dynamic analysis, host intrusion detection, malware, root exploits (ID#: 15-6228)
URL:  http://doi.acm.org/10.1145/2557547.2557563 


Frederico Araujo, Kevin W. Hamlen, Sebastian Biedermann, Stefan Katzenbeisser; “From Patches to Honey-Patches: Lightweight Attacker Misdirection, Deception, and Disinformation,” CCS ’14, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 942-953. doi:10.1145/2660267.2660329
Abstract: Traditional software security patches often have the unfortunate side-effect of quickly alerting attackers that their attempts to exploit patched vulnerabilities have failed. Attackers greatly benefit from this information; it expedites their search for unpatched vulnerabilities, it allows them to reserve their ultimate attack payloads for successful attacks, and it increases attacker confidence in stolen secrets or expected sabotage resulting from attacks. To overcome this disadvantage, a methodology is proposed for reformulating a broad class of security patches into honey-patches — patches that offer equivalent security but that frustrate attackers’ ability to determine whether their attacks have succeeded or failed. When an exploit attempt is detected, the honey-patch transparently and efficiently redirects the attacker to an unpatched decoy, where the attack is allowed to succeed. The decoy may host aggressive software monitors that collect important attack information, and deceptive files that disinform attackers. An implementation for three production-level web servers, including Apache HTTP, demonstrates that honey-patching can be realized for large-scale, performance-critical software applications with minimal overheads.
Keywords: honeypots, intrusion detection and prevention (ID#: 15-6229)
URL: http://doi.acm.org/10.1145/2660267.2660329 
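The contrast between a conventional patch and a honey-patch can be sketched schematically (a hypothetical request handler; the actual system does this for production web servers with transparent decoy redirection):

```python
# Schematic contrast between a patch and a honey-patch. The path-check
# "vulnerability" and handlers are invented for illustration.

def patched_handler(request):
    if "../" in request:                   # the patched vulnerability check
        return "400 Bad Request"           # attacker learns the probe failed
    return "200 OK"

def honey_patched_handler(request, redirect_to_decoy):
    if "../" in request:
        # Same security decision, but the attack is silently forwarded to
        # an unpatched decoy where it "succeeds" under heavy monitoring.
        return redirect_to_decoy(request)
    return "200 OK"

def decoy(request):
    return "200 OK (decoy, monitored)"

normal = honey_patched_handler("/index.html", decoy)
probe = honey_patched_handler("/../etc/passwd", decoy)
```

Legitimate traffic is unaffected; only the error path differs, which is why the honey-patch offers equivalent security while denying the attacker the failure signal.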


Sascha Fahl, Sergej Dechand, Henning Perl, Felix Fischer, Jaromir Smrcek, Matthew Smith; “Hey, NSA: Stay Away from My Market! Future Proofing App Markets against Powerful Attackers,” CCS ’14, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1143-1155. doi:10.1145/2660267.2660311
Abstract: Mobile devices are evolving into the dominant computing platform, and consequently application repositories and app markets are becoming the prevalent paradigm for deploying software. Due to their central and trusted position in the software ecosystem, coerced, hacked or malicious app markets pose a serious threat to user security. Currently, there is little that hinders a nation state adversary (NSA) or other powerful attackers from using such central and trusted points of software distribution to deploy customized (malicious) versions of apps to specific users. Due to intransparencies in the current app installation paradigm, this kind of attack is extremely hard to detect.  In this paper, we evaluate the risks and drawbacks of current app deployment in the face of powerful attackers. We assess the app signing practices of 97% of all free Google Play apps and find that the current practices make targeted attacks unnecessarily easy and almost impossible to detect for users and app developers alike. We show that high-profile Android apps employ intransparent and unaccountable strategies when they publish apps to (multiple) alternative markets. We then present and evaluate Application Transparency (AT), a new framework that can defend against “targeted-and-stealthy” attacks mounted by malicious markets. We deployed AT in the wild and conducted an extensive field study in which we analyzed app installations on 253,819 real-world Android devices that participate in a popular anti-virus app’s telemetry program. We find that AT can effectively protect users against malicious targeted attack apps and furthermore adds transparency and accountability to the current intransparent signing and packaging strategies employed by many app developers.
Keywords: android, apps, market, nsa, security, transparency (ID#: 15-6230)
URL: http://doi.acm.org/10.1145/2660267.2660311 


Mingshen Sun, Min Zheng, John C. S. Lui, Xuxian Jiang; “Design and Implementation of an Android Host-based Intrusion Prevention System,” ACSAC ’14, Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 226-235. doi:10.1145/2664243.2664245
Abstract: Android has a dominating share in the mobile market, and there is a significant rise of mobile malware targeting Android devices. Android malware accounted for 97% of all mobile threats in 2013 [26]. To protect smartphones and prevent privacy leakage, companies have implemented various host-based intrusion prevention systems (HIPS) on their Android devices. In this paper, we first analyze the implementations, strengths and weaknesses of three popular HIPS architectures. We demonstrate a severe loophole and weakness of an existing popular HIPS product which hackers can readily exploit. Then we present the design and implementation of a secure and extensible HIPS platform—“Patronus.” Patronus not only provides intrusion prevention without the need to modify the Android system, it can also dynamically detect existing malware based on runtime information. We propose a two-phase dynamic detection algorithm for detecting running malware. Our experiments show that Patronus can prevent intrusive behaviors efficiently and detect malware accurately with very low performance overhead and power consumption.
Keywords: (not provided) (ID#: 15-6231)
URL:  http://doi.acm.org/10.1145/2664243.2664245 


Thomas Hobson, Hamed Okhravi, David Bigelow, Robert Rudd, William Streilein; “On the Challenges of Effective Movement,” MTD ’14, Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 41-50. doi:10.1145/2663474.2663480
Abstract: Moving Target (MT) defenses have been proposed as a game-changing approach to rebalance the security landscape in favor of the defender. MT techniques make systems less deterministic, less static, and less homogeneous in order to increase the level of effort required to achieve a successful compromise. However, a number of challenges in achieving effective movement lead to weaknesses in MT techniques that can often be used by the attackers to bypass or otherwise nullify the impact of that movement. In this paper, we propose that these challenges can be grouped into three main types: coverage, unpredictability, and timeliness. We provide a description of these challenges and study how they impact prominent MT techniques. We also discuss a number of other considerations faced when designing and deploying MT defenses.
Keywords: cybersecurity challenges, diversity, metrics, moving target, randomization (ID#: 15-6232)
URL:  http://doi.acm.org/10.1145/2663474.2663480 


M. Zubair Rafique, Ping Chen, Christophe Huygens, Wouter Joosen; “Evolutionary Algorithms for Classification of Malware Families Through Different Network Behaviors,” GECCO ’14, Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, July 2014, Pages 1167-1174. doi:10.1145/2576768.2598238
Abstract: The staggering increase of malware families and their diversity poses a significant threat and creates a compelling need for automatic classification techniques. In this paper, we first analyze the role of network behavior as a powerful technique to automatically classify malware families and their polymorphic variants. Afterwards, we present a framework to efficiently classify malware families by modeling their different network behaviors (such as HTTP, SMTP, UDP, and TCP). We propose protocol-aware and state-space modeling schemes to extract features from malware network behaviors. We analyze the applicability of various evolutionary and non-evolutionary algorithms for our malware family classification framework. To evaluate our framework, we collected a real-world dataset of 6,000 unique and active malware samples belonging to 20 different malware families. We provide a detailed analysis of the network behaviors exhibited by these prevalent malware families. The results of our experiments show that evolutionary algorithms, like the sUpervised Classifier System (UCS), can effectively classify malware families through different network behaviors in real time. To the best of our knowledge, the current work is the first malware classification framework based on an evolutionary classifier that uses different network behaviors.
Keywords: machine learning, malware classification, network behaviors (ID#: 15-6233)
URL: http://doi.acm.org/10.1145/2576768.2598238 


Ali Zand, Giovanni Vigna, Xifeng Yan, Christopher Kruegel; “Extracting Probable Command and Control Signatures for Detecting Botnets,” SAC ’14, Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 1657-1662. doi:10.1145/2554850.2554896
Abstract: Botnets, which are networks of compromised machines under the control of a single malicious entity, are a serious threat to online security. The fact that botnets, by definition, receive their commands from a single entity can be leveraged to fight them. To this end, one requires techniques that can detect command and control (C&C) traffic, as well as the servers that host C&C services. Given the knowledge of a C&C server’s IP address, one can use this information to detect all hosts that attempt to contact such a server, and subsequently disinfect, disable, or block the infected machines. This information can also be used by law enforcement to take down the C&C server. In this paper, we present a new botnet C&C signature extraction approach that can be used to find C&C communication in traffic generated by executing malware samples in a dynamic analysis system. This approach works in two steps. First, we extract all frequent strings seen in the network traffic. Second, we use a function that assigns a score to each string. This score represents the likelihood that the string is indicative of C&C traffic. This function allows us to rank strings and focus our attention on those that likely represent good C&C signatures. We apply our technique to almost 2.6 million network connections produced by running more than 1.4 million malware samples. Using our technique, we were able to automatically extract a set of signatures that are able to identify C&C traffic. Furthermore, we compared our signatures with those used by existing tools, such as Snort and BotHunter.
Keywords:  (not provided) (ID#: 15-6234)
URL: http://doi.acm.org/10.1145/2554850.2554896 
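The two-step extraction can be sketched with a toy scoring function (the traffic samples and the likelihood-style score are invented; the paper's scoring function differs):

```python
# Minimal sketch of C&C signature extraction: collect frequent substrings
# from malware-generated traffic, then rank them by how much more often
# they appear in malware traffic than in benign traffic.
from collections import Counter

def ngrams(lines, n=6):
    c = Counter()
    for line in lines:
        for i in range(len(line) - n + 1):
            c[line[i:i + n]] += 1
    return c

def rank_signatures(malware_traffic, benign_traffic, n=6, min_count=2):
    mal, ben = ngrams(malware_traffic, n), ngrams(benign_traffic, n)
    # Toy likelihood-style score: frequent in malware, rare in benign.
    scored = {g: cnt / (1 + ben[g])
              for g, cnt in mal.items() if cnt >= min_count}
    return sorted(scored, key=scored.get, reverse=True)

malware = ["GET /gate.php?id=1 HTTP/1.1", "GET /gate.php?id=7 HTTP/1.1"]
benign = ["GET /index.html HTTP/1.1", "GET /news.html HTTP/1.1"]
top = rank_signatures(malware, benign)[0]
```

The top-ranked strings are candidate C&C signatures: common to the malware connections but absent from benign traffic, so they can seed a review rather than replace it.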


Sandy Clark, Michael Collis, Matt Blaze, Jonathan M. Smith; “Moving Targets: Security and Rapid-Release in Firefox,” CCS ’14, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1256-1266. doi:10.1145/2660267.2660320
Abstract: Software engineering practices strongly affect the security of the code produced. The increasingly popular Rapid Release Cycle (RRC) development methodology and easy network software distribution have enabled rapid feature introduction. RRC’s defining characteristic of frequent software revisions would seem to conflict with traditional software engineering wisdom regarding code maturity, reliability and reuse, as well as security. Our investigation of the consequences of rapid release comprises a quantitative, data-driven study of the impact of rapid-release methodology on the security of the Mozilla Firefox browser. We correlate reported vulnerabilities in multiple rapid release versions of Firefox code against those in corresponding extended release versions of the same system; using a common software base with different release cycles eliminates many causes other than RRC for the observables. Surprisingly, the resulting data show that Firefox RRC does not result in higher vulnerability rates and, further, that it is exactly the unfamiliar, newly released software (the “moving targets”) that requires time to exploit. These provocative results suggest that a rethinking of the consequences of software engineering practices for security may be warranted.
Keywords: agile programming, honeymoon effect, arms race, rapid release cycle, secure software development models, secure software metrics, software life-cycle, software quality, secure software development, vulnerabilities, windows of vulnerability (ID#: 15-6235)
URL:  http://doi.acm.org/10.1145/2660267.2660320
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Upcoming Events of Interest (2015 - Issue 8)


Upcoming Events

Mark your calendars!

This section features a wide variety of upcoming security-related conferences, workshops, symposiums, competitions, and events happening in the United States and the world. This list also includes several past events with links to proceedings or summaries of the actual activities.

Note: The events may also be found on the SoS Calendar, located by clicking the 'Calendar' tab on the left-hand navigation bar.


World Congress on Internet Security (WorldCIS-2015)
The World Congress on Internet Security (WorldCIS-2015) is technically co-sponsored by the IEEE UK/RI Computer Chapter. WorldCIS is an international refereed conference dedicated to advancing the theory and practical implementation of security on the Internet and computer networks. Properly securing the Internet and computer networks, protecting them against emerging threats and vulnerabilities, and sustaining privacy and trust remain key focuses of research. WorldCIS aims to provide a highly professional academic research forum that promotes collaborative excellence between academia and industry.
Date: October 19 - 21
Location: Dublin, Ireland
URL: http://www.worldcis.org/

SOURCE Security Conference Seattle
In addition to our excellent line-up of keynotes and speakers, we have our usual selection of the little things that make the SOURCE Conferences special. This year we will have speed networking, lightning talks, a career development panel, and an excellent networking reception, all designed to tie the event together.
Date: October 14 - 15
Location: Seattle, WA
URL: http://www.sourceconference.com/seattle-2015-main

CyberMaryland 2015
Maryland is recognized as a cybersecurity leader, both nationally and internationally. The state has developed cybersecurity experts, education and training programs, technology, products, systems, and infrastructure. With over 10 million cyberattacks a day resulting in an annual worldwide cost of over $100 billion, the United States is at risk.
Date: October 28 - 29
Location: Baltimore, MD
URL: https://www.fbcinc.com/e/cybermdconference/default.aspx


(ID#:15-7302)

