Science of Security (SoS) Newsletter (2015 - Issue 3)



Each issue of the SoS Newsletter highlights achievements in current research, as conducted by various global members of the Science of Security (SoS) community. All presented materials are open-source, and may link to the original work or web page for the respective program. The SoS Newsletter aims to showcase the great deal of exciting work going on in the security community, and hopes to serve as a portal between colleagues, research projects, and opportunities.

Please feel free to click on any section of the Newsletter below to go to the corresponding subsection:

General Topics of Interest

General Topics of Interest reflects today's most widely discussed challenges and issues in the Cybersecurity space. GToI includes news items related to Cybersecurity, updated information regarding academic SoS research, interdisciplinary SoS research, profiles of leading researchers in the field of SoS, and global research being conducted on related topics.

Publications

The Publications of Interest provides available abstracts and links for suggested academic and industry literature discussing specific topics and research problems in the field of SoS. Please check back regularly for new information, or sign up for the CPSVO-SoS Mailing List.

Table of Contents

Science of Security (SoS) Newsletter (2015 - Issue 3)

(ID#:15-4085)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


HotSoS 2015 - Interest in Cybersecurity Science and Research Heats Up

 

 

Interest in Cybersecurity Science & Research

heats up at HotSoS 2015

Urbana, IL
April 22, 2015

The 2015 Symposium and Bootcamp on the Science of Security (HotSoS) was held April 21-22 at the University of Illinois at Urbana-Champaign National Center for Supercomputing Applications. This third annual conference brought together researchers from numerous disciplines seeking a methodical, rigorous scientific approach to identifying and removing cyber threats. Part of the Science of Security project, HotSoS aims to understand how computing systems are designed, built, used, and maintained with an understanding of their security issues and challenges. It seeks not only to bring scientific rigor to research, but also to identify the scientific value and underpinnings of cybersecurity.

David Nicol, Director of the Illinois Trust Institute and co-PI for the Illinois Science of Security Lablet, was conference chair and the affable host of HotSoS 2015. Introducing the event, he called for participants to interact and share ideas, thoughts, and questions about the nature of security and the nascent science that is emerging. Kathy Bogner, Intelligence Community Coordinator for Cybersecurity Research, represented the NSA sponsor and welcomed the group, noting the government's long-term interest in and commitment to their work. She challenged them to continue to address cybersecurity using strong scientific principles and methods and to share the fruits of that work. She cited the number of universities and individual collaborators engaged in Science of Security research as an indication of activity and growth in the field.

Mike Reiter, Lawrence M. Slifkin Distinguished Professor of Computer Science, University of North Carolina at Chapel Hill, delivered the keynote "Is it Science or Engineering? A Sampling of Recent Research." He said interest in a "Science of Security" is confusing to many researchers, in part due to a lack of clarity about what this "science" should be like and how it should differ from principled engineering. To help clarify the distinction, he described recent research projects about large-scale measurement, attack development, human-centric design, network defense, and provable cryptography to assess which ones, if any, constitute "science." A lively debate ensued. Pictured at the right, Mike Reiter smiles at an audience member's remark.

Jonathan Spring, Researcher and Analyst for the CERT Division, Software Engineering Institute, Carnegie Mellon University, spoke on "Avoiding Pseudoscience in the Science of Security." In his view, we seek the philosophical underpinnings of a science of security in an effort to avoid pseudoscience, looking to the philosophy of science to describe how "observation and reasoning from results" differ between computing and other sciences due to the engineered elements under study. He demonstrated the challenges in avoiding pseudoscience, and some solutions, with a case study of malware analysis.

Patrick McDaniel, Professor of Computer Science and Director of the Systems and Internet Infrastructure Security Laboratory, Penn State University, addressed "The Importance of Measurement and Decision Making to a Science of Security." A "science" is based on a reasoned modification to a system or environment in response to a functional, performance, or security need. His talk highlighted activities surrounding the Cyber-Security Collaborative Research Alliance, five universities working in collaboration with the Army Research Lab. Another lively debate ensued. The picture on the left captures Prof. McDaniel asking "Why don't we wear amulets to protect against car accidents?" in addressing measurement.

Tutorials and a workshop were conducted alongside concurrent paper presentations. Five tutorials covered social network analysis; human behavior; policy-governed secure collaboration; security-metrics-driven evaluation, design, development and deployment; and resilient architectures. The workshop focused on analyzing papers from the security literature to determine how completely authors describe their research methods. Pictured here is Dusko Pavlovic, U. of Hawai'i, who was both animated and stimulating.

 

 

Thirteen researchers from the United Kingdom and the United States presented individual papers on studies about signals intelligence analyst tasks, detecting abnormal user behavior, tracing cyber-attack analysis processes, vulnerability prediction models, preemptive intrusion detection, enabling forensics, global malware encounters, workflow resiliency, sanctions, password policies, resource-bounded systems integrity assurance, active cyber defense, and the science of trust. Allaire Welk (left picture), NC State, addresses methods of learning for Signals Intelligence analysts. Ignacio X. Dominguez (right), NC State, listens to a question about his work on input device analytics.

An invited presentation featured the 2013 Best Scientific Cybersecurity Paper: Chang Liu of the University of Maryland presented "Memory Trace: Oblivious Program Execution for Cloud Computing."

For members of the Science of Security Virtual Organization, the agenda and presentations are available on the CPS-VO web site at: http://cps-vo.org/node/3485/browser. For non-members, information is available at: http://cps-vo.org/group/SoS.

Next year’s HotSoS will be held in Pittsburgh and will be hosted by Carnegie Mellon University’s Science of Security Lablet. Prof. William Scherlis will chair the event.


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

HotSoS 2015 - Research Presentations

 

 

HotSoS 2015 Research Presentations

These papers were presented at HotSoS 2015. They covered a range of scientific issues related to the five hard problems of cybersecurity—scalability and composability, measurement, policy-governed secure collaboration, resilient architectures, and human behavior. The individual presentations are described below. They will be published in an upcoming ACM conference publication. The HotSoS conference page is available at: http://cps-vo.org/group/hotsos

 

“Integrity Assurance in Resource-Bounded Systems through Stochastic Message Authentication”
Aron Laszka, Yevgeniy Vorobeychik, and Xenofon Koutsoukos.

Assuring communication integrity is a central problem in security. The presenters propose a formal game-theoretic framework for optimal stochastic message authentication, providing provable integrity guarantees for resource-bounded systems based on an existing MAC scheme. They use this framework to investigate attacker deterrence and the optimal design of stochastic message authentication schemes, and they provide experimental results on the computational performance of their framework in practice.
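
To make the idea concrete, here is a minimal sketch, not the authors' construction, of stochastic verification layered on a standard MAC: a resource-bounded receiver verifies each tag only with probability p, and an attacker who injects k forgeries is still caught with probability 1 - (1 - p)^k. All names and parameters below are illustrative.

import hashlib
import hmac
import random

def make_tag(key: bytes, message: bytes) -> bytes:
    # Standard HMAC-SHA256; the paper builds on an existing MAC scheme.
    return hmac.new(key, message, hashlib.sha256).digest()

def stochastic_verify(key: bytes, message: bytes, tag: bytes, p: float) -> bool:
    # Verify the tag only with probability p, saving computation on a
    # resource-bounded receiver; skipped messages are provisionally accepted.
    if random.random() >= p:
        return True
    return hmac.compare_digest(make_tag(key, message), tag)

# An attacker injecting k forgeries evades every check with probability
# (1 - p) ** k, so even a small p deters repeated forgery.
p, k = 0.2, 20
print("detection probability:", 1 - (1 - p) ** k)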

 

“Active Cyber Defense Dynamics Exhibiting Rich Phenomena”
Ren Zheng, Wenlian Lu, and Shouhuai Xu

The authors explore the rich phenomena that can be exhibited when the defender employs active defense to combat cyber attacks. This study shows that active cyber defense dynamics (or, more generally, cybersecurity dynamics) can exhibit bifurcation and chaos phenomena that have two implications for cybersecurity measurement and prediction: first, it is infeasible (or even impossible) to accurately measure and predict cybersecurity under certain circumstances; second, the defender must manipulate the dynamics to avoid unmanageable situations in real-life defense operations.

 

“Towards a Science of Trust”
Dusko Pavlovic

This paper explores the idea that security is not just a suitable subject for science, but that the process of security is also similar to the process of science. This similarity arises from the fact that both science and security depend on the methods of inductive inference. Because of this dependency, a scientific theory can never be definitely proved, but can only be disproved by new evidence and improved into a better theory. Because of the same dependency, every security claim and method has a lifetime, and always eventually needs to be improved.

 

“Challenges with Applying Vulnerability Prediction Models”
Patrick Morrison, Kim Herzig, Brendan Murphy, and Laurie Williams

The authors address vulnerability prediction models (VPMs) as a basis for software engineers to prioritize precious verification resources to search for vulnerabilities. The goal of this research is to measure whether vulnerability prediction models built using standard recommendations perform well enough to provide actionable results for engineering resource allocation. They define "actionable" in terms of the inspection effort required to evaluate model results. They conclude VPMs must be refined to achieve actionable performance, possibly through security-specific metrics.

 

“Preemptive Intrusion Detection: Theoretical Framework and Real-World Measurements”
Phuong Cao, Eric Badger, Zbigniew Kalbarczyk, Ravishankar Iyer, and Adam Slagell

This paper presents a framework for highly accurate and preemptive detection of attacks, i.e., before system misuse. Using security logs on real incidents that occurred over a six-year period at the National Center for Supercomputing Applications (NCSA), the authors evaluated their framework. The data consisted of security incidents that were only identified after the fact by security analysts. The framework detected 74 percent of attacks, and the majority of them were detected before the system misuse. In addition, six hidden attacks were uncovered that were not detected by intrusion detection systems during the incidents or by security analysts in post-incident forensic analyses.

 

“Enabling Forensics by Proposing Heuristics to Identify Mandatory Log Events”
Jason King, Rahul Pandita, and Laurie Williams

Software engineers often implement logging mechanisms to debug software and diagnose faults. These logging mechanisms need to capture detailed traces of user activity to enable forensics and hold users accountable. Techniques for identifying what events to log are often subjective and produce inconsistent results. This study helps software engineers strengthen forensic-ability and user accountability by systematically identifying mandatory log events through processing of unconstrained natural-language software artifacts, and then proposing empirically derived heuristics to help determine whether an event must be logged.
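
A minimal sketch of the flavor of such heuristics (the keyword lists and rule below are invented for illustration, not the paper's empirically derived heuristics): flag a requirements sentence as a candidate mandatory log event when it pairs a user action verb with a sensitive resource.

import re

# Hypothetical, simplified heuristics: a sentence describing a user action
# on a sensitive resource is a candidate mandatory log event.
ACTION_VERBS = {"create", "read", "update", "delete", "view", "modify",
                "transmit", "sign", "authenticate", "export"}
SENSITIVE_NOUNS = {"record", "account", "prescription", "result", "password",
                   "report", "order"}

def mandatory_log_events(requirements_text: str) -> list[str]:
    # Return sentences that the heuristics say must be logged for forensics.
    sentences = re.split(r"(?<=[.!?])\s+", requirements_text)
    flagged = []
    for s in sentences:
        words = {w.lower().strip(".,;:") for w in s.split()}
        if words & ACTION_VERBS and words & SENSITIVE_NOUNS:
            flagged.append(s)
    return flagged

text = ("The provider shall modify the patient record. "
        "The UI shall display a welcome banner.")
print(mandatory_log_events(text))  # only the first sentence is flagged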

 

“Modelling User Availability in Workflow Resiliency Analysis”
John C. Mace, Charles Morisset, and Aad van Moorsel

Workflows capture complex operational processes and include security constraints that limit which users can perform which tasks. An improper security policy may prevent certain tasks from being assigned and may force a policy violation. Tools are required that allow automatic evaluation of workflow resiliency. Modelling user availability can be done in multiple ways for the same workflow, and finding the correct choice of model is a complex concern with a major impact on the calculated resiliency. The authors describe a number of user availability models and their encoding in the model checker PRISM, used to evaluate resiliency. They also show how model choice can affect resiliency computation in terms of its value, memory, and CPU time.
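
For intuition, here is a brute-force sketch of the quantity being computed (the paper instead encodes user availability models in the PRISM model checker; the toy workflow, authorization sets, and probabilities below are assumptions): resiliency is the probability that a random availability scenario still admits an assignment of distinct, authorized, available users to all tasks.

from itertools import product

# Toy workflow: tasks t0..t2, users u0..u2, with authorization and
# distinct-user (separation-of-duty) constraints.
authorized = {0: {0, 1}, 1: {1, 2}, 2: {0, 2}}   # task -> users allowed
avail_prob = [0.9, 0.8, 0.95]                     # user -> P(available)

def assignable(available: set[int]) -> bool:
    # Can every task get a distinct, available, authorized user?
    def assign(task: int, used: set[int]) -> bool:
        if task == len(authorized):
            return True
        return any(assign(task + 1, used | {u})
                   for u in authorized[task] & (available - used))
    return assign(0, set())

# Resiliency = probability that a random availability scenario still lets
# the workflow complete (enumerating all 2^n scenarios is fine for toys).
resiliency = 0.0
for scenario in product([0, 1], repeat=len(avail_prob)):
    p = 1.0
    for u, up in enumerate(scenario):
        p *= avail_prob[u] if up else 1 - avail_prob[u]
    if assignable({u for u, up in enumerate(scenario) if up}):
        resiliency += p
print(f"workflow resiliency: {resiliency:.4f}")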

 

“An Empirical Study of Global Malware Encounters”
Ghita Mezzour, Kathleen M. Carley, and L. Richard Carley

The authors empirically test alternative hypotheses about the factors behind international variation in the number of trojan, worm, and virus encounters, using Symantec Anti-Virus (AV) telemetry data collected from more than 10 million Symantec customer computers worldwide. They used regression analysis to test for the effect of computing and monetary resources, web browsing behavior, computer piracy, cyber security expertise, and international relations on international variation in malware encounters, and they found that trojans, worms, and viruses are most prevalent in Sub-Saharan African and Asian countries. The main factor that explains the high malware exposure of these countries is widespread computer piracy, especially when combined with poverty.
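
The core statistical step is ordinary regression; a self-contained sketch with synthetic stand-in data (the real study regresses Symantec telemetry on measured country-level factors) looks like this:

import numpy as np

# Illustrative stand-in data: rows are countries, columns are hypothesized
# factors (piracy rate, wealth, web use index); y is malware encounters
# per host. All numbers here are synthetic.
rng = np.random.default_rng(0)
X = rng.random((40, 3))
y = 2.5 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.1, 40)

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept:", coef[0])
print("effects of piracy, wealth, web use:", coef[1:])
# A large positive piracy coefficient would support the paper's finding
# that computer piracy is the main factor behind high malware exposure.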

 

“An Integrated Computer-Aided Cognitive Task Analysis Method for Tracing Cyber-Attack Analysis Processes”
Chen Zhong, John Yen, Peng Liu, Rob Erbacher, Renee Etoty, and Christopher Garneau

Cyber-attack analysts are required to process large amounts of network data and to reason under uncertainty to detect cyber-attacks. Capturing and studying analysts' fine-grained cognitive processes helps researchers gain a deep understanding of how analysts conduct analytical reasoning and lets researchers elicit their procedural knowledge and experience to further improve performance. To conduct cognitive task analysis studies in cyber-attack analysis, the authors proposed an integrated computer-aided data collection method for cognitive task analysis (CTA) with three building elements: a trace representation of the fine-grained cyber-attack analysis process, a computer tool supporting process tracing, and a laboratory experiment for collecting traces of analysts' cognitive processes in conducting a cyber-attack analysis task.

 

“All Signals Go: Investigating How Individual Differences Affect Performance on a Medical Diagnosis Task Designed to Parallel a Signals Intelligence Analyst Task”
Allaire K. Welk and Christopher B. Mayhorn

Signals intelligence analysts perform complex decision-making tasks that involve gathering, sorting, and analyzing information. This study aimed to evaluate how individual differences influence performance in an Internet search-based medical diagnosis task designed to simulate a signals analyst task. Individual differences included working memory capacity and previous experience with elements of the task, prior experience using the Internet, and prior experience conducting Internet searches. Results indicated that working memory significantly predicted performance on this medical diagnosis task and other factors were not significant predictors of performance. These results provide additional evidence that working memory capacity greatly influences performance on cognitively complex decision-making tasks, whereas experience with elements of the task may not. These findings suggest that working memory capacity should be considered when screening individuals for signals intelligence analyst positions.

 

“Detecting Abnormal User Behavior Through Pattern-mining Input Device Analytics”
Ignacio X. Domínguez, Alok Goel, David L. Roberts, and Robert St. Amant

This paper presents a method for detecting patterns in the usage of a computer mouse that can give insights into a user's cognitive processes. The authors conducted a study using a computer version of the Memory game (also known as the Concentration game) that allowed some participants to reveal the content of the tiles, expecting their low-level mouse interaction patterns to deviate from those of normal players with no access to this information. They then trained models to detect these differences using task-independent input device features. The models detected cheating with 98.73% accuracy for players who cheated or did not cheat consistently for entire rounds of the game, and with 89.18% accuracy for cases in which players enabled and then disabled cheating within rounds.
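
A rough sketch of such a pipeline, with invented feature definitions and synthetic training data standing in for the study's task-independent input device features and labeled game rounds:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def mouse_features(trace: np.ndarray) -> np.ndarray:
    # Task-independent features from an (n, 3) array of (t, x, y) samples:
    # mean speed, speed variance, fraction of near-zero-motion samples.
    dt = np.diff(trace[:, 0])
    dist = np.linalg.norm(np.diff(trace[:, 1:], axis=0), axis=1)
    speed = dist / np.maximum(dt, 1e-9)
    return np.array([speed.mean(), speed.var(), (speed < 1.0).mean()])

# Hypothetical training set: feature rows labeled not-cheating / cheating.
rng = np.random.default_rng(1)
honest = rng.normal([120, 400, 0.2], [10, 50, 0.05], (50, 3))
cheat = rng.normal([80, 900, 0.4], [10, 50, 0.05], (50, 3))
X = np.vstack([honest, cheat])
y = np.array([0] * 50 + [1] * 50)
clf = RandomForestClassifier(random_state=0).fit(X, y)

# mouse_features() turns a raw (t, x, y) trace into one such feature row;
# here we classify a fresh sample drawn from the honest cluster.
unseen = rng.normal([120, 400, 0.2], [10, 50, 0.05])
print("cheating?", bool(clf.predict([unseen])[0]))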

 

“Understanding Sanction under Variable Observability in a Secure, Collaborative Environment”
Hongying Du, Bennett Narron, Nirav Ajmeri, Emily Berglund, Jon Doyle, and Munindar P. Singh

Many aspects of norm-governance remain poorly understood, inhibiting adoption in real-life collaborative systems. This work focuses on the combined effects of sanction and the observability of the sanctioner in a secure, collaborative environment, using a simulation consisting of agents maintaining "compliance" with enforced security norms while remaining "motivated" as researchers. The authors tested whether delayed observability of the environment would lead to greater motivation of agents to complete research tasks than immediate observability, and whether sanctioning a group for a violation would lead to greater compliance with security norms than sanctioning an individual. They found that only the latter hypothesis is supported.

 

“Measuring the Security Impacts of Password Policies Using Cognitive Behavioral Agent-Based Modeling”
Vijay Kothari, Jim Blythe, Sean W. Smith, and Ross Koppel

Agent-based modeling can serve as a valuable asset to security personnel who wish to better understand the security landscape within their organization, especially as it relates to user behavior and circumvention. The authors argue in favor of cognitive behavioral agent-based modeling for usable security, report on their work developing an agent-based model for a password management scenario, and perform a sensitivity analysis, which provides valuable insights into improving security and directions for future work.
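
To illustrate the modeling style, here is a deliberately tiny agent-based sketch, not the authors' model: the memory budget, probabilities, and circumvention rule are all invented. It captures the trade-off that a policy exceeding users' capacity pushes them to write passwords down.

import random

random.seed(0)

def simulate(policy_complexity: int, agents: int = 1000) -> float:
    # Each agent can memorize a limited "complexity budget"; when policy
    # demands exceed it, the agent circumvents by writing the password
    # down, which is assumed to raise its compromise probability.
    compromised = 0
    for _ in range(agents):
        memory_budget = random.gauss(10, 3)
        writes_down = policy_complexity > memory_budget
        p_compromise = 0.02 + (0.10 if writes_down else 0.0) \
                       - 0.005 * policy_complexity  # stronger password helps
        compromised += random.random() < max(p_compromise, 0.001)
    return compromised / agents

# Sensitivity analysis over policy complexity, in the spirit of the paper:
# tightening policy past users' capacity can hurt overall security.
for complexity in (4, 8, 12, 16):
    print(complexity, simulate(complexity))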


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

HotSoS 2015 - Tutorials

 

 

HotSoS 2015 Tutorials

The tutorials described below were presented at HotSoS 2015. The HotSoS conference page is available at: http://cps-vo.org/group/hotsos.

Tutorial 1: “Social Network Analysis for Science of Security,” Kathleen Carley, Carnegie Mellon University

The tutorial provided a brief introduction to the area of network science, covering analytics and visualization. Dr. Carley described the core ideas, most common metrics, critical theories, and an overview of key tools. She drew illustrative examples from three security-related issues: insider threat analysis, resilient organizational designs, and global cyber-security attacks.

Tutorial 2: "Understanding and Accounting for Human Behavior," Sean Smith, Dartmouth College, and Jim Blythe, University of Southern California

Since computers are machines, it's tempting to think of computer security as purely a technical problem. However, computing systems are created, used, and maintained by humans and exist to serve the goals of human and institutional stakeholders. Consequently, effectively addressing the security problem requires understanding this human dimension. The presenters discussed this challenge and the principal research approaches to it.

Tutorial 3: “Policy-Governed Secure Collaboration,” Munindar Singh, North Carolina State University

The envisioned Science of Security can be understood as a systemic body of knowledge with theoretical and empirical underpinnings that inform the engineering of secure information systems. The presentation addressed the underpinnings pertaining to the hard problem of secure collaboration, approaching cybersecurity from a sociotechnical perspective and understanding systems through the interplay of human behavior with technical architecture on the one hand and social architecture on the other. The presentation emphasized the social architecture and modeled it in terms of a formalization based on organizations and normative relationships. Dr. Singh described how norms provide a basis for specifying security requirements at a high level, a basis for accountability, and a semantic basis for trust. He concluded the presentation by providing some directions and challenges for future research, including formalization and empirical study.

Tutorial 4: "Security-Metrics-Driven Evaluation, Design, Development and Deployment," William Sanders, University of Illinois at Urbana-Champaign

Making sound security decisions when designing, operating, and maintaining a complex system is a challenging task. Analysts need to be able to understand and predict how different factors affect overall system security. During system design, security analysts want to compare the security of multiple proposed system architectures. After a system is deployed, analysts want to determine where security enhancement should be focused by examining how the system is most likely to be successfully penetrated. Additionally, when several security enhancement options are being considered, analysts would like to evaluate the relative merit of each. In each of these scenarios, quantitative security metrics should provide insight on system security and aid security decisions. The tutorial provided a survey of existing quantitative security evaluation techniques and described new work being done at the University of Illinois at Urbana-Champaign in this field.

Tutorial 5: “Resilient Architectures,” Ravishankar Iyer, University of Illinois at Urbana-Champaign

Resilience brings together experts in security, fault tolerance, human factors, and high integrity computing for the design and validation of systems that are expected to continue to deliver critical services in the event of attacks and failures. The tutorial highlighted issues and challenges in designing systems that are resilient to both malicious attacks and accidental failures, provided both cyber and cyber-physical examples, and concluded by addressing the challenges and opportunities from both a theoretical and practical perspective.


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

International Security Related Conferences


Conferences

The following pages provide highlights of Science of Security-related research presented at the following international conferences:

  • International Conferences: Computer Science and Information Systems (FedCSIS), Warsaw, Poland
  • International Conferences: IEEE Information Theory Workshop, Hobart, Australia
  • International Conferences: IEEE Security and Privacy Workshops, San Jose, California
  • International Conferences: Workshop on Visualization for Cyber Security (VizSec 2014), Paris, France
  • International Conferences: IEEE World Congress on Services, Anchorage, Alaska
  • International Conferences: Information Hiding and Multimedia Security Workshop, Salzburg, Austria
  • International Conferences: Software Security and Reliability (SERE), San Francisco, CA
  • International Conferences: Symposium on Resilient Control Systems (ISRCS), Denver, Colorado

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


International Conferences: Communication, Information & Computing Technology (ICCICT), 2015

 

 

Communication, Information & Computing Technology (ICCICT)
Mumbai, India

 

The International Conference on Communication, Information and Computing Technology (ICCICT) 2015 was held 16-17 January 2015 at Sardar Patel Institute of Technology, Mumbai, India. The ICCICT submissions covered a wide range of topics: antennas, communication engineering, networking, sensor networks, VLSI, embedded systems, speech processing, image processing, cloud computing, software engineering, and database management. Security-related papers are cited here.

 

Lopes, Minal; Sarwade, Nisha, "On the Performance Of Quantum Cryptographic Protocols SARG04 and KMB09," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 6, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045661 Since the first protocol, proposed by Bennett and Brassard in 1984 (BB84), quantum cryptographic (QC) protocols have been studied widely in recent years. Most of the later QC protocols are variants of BB84 intended to address one or more problems encountered in the practical implementation of the BB84 protocol. Among many candidates, SARG04 provides robust performance for the weak coherent pulse implementation of QC. Another follower protocol, KMB09, improves the quantum communication distance between Alice and Bob (the classical communicating parties). Both protocols are chosen for comparison, as they are found to be suitable choices for incorporating QC into existing wireless technology. In this paper the authors present a performance analysis of these two protocols with respect to protocol efficiency, Quantum Bit Error Rate (QBER), and robustness against eavesdropping.

Keywords: Cryptography; Protocols; Robustness; KMB09; QBER; QKD; Quantum Cryptography; SARG04 (ID#: 15-3911)    

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045661&isnumber=7045627
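
For readers new to QBER, here is a toy Monte Carlo sketch of the quantity the paper above measures, using plain BB84-style intercept-resend rather than the SARG04 or KMB09 sifting rules the authors analyze: an eavesdropper measuring in a random basis leaves a detectable error rate of about 25% in the sifted key.

import random

random.seed(42)

def qber(n: int = 100_000, eve: bool = True) -> float:
    errors = sifted = 0
    for _ in range(n):
        alice_bit = random.getrandbits(1)
        a, b = random.getrandbits(1), random.getrandbits(1)  # bases
        photon_bit, photon_basis = alice_bit, a
        if eve:
            e = random.getrandbits(1)
            # Wrong basis: Eve's result is random; she resends in her basis.
            photon_bit = photon_bit if e == photon_basis else random.getrandbits(1)
            photon_basis = e
        # Bob gets the encoded bit only when he measures in the photon's basis.
        bob_bit = photon_bit if b == photon_basis else random.getrandbits(1)
        if a == b:                        # sifting: keep matching-basis rounds
            sifted += 1
            errors += bob_bit != alice_bit
    return errors / sifted

print("QBER without Eve:", qber(eve=False))   # ~0.0
print("QBER with Eve:   ", qber(eve=True))    # ~0.25, revealing the attack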

 

Shoaib, Shaikh; Mahajan, R.C., "Authenticating Using Secret Key In Digital Video Watermarking Using 3-level DWT," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 5, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045664 Authenticating watermarking inserts a hidden object in order to detect deceitful alteration by hackers; the object may be a secret key or password. Quite a few authentication methods are available for videos. Recent developments in digital video and internet technology help the common user to easily produce illegal copies of videos. To solve the copyright protection problem and counter deceitful alteration of videos by hackers, several watermarking schemes have been widely used, yet very few authenticating watermarking schemes have been produced for defining the copyrights of digital video. Digital watermarking embeds data, called the watermark, in digital media like image, video, or audio files so that ownership can be claimed. The paper presents a complete software implementation of 3-level DWT algorithms; for greater security, a secret key is applied to the watermark image during embedding, and the same secret key is used when extracting the watermark image. MSE and PSNR parameters are used to check the effectiveness of the watermarked video.

Keywords: Authentication; Computers; Discrete wavelet transforms; PSNR; Robustness; Watermarking; Discrete Wavelet Transform (DWT); MSE; PSNR; Secret Key; Watermark (ID#: 15-3912)    

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045664&isnumber=7045627

 

Jain, Nilakshi; Kalbande, Dhananjay R, "Digital Forensic Framework Using Feedback And Case History Keeper," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 6, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045670 Cyber crime investigation integrates two technologies: the first is the theoretical digital forensic methodology that encompasses the steps to investigate a cyber crime; the second is the practical development of digital forensic tools that sequentially and systematically analyze digital devices to extract the evidence needed to prove the crime. This paper explores the development of a digital forensic framework, combining the advantages of twenty-five past forensic models and generating an algorithm to create a new digital forensic model. The proposed model provides the following advantages: a standardized method for investigation, theory that can be directly converted into a tool, a history lookup facility, cost and time minimization, and applicability to any type of digital crime investigation.

Keywords: Adaptation models; Computational modeling; Computers; Digital forensics; History; Mathematical model; Digital forensic framework; digital crime ;evidence (ID#: 15-3913)    

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045670&isnumber=7045627

 

Melkundi, Swathi; Chandankhede, Chaitali, "A Robust Technique For Relational Database Watermarking And Verification," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 7, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045676 Outsourcing of data is increasing with the rapid growth of the internet, and there is every possibility that data reaches illegal hands. As a result, there is an increase in illegal copying of data, piracy, illegal redistribution, forgery, and theft. Watermarking technology is a solution for these challenges: it addresses the ownership problem, deters illegal copying, and protects the copyright of data. Watermarking technology mainly involves the processes of watermark insertion and watermark extraction. Watermark insertion embeds an imperceptible watermark in the relational database; watermark extraction recovers the embedded watermark without the help of the original database. In this paper the authors propose a new watermarking technique that watermarks both textual and numerical data. The proposed method also performs watermark verification, in which the watermark extracted from the database is compared with the original watermark known only to the owner of the database. This is accomplished through the Levenshtein distance algorithm.

Keywords: Algorithm design and analysis; Computers; Encoding; Partitioning algorithms; Relational databases; Watermarking; Levenshtein Distance algorithm; Relational Database; Watermark Verification; Watermarking (ID#: 15-3914)    

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045676&isnumber=7045627
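
The verification step above rests on classic edit distance; a compact sketch follows (the acceptance threshold is an assumption, not from the paper):

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, as used by the paper to
    # compare the extracted watermark with the owner's original watermark.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Verification sketch: accept ownership if the extracted watermark is within
# a small edit distance of the original (the threshold here is illustrative).
original, extracted = "SoS-2015-owner", "SoS-2O15-owner"   # one garbled char
print(levenshtein(original, extracted) <= 2)               # True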

 

Narayan, Narwade Pradeep; Bonde, S.V.; Doye, D.D., "Offline Signature Verification Using Shape Dissimilarities," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 6, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045677 Offline signature verification is a challenging and important form of biometric identification. Other biometric measures do not exhibit the variability of signatures, which poses a difficult problem in signature verification. In this paper, the authors explore a novel approach for verification of signatures based on curve matching using a shape descriptor and Euclidean distance. In their approach, the measurement of similarities proceeds by 1) finding correspondences between signatures, attaching a shape descriptor (shape context) with the Euclidean distance between the sample points of one signature and those of the other for better results; 2) estimating aligning transforms using these correspondences; and 3) classifying the signatures using linear discriminant analysis and measures of shape dissimilarity between signatures based on shape context distance, bending energy, registration residual, and anisotropic scaling.

Keywords: Computers; Context; Forgery; Histograms; Shape; Biometrics; Offline signature verification; curve matching; deformable shape; image processing; measure of shape dissimilarity; structural saliency (ID#: 15-3915)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045677&isnumber=7045627

 

D'Lima, Nathan; Mittal, Jayashri, "Password Authentication Using Keystroke Biometrics," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 6, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045681 The majority of applications use a prompt for a username and password. Passwords are recommended to be unique, long, complex, alphanumeric, and non-repetitive. The very properties that make passwords secure may prove to be a point of weakness: the complexity of the password challenges the user, who may choose to record it, compromising the security of the password and taking away its advantage. An alternate method of security is keystroke biometrics, which uses the natural typing pattern of a user for authentication. This paper proposes a new method for reducing error rates and creating a robust technique. The new method makes use of multiple sensors to obtain information about a user. An artificial neural network is used to model a user's behavior as well as to retrain the system. An alternate user verification mechanism is used in case a user is unable to match their typing pattern.

Keywords: Classification algorithms; Error analysis; Europe; Hardware; Monitoring; Support vector machines; Text recognition; Artificial Neural Networks; Authentication; Keystroke Biometrics; Password; Security (ID#: 15-3916)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045681&isnumber=7045627

 

Avasare, Minal Govind; Kelkar, Vishakha Vivek, "Image Encryption Using Chaos Theory," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 6, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045687 In an open network, it is important to keep sensitive information secure from unauthorized access. Encryption is used to ensure high security for images, and chaos has been widely used for image encryption because of its useful features; there are many chaos-based encryption techniques. Most of the proposed discrete chaotic cryptographic approaches are based on stream or block cipher schemes; combining the two schemes improves the security level. A novel image encryption scheme is proposed based on a combination of pixel shuffling and chaos, with chaos used to spread diffusion and confusion in the image. Chaotic maps give the advantages of a large key space and high-level security. The authors survey existing work that uses different techniques for image encryption and analyze them with respect to various parameters. To evaluate security and performance, key space analysis, correlation coefficient, histogram, information entropy, NPCR, UACI, etc., are studied.

Keywords: Chaotic communication; Ciphers; Encryption; Sensitivity; chaos; diffusion; encryption; permutation (ID#: 15-3917)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045687&isnumber=7045627
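
Here is a generic sketch of the shuffling-plus-diffusion pattern the abstract above describes, not the paper's specific scheme: a logistic map keyed by (x0, r) drives both a pixel permutation (confusion) and an XOR keystream (diffusion).

import numpy as np

def logistic_sequence(n: int, x0: float = 0.3567, r: float = 3.99) -> np.ndarray:
    # Chaotic logistic map x -> r*x*(1-x); (x0, r) act as the secret key.
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return xs

def encrypt(img: np.ndarray, x0: float = 0.3567) -> np.ndarray:
    flat = img.ravel().copy()
    seq = logistic_sequence(flat.size, x0)
    perm = np.argsort(seq)                      # confusion: shuffle pixels
    keystream = (seq * 256).astype(np.uint8)    # diffusion: XOR keystream
    return (flat[perm] ^ keystream).reshape(img.shape)

def decrypt(enc: np.ndarray, x0: float = 0.3567) -> np.ndarray:
    flat = enc.ravel()
    seq = logistic_sequence(flat.size, x0)
    perm = np.argsort(seq)
    keystream = (seq * 256).astype(np.uint8)
    out = np.empty_like(flat)
    out[perm] = flat ^ keystream                # undo XOR, then unshuffle
    return out.reshape(enc.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
assert np.array_equal(decrypt(encrypt(img)), img)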

 

Save, Shraddha; Gala, Mansi; Patil, Surabhi; Kalbande, Dhananjay R., "Applying Human Computer Interaction To Individual Security Using Mobile Application," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 6, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045691 Individual security is a major concern in both developed and developing countries. There are various ways an individual can assure his or her safety, one of which is mobile applications. The authors compared various security mobile applications and analyzed users' opinions about "security through mobile applications" through a survey [1]. Studying the problems, constraints, and importance of quick communication in emergency situations, they came up with a few significant modifications to existing mobile applications. To reduce communication time, they suggest voice-based activation of the application even when the application is locked. This idea is based on the principles of Human Computer Interaction and enhances the simplicity, usability, and accessibility of mobile applications as a security device.

Keywords: Computers; Global Positioning System; Internet; Mobile communication; Mobile handsets; Safety; Security ;GPS; HCI; emergency; location; message; mobile application; safety; voice (ID#: 15-3918)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045691&isnumber=7045627

 

Patil, Sandesh; Talele, Kiran, "Suspicious Movement Detection And Tracking Based On Color Histogram," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 6, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045698 In automated video surveillance applications, detection of suspicious human behaviour is of great practical importance. However, due to the random nature of human movements, reliable classification of suspicious human movements can be very difficult. The primary aim here is an approach to the problem of automatically tracking people and detecting unusual or suspicious movements in Closed Circuit TV (CCTV) videos. The authors propose a system for surveillance installed in indoor environments like entrances/exits of buildings, corridors, etc. Their work presents a framework that processes video data obtained from a CCTV camera fixed at a particular location. First, foreground objects are obtained using background subtraction; these foreground objects are then classified into people and inanimate objects (luggage) and tracked using a real-time blob-matching technique. Using temporal and spatial properties of these blobs, activities are classified using a semantics-based approach.

Keywords: Cameras; Computers; Correlation; Histograms; Image color analysis; Monitoring; Training; background subtraction color histogram; image processing; object detection; tracking (ID#: 15-3919)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045698&isnumber=7045627
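
A minimal OpenCV sketch of the front end of such a pipeline (the input file name and thresholds are assumptions; the paper's person/luggage classification and semantics-based activity analysis sit on top of this):

import cv2

cap = cv2.VideoCapture("corridor_cctv.mp4")        # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                 # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,  # denoise small speckles
                            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) > 500]          # ignore tiny regions
    # Matching blobs across frames (e.g., by color histogram similarity via
    # cv2.compareHist) yields the tracks used to classify movements.
    print(len(blobs), "foreground blobs")
cap.release()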

 

Kekre, H.B.; Sarode, Tanuja; Natu, Shachi, "Robust Watermarking By SVD Of Watermark Embedded In DKT-DCT And DCT Wavelet Column Transform Of Host Image," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 6, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045700 Watermarking in the wavelet domain and with SVD is popular due to its robustness. In this paper a watermarking technique using the DCT wavelet and the hybrid DKT-DCT wavelet along with SVD is proposed. The wavelet transform is applied to the host and SVD is applied to the watermark. A few singular values of the watermark are embedded in a mid-frequency band of the host. Scaling of singular values is done adaptively for each channel (red, green, and blue) using the highest transform coefficient from the selected mid-frequency band and the first singular value of the corresponding channel of the watermark. Singular values of the watermark are placed at the index positions of closely matching transform coefficients. This, along with the adaptive selection of the scaling factor, adds to the robustness of the watermarking technique. Performance of the proposed technique is evaluated against image processing attacks like cropping, compression using orthogonal transforms, noise addition, histogram equalization, and resizing. Performance for the DCT wavelet and the DKT-DCT wavelet is compared, and for many of the attacks the DCT wavelet is found to be better than the DKT-DCT wavelet.

Keywords: Discrete cosine transforms; Image coding; Noise; Robustness; Watermarking; Wavelet transforms; DCT wavelet; DKT-DCT wavelet; SVD; Watermarking; adaptive scaling factor (ID#: 15-3920)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045700&isnumber=7045627

 

Khuspe, Kalyani; Mane, Vanita, "Robust Image Forgery Localization And Recognition In Copy-Move Using Bag Of Features and SVM," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 5, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045718 In today's world of advanced digital computing, tampering with digital images can easily be performed by a tyro using a number of accessible advanced image processing software packages like Adobe Photoshop, Corel Draw, etc. It therefore becomes very challenging for end users to distinguish whether an image is original or forged. In fields such as forensics, medical imaging, e-commerce, and industrial photography, the legitimacy and integrity of digital images is crucial. This motivates the need for detection tools that are transparent to tampering and can reveal whether an image has been counterfeited just by scrutinizing the counterfeit image. Many new methods with improved detection performance have recently been presented, yet there is still room to improve this performance further; some state-of-the-art image tamper detection techniques have been selected as the basis for this proposal. In the training stage, keypoints are extracted for every training image using the mirror-invariant feature transform (MIFT); after K-means clustering, a vector quantization technique maps the keypoints from every training image into a unified-dimensional histogram vector (bag-of-words). This histogram is treated as an input vector for a multiclass SVM to build the training classifier. In the testing stage, the keypoints are extracted and fed into the cluster model to map them into a bag-of-words vector, which is finally fed into the multiclass SVM training classifier to recognize the tampered region.

Keywords: Computers; Digital images; Feature extraction; Forgery; Support vector machines; Training; Vectors; Bag-of-words; K-means; MIFT; blind image forensics; codebook; copymove image forgery; support vector machine (SVM) (ID#: 15-3921)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045718&isnumber=7045627
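
A compact scikit-learn sketch of the bag-of-words training stage described above, with random arrays standing in for MIFT descriptors (descriptor extraction itself is out of scope here):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for MIFT keypoint descriptors: one (n_i, 64) array per
# training image; in the real pipeline these come from feature extraction.
images = [rng.normal(size=(rng.integers(50, 80), 64)) for _ in range(20)]
labels = np.array([0] * 10 + [1] * 10)     # 0 = pristine, 1 = tampered

# Build the visual vocabulary (codebook) by k-means over all descriptors.
k = 32
codebook = KMeans(n_clusters=k, n_init=10, random_state=0)
codebook.fit(np.vstack(images))

def bag_of_words(desc: np.ndarray) -> np.ndarray:
    # Quantize descriptors to codewords and return a normalized histogram.
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

X = np.array([bag_of_words(d) for d in images])
clf = SVC(kernel="rbf").fit(X, labels)     # the paper uses a multiclass SVM
print(clf.predict([bag_of_words(rng.normal(size=(60, 64)))]))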

 

Rajesh, Shah Avani; Bharadi, Vinayak A.; Jangid, Pravin, "Performance Improvement Of Complex Plane Based Feature Vector For Online Signature Recognition Using Soft Biometric Features," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 7, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045719 This paper proposes an approach for verification of online signatures, one of the biometric modalities. The signature is captured on a digitizer, and the missing points of the signature are calculated with the MDDA algorithm. The paper uses a notion of feature vector generation based on the CAL-SAL function: to extract the feature vector, intermediate transforms of the rows and columns are evaluated and distributed over the complex Walsh plane, and the mean value for each block is calculated, separating the first and last rows and columns of the mean and density of CAL-SAL components for the other transforms. Lastly, soft biometric features are added to improve the performance. The results for the unimodal and multi-algorithmic feature vectors are compared, and a Performance Index and Security Performance Index are evaluated to capture the performance of the system.

Keywords: Algorithm design and analysis; Computers; Feature extraction; Performance analysis; Vectors; Wavelet transforms; CAL & SAL functions; Complex Plane; Multialgorhmic Features; Unimoal Features (ID#: 15-3922)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045719&isnumber=7045627

 

Saudagar, Abdul Khader Jilani; Mohammed, Habeeb Vulla; Iqbal, Kamran; Gyani, Yasir Javed, "Efficient Arabic Text Extraction And Recognition Using Thinning And Dataset Comparison Technique," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 5, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045725 The objective of this research paper is to propose a novel technique for Arabic text extraction and recognition, part of research work aimed at developing a system for moving Arabic video text extraction for efficient content-based indexing and searching. Numerous techniques were proposed in the past for text extraction, but very few focus on Arabic text, and none of the earlier implementations attains 100% accuracy in the text extraction and recognition process. The proposed technique is new and is based on thinning the given sample image containing Arabic text and splitting the resulting image horizontally (in the X-axis direction) from right to left at equal intervals. Each part of the image is compared for an equal number of white pixels against samples in the dataset. Upon a match, the corresponding character is stored in an array with the help of its index value. This process is repeated, varying the splitting interval, until all the characters in the sample image are recognized. To the authors' knowledge, this research is the first to address the above problem and propose a solution with increased retrieval accuracy and reduced computation time for Arabic text extraction and recognition.

Keywords: Character recognition; Data mining; Feature extraction; Image color analysis; Image edge detection; Indexes; Text recognition; Arabic Text Extraction; Arabic Text Recognition; Indexing; Searching; Traversing (ID#: 15-3923)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045725&isnumber=7045627
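
A toy sketch of the splitting-and-matching step described above (the dataset format, tolerance, and character profiles are invented for illustration): after thinning, the binary image is cut into equal vertical strips, read right to left, and the white-pixel count of each strip is compared against per-character reference counts.

import numpy as np

def strip_profile(binary: np.ndarray, n_strips: int) -> list[int]:
    # Split the thinned binary image into equal vertical strips and count
    # white pixels per strip, ordered right to left as Arabic is read.
    strips = np.array_split(binary, n_strips, axis=1)
    return [int(s.sum()) for s in reversed(strips)]

def recognize(binary: np.ndarray, dataset: dict[str, list[int]],
              n_strips: int, tol: int = 2) -> str | None:
    profile = strip_profile(binary, n_strips)
    for char, ref in dataset.items():
        if len(ref) == len(profile) and \
           all(abs(a - b) <= tol for a, b in zip(profile, ref)):
            return char
    return None   # caller retries with a different splitting interval

# Hypothetical reference profiles for two characters at n_strips = 4:
dataset = {"alif": [0, 9, 9, 0], "ba": [4, 2, 2, 7]}
sample = np.zeros((12, 8), dtype=np.uint8)
sample[1:10, 3:5] = 1                                  # a thinned vertical stroke
print(recognize(sample, dataset, n_strips=4))          # -> "alif"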

 

Thepade, Sudeep D.; Bhondave, Rupali K., "Bimodal Biometric Identification with Palmprint and Iris Traits Using Fractional Coefficients of Walsh, Haar and Kekre Transforms," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 4, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045729 Biometric identification verifies user identity by comparing an encoded value with a stored value of the concerned biometric characteristic. A multimodal person authentication system is more effective and more challenging, and the fusion of multiple biometric traits helps minimize the system error rate. The energy compaction of transforms into the higher coefficients is exploited here to reduce the feature vector size of an image by taking fractional coefficients of the transformed image. A smaller feature vector takes less time to compare, resulting in faster identification. Iris and Palmprint are taken together here for bimodal biometric identification with fractional energy of Kekre, Walsh, and Haar transformed Palmprint and Iris images. A test bed of 60 pairs of Iris and Palmprint samples from 10 persons (6 per person of both Iris and Palmprint) is used for experimentation. Experimental results show that fractional coefficients perform better, as indicated by higher GAR values, than consideration of 100% of the coefficients. With the Walsh and Haar transforms, bimodal identification with Iris and Palmprint could not outperform Palmprint alone, but it did outperform Iris alone; with the Kekre transform, the bimodal combination of Palmprint and Iris showed improved performance.

Keywords: Computers; Databases; Educational institutions; Feature extraction; Iris recognition; Transforms; Vectors; Feature Vector; GAR; Haar transform; Kekre transform; Walsh transform (ID#: 15-3924)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045729&isnumber=7045627

 

Shaikh, Ansar Ahemad; Vani, Nilesh S, "An Extended Approach For Securing The Short Messaging Services Of GSM Using Multi-Threading Elliptical Curve Cryptography," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 5, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045733 Currently, mobile phones are used not only for formal communication but also for sending and receiving sensitive information. Short Messaging Services (SMS) are one of the popular ways of communication: sending an SMS is easy, quick, and inexpensive. However, when information is exchanged using SMS, protecting the short messages from known attacks like the man-in-the-middle attack, replay attack, and non-repudiation attack is very difficult, and such attacks on SMS security are hindering its application. This paper evaluates the performance of elliptical curve cryptography implemented with and without threading. Based on the results of the evaluation, the variant with the best performance is chosen for end-to-end SMS security.

Keywords: Computers; Elliptic curve cryptography; Elliptic curves; Message systems; Mobile communication; Elliptical curve cryptography; Public key cryptosystem; SMS Security (ID#: 15-3925)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045733&isnumber=7045627

 

Dhage, Sudhir; Kundra, Pranav; Kanchan, Anish; Kap, Pratiksha, "Mobile Authentication Using Keystroke Dynamics," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 5, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045746 As mobiles become ubiquitous, they are used more and more for operations that may involve sensitive data or large amounts of money, and the mobile is increasingly the tool of preference for browsing and using the Internet. Hence, new security measures must be developed to support the increased functionality of these devices and to protect users in the case of any mishap. Authentication using keystroke dynamics for mobiles is an area that has received attention recently. In this paper, the authors study some earlier authentication techniques, including relative and absolute distance measures, mean and standard deviation based methods, and feature fusion methods. They provide a method for authentication using fusion techniques for their own novel mean and standard deviation based approaches, which gives low error rates.

Keywords: Authentication; Computers; Feature extraction; Heuristic algorithms; Mobile communication; Standards; Biometric Security; Keystroke Dynamics; Mobile Authentication (ID#: 15-3926)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045746&isnumber=7045627
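
A minimal sketch of the mean and standard deviation approach the abstract above mentions (the z threshold and acceptance ratio are assumptions): enroll per-digraph timing statistics, then accept an attempt if enough of its timings fall within mean plus or minus z times the standard deviation.

import numpy as np

def enroll(samples: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    # samples: (n_attempts, n_digraphs) key-hold/flight times in ms.
    return samples.mean(axis=0), samples.std(axis=0) + 1e-6

def authenticate(attempt, mean, std, z=2.0, min_ratio=0.75) -> bool:
    # Accept when at least min_ratio of the timings fall inside the band.
    within = np.abs(attempt - mean) <= z * std
    return within.mean() >= min_ratio

rng = np.random.default_rng(7)
template = rng.normal(150, 20, (20, 8))          # genuine user's enrollments
mean, std = enroll(template)
genuine = rng.normal(150, 20, 8)
impostor = rng.normal(220, 40, 8)
# Typically accepts the genuine attempt and rejects the impostor.
print(authenticate(genuine, mean, std), authenticate(impostor, mean, std))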

 

Morey, Anita K.; Morey, Kailash H., "Remote File Search Using SMS," Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, pp. 1, 6, 15-17 Jan. 2015. doi: 10.1109/ICCICT.2015.7045753 Due to the exponential increase in information transfer and communication using messages, the Short Message Service (SMS) has become important, and efficient, easy-to-use techniques are being developed for it. In the last few years, SMS has made a big impact on the way people communicate: instead of communicating over the phone using voice, people prefer SMS for messaging as well as for information exchange. This paper proposes a method of implementing an extendable generic application that searches for a file on a remote desktop and mails it to the user. Mobile users send the required information through an SMS to a mobile gateway, which then forwards it to the generic application. Using the information sent by the user, such as the file name, folder name or drive, and the user's email address, the generic application automatically searches for the requested file on the remote machine and mails it to the user at his email address.

Keywords: Computers; Electronic mail; GSM; Ground penetrating radar; Mobile communication; Mobile handsets; Modems; AT commands; Desktop Search; GPRS; GSM; Google; Quick search Yahoo!; SMS (ID#: 15-3927)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045753&isnumber=7045627
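
A sketch of the application's core loop under assumed conventions (the "filename;folder;email" SMS payload format, the function names, and the SMTP relay are all hypothetical; gateway integration is omitted):

import os
import smtplib
from email.message import EmailMessage

def find_file(name: str, root: str) -> str | None:
    # Walk the directory tree on the remote desktop looking for the file.
    for dirpath, _dirs, files in os.walk(root):
        if name in files:
            return os.path.join(dirpath, name)
    return None

def search_and_mail(sms_text: str, smtp_host: str = "localhost") -> None:
    # Parse the SMS payload, find the file, and mail it as an attachment.
    name, root, address = sms_text.split(";")
    path = find_file(name, root)
    if path is None:
        return
    msg = EmailMessage()
    msg["Subject"] = f"Requested file: {name}"
    msg["To"] = address
    msg["From"] = "desktop@example.com"
    with open(path, "rb") as f:
        msg.add_attachment(f.read(), maintype="application",
                           subtype="octet-stream", filename=name)
    with smtplib.SMTP(smtp_host) as server:   # assumes a reachable SMTP relay
        server.send_message(msg)

# Example invocation (requires the file and an SMTP relay to exist):
# search_and_mail("report.pdf;/home/user/docs;user@example.com")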


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

International Conferences: Conference on Networking Systems & Security (NSysS), Dhaka, Bangladesh

 

 

International Conference on Networking Systems and Security (NSysS)
Dhaka, Bangladesh

 


 

The 2015 International Conference on Networking Systems and Security (NSysS) was held 5-7 January 2015 in Dhaka, Bangladesh. The program covered research ideas and results in networking systems and security. The topics of discussion included recent advances in theoretical and experimental research addressing computer networks, networking systems, and security across academia and industry.

 

Ahmad, Sahan; Alam, Kazi Md.Rokibul; Rahman, Habibur; Tamura, Shinsuke, "A Comparison Between Symmetric And Asymmetric Key Encryption Algorithm Based Decryption Mixnets," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1, 5, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7043532 This paper presents a comparison, through simulation, between symmetric and asymmetric key encryption algorithm based decryption mixnets. Mix-servers involved in a decryption mixnet receive independently and repeatedly encrypted messages as their input, then successively decrypt and shuffle them to generate a new, altered output from which the messages are finally regained. Mixnets thus ensure unlinkability and anonymity between the senders and the receiver of messages. Both symmetric (e.g., one-time pad, AES) and asymmetric (e.g., RSA and ElGamal cryptosystems) key encryption algorithms can be exploited to build decryption mixnets. This paper evaluates both symmetric (e.g., ESEBM: enhanced symmetric key encryption based mixnet) and asymmetric (e.g., RSA and ElGamal based) key encryption algorithm based decryption mixnets. They are evaluated on several criteria: the number of messages traversing the mixnet, the number of mix-servers involved, and the key length of the underlying cryptosystem. Finally, the mixnets are compared on the basis of the computation time required under the above-mentioned criteria while sending messages anonymously.

Keywords: Algorithm design and analysis; Encryption; Generators; Public key; Receivers; Servers; Anonymity; ElGamal; Mixnet; Privacy; Protocol; RSA; Symmetric key encryption algorithm (ID#: 15-3895) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043532&isnumber=7042935
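
A small symmetric-key sketch of the decryption-mixnet idea, using the third-party cryptography package's Fernet cipher as a stand-in (the paper's ESEBM, RSA, and ElGamal constructions differ): the sender wraps each message in one encryption layer per mix-server, and each server strips its layer and shuffles the batch, unlinking senders from outputs.

import random
from cryptography.fernet import Fernet

server_keys = [Fernet(Fernet.generate_key()) for _ in range(3)]

def wrap(message: bytes) -> bytes:
    # Layered encryption: the innermost layer belongs to the last server.
    for key in reversed(server_keys):
        message = key.encrypt(message)
    return message

def mix_server(key: Fernet, batch: list[bytes]) -> list[bytes]:
    batch = [key.decrypt(m) for m in batch]  # strip this server's layer
    random.shuffle(batch)                    # break input/output ordering
    return batch

batch = [wrap(m) for m in [b"vote A", b"vote B", b"vote C"]]
for key in server_keys:
    batch = mix_server(key, batch)
print(batch)    # plaintexts recovered, but in an unlinkable order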

 

Zaman, Mehedee; Siddiqui, Tazrian; Amin, Mohammad Rakib; Hossain, Md.Shohrab, "Malware Detection in Android by Network Traffic Analysis," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1, 5, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7043530 A common behavior of mobile malware is transferring sensitive information of the cell phone user to malicious remote servers. In this paper, the authors describe and demonstrate in full detail a method for detecting malware based on this behavior. First, an App-URL table is created that logs all attempts made by all applications to communicate with remote servers; each entry in this log preserves the application id and the URI that the application contacted. From this log, with the help of a reliable and comprehensive domain blacklist, rogue applications that communicate with malicious domains can be detected. The authors further propose a behavioral analysis method using syscall tracing. Their work can be integrated with behavioral analysis to build an intelligent malware detection model.

Keywords: Androids; Humanoid robots; Malware; Mobile communication; Ports (Computers);Servers; Uniform resource locators; ADB; Android; Busybox; malware detection; netstat; pcap (ID#: 15-3896) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043530&isnumber=7042935
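
The App-URL matching step at the heart of this method is simple to illustrate. A minimal sketch follows, with hypothetical log field names and an assumed blacklist feed (the paper builds its log from network captures on the device itself):

```python
from urllib.parse import urlparse

# Assumed blacklist feed; a real deployment would use a maintained domain list.
DOMAIN_BLACKLIST = {"malware-c2.example", "evil-tracker.example"}

# Hypothetical App-URL table entries (application id + contacted URI).
app_url_log = [
    {"app_id": "com.example.game", "uri": "http://ads.example/click"},
    {"app_id": "com.example.flashlight", "uri": "http://malware-c2.example/beacon"},
]

def flag_rogue_apps(log, blacklist):
    """Return the ids of apps that contacted any blacklisted domain."""
    rogue = set()
    for entry in log:
        domain = urlparse(entry["uri"]).hostname or ""
        if domain in blacklist:
            rogue.add(entry["app_id"])
    return rogue

print(flag_rogue_apps(app_url_log, DOMAIN_BLACKLIST))  # {'com.example.flashlight'}
```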

 

Khan, Fahim Hasan; Ali, Mohammed Eunus; Dev, Himel, "A Hierarchical Approach For Identifying User Activity Patterns From Mobile Phone Call Detail Records," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-6, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7043535 With the increasing use of mobile devices, it is now possible to collect different data about the day-to-day activities of a user's personal life. Call Detail Records (CDRs) are a dataset available at large scale, as they are already constantly collected by mobile operators, mostly for billing purposes. By examining this data it is possible to analyze the activities of people in urban areas and discover the behavioral patterns of their daily life. These datasets can be used for many applications that vary from urban and transportation planning to predictive analytics of human behavior. In our research work, we have proposed a hierarchical analytical model where this CDR dataset is used to find facts about the daily life activities of urban users in multiple layers. In our model, only the raw CDR data are used as input to the initial layer, and the outputs from each layer are combined with the original CDR data as new input to the next layer to find more detailed facts, e.g., traffic density in different areas on working days and holidays. So, the output of each layer is dependent on the results of the previous layers. This model utilized a one-month CDR dataset collected from Dhaka city, one of the most densely populated cities of the world. The main focus of this research work is thus to explore the usability of these types of datasets for innovative applications, such as urban planning and traffic monitoring and prediction, in a fashion more appropriate for densely populated areas of developing countries.

Keywords: Analytical models; Cities and towns; Data models; Employment; Mobile handsets; Poles and towers; Transportation (ID#: 15-3897) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043535&isnumber=7042935

 

Ahmed, Shamir; Rizvi, A.S.M; Mansur, Rifat Sabbir; Amin, Md.Rafatul; Islam, A.B.M.Alim Al, "User Identification Through Usage Analysis Of Electronic Devices," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-6, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7043518 Different aspects of the usage of electronic devices vary significantly from person to person, and rigorous usage analysis therefore shows promise for identifying a user and securing the devices. Different state-of-the-art approaches have investigated individual aspects of usage, such as typing speed and dwell time, in isolation for identifying a user. However, investigating multiple aspects of usage in combination has yet to be explored in the literature. Therefore, in this paper, we investigate multiple aspects of usage in combination to identify a user. We perform the investigation over real users by letting them interact with an Android application, which we developed specifically for the investigation. Our investigation reveals a key finding: considering multiple aspects of usage in combination provides improved performance in identifying a user. This improvement holds up to a certain number of usage aspects being considered in the identification task.

Keywords: Clustering algorithms; Measurement; Mobile handsets; Presses; Pressing; Security; Standards (ID#: 15-3898) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043518&isnumber=7042935

 

Akter, Mahmuda; Rahman, Md.Obaidur; Islam, Md.Nazrul; Habib, Md.Ahsan, "Incremental Clustering-Based Object Tracking In Wireless Sensor Networks," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-6, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7043534 Moving object tracking has been actively pursued in the Wireless Sensor Network (WSN) community for the past decade. As a consequence, a number of methods have been developed from different angles of assessment, with relatively satisfying performance. Amongst those, clustering-based object tracking has shown significant results, which in turn makes the network scalable and energy-efficient for large-scale WSNs. As of now, static cluster based object tracking is the most common approach for large-scale WSNs. However, as static clusters are restricted from sharing information globally, tracking can be lost at the boundary region of static clusters. In this paper, an Incremental Clustering Algorithm is proposed in conjunction with a Static Clustering Technique to track an object consistently throughout the network, solving the boundary problem. The proposed research follows a Gaussian Adaptive Resonance Theory (GART) based Incremental Clustering that creates and updates clusters incrementally to incorporate incessant motion patterns without corrupting previously learned clusters. The objective of this research is to continue tracking at the boundary region in an energy-efficient way as well as to ensure robust and consistent object tracking throughout the network. The network lifetime performance metric shows significant improvements for Incremental Static Clustering at the boundary regions over existing clustering techniques.

Keywords: Algorithm design and analysis; Clustering algorithms; Energy efficiency; Heuristic algorithms; Object tracking; Wireless sensor networks; Adaptive Resonance Theory; Energy-efficiency; Incremental Clustering; Object Tracking; Wireless Sensor Networks (WSN) (ID#: 15-3899) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043534&isnumber=7042935

 

Rizvi, A.S.M; Ahmed, Shamir; Bashir, Minhajul; Uddin, Md Yusuf Sarwar, "MediaServ: Resource Optimization In Subscription Based Media Crowdsourcing," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-5, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7043527 In this paper we propose resource optimization for subscription-based media content crowdsourcing. In this form of crowdsourcing, interested entities (we refer to them as Campaigners) announce 'interests' expressing what media content (such as pictures, audio, and videos) they want to receive from participant users, whereas mobile users subscribe to those interests as an intention to serve content satisfying them. Campaigners solicit content generated by users by mentioning explicit criteria that the media content should satisfy; for example, a 'noise pollution' campaigner who wants to measure the noise level of a city neighborhood may ask potential users for audio clips recorded at a certain location at peak hours on weekdays. Subscribed users voluntarily or on paid terms generate content against those interests. Given that a user may subscribe to different campaign interests and its generated content may satisfy different interests with varying degrees of accuracy, we propose methods to evaluate content based on the degree of satisfaction against the subscribed interests, and then develop techniques for delivering that content to the campaign endpoints so as to optimize the user's resource utilization, such as energy and bandwidth.

Keywords: Cities and towns; Crowdsourcing; Media; Mobile communication; Sensors; Subscriptions (ID#: 15-3900) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043527&isnumber=7042935

 

Nurain, Novia; Mostakim, Moin; Islam, A.B.M.Alim Al, "Towards Empirical Study Based Mathematical Modeling For Throughput of MANETs," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-6, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7043524 Mathematical modeling of MANET throughput that considers the impact of different layers in the protocol stack, in addition to that of different network parameters, remains unexplored to date, even though such modeling is considered the fastest and most cost-effective tool for evaluating the performance of a network. Therefore, in this paper, we attempt to develop a mathematical model for the throughput of MANETs considering both aspects. In addition, we focus on developing mathematical models for delivery ratio and drop ratio, as these metrics limit the maximum throughput of a network. In our analysis, we perform rigorous simulation using ns-2 to capture the performance of MANETs under diversified settings. Our rigorous empirical study reveals that we need to develop cross-layer mathematical models for throughput, delivery ratio, and drop ratio to represent the performance of MANETs, and that such mathematical models need to resolve higher-order polynomial equations. Consequently, our study uncovers a key finding: mathematical modeling of MANETs considering variation in all parameters is not feasible.

Keywords: Ad hoc networks; Fluctuations; Market research; Mathematical model; Measurement; Mobile computing; Throughput; MANET; Mathematical modeling; ns-2 (ID#: 15-3901) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043524&isnumber=7042935

 

Kabir, Tanjida; Nurain, Novia; Kabir, Md.Humayun, "Pro-AODV (Proactive AODV): Simple Modifications To AODV For Proactively Minimizing Congestion in VANETs," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-6, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7043521 Vehicular Ad Hoc Networks (VANETs) are key to realizing Intelligent Transportation Systems (ITS). Although VANETs belong to the class of Mobile Ad Hoc Networks (MANETs), and there are numerous routing protocols for MANETs, none of these protocols is directly applicable to VANETs. In particular, VANETs are highly dynamic due to the high-speed mobility of vehicles, and traditional routing algorithms for MANETs cannot deal with such dynamicity of network nodes. Several comparative studies have suggested AODV (Ad hoc On-Demand Distance Vector), a well-known MANET protocol that is adaptive to dynamic changes in the network and makes efficient utilization of network resources, to be the best candidate for VANETs. However, verbatim adoption of AODV is not an efficient routing solution for VANETs. Recent works have therefore proposed various modifications and/or additions to AODV to make it suitable for VANETs. It is particularly important to control congestion in VANETs by efficiently dealing with AODV "Route Request" (RREQ) packets. In this paper, we propose Pro-AODV (Proactive AODV), a protocol that uses information from the AODV routing table to minimize congestion in VANETs, yet sustains other performance metrics at acceptable levels. The novelty and elegance of Pro-AODV come from the fact that it does not require the execution of any additional logic; it is sufficient to know only the size of the routing table at each node.

Keywords: Delays; Probabilistic logic; Routing; Routing protocols; Vehicles; Vehicular ad hoc networks; AODV; VANET; congestion; routing protocol; routing table (ID#: 15-3902) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043521&isnumber=7042935
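
The abstract does not give Pro-AODV's exact forwarding rule, but the idea of damping RREQ rebroadcasts as the local routing table fills can be sketched. The rule below is illustrative only, not the paper's formula:

```python
import random

def rreq_forward_probability(table_size: int, max_entries: int = 50) -> float:
    """Illustrative rule: a fuller routing table suggests a denser,
    better-explored neighborhood, so rebroadcast RREQs less aggressively."""
    return max(0.1, 1.0 - table_size / max_entries)

def maybe_forward_rreq(routing_table: dict) -> bool:
    """Probabilistically decide whether to rebroadcast an incoming RREQ."""
    return random.random() < rreq_forward_probability(len(routing_table))
```

The appeal of such a rule, as the abstract notes, is that the only input is the size of the node's own routing table, which AODV already maintains.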

 

Ferdous, S.M.; Rahman, M.Sohel, "A Metaheuristic Approach For Application Partitioning In Mobile System," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-6, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7043520 Mobile devices such as smartphones are extremely popular now. In spite of their huge popularity, the computational ability of mobile devices is still low. Computational offloading is a way to transfer some of the heavy computational tasks to a server (cloud) so that the efficiency and usability of the system increase. In this paper, we have developed a metaheuristic approach for application partitioning to maximize throughput and performance. Preliminary experiments suggest that our approach is better than the traditional all-cloud and all-mobile approaches.

Keywords: Computers; Mobile communication; Mobile computing; Mobile handsets; Partitioning algorithms; Servers; Throughput (ID#: 15-3903) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043520&isnumber=7042935

 

Sen, Ayon; Islam, A.S.M.Sohidull; Uddin, Md Yusuf Sarwar, "MARQUES: Distributed Multi-Attribute Range Query Solution Using Space Filling Curve on DHTs," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-9, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7043516 This paper proposes a distributed peer-to-peer data lookup technique on DHTs in order to serve range queries over multiple attributes. The scheme, MARQUES, uses space filling curves to map multi-attribute data points to a one-dimensional key space and thus effectively converts multi-attribute range queries into a consecutive series of one-dimensional keys. These keys are then used to place or look up data objects over a DHT. Space filling curves preserve the locality of attribute values and thus help greatly in facilitating range queries in terms of the number of nodes to be searched to serve a given range query. MARQUES, although it can be instrumented with any space filling curve, has been implemented with two curves, namely the Z-order curve and the Hilbert curve, and uses a multi-level variant of Chord, a popular DHT, as its underlying overlay. Simulation results on OMNET++ show that MARQUES successfully answers range queries with significant efficiency in terms of message overhead and query latency.

Keywords: Computer science; Distributed databases; Educational institutions; Indexes; Peer-to-peer computing; Protocols; Routing (ID#: 15-3904) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043516&isnumber=7042935
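
Of the two curves MARQUES implements, the Z-order (Morton) curve is the simpler: the one-dimensional key is formed by interleaving the bits of the attribute values, so points close in attribute space tend to receive close keys. A minimal sketch:

```python
def z_order_key(attrs, bits=16):
    """Interleave the bits of each attribute (MSB first) into one Morton key."""
    key = 0
    for bit in range(bits - 1, -1, -1):
        for value in attrs:
            key = (key << 1) | ((value >> bit) & 1)
    return key

# Nearby multi-attribute points map to nearby keys; distant ones do not.
print(z_order_key((3, 5)), z_order_key((3, 6)), z_order_key((200, 7)))
```

A multi-attribute range query then becomes a set of key intervals, which the DHT overlay (here, a multi-level Chord variant) can resolve with few node visits.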

 

Yanhaona, Muhammad N.; Prodhan, Md Anindya T.; Grimshaw, Andrew S., "An Agent-Based Distributed Monitoring Framework (Extended abstract)," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-10, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7043515 In compute clusters, monitoring of infrastructure and application components is essential for performance assessment, failure detection, problem forecasting, better resource allocation, and several other reasons. Present day trends towards larger and more heterogeneous clusters, rise of virtual data-centers, and greater variability of usage suggest that we have to rethink how we do monitoring. We need solutions that will remain scalable in the face of unforeseen expansions, can work in a wide-range of environments, and be adaptable to changes of requirements. We have developed an agent-based framework for constructing such monitoring solutions. Our framework deals with all scalability and flexibility issues associated with monitoring and leaves only the use-case specific task of data generation to the specific solution. This separation of concerns provides a versatile design that enables a single monitoring solution to work in a range of environments; and, at the same time, enables a range of monitoring solutions exhibiting different behaviors to be constructed by varying the tunable parameters of the framework. This paper presents the design, implementation, and evaluation of our novel framework.

Keywords: Fault tolerance; Heart beat; Monitoring; Quality of service; Receivers; Routing; Scalability; autonomous systems; cluster monitoring; distributed systems; flexibility; scalability (ID#: 15-3905) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043515&isnumber=7042935

 

Sadat, Md.Nazmus; Mohiuddin, Muhammad Tasnim; Uddin, Md.Yusuf Sarwar, "On Bounded Message Replication In Delay Tolerant Networks," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-10, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7042952 Delay tolerant networks (DTNs) are wireless networks in which, at any given time instance, the probability that there is an end-to-end path from a source to a destination is low. Conventional solutions do not generally work in DTNs because they assume that the network is stable most of the time and that failures of links between nodes are infrequent. Therefore, a store-carry-and-forward paradigm is used for routing messages in DTNs. To deal with DTNs, researchers have suggested flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention, which can significantly degrade their performance. For this reason, a family of multi-copy protocols called Spray routing was proposed, which can achieve both good delays and low transmissions. Spray routing algorithms generate only a small, carefully chosen number of copies to ensure that the total number of transmissions is small and controlled. Spray and Wait sprays a number of copies into the network, and then waits till one of these nodes meets the destination. In this paper, we propose a set of spraying heuristics that dictate how replicas are shared among nodes. These heuristics are based on delivery probabilities derived from contact histories.

Keywords: Binary trees; Delays; History; Probabilistic logic; Routing; Routing protocols; Spraying; Delay tolerant network; Spray and Wait; routing protocol (ID#: 15-3906) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042952&isnumber=7042935
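
For context, the classic Binary Spray and Wait splitting rule, plus an illustrative probability-weighted variant in the spirit of the paper's contact-history heuristics (the exact heuristics are the paper's own), can be sketched as:

```python
def binary_spray(copies: int):
    """Binary Spray and Wait: hand half of the remaining copies to the
    encountered node; with one copy left, switch to the Wait phase."""
    if copies <= 1:
        return copies, 0          # wait phase: deliver directly or not at all
    give = copies // 2
    return copies - give, give

def probabilistic_spray(copies: int, my_prob: float, peer_prob: float):
    """Illustrative variant: share copies in proportion to the nodes'
    delivery probabilities derived from contact histories."""
    if copies <= 1:
        return copies, 0
    share = peer_prob / (my_prob + peer_prob)
    give = min(max(1, round(copies * share)), copies - 1)  # always keep one
    return copies - give, give

print(binary_spray(8))                   # (4, 4)
print(probabilistic_spray(8, 0.2, 0.6))  # most copies go to the better relay
```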

 

Nur, Fernaz Narin; Sharmin, Selina; Razzaque, Md.Abdur; Islam, Md.Shariful, "A Duty Cycle Directional MAC Protocol For Wireless Sensor Networks," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-9, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7042950 The directional transmission and reception of data packets in sensor networks minimizes interference and thereby increases network throughput, and thus Directional Sensor Networks (DSNs) are gaining popularity. However, the use of directional antennas has introduced new problems in designing the medium access control (MAC) protocol for DSNs, including the synchronization of antenna direction between a sender-receiver pair. In this paper, we have developed a duty cycle MAC protocol for DSNs, namely DCD-MAC, that synchronizes each pair of parent-child nodes and schedules their transmissions in such a way that transmissions from child nodes minimize collisions, and nodes are awake only when they have transmission-reception activities. The proposed DCD-MAC is fully distributed and exploits only localized information to ensure a weighted share of the transmission slots among the child nodes. We perform extensive simulations to study the performance of DCD-MAC, and the results show that our protocol outperforms a state-of-the-art directional MAC protocol in terms of throughput and network lifetime.

Keywords: Data transfer; Directional antennas; Media Access Protocol; Resource management; Synchronization; Wireless sensor networks (ID#: 15-3907) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042950&isnumber=7042935

 

Sharmin, Selina; Nur, Fernaz Narin; Razzaque, Md.Abdur; Rahman, Md.Mustafizur, "Network Lifetime Aware Area Coverage For Clustered Directional Sensor Networks," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-9, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7042949 The problem of field or area coverage in Directional Sensor Networks (DSNs) presents huge research challenges, including the appropriate selection of sensors and their active sensing directions in an energy-efficient way. Existing solutions execute coverage enhancement algorithms in each individual sensor node, leading to high communication and computation overheads, loss of energy, and reduced accuracy. In this paper, we propose a novel network lifetime aware area coverage solution, NLAC, for a clustered DSN, where distributed cluster heads (CHs) have the responsibility of determining the number of active member nodes and their sensing directions. The CHs minimize the overlapping coverage area and energy consumption by switching more nodes to the sleep state. The proposed NLAC system is fully distributed and exploits single-hop neighborhood information only. Results from extensive simulations show that the NLAC system offers better performance in terms of coverage area and network lifetime.

Keywords: Area measurement; Clustering algorithms; Computer science; Educational institutions; Electronic mail; Sensors (ID#: 15-3908) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042949&isnumber=7042935

 

Zohra, Fatema Tuz; Rahman, Ashikur, "Mathematical Analysis Of Self-Pruning And A New Dynamic Probabilistic Broadcast for MANETs," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-9, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7042947 The self-pruning broadcasting algorithm exploits neighbor knowledge to reduce redundant retransmissions in mobile ad hoc wireless networks (MANETs). Although in self-pruning only a subset of nodes forwards the message, based on a certain forwarding rule, it belongs to the category of reliable broadcasting algorithms, where a broadcast message is guaranteed (at least algorithmically) to reach all the nodes in the network. In this paper, we develop an analytical model to determine the expected number of forwarding nodes required to complete a broadcast under the self-pruning algorithm. The derived expression is a function of various network parameters (such as network density and distance between nodes) and radio transceiver parameters (such as transmission range). Moreover, the developed mathematical expression provides a better understanding of the highly complex packet forwarding pattern of the self-pruning algorithm and valuable insight for designing a new broadcasting heuristic. The proposed heuristic is a dynamic probabilistic broadcast where the rebroadcast probability of each node is dynamically determined from the developed mathematical expression. Extensive simulation experiments have been conducted to validate the accuracy of the analytical model as well as to evaluate the efficiency of the proposed heuristic. Performance analysis shows that the proposed heuristic outperforms the static probabilistic broadcasting algorithm and an existing solution proposed by Bahadili.

Keywords: Ad hoc networks; Broadcasting; Equations; Heuristic algorithms; Mathematical model; Probabilistic logic; Protocols (ID#: 15-3909)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042947&isnumber=7042935
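
The paper derives its rebroadcast probability from its own closed-form analysis; as a purely illustrative stand-in, a dynamic rule with the same qualitative behavior (edge nodes rebroadcast more, dense neighborhoods less) might look like:

```python
def rebroadcast_probability(num_neighbors: int, distance: float,
                            tx_range: float) -> float:
    """Illustrative dynamic rule, not the paper's derived expression:
    favor nodes far from the last forwarder (more new coverage) and
    damp rebroadcasts in dense neighborhoods."""
    coverage_gain = min(1.0, distance / tx_range)
    density_damping = 1.0 / max(1, num_neighbors)
    return min(1.0, coverage_gain + density_damping)

# A node near the edge of the sender's range in a dense neighborhood:
print(rebroadcast_probability(num_neighbors=10, distance=90.0, tx_range=100.0))
```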

 

Sayeed, Suri Dipannita; Hasan, Md.Sajid; Rahman, Md.Saidur, "Measuring Topological Robustness Of Scale-Free Networks Using Biconnected Components," Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-6, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7042945 Models of complex networks are dependent on various properties of networks like connectivity, accessibility, efficiency, robustness, degree distribution, etc. Network robustness is a parameter that reflects the attack tolerance of a network in terms of connectivity. In this paper we have tried to measure the robustness of a network in a way that gives a better idea of both the stability and reliability of a network. In some previous works, the existence of a giant connected component is considered as an indicator of the structural robustness of the entire system. In this paper we show that the size of a largest biconnected component can be a better parameter for measuring the robustness of a complex network. Our experimental study exhibits that scale-free networks are more vulnerable to sustained targeted attacks and more resilient to random failures.

Keywords: Artificial neural networks; Bridges; Complex networks; Graph theory; Robustness; Size measurement (ID#: 15-3910) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042945&isnumber=7042935
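
The paper's proposed metric, the size of the largest biconnected component, is straightforward to compute with standard graph tooling. A minimal sketch of the targeted-attack experiment on a scale-free network, using the networkx library:

```python
import networkx as nx

def largest_biconnected_size(G: nx.Graph) -> int:
    """Size of the largest biconnected component (0 for an empty graph)."""
    return max((len(c) for c in nx.biconnected_components(G)), default=0)

# Scale-free (Barabasi-Albert) network under a targeted attack on hubs.
G = nx.barabasi_albert_graph(n=1000, m=2, seed=1)
print("before:", largest_biconnected_size(G))
for _ in range(50):
    hub = max(G.degree, key=lambda kv: kv[1])[0]   # highest-degree node
    G.remove_node(hub)
print("after 50 hub removals:", largest_biconnected_size(G))
```

Repeating the removal loop with random nodes instead of hubs reproduces the paper's qualitative contrast between targeted attacks and random failures.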


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

 

International Conferences: Cryptography and Security in Computing Systems, Amsterdam, 2015

 

 
SoS Newsletter Logo

Cryptography and Security in Computing Systems
Amsterdam

 

The Second Workshop on Cryptography and Security in Computing Systems was held in conjunction with the HiPEAC 2015 Conference in Amsterdam on 19 January 2015. Topics of interest included compiler and runtime support for security, cryptography in embedded and reconfigurable systems, design automation and verification of security, efficient cryptography through multi/many core systems, fault attacks and countermeasures (including interaction with fault tolerance), passive side channel attacks and countermeasures, hardware architecture and extensions for cryptography, hardware/software security techniques, hardware Trojans and reverse engineering, physical unclonable functions, privacy in embedded systems, security of embedded and cyber-physical systems, security of networks-on-chips and multi-core architectures, and trusted computing.

 

Wei He, Alexander Herrmann; Placement Security Analysis for Side-Channel Resistant Dual-Rail Scheme in FPGA; CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 39. doi: 10.1145/2694805.2694813

Abstract: Physical implementations have significant impacts on the security level of hardware cryptography, mainly because the bottom-layer logic fundamentals typically act as the exploitable SCA leakage sources. As a widely studied countermeasure category, dual-rail precharged logic theoretically withstands side-channel analysis by compensating the data-dependent variations between two rails. In this paper, different placement schemes for a dual-rail framework in Xilinx FPGAs are investigated with respect to silicon process variations. The presented work is based on the practical implementation of a light-weight crypto coprocessor. Stochastic Approach [9] based SNR estimation is used as a metric to quantify the measurable leakage over a series of EM traces acquired by surface scanning of a decapsulated Virtex-5 device. Experimental results show that by employing a highly interleaved and identical dual-rail style in the diagonal direction, the routing symmetry can be further optimized. This improvement results in less influence from process variation between the dual rails, which in turn yields a higher security grade in terms of signal-to-noise ratio.

Keywords: Dual-rail Precharge Logic, EM Surface Scan, FPGA, Side-Channel Analysis, Signal-to-Noise Ratio (SNR), Stochastic Approach (ID#: 15-3928)

URL:   http://doi.acm.org/10.1145/2694805.2694813

 

Alexander Herrmann, Marc Stöttinger; Evaluation Tools for Multivariate Side-Channel Analysis; CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 1. doi: 10.1145/2694805.2694806

Abstract: The goal of side-channel evaluation is to estimate the vulnerability of an implementation against the most powerful attacks. In this paper, we present a closed equation for the success rate computation in a profiling-based side-channel analysis scenario. From this equation, we derive a metric that can be used for optimizing the attack scenario by finding the best set of considered points in time. Practical experiments demonstrate the advantages of this new method against other previously used feature selection algorithms.

Keywords:  Feature Selection, Multivariate Side-Channel Analysis (ID#: 15-3929)

URL: http://doi.acm.org/10.1145/2694805.2694806

 

Rainer Plaga, Dominik Merli; A new Definition and Classification of Physical Unclonable Functions; CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 7. doi: 10.1145/2694805.2694807

Abstract: A new definition of "Physical Unclonable Functions" (PUFs), the first one that fully captures the intuitive idea among experts, is presented. A PUF is an information-storage system with a security mechanism that 1. is meant to impede the duplication of a precisely described storage-functionality in another, separate system and 2. remains effective against an attacker with temporary access to the whole original system. A novel classification scheme of the security objectives and mechanisms of PUFs is proposed, and its usefulness in aiding future research and security evaluation is demonstrated. One class of PUF security mechanisms, which prevents an attacker from applying all addresses at which secrets are stored in the information-storage system, is shown to be closely analogous to cryptographic encryption. Its development marks the dawn of a new fundamental primitive of hardware-security engineering: cryptostorage. These results firmly establish PUFs as a fundamental concept of hardware security.

Keywords: ACM proceedings, Physical Unclonable Functions (ID#: 15-3930)

URL: http://doi.acm.org/10.1145/2694805.2694807

 

 

Harris E. Michail, Lenos Ioannou, Artemios G. Voyiatzis; Pipelined SHA-3 Implementations on FPGA: Architecture and Performance Analysis; CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 13. doi: 10.1145/2694805.2694808

Abstract: Efficient and high-throughput designs of hash functions will be in great demand in the next few years, given that every IPv6 data packet is expected to be handled with some kind of security features.  In this paper, pipelined implementations of the new SHA-3 hash standard on FPGAs are presented and compared aiming to map the design space and the choice of the number of pipeline stages. The proposed designs support all the four SHA-3 modes of operation. They also support processing of multiple messages each comprising multiple blocks. Designs for up to a four-stage pipeline are presented for three generations of FPGAs and the performance of the implementations is analyzed and compared in terms of the throughput/area metric.  Several pipeline designs are explored in order to determine the one that achieves the best throughput/area performance. The results indicate that the FPGA technology characteristics must also be considered when choosing an efficient pipeline depth. Our designs perform better compared to the existing literature due to the extended optimization effort on the synthesis tool and the efficient design of multi-block message processing.

Keywords:  Cryptography, FPGA, Hash function, Pipeline, Security (ID#: 15-3931)

URL: http://doi.acm.org/10.1145/2694805.2694808

 

Paulo Martins, Leonel Sousa; Stretching the limits of Programmable Embedded Devices for Public-key Cryptography; CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 19. doi: 10.1145/2694805.2694809

Abstract: In this work, the efficiency of embedded devices when operating as cryptographic accelerators is assessed, exploiting both multithreading and Single Instruction Multiple Data (SIMD) parallelism. The latency of a single modular multiplication is reduced by splitting computation across multiple cores, and the technique is applied to the Rivest-Shamir-Adleman (RSA) cryptosystem, reducing its central operation execution time by up to 2.2 times on an ARM A15 4-core processor. Also, algorithms are proposed to simultaneously perform multiple modular multiplications. The parallel algorithms are used to enhance the RSA and Elliptic Curve (EC) cryptosystems, obtaining speedups of up to 7.2 and 3.9 on the ARM processor, respectively. Whereas the first approach is most beneficial when a single RSA exponentiation is required, the latter provides better performance when multiple RSA exponentiations have to be computed.

Keywords: Embedded Systems, Parallel Algorithms, Public-key Cryptography, Single Instruction Multiple Data (ID#: 15-3932)

URL: http://doi.acm.org/10.1145/2694805.2694809
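
The second approach, computing several independent modular exponentiations at once, can be approximated in a high-level sketch with process-level parallelism. The paper itself works at the SIMD and multicore level on ARM; this is only a coarse analogue under toy assumptions:

```python
from concurrent.futures import ProcessPoolExecutor

def rsa_modexp(args):
    msg, exp, mod = args
    return pow(msg, exp, mod)   # modular exponentiation, RSA's core operation

# Toy parameters; real RSA uses 2048-bit moduli and proper padding.
n, e = 3233, 17                 # n = 61 * 53
messages = [42, 123, 999, 65]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        ciphertexts = list(pool.map(rsa_modexp, [(m, e, n) for m in messages]))
    print(ciphertexts)
```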

 

Loïc Zussa, Ingrid Exurville, Jean-Max Dutertre, Jean-Baptiste Rigaud, Bruno Robisson, Assia Tria, Jessy Clédière; Evidence Of An Information Leakage Between Logically Independent Blocks; CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 25. doi: 10.1145/2694805.2694810

Abstract: In this paper we study the information leakage that may exist, due to electrical coupling, between logically independent blocks of a secure circuit, as a new attack path to retrieve secret information. First, an AES-128 was implemented on an FPGA board. Then, this AES implementation was secured with a delay-based countermeasure against fault injection related to timing constraint violations. The countermeasure's detection threshold was supposed to be logically independent from the data handled by the cryptographic algorithm. Thus, it theoretically does not leak any information related to sensitive values. However, experiments point out an existing correlation between the fault detection threshold of the countermeasure and the AES's calculations. As a result, we were able to retrieve the secret key of the AES using this correlation. Finally, different strategies were tested in order to minimize the number of triggered alarms needed to retrieve the secret key.

Keywords:  'DPA-like' analysis, Delay-based countermeasure, information leakage, side effects (ID#: 15-3933)

URL: http://doi.acm.org/10.1145/2694805.2694810

 

Mohsen Toorani; On Continuous After-the-Fact Leakage-Resilient Key Exchange; CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 31. doi: 10.1145/2694805.2694811

Abstract: Recently, the Continuous After-the-Fact Leakage (CAFL) security model has been introduced for two-party authenticated key exchange (AKE) protocols. In the CAFL model, an adversary can adaptively request arbitrary leakage of long-term secrets even after the test session is activated. It supports continuous leakage even when the adversary learns certain ephemeral secrets or session keys. The amount of leakage is limited per query, but there is no bound on the total leakage. A generic leakage-resilient key exchange protocol π has also been introduced that is formally proved to be secure in the CAFL model. In this paper, we comment on the CAFL model, and show that it does not capture its claimed security. We also present an attack and counterproofs for the security of protocol π which invalidates the formal security proofs of protocol π in the CAFL model.

Keywords: Cryptographic protocols, Key exchange, Leakage-resilient cryptography, Security models (ID#: 15-3934)

URL: http://doi.acm.org/10.1145/2694805.2694811

 

Mathieu Carbone, Yannick Teglia, Philippe Maurine, Gilles R. Ducharme; Interest Of MIA In Frequency Domain?; CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 35. doi: 10.1145/2694805.2694812

Abstract: Mutual Information Analysis (MIA) has a main advantage over Pearson's Correlation Analysis (CPA): its ability to detect any kind of leakage within traces. However, it remains rarely used and less popular than CPA, probably for two reasons. The first is the appropriate choice of the hyperparameters involved in MIA, a choice that determines its efficiency and genericity. The second is surely the high computational burden associated with MIA. The interest of applying MIA in the frequency domain rather than in the time domain is discussed. It is shown that MIA run in the frequency domain is very effective and fast when combined with an accurate frequency leakage model.

Keywords: (not provided) (ID#: 15-3935)

URL: http://doi.acm.org/10.1145/2694805.2694812
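
The core MIA statistic, mutual information between a leakage sample and a key-dependent intermediate hypothesis, can be estimated with histograms; the bin count is exactly the kind of hyperparameter choice the abstract flags. A minimal numpy sketch on simulated data (the paper's contribution is applying such estimates to frequency-domain traces, e.g. FFT magnitudes, rather than raw time samples):

```python
import numpy as np

def mutual_information(leakage, hypothesis, bins=16):
    """Histogram estimate of I(L; H) in bits."""
    joint, _, _ = np.histogram2d(leakage, hypothesis, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
h = rng.integers(0, 9, size=5000).astype(float)  # e.g. Hamming-weight hypotheses
l = h + rng.normal(0.0, 1.0, size=5000)          # simulated noisy leakage
print(mutual_information(l, h))                  # clearly above zero
```

In an attack, the key guess maximizing this statistic is retained; the genericity comes from MI detecting any dependence, not just the linear correlation CPA exploits.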

 

 

Apostolos P. Fournaris, Nicolaos Klaoudatos, Nicolas Sklavos, Christos Koulamas; Fault and Power Analysis Attack Resistant RNS based Edwards Curve Point Multiplication; CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 43. doi: 10.1145/2694805.2694814

Abstract: In this paper, a road-map toward Fault Attack (FA) and Power Analysis Attack (PA) resistance is proposed. It combines the Edwards curves' innate PA resistance and a base point randomization Montgomery Power Ladder point multiplication (PM) algorithm, capable of providing broad FA and PA resistance, with the Residue Number System (RNS) representation for all GF(p) operations, in an effort to enhance the FA-PA resistance of point multiplication algorithms and additionally provide performance efficiency in terms of speed and hardware resources. The security of the proposed methodology is analyzed, and its efficiency is verified by designing a PM hardware architecture and FPGA implementation.

Keywords:  (not provided) (ID#: 15-3936)

URL: http://doi.acm.org/10.1145/2694805.2694814
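
The RNS representation underpinning this design is worth a small illustration: a large GF(p) operand is split into independent residues modulo pairwise-coprime bases, so multiplications proceed channel by channel without carries, which is the source of both the speed and the natural fit to parallel hardware. A toy sketch (tiny moduli for readability):

```python
from math import prod

MODULI = (13, 17, 19, 23)   # pairwise-coprime base (toy sizes)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # Channel-wise, carry-free multiplication: each residue is independent.
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(residues):
    # Chinese Remainder Theorem reconstruction.
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse
    return x % M

a, b = 1234, 5678
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % prod(MODULI)
```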


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

 

International Conferences: Privacy and Security of Big Data, Shanghai, China, 2014


 
SoS Newsletter Logo

Privacy and Security of Big Data
Shanghai, China

 

The First International Workshop on Privacy and Security of Big Data was held in Shanghai, China on November 3-7, 2014, concurrently with the 2014 ACM Conference on Information and Knowledge Management. The research work cited here was presented as part of the Big Data security workshop.

 

Mário J. Silva, Pedro Rijo, Alexandre Francisco; Evaluating the Impact of Anonymization on Large Interaction Network Datasets; PSBD '14 Proceedings of the First International Workshop on Privacy and Security of Big Data, November 2014, Pages 3-10. doi: 10.1145/2663715.2669610

Abstract: We address the publication of a large academic information dataset, attending to the privacy issues involved. We evaluate anonymization techniques that achieve the intended protection while retaining the utility of the anonymized data. The released data could help infer behaviors and subsequently find solutions for daily planning activities, such as cafeteria attendance, cleaning schedules or student performance, or help study interaction patterns among an academic population. However, the nature of the academic data is such that many implicit social interaction networks can be derived from the anonymized datasets, raising the need to research how anonymity can be assessed in this setting.

Keywords: academic data publishing, interaction network inference, privacy of big data, privacy-preserving data publishing   (ID#:15-3937)

URL: http://doi.acm.org/10.1145/2663715.2669610  

 

Peter Christen; Privacy Aspects in Big Data Integration: Challenges and Opportunities; PSBD '14 Proceedings of the First International Workshop on Privacy and Security of Big Data, November 2014, Pages 1-1. doi: 10.1145/2663715.2669615

Abstract: Big Data projects often require data from several sources to be integrated before they can be used for analysis. Once data have been integrated, they allow more detailed analysis that would otherwise not be possible. Accordingly, recent years have seen an increasing interest in techniques that facilitate the integration of data from diverse sources. Whenever data about individuals, or otherwise sensitive data, are to be integrated across organizations, privacy and confidentiality have to be considered. Domains where privacy preservation during data integration is of importance include business collaborations, health research, national censuses, the social sciences, crime and fraud detection, and homeland security. Increasingly, applications in these domains require data from diverse sources (both internal and external to an organization) to be integrated.  Consequently, in the past decade, various techniques have been developed that aim to facilitate data integration without revealing any private or confidential information about the databases and records that are integrated. These techniques either provably prevent leakage of any private information, or they provide some empirical numerical measure of the risk of disclosure of private information.  In the first part of this presentation we provide a background on data integration, and illustrate the importance of preserving privacy during data integration with several application scenarios. We then give an overview of the main concepts and techniques that have been developed to facilitate data integration in such ways that no private or confidential information is being revealed. We focus on privacy-preserving record linkage (PPRL), where so far most research has been conducted. We describe the basic protocols used in PPRL, and several key technologies employed in these protocols. Finally, we discuss the challenges privacy poses to data integration in the era of Big Data, and we discuss directions and opportunities in this research area.

Keywords: data matching, multi-party, privacy techniques, privacy-preserving record linkage, scalability   (ID#:15-3938)

URL: http://doi.acm.org/10.1145/2663715.2669615
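
One widely used PPRL building block in this line of work is Bloom-filter encoding of quasi-identifiers, which lets two parties compare name fields for similarity without exchanging the plaintext values. A minimal sketch of the idea (an illustrative construction, not necessarily the exact one covered in the talk):

```python
import hashlib

def bloom_encode(value: str, size: int = 64, hashes: int = 3) -> int:
    """Encode a string's character bigrams into a Bloom-filter bitmask."""
    bf = 0
    for gram in (value[i:i + 2] for i in range(len(value) - 1)):
        for seed in range(hashes):
            digest = hashlib.sha256(f"{seed}:{gram}".encode()).digest()
            bf |= 1 << (int.from_bytes(digest[:4], "big") % size)
    return bf

def dice_similarity(a: int, b: int) -> float:
    """Compare two encodings without revealing the underlying strings."""
    inter = bin(a & b).count("1")
    return 2 * inter / (bin(a).count("1") + bin(b).count("1"))

print(dice_similarity(bloom_encode("christen"), bloom_encode("christen")))  # 1.0
print(dice_similarity(bloom_encode("christen"), bloom_encode("kristen")))   # high, below 1.0
```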

 

Kangsoo Jung, Sehwa Park, Seog Park; Hiding A Needle In A Haystack: Privacy Preserving A Priori Algorithm In MapReduce Framework; PSBD '14 Proceedings of the First International Workshop on Privacy and Security of Big Data, November 2014, Pages 11-17. doi: 10.1145/2663715.2669611

Abstract: In the last few years, Hadoop has become a "de facto" standard for processing large-scale data as an open source distributed system. Combined with data mining techniques, Hadoop improves data analysis utility; accordingly, a substantial amount of research has studied applying data mining techniques to the MapReduce framework in Hadoop. However, data mining can cause privacy violations, and this threat is a huge obstacle for data mining using Hadoop. Numerous studies have been conducted to solve this problem, but existing studies are insufficient and have several drawbacks. In this paper, we propose a privacy-preserving data mining technique for Hadoop that prevents privacy violations without utility degradation. We focus on association rule mining, a representative data mining algorithm. We validate through experimental results that the proposed technique satisfies performance requirements and preserves data privacy.

Keywords: HADOOP, association rule mining, privacy-preserving data mining   (ID#:15-3939)

URL: http://doi.acm.org/10.1145/2663715.2669611

 

 

Avinash Srinivasan, Jie Wu, Wen Zhu; SAFE: Secure and Big Data-Adaptive Framework for Efficient Cross-Domain Communication; PSBD '14 Proceedings of the First International Workshop on Privacy and Security of Big Data, November 2014, Pages 19-28. doi: 10.1145/2663715.2669612

Abstract: Today's Cross Domain Communication (CDC) infrastructure primarily consists of vendor-specific guard products that have little inter-domain coordination at runtime. Unaware of the context and the semantics of the CDC message being processed, the guard relies heavily on rudimentary filtering techniques. Consequently, the information domains are rendered vulnerable to an array of attacks, and countering these attacks often necessitates time-consuming human intervention to adjudicate messages in order to meet the desired security and privacy requirements of the communicating domains. This, in turn, causes significant performance bottlenecks. In this paper, we present a set of key requirements and design principles for a service-oriented CDC security infrastructure in the form of a CDC Reference Architecture, featuring Domain Associated Guards (DOGs) as active workflow participants. Our proposed framework, SAFE, is secure and adaptable. SAFE also provides the foundation for the development of protocols and ontologies enabling run-time coordination among CDC elements. This enables more flexible, interoperable, and efficient CDC designs to serve mission needs, specifically among critical infrastructure domains as well as domains with significantly differing security and privacy vocabularies. To the best of our knowledge, SAFE is the first effort to employ DOGs for secure CDC, unlike existing solutions with link-associated guards. Because of the DOG approach, SAFE overcomes the scalability problems encountered by existing solutions.

Keywords: big data, cross domain communication, ontology, privacy, protocol, reference architecture, security, security guard   (ID#:15-3940)

URL: http://doi.acm.org/10.1145/2663715.2669612

 

Joanna Biega, Ida Mele, Gerhard Weikum; Probabilistic Prediction of Privacy Risks in User Search Histories; PSBD '14 Proceedings of the First International Workshop on Privacy and Security of Big Data, November 2014, Pages 29-36. doi: 10.1145/2663715.2669609

Abstract: This paper proposes a new model of user-centric, global, probabilistic privacy, geared to today's challenge of helping users manage their privacy-sensitive information across a wide variety of social networks, online communities, QA forums, and search histories. Our approach anticipates an adversary that harnesses global background knowledge and rich statistics in order to make educated guesses, that is, probabilistic inferences about sensitive data. We aim for a tool that simulates such a powerful adversary, predicts privacy risks, and guides the user. In this paper, our framework is specialized for the case of Internet search histories. We present preliminary experiments that demonstrate how estimators of global correlations among sensitive and non-sensitive key-value items can be fed into a probabilistic graphical model in order to compute meaningful measures of privacy risk.

Keywords: privacy risk prediction, probabilistic privacy, query logs, user-centric privacy   (ID#:15-3941)

URL: http://doi.acm.org/10.1145/2663715.2669609

 

Suvarna Bothe, Alfredo Cuzzocrea, Panagiotis Karras, Akrivi Vlachou; Skyline Query Processing over Encrypted Data: An Attribute-Order-Preserving-Free Approach; PSBD '14 Proceedings of the First International Workshop on Privacy and Security of Big Data, November 2014, Pages 37-43. doi: 10.1145/2663715.2669613

Abstract: Reconciling the need for efficient relational query processing over Clouds with the security of the data themselves stands out as one of the most challenging research problems of the Big Data era. Indeed, in actual analytics-oriented engines, such as Google Analytics and Amazon S3, where key-value storage-representation and efficient-management models are employed to cope with the simultaneous processing of billions of transactions, querying encrypted data has become one of the most pressing problems, and it has attracted a great deal of attention from the research community. While this issue has been studied for a large variety of data formats, e.g. relational, RDF and multidimensional data, very few initiatives have addressed skyline query processing over encrypted data, which is, indeed, relevant for database analytics. In order to fill this methodological and technological gap, in this paper we present eSkyline, a prototype system and query interface that enables the processing of skyline queries over encrypted data, even without preserving the order on each attribute as order-preserving encryption would do. Our system comprises an encryption scheme that facilitates the evaluation of domination relationships, hence allowing state-of-the-art skyline processing algorithms to be used. To demonstrate the effectiveness and reliability of our system, we also provide the details of the underlying encryption scheme, plus a suitable GUI that allows a user to interact with a server and showcases the efficiency of computing skyline queries and decrypting the results.

Keywords: database security, querying encrypted data, skyline queries over encrypted data   (ID#:15-3942)

URL: http://doi.acm.org/10.1145/2663715.2669613
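
For readers unfamiliar with skyline queries: the query returns the points not dominated by any other point, and eSkyline's contribution is an encryption scheme under which this domination test can still be evaluated server-side. The plaintext test itself is simple:

```python
def dominates(p, q):
    """p dominates q if p is no worse in every attribute and strictly
    better in at least one (here: smaller values are better)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

hotels = [(50, 2.0), (80, 0.5), (60, 1.0), (90, 3.0)]   # (price, distance)
print(skyline(hotels))   # [(50, 2.0), (80, 0.5), (60, 1.0)]
```

The point of the paper is that this `dominates` relation can be evaluated over ciphertexts, without order-preserving encryption on each individual attribute.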

 

Alfredo Cuzzocrea; Privacy and Security of Big Data: Current Challenges and Future Research Perspectives; PSBD '14 Proceedings of the First International Workshop on Privacy and Security of Big Data, November 2014, Pages 45-47. doi: 10.1145/2663715.2669614

Abstract: Privacy and security of Big Data is gaining momentum in the research community, due in part to emerging technologies like Cloud Computing, analytics engines and social networks. In response to this novel research challenge, several privacy and security models, techniques and algorithms for big data have been proposed recently, mostly adhering to algorithmic or model-oriented paradigms. Following this major trend, in this paper we provide an overview of state-of-the-art research issues and achievements in the field of privacy and security of big data, highlighting open problems and actual research trends, and drawing novel research directions in this field.

Keywords: privacy of big data, privacy-preserving analytics over big data, secure query processing over big data, security of big data   (ID#:15-3943)

URL: http://doi.acm.org/10.1145/2663715.2669614


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Publications of Interest

SoS Logo

Publications of Interest


The Publications of Interest section contains bibliographical citations, abstracts if available and links on specific topics and research problems of interest to the Science of Security community.

How recent are these publications?

These bibliographies include recent scholarly research on topics which have been presented or published within the past year. Some represent updates of work presented in previous years; others are new topics.

How are topics selected?

The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness for current researchers.

How can I submit or suggest a publication?

Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.

Submissions and suggestions may be sent to: news@scienceofsecurity.net


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Acoustic Fingerprints (2014 Year in Review)

 

 
SoS Newsletter Logo

Acoustic Fingerprints
(2014 Year in Review)

 

Acoustic fingerprints can be used to identify an audio sample or quickly locate similar items in an audio database. As a security tool, fingerprints offer a modality for biometric identification of a user. Current research is exploring various aspects and applications, including the use of these fingerprints for mobile device security, antiforensics, the use of image processing techniques, and client-side embedding. The research work cited here was published in 2014.

 

Zurek, E.E.; Gamarra, A.M.R.; Escorcia, G.J.R.; Gutierrez, C.; Bayona, H.; Perez, R.; Garcia, X., "Spectral Analysis Techniques For Acoustic Fingerprints Recognition," Image, Signal Processing and Artificial Vision (STSIVA), 2014 XIX Symposium on, pp. 1-5, 17-19 Sept. 2014. doi: 10.1109/STSIVA.2014.7010154 This article presents results of the recognition of acoustic fingerprints from a noise source using spectral characteristics of the signal. Principal Components Analysis (PCA) is applied to reduce the dimensionality of the extracted features, and a classifier is then implemented using the k-nearest neighbors (KNN) method to identify the pattern of the audio signal. This classifier is compared with an Artificial Neural Network (ANN) implementation. A filtering system is applied to the acquired signals to reduce the 60 Hz noise generated by imperfections in the acquisition system. The methods described in this paper were used for vessel recognition.

Keywords: acoustic noise; acoustic signal processing; audio signals; fingerprint identification; neural nets; principal component analysis; spectral analysis; ANN; PCA; acoustic fingerprints recognition; artificial neural network; audio signal; filtering system; frequency 60 Hz; k-nearest neighbors; noise reduction; noise source; principal components analysis; signal spectral characteristics; spectral analysis; vessel recognition; Acoustics; Artificial neural networks; Boats; Feature extraction; Fingerprint recognition; Finite impulse response filters; Principal component analysis; ANN; Acoustic Fingerprint; FFT; KNN; PCA; Spectrogram   (ID#: 15-3770)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7010154&isnumber=7010118
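
The PCA-plus-KNN pipeline described here maps directly onto standard tooling. A minimal sketch with scikit-learn, using stand-in random data in place of the paper's spectral features:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Stand-in data: rows would be spectral feature vectors (e.g. FFT bins of
# an audio clip); labels identify the noise source / vessel class.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))
y = rng.integers(0, 4, size=200)

model = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
model.fit(X[:150], y[:150])
print(model.score(X[150:], y[150:]))  # chance-level here; real features separate
```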

 

Moussallam, M.; Daudet, L., "A General Framework For Dictionary Based Audio Fingerprinting," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 3077-3081, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854166 Fingerprint-based audio recognition systems must address concurrent objectives. Indeed, fingerprints must be both robust to distortions and discriminative, while their dimension must remain small to allow fast comparison. This paper proposes to restate these objectives as a penalized sparse representation problem. On top of this dictionary-based approach, we propose a structured sparsity model in the form of a probabilistic distribution for the sparse support. A practical suboptimal greedy algorithm is then presented and evaluated on robustness and recognition tasks. We show that some existing methods can be seen as particular cases of this algorithm and that the general framework allows reaching other points of a Pareto-like continuum.

Keywords: Pareto distribution; audio signal processing; fingerprint identification; greedy algorithms; Pareto-like continuum; concurrent objectives; dictionary based audio fingerprinting; fingerprint-based audio recognition system; general framework; penalized sparse representation problem; probabilistic distribution; sparse support; structured sparsity; suboptimal greedy algorithm; Atomic clocks; Dictionaries; Entropy; Fingerprint recognition; Robustness; Speech; Time-frequency analysis; Audio Fingerprinting; Sparse Representation  (ID#: 15-3771)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854166&isnumber=6853544

 

Hui Zeng; Tengfei Qin; Xiangui Kang; Li Liu, "Countering Anti-Forensics Of Median Filtering," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 2704-2708, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854091 The statistical fingerprints left by median filtering can be a valuable clue for image forensics. However, these fingerprints may be maliciously erased by a forger. Recently, a tricky anti-forensic method has been proposed to remove median filtering traces by restoring the image's pixel difference distribution. In this paper, we analyze the traces of this anti-forensic technique and propose a novel counter method. The experimental results show that our method can reveal this anti-forensics effectively at a low computational load. To the best of our knowledge, this is the first work on countering anti-forensics of median filtering.

Keywords: image coding; image forensics; image restoration; median filters; statistical analysis; antiforensic method; antiforensics countering; image forensics; image pixel difference distribution restoration; median filtering traces; statistical fingerprints; Detectors; Digital images; Discrete Fourier transforms; Filtering; Forensics; Noise; Radiation detectors; Image forensics; anti-forensic; median filtering; pixel difference  (ID#: 15-3772)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854091&isnumber=6853544
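
To see what kind of statistic is at stake here, the sketch below computes the first-order pixel-difference histogram that median filtering distorts (and that the anti-forensic method tries to restore). It is a generic illustration of the statistic, not the authors' counter-detector; the toy 1x3 horizontal median filter and the bin range are assumptions.

    import numpy as np

    def pixel_difference_histogram(img):
        """Normalized histogram of horizontal first-order pixel differences.
        Median filtering piles probability mass onto the zero bin -- the
        statistical fingerprint that forensic detectors look for."""
        diff = np.diff(img.astype(int), axis=1).ravel()
        hist, _ = np.histogram(diff, bins=np.arange(-8, 9))
        return hist / hist.sum()

    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, (64, 64))
    # Crude 1x3 horizontal median filter, enough to shift the statistic.
    smoothed = np.median(np.stack([img[:, :-2], img[:, 1:-1], img[:, 2:]]), axis=0)
    print(pixel_difference_histogram(img)[7:10])       # raw: little mass near zero
    print(pixel_difference_histogram(smoothed)[7:10])  # filtered: zero bin inflated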

 

Naini, R.; Moulin, P., "Fingerprint Information Maximization For Content Identification," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 3809, 3813, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854314 This paper presents a novel design of content fingerprints based on maximization of the mutual information across the distortion channel. We use the information bottleneck method to optimize the filters and quantizers that generate these fingerprints. A greedy optimization scheme is used to select filters from a dictionary and allocate fingerprint bits. We test the performance of this method for audio fingerprinting and show substantial improvements over existing learning-based fingerprints.

Keywords: information retrieval; optimisation; content fingerprint; content identification; distortion channel; filter optimization ;fingerprint information maximization; greedy optimization; information bottleneck method; learning based fingerprint; mutual information across; quantizer optimization; Approximation methods; Databases; Dictionaries; Joints; Mutual information; Optimization; Quantization (signal);Audio fingerprinting; Content Identification; Information bottleneck; Information maximization  (ID#: 15-3773)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854314&isnumber=6853544

 

Yuxi Liu; Hatzinakos, D., "Human Acoustic Fingerprints: A Novel Biometric Modality For Mobile Security," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 3784, 3788, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854309 Recently, the demand for more robust protection against unauthorized use of mobile devices has been growing rapidly. This paper presents a novel biometric modality, Transient Evoked Otoacoustic Emission (TEOAE), for mobile security. Prior works have investigated TEOAE for biometrics in a setting where an individual is to be identified among a pre-enrolled identity gallery. However, this limits the applicability to the mobile environment, where attacks in most cases come from imposters previously unknown to the system. Therefore, we employ an unsupervised learning approach based on an Autoencoder Neural Network to tackle this blind recognition problem. The learning model is trained on a generic dataset and used to verify an individual in a random population. We also introduce a framework for a mobile biometric system with practical application in mind. Experiments show the merits of the proposed method, and system performance is further evaluated by cross-validation, with an average EER of 2.41% achieved.

Keywords: acoustic signal processing; biometrics (access control); learning (artificial intelligence); mobile computing; mobile handsets; neural nets; otoacoustic emissions; autoencoder neural network; biometric modality; blind recognition problem; generic dataset; human acoustic fingerprints; learning model; mobile biometric system; mobile devices; mobile environment; mobile security; pre-enrolled identity gallery; transient evoked otoacoustic emission; unsupervised learning approach; Biometrics (access control);Feature extraction; Mobile communication; Neural networks; Security; Time-frequency analysis; Training; Autoencoder Neural Network; Biometric Verification; Mobile Security; Otoacoustic Emission; Time-frequency Analysis  (ID#: 15-3774)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854309&isnumber=6853544
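
The verification idea, a representation learned from a generic population rather than from the claimant, can be sketched with a one-hidden-layer autoencoder in plain numpy. Everything below (feature dimension, training schedule, cosine threshold) is an illustrative assumption; the paper's actual network and TEOAE features are not reproduced.

    import numpy as np

    rng = np.random.default_rng(2)

    def train_autoencoder(X, hidden=16, lr=0.01, epochs=200):
        """One-hidden-layer autoencoder trained by gradient descent on a
        generic dataset, mirroring the idea of learning the representation
        from a population rather than from the person being verified."""
        n, d = X.shape
        W1 = rng.standard_normal((d, hidden)) * 0.1
        W2 = rng.standard_normal((hidden, d)) * 0.1
        for _ in range(epochs):
            H = np.tanh(X @ W1)        # encode
            err = H @ W2 - X           # decode and compare
            gW2 = H.T @ err / n
            gW1 = X.T @ ((err @ W2.T) * (1 - H ** 2)) / n
            W1 -= lr * gW1
            W2 -= lr * gW2
        return W1

    def verify(enrolled, probe, W1, thr=0.9):
        """Accept when the hidden codes of enrolled and probe agree."""
        a, b = np.tanh(enrolled @ W1), np.tanh(probe @ W1)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))) > thr

    X = rng.standard_normal((200, 32))   # stand-in for generic TEOAE features
    W1 = train_autoencoder(X)
    print(verify(X[0], X[0] + 0.05 * rng.standard_normal(32), W1))  # True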

 

Alias T, E.; Naveen, N.; Mathew, D., "A Novel Acoustic Fingerprint Method for Audio Signal Pattern Detection," Advances in Computing and Communications (ICACC), 2014 Fourth International Conference on, pp. 64, 68, 27-29 Aug. 2014. doi: 10.1109/ICACC.2014.21 This paper presents a novel and efficient audio signal recognition algorithm with limited computational complexity. As the audio recognition system will be used in real-world environments where background noise is high, conventional speech recognition techniques are not directly applicable, since they perform poorly in such conditions. We therefore introduce a new audio recognition algorithm optimized for mechanical sounds such as a car horn, a telephone ring, etc. This is a hybrid time-frequency approach which makes use of an acoustic fingerprint for the recognition of audio signal patterns. The limited computational complexity is achieved through efficient use of both the time domain and the frequency domain in two different processing phases, detection and recognition respectively. The transition between these two phases is carried out through a finite state machine (FSM) model. Simulation results show that the algorithm effectively recognizes audio signals within a noisy environment.

Keywords: acoustic noise; acoustic signal detection; acoustic signal processing; audio signal processing; computational complexity; finite state machines; pattern recognition ;time-frequency analysis; FSM model; acoustic fingerprint method; audio signal pattern detection; background noises; computational complexity; efficient audio signal recognition algorithm; finite state machine; hybrid time-frequency approach; mechanical sounds; speech recognition techniques; Acoustics; Computational complexity; Correlation; Frequency-domain analysis; Noise measurement; Pattern recognition; Time-domain analysis; Acoustic fingerprint; Audio detection; Audio recognition; Finite State Machine(FSM);Pitch frequency; Spectral signature; Time-Frequency processing   (ID#: 15-3775)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6905990&isnumber=6905967
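
The two-phase structure translates directly into a small state machine: a cheap time-domain energy test gates the more expensive frequency-domain match, after which the machine drops back to detection. The sketch below is a generic illustration with an invented threshold and a trivial matcher, not the authors' algorithm.

    import numpy as np

    def process_stream(frames, energy_thr, match_fn):
        """Two-state FSM: DETECT watches cheap frame energy in the time
        domain; only when it fires do we pay for the RECOGNIZE state's
        spectral comparison, then return to DETECT."""
        state = "DETECT"
        hits = []
        for i, frame in enumerate(frames):
            if state == "DETECT" and np.mean(frame ** 2) > energy_thr:
                state = "RECOGNIZE"
            if state == "RECOGNIZE":
                spectrum = np.abs(np.fft.rfft(frame))   # the expensive phase
                if match_fn(spectrum):
                    hits.append(i)
                state = "DETECT"
        return hits

    rng = np.random.default_rng(3)
    frames = [rng.standard_normal(512) * (3 if i == 5 else 0.1) for i in range(10)]
    print(process_stream(frames, energy_thr=1.0, match_fn=lambda s: s.max() > 0))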

 

Ghatak, S.; Lodh, A.; Saha, E.; Goyal, A.; Das, A.; Dutta, S., "Development of a Keyboardless Social Networking Website For Visually Impaired: SocialWeb," Global Humanitarian Technology Conference - South Asia Satellite (GHTC-SAS), 2014 IEEE, pp. 232, 236, 26-27 Sept. 2014. doi: 10.1109/GHTC-SAS.2014.6967589 Over the past decade, we have witnessed a huge upsurge in social networking, which continues to touch and transform our lives to the present day. Social networks help us communicate with acquaintances and friends with whom we share similar interests on a common platform. Globally, there are more than 200 million visually impaired people. Visual impairment has many issues associated with it, but the one that stands out is the lack of accessible content for entertainment and for socializing safely. This paper deals with the development of a keyboardless social networking website for the visually impaired. The term keyboardless signifies minimal use of the keyboard: the user explores the contents of the website using assistive technologies like screen readers and speech-to-text (STT) conversion, which in turn provides a user-friendly experience for the target audience. As soon as a user with minimal computer proficiency opens the website, with the help of the screen reader he/she identifies the username and password fields. The user speaks the username and, with the help of STT conversion (using the Web Speech API), the username is entered. The control then moves to the password field and, similarly, the password of the user is obtained and matched with the one saved in the website database. The concept of acoustic fingerprinting has been implemented for successfully validating the passwords of registered users and foiling the intentions of malicious attackers. On a successful match of the passwords, the user is able to enjoy the services of the website without any further hassle. Once the access obstacles associated with social networking sites are successfully resolved and proper technologies are put in place, social networking sites can be a rewarding, fulfilling, and enjoyable experience for visually impaired people.

Keywords: handicapped aids; human computer interaction; message authentication; social networking (online);STT conversion; SocialWeb; acoustic fingerprinting; assistive technologies; computer proficiency; keyboardless social networking Website; malicious attackers; screen readers; speech to text conversion technologies; user friendliness; visually impaired people; Communities; Computers; Fingerprint recognition; Media; Social network services; Speech; Speech recognition; Assitive technologies; STT conversion; Web Speech API; audio fingerprinting; screen reader  (ID#: 15-3776)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6967589&isnumber=6967546

 

Severin, F.; Baradarani, A.; Taylor, J.; Zhelnakov, S.; Maev, R., "Auto-Adjustment Of Image Produced By Multi-Transducer Ultrasonic System," Ultrasonics Symposium (IUS), 2014 IEEE International, pp. 1944, 1947, 3-6 Sept. 2014. doi: 10.1109/ULTSYM.2014.0483 Acoustic microscopy is characterized by a relatively long scanning time, which is required for the motion of the transducer over the entire scanning area. This time may be reduced by using a multi-channel acoustical system which has several identical transducers arranged as an array and mounted on a mechanical scanner so that each transducer scans only a fraction of the total area. The resulting image is formed as a combination of all acquired partial data sets. The mechanical instability of the scanner, as well as differences in the parameters of the individual transducers, causes a misalignment of the image fractions. This distortion may be partially compensated for by the introduction of constant or dynamic signal leveling and data shift procedures. However, a reduction of the random instability component requires more advanced algorithms, including auto-adjustment of processing parameters. The described procedure was implemented in the prototype of an ultrasonic fingerprint reading system. The specialized cylindrical scanner provides a helical spiral lens trajectory which eliminates repeatable acceleration, reduces vibration and allows constant data flow at the maximal rate. It is equipped with an array of four spherically focused 50 MHz acoustic lenses operating in pulse-echo mode. Each transducer is connected to a separate channel including pulser, receiver and digitizer. The output 3D data volume contains interlaced B-scans coming from each channel. Afterward, data processing includes pre-determined procedures of constant layer shift, in order to compensate for the transducer displacement, and of phase shift and amplitude leveling, for compensation of variation in transducer characteristics. Analysis of the statistical parameters of individual scans allows adaptive elimination of the axial misalignment and mechanical vibrations. Further 2D correlation of overlapping partial C-scans realizes an interpolative adjustment which essentially improves the output image. Implementation of this adaptive algorithm into the data processing sequence allows us to significantly reduce misreading due to hardware noise and finger motion during scanning. The system provides a high quality acoustic image of the fingerprint including different levels of information: fingerprint pattern, sweat pore locations, internal dermis structures. These additional features can effectively facilitate fingerprint-based identification. The developed principles and algorithm implementations improve the quality, stability and reliability of acoustical data obtained with a mechanical scanner accommodating several transducers. The general principles developed during this work can be applied to other configurations of advanced ultrasonic systems designed for various biomedical and NDE applications. The data processing algorithm, developed for a specific biometric task, can be adapted to compensate for the mechanical imperfections of other devices.

Keywords: acoustic devices; acoustic microscopy; fingerprint identification; image processing; ultrasonic imaging; ultrasonic transducer arrays; acoustic lenses; acoustic microscopy; amplitude leveling; arrayed transducers; biometric task; cylindrical scanner; data processing sequence; data shift procedures; digitizer; dynamical signal leveling; frequency 50 MHz; helical spiral lens trajectory; high quality acoustic image; image autoadjustment; image fracture; multichannel acoustical system; multitransducer ultrasonic system; phase shift; pulse-echo mode operation; pulser; receiver; scanner mechanical instability; transducer displacement; ultrasonic fingerprint reading system; Acoustic distortion; Acoustics; Arrays; Fingerprint recognition; Lenses; Skin; Transducers; Acoustical microscopy; array transducer; image processing  (ID#: 15-3777)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6932356&isnumber=6931723
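
The constant-shift compensation step can be illustrated with ordinary cross-correlation: estimate the offset between two overlapping partial scans, roll one channel into register, and level the amplitudes. This stands in for only the simplest part of the compensation chain described above, with assumed one-dimensional data.

    import numpy as np

    def align_channel(reference, channel):
        """Estimate the constant shift between two overlapping scan lines
        by cross-correlation, undo it, then match RMS amplitudes."""
        corr = np.correlate(channel - channel.mean(),
                            reference - reference.mean(), mode="full")
        shift = int(np.argmax(corr)) - (len(reference) - 1)
        aligned = np.roll(channel, -shift)
        aligned = aligned * reference.std() / (aligned.std() + 1e-12)
        return aligned, shift

    rng = np.random.default_rng(4)
    line = rng.standard_normal(256)
    shifted = 0.7 * np.roll(line, 5)      # displaced, attenuated channel
    recovered, s = align_channel(line, shifted)
    print(s)                              # expect 5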

 

Chitnis, P.V.; Lloyd, H.; Silverman, R.H., "An Adaptive Interferometric Sensor For All-Optical Photoacoustic Microscopy," Ultrasonics Symposium (IUS), 2014 IEEE International, pp. 353, 356, 3-6 Sept. 2014. doi: 10.1109/ULTSYM.2014.0087 Conventional photoacoustic microscopy (PAM) involves detection of optically induced thermo-elastic waves using ultrasound transducers. This approach requires acoustic coupling and the spatial resolution is limited by the focusing properties of the transducer. We present an all-optical PAM approach that involved detection of the photoacoustically induced surface displacements using an adaptive, two-wave mixing interferometer. The interferometer consisted of a 532-nm, CW laser and a Bismuth Silicon Oxide photorefractive crystal (PRC) that was 5×5×5 mm³. The laser beam was expanded to 3 mm and split into two paths, a reference beam that passed directly through the PRC and a signal beam that was focused at the surface through a 100-X, infinity-corrected objective and returned to the PRC. The PRC matched the wave front of the reference beam to that of the signal beam for optimal interference. The interference of the two beams produced optical-intensity modulations that were correlated with surface displacements. A GHz-bandwidth photoreceiver, a low-noise 20-dB amplifier, and a 12-bit digitizer were employed for time-resolved detection of the surface-displacement signals. In combination with a 5-ns, 532-nm pump laser, the interferometric probe was employed for imaging ink patterns, such as a fingerprint, on a glass slide. The signal beam was focused at a reflective cover slip that was separated from the fingerprint by 5 mm of acoustic-coupling gel. A 3×5 mm² area of the coverslip was raster scanned with 100-μm steps and surface-displacement signals at each location were averaged 20 times. Image reconstruction based on time reversal of the PA-induced displacement signals produced the photoacoustic image of the ink patterns. The reconstructed image of the fingerprint was consistent with its photograph, which demonstrated the ability of our system to resolve micron-scaled features at a depth of 5 mm.

 Keywords: acoustic microscopy; acoustic signal detection; acoustic wave interferometry; analogue-digital conversion; biological techniques; biological tissues; bismuth compounds; image reconstruction; light interferometers; low noise amplifiers; multiwave mixing; optical microscopy; optical pumping; optical receivers; photoacoustic effect; photorefractive materials ;thermoelasticity; ultrasonic focusing; ultrasonic transducers;BiSiO2;CW laser; acoustic coupling; acoustic-coupling gel; adaptive interferometric microscopy; adaptive interferometric sensor; bismuth silicon oxide photorefractive crystals; focusing properties; glass slide ;image reconstruction; imaging ink patterns; laser beam; low-noise amplifier; noise figure 20 dB; optical PAM approach; optical photoacoustic microscopy; optical-intensity modulation; optically induced thermo-elastic wave detection; optimal interference; photoacoustic image; photoacoustically induced surface displacement detection; photoreceiver; reconstructed image; reflective cover slip; surface amplifier; surface-displacement signals; time-resolved detection; two-wave mixing interferometer; ultrasound transducers; wavelength 532 nm; Acoustic beams; Acoustics; Imaging; Laser beams; Laser excitation; Optical interferometry; Optical surface waves  (ID#: 15-3778)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6932143&isnumber=6931723

 

Shakeri, S.; Leus, G., "Underwater Ultra-Wideband Fingerprinting-Based Sparse Localization," Signal Processing Advances in Wireless Communications (SPAWC), 2014 IEEE 15th International Workshop on, pp. 140, 144, 22-25 June 2014. doi: 10.1109/SPAWC.2014.6941333 In this work, a new fingerprinting-based localization algorithm is proposed for an underwater medium by utilizing ultra-wideband (UWB) signals. In many conventional underwater systems, localization is accomplished by utilizing acoustic waves. Electromagnetic waves, on the other hand, have not been employed for underwater localization due to the high attenuation of the signal in water. However, it is possible to use UWB signals for short-range underwater localization. In this work, the feasibility of performing localization in an underwater medium is illustrated by utilizing a fingerprinting-based localization approach. By employing the concept of compressive sampling, we propose a sparsity-based localization method for which we define a system model exploiting the spatial sparsity.

Keywords: compressed sensing; ultra wideband communication; underwater acoustic communication; underwater acoustic propagation; UWB signal utilization; acoustic wave utilization; compressive sampling; grid matching; sparsity-based localization method; ultrawideband signal utilization; underwater ultrawideband fingerprinting-based sparse localization; Accuracy; Dictionaries; Indexes; Receiving antennas; Signal processing algorithms; Synchronization; Vectors; fingerprinting localization; grid matching; sparse recovery; underwater   (ID#: 15-3779)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6941333&isnumber=6941301
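
A standard way to realize such sparsity-based fingerprint localization is Orthogonal Matching Pursuit over a grid dictionary, where each column holds the stored signature of one grid point and the recovered sparse support names the occupied cells. The sketch below uses a random dictionary and sizes purely for illustration; the authors' dictionary would come from measured UWB fingerprints.

    import numpy as np

    def omp(y, A, k):
        """Orthogonal Matching Pursuit: recover a k-sparse x with y ~ A @ x."""
        residual = y.astype(float).copy()
        support, x = [], np.zeros(A.shape[1])
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef          # re-fit on the support
        x[support] = coef
        return x

    rng = np.random.default_rng(5)
    A = rng.standard_normal((64, 200))          # 200 grid points, 64 measurements
    A /= np.linalg.norm(A, axis=0)
    truth = np.zeros(200); truth[42] = 1.0      # target sits in grid cell 42
    y = A @ truth + 0.01 * rng.standard_normal(64)
    print(int(np.argmax(np.abs(omp(y, A, k=1)))))   # expect 42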

 

Rafii, Z.; Coover, B.; Jinyu Han, "An Audio Fingerprinting System For Live Version Identification Using Image Processing Techniques," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 644, 648, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6853675 Suppose that you are at a music festival checking on an artist, and you would like to quickly know about the song that is being played (e.g., title, lyrics, album, etc.). If you have a smartphone, you could record a sample of the live performance and compare it against a database of existing recordings from the artist. Services such as Shazam or SoundHound will not work here, as this is not the typical framework for audio fingerprinting or query-by-humming systems: a live performance is neither identical to its studio version (e.g., variations in instrumentation, key, tempo, etc.) nor is it a hummed or sung melody. We propose an audio fingerprinting system that can deal with live version identification by using image processing techniques. Compact fingerprints are derived using a log-frequency spectrogram and an adaptive thresholding method, and template matching is performed using the Hamming similarity and the Hough Transform.

Keywords: Hough transforms; audio signal processing; fingerprint identification; image segmentation; Hamming similarity; Hough Transform; adaptive thresholding method; audio fingerprinting system; compact fingerprints; image processing techniques; live version identification; log-frequency spectrogram; music festival; smartphone; template matching; Degradation; Robustness; Spectrogram; Speech; Speech processing; Time-frequency analysis; Transforms; Adaptive thresholding; Constant Q Transform; audio fingerprinting; cover identification  (ID#: 15-3780)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853675&isnumber=6853544
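
The fingerprint-and-match pipeline can be sketched compactly: binarize the spectrogram, then score candidates by the fraction of agreeing bits. The row-mean threshold below is a crude stand-in for the paper's adaptive thresholding, and the Hough-transform alignment step is omitted.

    import numpy as np

    def binary_fingerprint(spectrogram):
        """Binarize a (log-frequency) spectrogram: a cell is 1 when it
        exceeds the mean of its frequency band."""
        return (spectrogram > spectrogram.mean(axis=1, keepdims=True)).astype(np.uint8)

    def hamming_similarity(fp_a, fp_b):
        """Fraction of agreeing bits between equal-shape fingerprints."""
        return float((fp_a == fp_b).mean())

    rng = np.random.default_rng(6)
    studio = rng.random((32, 100))                 # stand-in spectrogram
    live = studio + 0.1 * rng.random((32, 100))    # distorted live rendition
    print(hamming_similarity(binary_fingerprint(studio), binary_fingerprint(live)))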

 

Jun-Yong Lee; Hyoung-Gook Kim, "Audio Fingerprinting To Identify TV Commercial Advertisement In Real-Noisy Environment," Communications and Information Technologies (ISCIT), 2014 14th International Symposium on, pp. 527, 530, 24-26 Sept. 2014. doi: 10.1109/ISCIT.2014.7011969 This paper proposes a high-performance audio fingerprint extraction method for identifying TV commercial advertisements. In the proposed method, salient audio peak-pair fingerprints based on the constant Q transform (CQT) are hashed and stored, to be efficiently compared with one another. Experimental results confirm that the proposed method is quite robust under different noise conditions and improves the accuracy of the audio fingerprinting system in real noisy environments.

Keywords: acoustic noise; audio signal processing; feature extraction; television broadcasting; transforms; CQT; TV commercial advertisement identification; audio fingerprinting extraction method; constant Q transform; real-noisy environment; salient audio peak pair fingerprints; Databases; Fingerprint recognition; Noise; Robustness; Servers; TV; Time-frequency analysis; audio content identification; audio fingerprinting; constant Q transform  (ID#: 15-3780)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011969&isnumber=7011852
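
Peak-pair hashing follows the familiar landmark recipe: pair each spectral peak with a few later peaks and hash the (frequency, frequency, time-delta) triple, so lookups survive noise that wipes out individual bins. The sketch below uses an ordinary magnitude spectrogram and one peak per frame for brevity; the paper hashes peaks of a constant Q transform.

    import numpy as np

    def peak_pair_hashes(spectrogram, fan_out=3):
        """Map each (f1, f2, dt) landmark to its anchor time."""
        peaks = [(t, int(np.argmax(spectrogram[:, t])))   # one peak per frame
                 for t in range(spectrogram.shape[1])]
        hashes = {}
        for i, (t1, f1) in enumerate(peaks):
            for t2, f2 in peaks[i + 1:i + 1 + fan_out]:   # pair with later peaks
                hashes[(f1, f2, t2 - t1)] = t1
        return hashes

    rng = np.random.default_rng(7)
    S = rng.random((64, 50))          # stand-in for a CQT magnitude spectrogram
    print(len(peak_pair_hashes(S)))   # number of stored landmarks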

 

Hui Su; Hajj-Ahmad, A.; Min Wu; Oard, D.W., "Exploring the Use Of ENF For Multimedia Synchronization," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 4613, 4617, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854476 The electric network frequency (ENF) signal can be captured in multimedia recordings due to electromagnetic influences from the power grid at the time of recording. Recent work has exploited ENF signals for forensic applications, such as authenticating and detecting forgery of ENF-containing multimedia signals, and inferring their time and location of creation. In this paper, we explore a new use of ENF signals: automatic synchronization of audio and video. The ENF signal, as a time-varying random process, can serve as a timing fingerprint of multimedia signals, so synchronization of audio and video recordings can be achieved by aligning their embedded ENF signals. We demonstrate the proposed scheme with two applications: multi-view video synchronization and synchronization of historical audio recordings. The experimental results show that the ENF-based synchronization approach is effective, and has the potential to solve problems that are intractable by other existing methods.

Keywords: audio recording; electromagnetic interference; random processes; synchronisation; video recording; ENF signal; electric network frequency signal; forensic applications; historical audio recording automatic synchronization; multimedia recordings; multimedia signal timing fingerprint; multiview video recording automatic synchronization; power grid; time-varying random process; Audio recording; Forensics; Frequency estimation; Multimedia communication; Streaming media; Synchronization; Video recording; ENF; audio; historical recordings; synchronization; video  (ID#: 15-3781)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854476&isnumber=6853544
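
The synchronization idea reduces to two steps: estimate a per-second ENF track from each recording, then slide the tracks against each other and keep the best-correlating lag. The sketch below fabricates a wandering 60 Hz hum to show the mechanics; the band, frame length, and centroid estimator are assumptions, not the authors' pipeline.

    import numpy as np

    def enf_track(signal, fs, nominal=60.0):
        """Per-second ENF estimates: energy centroid of a narrow band
        around the nominal mains frequency in each one-second frame."""
        freqs = np.fft.rfftfreq(fs, 1 / fs)
        band = (freqs >= nominal - 2) & (freqs <= nominal + 2)
        window = np.hanning(fs)
        track = []
        for start in range(0, len(signal) - fs + 1, fs):
            mag2 = np.abs(np.fft.rfft(signal[start:start + fs] * window)) ** 2
            track.append((freqs[band] * mag2[band]).sum() / mag2[band].sum())
        return np.array(track)

    def align_offset(track_a, track_b):
        """Lag (in seconds) that best aligns two ENF tracks."""
        a, b = track_a - track_a.mean(), track_b - track_b.mean()
        return int(np.argmax(np.correlate(a, b, mode="full"))) - (len(b) - 1)

    fs = 1000
    rng = np.random.default_rng(8)
    drift = 60 + np.repeat(rng.normal(0, 0.05, 30), fs)   # wandering mains freq
    hum = np.sin(2 * np.pi * np.cumsum(drift) / fs)
    rec_a, rec_b = hum[:20 * fs], hum[5 * fs:25 * fs]     # recordings 5 s apart
    print(align_offset(enf_track(rec_a, fs), enf_track(rec_b, fs)))   # expect 5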

 

Hongbo Liu; Jie Yang; Sidhom, S.; Yan Wang; Yingying Chen; Fan Ye, "Accurate WiFi Based Localization for Smartphones Using Peer Assistance," Mobile Computing, IEEE Transactions on, vol. 13, no. 10, pp. 2199, 2214, Oct. 2014. doi: 10.1109/TMC.2013.140 Highly accurate indoor localization of smartphones is critical to enable novel location based features for users and businesses. In this paper, we first conduct an empirical investigation of the suitability of WiFi localization for this purpose. We find that although reasonable accuracy can be achieved, significant errors (e.g., 6-8 m) always exist. The root cause is the existence of distinct locations with similar signatures, which is a fundamental limit of pure WiFi-based methods. Inspired by high densities of smartphones in public spaces, we propose a peer assisted localization approach to eliminate such large errors. It obtains accurate acoustic ranging estimates among peer phones, then maps their locations jointly against the WiFi signature map subject to ranging constraints. We devise techniques for fast acoustic ranging among multiple phones and build a prototype. Experiments show that it can reduce the maximum and 80-percentile errors to as small as 2 m and 1 m, in time no longer than the original WiFi scanning, with negligible impact on battery lifetime.

Keywords: indoor radio; radionavigation; smart phones; wireless LAN; WiFi based localization method; WiFi signature map; acoustic ranging estimates; battery lifetime; indoor localization; location based features; peer assisted localization approach; peer phones; smart phones; Accuracy; Acoustics; Distance measurement; IEEE 802.11 Standards; Servers; Smart phones; Testing; Peer Assisted Localization; Smartphone; WiFi fingerprint localization; peer assisted localization  (ID#: 15-3782)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6648328&isnumber=6887382

 

Lan Zhang; Kebin Liu; Yonghang Jiang; Xiang-Yang Li; Yunhao Liu; Panlong Yang, "Montage: Combine Frames With Movement Continuity For Realtime Multi-User Tracking," INFOCOM, 2014 Proceedings IEEE, pp. 799, 807, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6848007 In this work we design and develop Montage for real-time multi-user formation tracking and localization by off-the-shelf smartphones. Montage achieves submeter-level tracking accuracy by integrating temporal and spatial constraints from user movement vector estimation and distance measuring. In Montage we designed a suite of novel techniques to surmount a variety of challenges in real-time tracking, without infrastructure and fingerprints, and without any a priori user-specific (e.g., stride-length and phone-placement) or site-specific (e.g., digitalized map) knowledge. We implemented, deployed and evaluated Montage in both outdoor and indoor environments. Our experimental results (847 traces from 15 users) show that the stride-length estimated by Montage over all users has error within 9 cm, and the moving direction estimated by Montage is within 20°. For realtime tracking, Montage provides meter-second-level formation tracking accuracy with off-the-shelf mobile phones.

Keywords: smart phones; target tracking; meter-second-level formation tracking accuracy; mobile phones; movement continuity; moving-direction estimation; real-time multiuser formation tracking; smartphones; spatial constraints; submeter-level tracking; temporal constraints; user movement vector estimation; Acceleration; Acoustics; Distance measurement; Earth; Topology; Tracking; Vectors  (ID#: 15-3783)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848007&isnumber=6847911
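
The per-user half of such a tracker is classic pedestrian dead reckoning: advance the position by one stride-length estimate along one heading estimate per detected step. The sketch below shows only that update; Montage's fusion of multiple users' tracks with inter-phone distance measurements is not reproduced.

    import math

    def dead_reckon(start_xy, steps):
        """Integrate (stride length, heading) pairs into a 2-D track."""
        x, y = start_xy
        track = [(x, y)]
        for stride_m, heading_rad in steps:
            x += stride_m * math.cos(heading_rad)
            y += stride_m * math.sin(heading_rad)
            track.append((x, y))
        return track

    # Toy walk: four 0.7 m steps heading east, then two heading north.
    steps = [(0.7, 0.0)] * 4 + [(0.7, math.pi / 2)] * 2
    print(dead_reckon((0.0, 0.0), steps)[-1])   # roughly (2.8, 1.4)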

 

Kaghaz-Garan, S.; Umbarkar, A.; Doboli, A., "Joint Localization And Fingerprinting Of Sound Sources For Auditory Scene Analysis," Robotic and Sensors Environments (ROSE), 2014 IEEE International Symposium on, pp. 49, 54, 16-18 Oct. 2014. doi: 10.1109/ROSE.2014.6952982 In the field of scene understanding, researchers have mainly focused on using video/images to extract different elements in a scene. The computational as well as monetary costs associated with such implementations are high. This paper proposes a low-cost system which uses sound-based techniques in order to jointly perform localization as well as fingerprinting of the sound sources. A network of embedded nodes is used to sense the sound inputs. Phase-based sound localization and Support-Vector Machine classification are used to locate and classify elements of the scene, respectively. The fusion of all this data presents a complete “picture” of the scene. The proposed concepts are applied to a vehicular-traffic case study. Experiments show that the system has a fingerprinting accuracy of up to 97.5%, localization error less than 4 degrees, and scene prediction accuracy of 100%.

Keywords: acoustic signal processing; pattern classification; sensor fusion; support vector machines; traffic engineering computing; auditory scene analysis; data fusion; embedded nodes; phase-based sound localization; scene element classification; sound source fingerprinting; sound source localization; sound-based techniques; support-vector machine classification; vehicular-traffic case study; Accuracy; Feature extraction; Image analysis; Sensors; Support vector machines; Testing; Vehicles  (ID#: 15-3784)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6952982&isnumber=6952949
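
Phase-based localization with a microphone pair is commonly done with GCC-PHAT: cross-correlate the two channels with phase-only weighting, read off the time difference of arrival, and convert it to a bearing. The sketch below is that textbook recipe, offered as an illustration rather than the authors' exact method; the microphone spacing and sample rate are assumed.

    import numpy as np

    def gcc_phat_delay(sig, ref, fs):
        """Time difference of arrival between two channels via GCC-PHAT."""
        n = len(sig) + len(ref)
        R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
        R /= np.abs(R) + 1e-12                  # PHAT: keep phase, drop magnitude
        cc = np.fft.irfft(R, n=n)
        max_shift = n // 2
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        return (int(np.argmax(cc)) - max_shift) / fs

    def bearing_deg(delay_s, mic_spacing_m, c=343.0):
        """Far-field angle of arrival implied by a TDOA."""
        return np.degrees(np.arcsin(np.clip(c * delay_s / mic_spacing_m, -1, 1)))

    fs = 16000
    rng = np.random.default_rng(9)
    src = rng.standard_normal(4096)
    mic_a, mic_b = src, np.roll(src, 8)          # 8-sample inter-mic delay
    d = gcc_phat_delay(mic_a, mic_b, fs)
    print(bearing_deg(d, mic_spacing_m=0.2))     # sign indicates which side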

 

Luque, J.; Anguera, X., "On the Modeling Of Natural Vocal Emotion Expressions Through Binary Key," Signal Processing Conference (EUSIPCO), 2014 Proceedings of the 22nd European, pp. 1562, 1566, 1-5 Sept. 2014. This work presents a novel method to estimate naturally expressed emotions in speech through binary acoustic modeling. Standard acoustic features are mapped to a binary value representation, and a support vector regression model is used to correlate them with the three continuous emotional dimensions. Three different sets of speech features, two based on spectral parameters and one on prosody, are compared on the VAM corpus, a set of spontaneous dialogues from a German TV talk-show. The regression analysis, in terms of correlation coefficient and mean absolute error, shows that the binary key modeling is able to successfully capture speaker emotion characteristics. The proposed algorithm obtains results comparable to those reported in the literature while relying on a much smaller set of acoustic descriptors. Furthermore, we also report preliminary results based on the combination of the binary models, which brings further performance improvements.

Keywords: acoustic signal processing; emotion recognition; regression analysis; speech recognition; support vector machines; German TV talk-show; VAM corpus; acoustic descriptors; binary acoustic modeling; binary key modeling; binary value representation; correlation coefficient; mean absolute error; natural vocal emotion expression modelling; speaker emotion characteristics; spectral parameters; speech features; spontaneous dialogues; standard acoustic feature mapping; support vector regression model; three-continuous emotional dimensions; Acoustics; Emotion recognition; Feature extraction; Speech; Speech recognition; Training; Vectors; Emotion modeling; VAM corpus; binary fingerprint; dimensional emotions  (ID#: 15-3785)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6952552&isnumber=6951911

 

Van Vaerenbergh, S.; González, O.; Vía, J.; Santamaría, I., "Physical Layer Authentication Based On Channel Response Tracking Using Gaussian Processes," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 2410, 2414, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854032 Physical-layer authentication techniques exploit the unique properties of the wireless medium to enhance traditional higher-level authentication procedures. We propose to reduce the higher-level authentication overhead by using a state-of-the-art multi-target tracking technique based on Gaussian processes. The proposed technique has the additional advantage that it is capable of automatically learning the dynamics of the trusted user's channel response and the time-frequency fingerprint of intruders. Numerical simulations show very low intrusion rates, and an experimental validation using a wireless test bed with programmable radios demonstrates the technique's effectiveness.

Keywords: Gaussian processes; fingerprint identification; security of data; target tracking; telecommunication security; time-frequency analysis; wireless channels; Gaussian process; automatic learning; channel response tracking; higher level authentication overhead; higher level authentication procedure; intruder; multitarget tracking technique; numerical simulation; physical layer authentication; programmable radio; time-frequency fingerprint; trusted user channel response; wireless medium; wireless test bed; Authentication; Channel estimation; Communication system security; Gaussian processes; Time-frequency analysis; Trajectory; Wireless communication; Gaussian processes; multi-target tracking; physical-layer authentication; wireless communications  (ID#: 15-3786)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854032&isnumber=6853544
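
The tracking-based test can be pictured as Gaussian-process regression on the channel gain: predict the trusted user's next value from recent history and flag an observation that falls outside the predictive band as a possible intruder. The numpy sketch below uses an RBF kernel and invented thresholds; the paper's multi-target tracker is considerably richer.

    import numpy as np

    def rbf(a, b, length=1.0, var=1.0):
        return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

    def gp_predict(t_train, y_train, t_new, noise=0.01):
        """Predictive mean and variance of the channel gain at t_new."""
        K = rbf(t_train, t_train) + noise * np.eye(len(t_train))
        k_star = rbf(t_train, t_new)
        mean = k_star.T @ np.linalg.solve(K, y_train)
        cov = rbf(t_new, t_new) - k_star.T @ np.linalg.solve(K, k_star)
        return mean, np.diag(cov)

    def authenticate(observed, mean, var, n_sigma=3.0):
        """Accept a frame when it lies inside the GP's predictive band."""
        return bool(np.all(np.abs(observed - mean) <= n_sigma * np.sqrt(var + 1e-12)))

    t = np.arange(10.0)
    gain = np.sin(0.3 * t)                         # smooth trusted-channel history
    m, v = gp_predict(t, gain, np.array([4.5]))
    print(authenticate(np.sin(0.3 * 4.5), m, v))   # True: consistent with history
    print(authenticate(np.array([5.0]), m, v))     # False: likely an intruder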

 

Bianchi, T.; Piva, A., "TTP-free Asymmetric Fingerprinting Protocol Based On Client Side Embedding," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 3987, 3991, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854350 In this paper, we propose a scheme to employ an asymmetric fingerprinting protocol within a client-side embedding distribution framework. The scheme is based on a novel client-side embedding technique that is able to transmit a binary fingerprint. This enables secure distribution of personalized decryption keys containing the Buyer's fingerprint by means of existing asymmetric protocols, without using a trusted third party. Simulation results show that the fingerprint can be reliably recovered by using non-blind decoding, and that it is robust with respect to common attacks. The proposed scheme can be a valid solution to both customers' rights and scalability issues in multimedia content distribution.

Keywords: client-server systems; cryptographic protocols; image coding; image watermarking; multimedia systems; trusted computing; Buyer's fingerprint; TTP-free asymmetric fingerprinting protocol; asymmetric protocols; binary fingerprint; client-side embedding distribution framework; client-side embedding technique; customer rights; multimedia content distribution; nonblind decoding; personalized decryption key distribution; scalability issues; trusted third party; Decoding; Encryption; Protocols; Servers; Table lookup; Watermarking; Buyer-Seller watermarking protocol; Client-side embedding; Fingerprinting; secure watermark embedding  (ID#: 15-3787)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854350&isnumber=6853544

 

Coover, B.; Jinyu Han, "A Power Mask Based Audio Fingerprint," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 1394, 1398, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6853826 The Philips audio fingerprint [1] has been used for years, but its robustness against external noise has not been studied accurately. This paper shows that the Philips fingerprint is noise resistant, and is capable of recognizing music that is corrupted by noise at a -4 to -7 dB signal-to-noise ratio. In addition, the drawbacks of the Philips fingerprint are addressed by utilizing a “Power Mask” in conjunction with the Philips fingerprint during the matching process. This Power Mask is a weight matrix given to the fingerprint bits, which allows mismatched bits to be penalized according to their relevance in the fingerprint. The effectiveness of the proposed fingerprint was evaluated by experiments using a database of 1030 songs and 1184 query files that were heavily corrupted by two types of noise at varying levels. Our experiments show the proposed method has significantly improved the noise resistance of the standard Philips fingerprint.

Keywords: audio signal processing; music; Power Mask; audio fingerprint; fingerprint bits; music; noise resistance; weight matrix;1f noise; Bit error rate; Databases; Resistance; Robustness; Signal to noise ratio; Audio Fingerprint; Music Recognition  (ID#: 15-3788)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853826&isnumber=6853544
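
The Power Mask amounts to a weighted bit error rate: each fingerprint bit carries a relevance weight, and mismatches on low-relevance bits cost less. A minimal sketch with invented weights follows.

    import numpy as np

    def masked_bit_error_rate(fp_query, fp_ref, power_mask):
        """Weighted Hamming comparison: mismatches on bits with small
        weights (perceptually weak, low-power regions) are discounted."""
        mismatch = (fp_query != fp_ref).astype(float)
        return float((mismatch * power_mask).sum() / power_mask.sum())

    rng = np.random.default_rng(10)
    ref = rng.integers(0, 2, 1024)
    query = ref.copy()
    noisy = rng.choice(1024, size=100, replace=False)
    query[noisy] ^= 1                        # noise flips 100 bits...
    mask = np.ones(1024)
    mask[noisy] = 0.2                        # ...but those bits carry low weight
    print(masked_bit_error_rate(query, ref, np.ones(1024)))   # plain BER ~0.098
    print(masked_bit_error_rate(query, ref, mask))            # masked BER ~0.021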



 

Android and iOS Encryption


Mobile telephone operating systems present interesting security challenges. The research cited here addresses encryption solutions for the two currently most popular systems, iOS and Android. Of the articles here, only three address iOS; the rest address problems with Android, perhaps because Android's open-source base makes it easier to study. One article also includes QNX, the Blackberry operating system. The work presented here was published in 2014.

Teufl, P.; Fitzek, A.; Hein, D.; Marsalek, A.; Oprisnik, A.; Zefferer, T., "Android Encryption Systems," Privacy and Security in Mobile Systems (PRISMS), 2014 International Conference on, pp. 1, 8, 11-14 May 2014. doi: 10.1109/PRISMS.2014.6970599 The high usability of smartphones and tablets is embraced by consumers as well as the corporate and public sector. However, especially in the non-consumer area, security plays a decisive role in the platform-selection process. All of the current companies within the mobile device sector have added a wide range of security features to their initially consumer-oriented devices (Apple, Google, Microsoft), or have dealt with security as a core feature from the beginning (RIM, now Blackberry). One of the key security features for protecting data on the device or in device backups is the encryption system, which is available in the majority of current devices. However, even under the assumption that the systems are implemented correctly, there is a wide range of parameters, specific use cases, and weaknesses that need to be considered when deploying mobile devices in security-critical environments. As the second part in a series of papers (the first part was on iOS), this work analyzes the deployment of the Android platform and the usage of its encryption systems within a security-critical context. For this purpose, Android's different encryption systems are assessed and their susceptibility to different attacks is analyzed in detail. Based on these results a workflow is presented, which supports deployment of the Android platform and usage of its encryption systems within security-critical application scenarios.

Keywords: Android (operating system); cryptography; data protection; smart phones; Android encryption systems; Android platform deployment analysis; Apple; Blackberry; Google; Microsoft; RIM; attack susceptibility; consumer-oriented devices; data protection; device backups; iOS; mobile device sector; mobile devices; nonconsumer area; platform-selection process; security features; security-critical application scenarios; security-critical context; security-critical environments; smart phones; tablets; Androids; Encryption; Humanoid robots; Smart phones (ID#:15-3733)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970599&isnumber=6970591

Verma, S.; Pal, S.K.; Muttoo, S.K., "A New Tool For Lightweight Encryption On Android," Advance Computing Conference (IACC), 2014 IEEE International, pp. 306, 311, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779339 Theft or loss of a mobile device can be an information security risk, as it can result in the loss of confidential personal data. Traditional cryptographic algorithms are not suitable for resource-constrained handheld devices. In this paper, we have developed an efficient and user-friendly tool called "NCRYPT" for the Android platform. The "NCRYPT" application is used to secure data at rest on Android, thus making it inaccessible to unauthorized users. It is based on a lightweight encryption scheme, Hummingbird-2. The application provides secure storage by making use of password-based authentication, so that an adversary cannot access the confidential data stored on the mobile device. The cryptographic key is derived through the password-based key derivation function PBKDF2 from the standard SUN JCE cryptographic provider. Various encryption tools available in the market are based on the AES or DES encryption schemes. The reported tool is based on Hummingbird-2 and is faster than most of the other existing schemes. It is also resistant to most of the attacks applicable to block and stream ciphers. Hummingbird-2 has been coded in the C language and embedded in the Android platform with the help of JNI (Java Native Interface) for faster execution. The application provides choices for encrypting the entire contents of the SD card or selected files on the smartphone, protecting the personal or confidential information held on such devices.

Keywords: C language; cryptography; smart phones; AES encryption scheme; Android platform; C language; DES encryption scheme; Hummingbird-2 scheme; JNI; Java native interface; NCRYPT application; PBKDF2 password based key generation method; SUN JCE cryptographic provider; block ciphers; confidential data; cryptographic algorithms; cryptographic key; information security risk; lightweight encryption scheme; mobile device; password based authentication; stream ciphers; Ciphers; Encryption; Smart phones; Standards; Throughput; Android; HummingBird2; Information Security; Lightweight Encryption;PBKDF2 (ID#:15-3734)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779339&isnumber=6779283
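
The key-derivation step named in this abstract is standard PBKDF2, which Python's standard library exposes directly. The sketch below shows that derivation only; the iteration count, salt handling, and key length are illustrative, and the Hummingbird-2 cipher itself is not reproduced.

    import hashlib, os

    def derive_key(password, salt=None, iterations=100_000, length=16):
        """PBKDF2-HMAC-SHA256: stretch a password into a cipher key.
        A fresh random salt is generated when none is supplied."""
        if salt is None:
            salt = os.urandom(16)
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                  iterations, dklen=length)
        return key, salt

    key, salt = derive_key("correct horse battery staple")
    print(key.hex())   # 128-bit key ready for a lightweight cipher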

Patil, M.; Sahu, V.; Jain, A., "SMS text Compression and Encryption on Android O.S," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1, 6, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921767 In today's globalized world, mobile communication is one of the fastest-growing media through which a sender can interact with others in a short time. During the transmission of data from sender to receiver, the size of the data is important, since more data takes more time. One of the limitations of sending data through mobile devices is the limited bandwidth and number of packets transmitted; the security of the data is also important. Hence various protocols have been implemented which not only provide security to the data but also utilize bandwidth efficiently. Here we propose an efficient technique for sending SMS text using a combination of compression and encryption. The data to be sent is first encrypted using an elliptic-curve cryptographic technique; since encryption increases the size of the text data, compression is then applied to this encrypted data so that it can be sent in a short time. The compression technique implemented here is an efficient one, since it includes an algorithm which compresses the text by 99.9%; hence a great amount of bandwidth gets saved. The hybrid compression-encryption technique for SMS text messages is implemented for the Android operating system.

Keywords: Android (operating system);cryptographic protocols; data communication; data compression; electronic messaging; public key cryptography; smart phones; Android OS; SMS text encryption-compression technique; data security; data transmission; elliptic curve cryptographic technique; mobile communication; mobile devices; security protocols; Algorithm design and analysis; Bandwidth; Computers; Encryption; Mobile communication; Mobile handsets; ECDSA; Look ahead buffer; PDA; SMS; lossless compression (ID#:15-3735)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921767&isnumber=6921705

Ma Licui; Li Meihong; Li Lun; Du Ye; Zhang Dawei, "A SDKEY-Based Secure Storage and Transmission Approach for Android Phone," Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), 2014 International Conference on, pp. 1, 6, 13-15 Oct. 2014. doi: 10.1109/CyberC.2014.10 To address the increasingly serious problem of sensitive data leakage from Android systems, this paper presents a data-protection method for encrypted storage and encrypted transmission that adopts the secure computation environment of the SDKEY device. Firstly, a dual-authentication scheme for login using SDKEY and a PIN is designed; it is used for login at system boot and on the lock screen. Secondly, an approach to SDKEY-based transparent encrypted storage for different kinds of data files is presented, and a more fine-grained encryption scheme for different file types is proposed. Finally, a method for encrypted transmission between Android phones is presented, and two kinds of key-exchange mechanism are designed for the subsequent encryption and decryption operations: one is a zero-key exchange and the other is a public-key exchange. A prototype system based on the above solution has been developed, and its security and performance are both analyzed and verified from several aspects.

Keywords: Android (operating system); message authentication; public key cryptography; storage management; Android phones; Android system; PIN; SDKEY device; SDKEY-based secure storage; SDKEY-based transparent encryption storage; data files; data protection; decryption operation ;dual-authentication scheme; encryption operation; encryption transmission; fine-grained encryption scheme; key exchange mechanisms; lock screen; prototype system; public key exchange; secure computation environment; sensitive data leakage; system boot; transmission approach; zero-key exchange; Authentication; Ciphers; Encryption; Receivers; Smart phones; Authentication; Encryption Storage; Encryption Transmission; Key exchange; SDKEY (ID#:15-3736)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984271&isnumber=6984259

Skillen, A.; Mannan, M., "Mobiflage: Deniable Storage Encryption for Mobile Devices," Dependable and Secure Computing, IEEE Transactions on, vol. 11, no. 3, pp. 224, 237, May-June 2014. doi: 10.1109/TDSC.2013.56 Data confidentiality can be effectively preserved through encryption. In certain situations, this is inadequate, as users may be coerced into disclosing their decryption keys. Steganographic techniques and deniable encryption algorithms have been devised to hide the very existence of encrypted data. We examine the feasibility and efficacy of deniable encryption for mobile devices. To address obstacles that can compromise plausibly deniable encryption (PDE) in a mobile environment, we design a system called Mobiflage. Mobiflage enables PDE on mobile devices by hiding encrypted volumes within random data in a device's free storage space. We leverage lessons learned from deniable encryption in the desktop environment, and design new countermeasures for threats specific to mobile systems. We provide two implementations for the Android OS, to assess the feasibility and performance of Mobiflage on different hardware profiles. MF-SD is designed for use on devices with FAT32 removable SD cards. Our MF-MTP variant supports devices that instead share a single internal partition for both apps and user accessible data. MF-MTP leverages certain Ext4 file system mechanisms and uses an adjusted data-block allocator. These new techniques for storing hidden volumes in Ext4 file systems can also be applied to other file systems to enable deniable encryption for desktop OSes and other mobile platforms.

Keywords: Android (operating system);cryptography; mobile computing; steganography; Android OS;Ext4 file system mechanisms;FAT32 removable SD cards; MF-MTP variant; MF-SD; Mobiflage; PDE; data confidentiality; data-block allocator; decryption keys; deniable storage encryption; desktop OS; desktop environment; mobile devices; mobile environment; plausibly deniable encryption; steganographic techniques; Androids; Encryption; Humanoid robots; Law; Mobile communication; Mobile handsets; File system security; deniable encryption; mobile platform security; storage encryption (ID#:15-3737)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6682886&isnumber=6813632

Hilgers, C.; Macht, H.; Muller, T.; Spreitzenbarth, M., "Post-Mortem Memory Analysis of Cold-Booted Android Devices," IT Security Incident Management & IT Forensics (IMF), 2014 Eighth International Conference on, pp.62,75, 12-14 May 2014. doi: 10.1109/IMF.2014.8 As recently shown in 2013, Android-driven smartphones and tablet PCs are vulnerable to so-called cold boot attacks. With physical access to an Android device, forensic memory dumps can be acquired with tools like FROST that exploit the remanence effect of DRAM to read out what is left in memory after a short reboot. While FROST can in some configurations be deployed to break full disk encryption, encrypted user partitions are usually wiped during a cold boot attack, such that a post-mortem analysis of main memory remains the only source of digital evidence. Therefore, we provide an in-depth analysis of Android's memory structures for system and application level memory. To leverage FROST in the digital investigation process of Android cases, we provide open-source Volatility plugins to support an automated analysis and extraction of selected Dalvik VM memory structures.

Keywords: DRAM chips; cryptography; digital forensics; mobile computing; smart phones; Android memory structures; Android-driven smartphones; DRAM remanence effect; Dalvik VM memory structures; FROST tool; application level memory; cold boot attacks; cold-booted Android devices; digital investigation process; forensic memory dumps; full disk encryption; open-source volatility plugins; post-mortem memory analysis; tablet PCs; Androids; Cryptography; Forensics; Kernel; Linux; Random access memory; Smart phones; Android Forensics; Cold Boot Attack; Dalvik VM; Memory Analysis; Post-mortem Analysis; Volatility Plugins (ID#:15-3738)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6824082&isnumber=6824069

Sriborrirux, W.; Promsiri, P.; Limmanee, A., "Multiple Secret Key Sharing Based on the Network Coding Technique for an Open Cloud DRM Service Provider," Computational Science and Engineering (CSE), 2014 IEEE 17th International Conference on, pp. 953, 959, 19-21 Dec. 2014. doi: 10.1109/CSE.2014.191 In this paper, we present an open cloud DRM service provider to protect the digital content's copyright. The proposed architecture enables service providers to use an on-the-fly DRM technique with digital signatures and symmetric-key encryption. Unlike other similar works, our system does not keep the encrypted digital content itself but lets the content creators store it in their own cloud storage. Moreover, the keys used for symmetric encryption are managed in an extremely secure way by means of the key fission engine and the key fusion engine. The ideas behind the two engines are taken from work on secure network coding and secret sharing. Although the use of secret sharing and secure network coding for the storage of digital content is proposed in some other works, this paper is the first one employing those ideas only for key management while letting the content be stored in the owner's cloud storage. In addition, we implement an Android SDK for e-Book readers to be compatible with our proposed open cloud DRM service provider. The experimental results demonstrate that our proposal is feasible for the real e-Book market, especially for individual businesses.

Keywords: cloud computing; copyright; cryptography; digital signatures; network coding; Android SDK; cloud storage; digital content copyright; digital signature; e-Book market; e-Book readers; encrypted digital content; key fission engine; key management; multiple secret key sharing; open cloud DRM service provider; secret sharing; secure network coding technique; symmetric encryption; symmetric-key encryption; Cloud computing; Electronic publishing; Encryption; Engines; Licenses; Servers; Digital Rights Management; Key Management; Network Coding; Open Cloud; Secret Sharing (ID#:15-3739)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023701&isnumber=7023510
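
The key fission/fusion idea rests on threshold secret sharing: split the content key so that any k of n shares reconstruct it while fewer reveal nothing. Below is plain Shamir sharing over a prime field as an illustration; the paper's engines combine this idea with network coding, which is not reproduced here.

    import random

    PRIME = 2**127 - 1   # Mersenne prime, large enough for a 120-bit key

    def split_secret(secret, n, k):
        """Shamir (k, n) sharing: the secret is the constant term of a
        random degree-(k-1) polynomial over GF(PRIME)."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
        f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover_secret(shares):
        """Lagrange interpolation of the polynomial at x = 0."""
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return total

    key = random.getrandbits(120)
    shares = split_secret(key, n=5, k=3)
    print(recover_secret(shares[:3]) == key)   # any 3 of 5 shares suffice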

Haciosman, M.; Bin Ye; Howells, G., "Protecting and Identifying Smartphone Apps Using Icmetrics," Emerging Security Technologies (EST), 2014 Fifth International Conference on, pp. 94, 98, 10-12 Sept. 2014. doi: 10.1109/EST.2014.28 As web-server spoofing increases, we investigate a novel technology termed ICmetrics, used to identify fraud in given software/hardware programs based on measurable quantities/features. ICmetrics technology is based on extracting features from a digital system's operation that may be integrated together to generate unique identifiers for each of the systems, or to create unique profiles that describe the systems' actual behavior. This paper looks at the properties of several behaviors as potential ICmetrics features for identifying Android apps; it presents several quality features which meet the ICmetrics requirements and can be used for encryption key generation. Finally, the paper identifies four Android apps and verifies the use of ICmetrics by identifying a spoofed app as a different app altogether.

Keywords: cryptography; smart phones; Android apps; ICmetrics; Web-server spoofing; encryption key generation; fraud identification; hardware programs; identifier generation; smartphone application identification; smartphone application protection; software programs; Androids; Feature extraction; Hardware; Humanoid robots; Security; Smart phones; Software; Android security; ICmetrics; biometrics; encryption; mobile security; security (ID#:15-3740)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6982782&isnumber=6982753

Ziegler, D.; Rauter, M.; Stromberger, C.; Teufl, P.; Hein, D., "Do You Think Your Passwords Are Secure?," Privacy and Security in Mobile Systems (PRISMS), 2014 International Conference on, pp. 1, 8, 11-14 May 2014. doi: 10.1109/PRISMS.2014.6970600 Many systems rely on passwords for authentication. Due to numerous accounts for different services, users have to choose and remember a significant number of passwords. Password-Manager applications address this issue by storing the user's passwords. They are especially useful on mobile devices, because of the ubiquitous access to the account passwords. Password-Managers often use key derivation functions to convert a master password into a cryptographic key suitable for encrypting the list of passwords, thus protecting the passwords against unauthorized, off-line access. Therefore, design and implementation flaws in the key derivation function impact password security significantly. Design and implementation problems in the key derivation function can render the encryption on the password list useless, by for example allowing efficient brute-force attacks, or - even worse - direct decryption of the stored passwords. In this paper, we analyze the key derivation functions of popular Android Password-Managers, with often startling results. With this analysis, we want to raise security awareness among developers of security-critical apps, and provide an overview of the current state of implementation security of security-critical applications.

Keywords: authorisation; cryptography; message authentication; ubiquitous computing; Android password-manager; authentication; bruteforce attack; cryptographic key; direct decryption; encryption; key derivation function; mobile device; password security; security-critical application; ubiquitous access; Androids; Databases; Encryption; Humanoid robots; Usability (ID#:15-3741)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970600&isnumber=6970591

Azfar, A.; Choo, K.-K.R.; Lin Liu, "A Study of Ten Popular Android Mobile VoIP Applications: Are the Communications Encrypted?," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 4858, 4867, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.596 Mobile Voice over Internet Protocol (mVoIP) applications have gained increasing popularity in the last few years, with millions of users communicating using such applications (e.g. Skype). Similar to other forms of Internet and telecommunications, mVoIP communications are vulnerable to both lawful and unauthorized interceptions. Encryption is a common way of ensuring the privacy of mVoIP users. To the best of our knowledge, there has been no academic study to determine whether mVoIP applications provide encrypted communications. In this paper, we examine Skype and nine other popular mVoIP applications for Android mobile devices, and analyze the intercepted communications to determine whether the captured voice and text communications are encrypted (or not). The results indicate that most of the applications encrypt text communications. However, voice communications may not be encrypted in six of the ten applications examined.

Keywords: Internet telephony; cryptography; data privacy; mobile computing; smart phones; telecommunication security; Android mobile VoIP applications; Android mobile devices; Internet; Skype; lawful interceptions; mVoIP communications; mobile voice-over-Internet protocol; text communication encryption; unauthorized interceptions; user privacy; Cryptography; Entropy; Google; Mobile communication; Protocols; Smart phones (ID#:15-3742)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759199&isnumber=6758592
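
One hedged way to reproduce the paper's core check is to estimate the Shannon entropy of captured payloads: ciphertext approaches 8 bits per byte, while plaintext voice or text scores noticeably lower. This is a rough heuristic sketch, not the authors' methodology.

```python
import math
import os
from collections import Counter

def shannon_entropy(payload: bytes) -> float:
    """Estimate bits of entropy per byte; encrypted payloads approach 8.0."""
    counts = Counter(payload)
    total = len(payload)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

print(shannon_entropy(b"hello hello hello hello " * 20))  # repetitive plaintext: low
print(shannon_entropy(os.urandom(500)))                   # random, like ciphertext: near 8.0
```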

Lopes, H.; Chatterjee, M., "Application H-Secure for Mobile Security," Circuits, Systems, Communication and Information Technology Applications (CSCITA), 2014 International Conference on, pp. 370, 374, 4-5 April 2014. doi: 10.1109/CSCITA.2014.6839289 Mobile security is as critical as the PIN on our ATM card or the lock on our front door. More than the phone itself, the information inside needs safeguarding as well - not necessarily against scams, but just for peace of mind. Android seems to have attracted the most attention from malicious code writers due to its popularity. The flexibility to freely download apps and content has fueled the explosive growth of smart phones and mobile applications, but it has also introduced a new risk factor. Malware can mimic popular applications and transfer contacts, photos and documents to unknown destination servers. There is no way to disable the application stores on mobile operating systems. Our smart phones are fundamentally open devices, and unfortunately for end-users they can quite easily be hacked. Enterprises now provide business applications on these devices; as a result, confidential business information resides on employee-owned devices. Once an employee quits, wiping the mobile operating system is not an optimal solution, as it will delete both business and personal data. Here we propose the H-Secure application for mobile security, with which one can store confidential data and files in encrypted form. The encrypted file and encryption key are stored on a Web server so that an unauthorized person cannot access the data. If the user loses the mobile device, he can log in to the Web server and delete the file and key to prevent any further decryption.

Keywords: Android (operating system); authorisation; graphical user interfaces; invasive software; mobile computing; private key cryptography; smart phones; Android smart phones; H-Secure application; Web server; application stores; business applications; business data; confidential business information; confidential data storage; confidential file storage; data access; decryption process; destination servers; employee-owned device; encrypted file; encryption key; free-download apps; free-download content; malicious code; malware; mobile operating system; mobile operating systems; mobile security applications; open devices; personal data; unauthorized person; Authentication; Encryption; Mobile communication; Mobile handsets; Servers; AES Encryption and Decryption; Graphical Password (ID#:15-3743)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6839289&isnumber=6839219
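
A minimal sketch of the split-storage idea follows: the ciphertext stays on the device while the key lives on a server, so deleting the server-side key makes the local data permanently unreadable. The upload_key/fetch_key helpers are hypothetical placeholders for the paper's Web-server component, and AES-GCM is assumed as the cipher.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def upload_key(user: str, key: bytes) -> None:
    """Hypothetical: store the key on the remote Web server."""
    ...

def fetch_key(user: str) -> bytes:
    """Hypothetical: retrieve the key; fails once the user has deleted it."""
    ...

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"confidential document", None)
upload_key("alice", key)  # only nonce + ciphertext remain on the phone
# Later, from an authorized session:
#   plaintext = AESGCM(fetch_key("alice")).decrypt(nonce, ciphertext, None)
```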

Novak, E.; Qun Li, "Near-pri: Private, Proximity Based Location Sharing," INFOCOM, 2014 Proceedings IEEE, pp.37,45, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6847922 As the ubiquity of smartphones increases we see an increase in the popularity of location based services. Specifically, online social networks provide services such as alerting the user of friend co-location, and finding a user's k nearest neighbors. Location information is sensitive, which makes privacy a strong concern for location based systems like these. We have built one such service that allows two parties to share location information privately and securely. Our system allows every user to maintain and enforce their own policy. When one party, (Alice), queries the location of another party, (Bob), our system uses homomorphic encryption to test if Alice is within Bob's policy. If she is, Bob's location is shared with Alice only. If she is not, no user location information is shared with anyone. Due to the importance and sensitivity of location information, and the easily deployable design of our system, we offer a useful, practical, and important system to users. Our main contribution is a flexible, practical protocol for private proximity testing, a useful and efficient technique for representing location values, and a working implementation of the system we design in this paper. It is implemented as an Android application with the Facebook online social network used for communication between users.

Keywords: cryptography; mobile computing; smart phones; social networking (online); Android application; Facebook online social network; Near-Pri; homomorphic encryption; location based services; location based systems; location information sensitivity; location value representation; private proximity based location sharing; private proximity testing; smartphone ubiquity; user location information privacy; Cryptography; Facebook; Lead; Polynomials; Privacy; Protocols; Vegetation (ID#:15-3744)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847922&isnumber=6847911
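
The homomorphic core of a protocol like this can be sketched with the additively homomorphic Paillier scheme (here via the third-party phe package). Bob computes an encryption of the squared distance without ever seeing Alice's coordinates; in this simplified version Alice decrypts and applies the threshold herself, whereas the actual Near-Pri protocol adds blinding so that only the in/out decision is revealed.

```python
from phe import paillier  # pip install phe

pub, priv = paillier.generate_paillier_keypair()

# Alice encrypts her coordinates and the sum of their squares.
ax, ay = 120, 45
enc_ax, enc_ay = pub.encrypt(ax), pub.encrypt(ay)
enc_a_sq = pub.encrypt(ax * ax + ay * ay)

# Bob combines ciphertexts using only additions and scalar multiplications:
# Enc(d^2) = Enc(ax^2 + ay^2) - 2*bx*Enc(ax) - 2*by*Enc(ay) + (bx^2 + by^2)
bx, by = 123, 41
enc_d_sq = enc_a_sq + enc_ax * (-2 * bx) + enc_ay * (-2 * by) + (bx * bx + by * by)

print(priv.decrypt(enc_d_sq) <= 10 ** 2)  # within a 10-unit policy radius? -> True
```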

Naito, K.; Mori, K.; Kobayashi, H.; Kamienoo, K.; Suzuki, H.; Watanabe, A., "End-to-end IP Mobility Platform In Application Layer for iOS and Android OS," Consumer Communications and Networking Conference (CCNC), 2014 IEEE 11th, pp. 92, 97, 10-13 Jan. 2014. doi: 10.1109/CCNC.2014.6866554 Smartphones are a new type of mobile device on which users can easily install additional software. In almost all smartphone applications, a client-server model is used because end-to-end communication is prevented by NAT routers. Recently, some smartphone applications provide real-time services such as voice and video communication, online games, etc. In these applications, end-to-end communication is preferable to reduce transmission delay and achieve efficient network usage. IP mobility and security are also important matters. However, the conventional IP mobility mechanisms are not suitable for these applications because most mechanisms are assumed to be installed in the OS kernel. We have developed a novel IP mobility mechanism called NTMobile (Network Traversal with Mobility). NTMobile supports end-to-end IP mobility in IPv4 and IPv6 networks; however, like other technologies, it is assumed to be installed in the Linux kernel. In this paper, we propose a new type of end-to-end mobility platform that provides end-to-end communication, mobility, and secure data exchange functions in the application layer for smartphone applications. In the platform, we use NTMobile, ported as an application program. We then extend NTMobile to suit smartphone devices and to provide secure data exchange. Client applications can achieve secure end-to-end communication and secure data exchange by sharing an encryption key between clients. Users also enjoy IP mobility, the main function of NTMobile, in each application. Finally, we confirmed that the developed module works on both Android and iOS systems.

Keywords: Android (operating system);IP networks; client-server systems; cryptography; electronic data interchange; iOS (operating system);real-time systems; smart phones; Android OS;IPv4 networks;IPv6 networks; Linux kernel; NAT routers; NTMobile; OS kernel; application layer; client-server model; encryption key; end-to-end IP mobility platform; end-to-end communication; iOS system; network traversal with mobility; network usage; real time services; secure data exchange; smartphones; transmission delay; Authentication; Encryption; IP networks; Manganese; Relays; Servers (ID#:15-3745)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866554&isnumber=6866537

Swati, K.; Patankar, A.J., "Effective Personalized Mobile Search Using KNN," Data Science & Engineering (ICDSE), 2014 International Conference on, pp. 157, 160, 26-28 Aug. 2014. doi: 10.1109/ICDSE.2014.6974629 Effective Personalized Mobile Search Using KNN implements an architecture to improve the effectiveness of personalization over large data sets while maintaining the security of the data. User preferences are gathered through clickthrough data, which is sent to the server in encrypted form and classified into content concepts and location concepts. To improve classification and minimize processing time, the KNN (K Nearest Neighbor) algorithm is used. The identified preferences (location and content) are merged to provide effective preferences to the user. The system makes use of four entropies to balance the weight between content concepts and location concepts, and implements a client-server architecture: the client collects user queries and maintains them in files for future reference, while the server carries out tasks such as training, re-ranking of the obtained search results, and concept extraction. User preference privacy is ensured through privacy parameters as well as encryption techniques. Experiments carried out on an Android-based mobile device show that the system gives significantly improved results over the previous algorithm for large data sets while maintaining security.

Keywords: client-server systems; cryptography; data privacy; information retrieval; mobile computing; pattern classification; Android based mobile; KNN; classification; clickthrough data; client-server architecture; concept extraction; data maintaining security; encryption techniques; k nearest neighborhood; personalized mobile search; user preference privacy; Androids; Classification algorithms; Mobile communication; Ontologies; Search engines; Servers; Vectors; Clickthrough data; concept; location search; mobile search engine; ontology; personalization; user preferences (ID#:15-3746)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6974629&isnumber=6974596
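
As a minimal sketch of the classification step, the snippet below trains a k-nearest-neighbor classifier to label clickthrough-derived feature vectors as content or location concepts. The two-dimensional toy features are assumptions for illustration; the paper's real feature space is built from clickthrough data and ontologies.

```python
from sklearn.neighbors import KNeighborsClassifier  # pip install scikit-learn

# Toy features per click: (content-concept weight, location-concept weight).
X = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8], [0.6, 0.4]]
y = ["content", "content", "location", "location", "content"]

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X, y)

# A new click leaning toward location concepts is labeled accordingly.
print(clf.predict([[0.15, 0.85]]))  # -> ['location']
```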

Bheemeswara Rao, K.V.; Ravi, N.; Phani Bhushan, R.; Pramod Kumar, K.; Venkataraman, S., "Bluetooth Technology: ApXLglevel End-To-End Security," Communications and Signal Processing (ICCSP), 2014 International Conference on, pp.340,344, 3-5 April 2014. doi: 10.1109/ICCSP.2014.6949858 The innovations in communication and computing technologies are changing the way we carry out the tasks in our daily lives. These revolutionary and disruptive technologies are available to users in various hardware form-factors like Smart Phones, Embedded Appliances, and Configurable or Customizable add-on devices. One such technology is Bluetooth [1], which enables users to communicate and exchange various kinds of information, such as messages, audio, streaming music and file transfers, in a Personal Area Network (PAN). Though it enables users to carry out these kinds of tasks without much effort or infrastructure, it inherently brings security and privacy concerns, which need to be addressed at different levels. In this paper, we present an application-layer framework which provides strong mutual authentication of applications, data confidentiality and data integrity independent of the underlying operating system. It can make use of the services of different Cryptographic Service Providers (CSP) on different operating systems and in different programming languages. This framework has been successfully implemented and tested on the Android Operating System on one end (using the Java language) and the MS-Windows 7 Operating System on the other (using ANSI C), to prove the framework's reliability and compatibility across OS, programming language and CSP. The framework also satisfies the three essential requirements of security, i.e. Confidentiality, Integrity and Availability, as per the NIST Guide to Bluetooth Security, and enables developers to adapt it for different kinds of applications based on Bluetooth technology.

Keywords: Bluetooth; C language; Java; audio streaming; authorisation; computer network reliability; computer network security; cryptography; operating systems (computers);personal area networks; smart phones; ANSI C language; Android operating system; ApXLglevel end-to-end security; Bluetooth security specification; Bluetooth technology; Java language; MS-Windows 7 operating system; NIST Guide; PAN; application-layer framework; audio streaming; communication technologies; computing technologies; configurable add-on devices; cryptographic service providers; customizable add-on devices; data confidentiality; data integrity; embedded appliances; file transfer; framework compatibility; framework reliability; music streaming; operating system; personal area network; privacy concern; programming languages; security concern; smart phones; strong mutual authentication; Encryption; Indexes; Mobile communication; Satellites; Authentication; Binary Payload; Bluetooth; Confidentiality; Mobile Phone; Security (ID#:15-3747)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6949858&isnumber=6949766
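
The mutual-authentication ingredient of such a framework can be sketched as a keyed challenge-response exchange. This is a generic sketch assuming a pre-shared key, not the paper's actual protocol, which builds on platform CSPs.

```python
import hashlib
import hmac
import os

shared_key = os.urandom(32)  # assumed provisioned to both endpoints in advance

def respond(key: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Endpoint A challenges B; a fresh nonce prevents replay of old proofs.
nonce_a = os.urandom(16)
proof_b = respond(shared_key, nonce_a)
assert hmac.compare_digest(proof_b, respond(shared_key, nonce_a))
# For mutual authentication, B then challenges A the same way.
```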

Hong Li; Limin Sun; Haojin Zhu; Xiang Lu; Xiuzhen Cheng, "Achieving Privacy Preservation In Wi-Fi Fingerprint-Based Localization," INFOCOM, 2014 Proceedings IEEE, pp. 2337, 2345, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6848178 WiFi fingerprint-based localization is regarded as one of the most promising techniques for indoor localization. The location of a to-be-localized client is estimated by mapping the measured fingerprint (WiFi signal strengths) against a database owned by the localization service provider. A common concern of this approach that has never been addressed in literature is that it may leak the client's location information or disclose the service provider's data privacy. In this paper, we first analyze the privacy issues of WiFi fingerprint-based localization and then propose a Privacy-Preserving WiFi Fingerprint Localization scheme (PriWFL) that can protect both the client's location privacy and the service provider's data privacy. To reduce the computational overhead at the client side, we also present a performance enhancement algorithm by exploiting the indoor mobility prediction. Theoretical performance analysis and experimental study are carried out to validate the effectiveness of PriWFL. Our implementation of PriWFL in a typical Android smartphone and experimental results demonstrate the practicality and efficiency of PriWFL in real-world environments.

Keywords: computer network security; data privacy; mobile computing; smart phones; wireless LAN; Android smartphone; PriWFL; computational overhead reduction; data privacy ;indoor localization; indoor mobility prediction; localization service provider; performance enhancement algorithm; privacy-preserving WiFi fingerprint localization scheme; real-world environments; signal strengths; Accuracy; Cryptography; Data privacy; Databases; IEEE 802.11 Standards; Privacy; Servers; WiFi fingerprint-based localization; data privacy; homomorphic encryption ;location privacy (ID#:15-3748)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848178&isnumber=6847911

Putra, Made Sumarsana Adi; Budiman, Gelar; Novamizanti, Ledya, "Implementation of Steganography Using LSB With Encrypted And Compressed Text Using TEA-LZW on Android," Computer, Control, Informatics and Its Applications (IC3INA), 2014 International Conference on, pp. 93, 98, 21-23 Oct. 2014. doi: 10.1109/IC3INA.2014.7042607 The development of data communications enables the exchange of information via mobile devices more easily, and security in that exchange is very important. One of the weaknesses of steganography is the limited capacity of data that can be inserted; with compression, the size of the data is reduced. In this paper, we design an application on the Android platform that implements LSB steganography and TEA cryptography for the security of a text message, whose size is reduced by lossless LZW compression. The advantage of this method is that it provides double security and room for longer messages, so it is expected to be a good way to exchange information. The system achieves an average compression ratio of 67.42%, the modified TEA algorithm yields an average avalanche effect of 53.8%, the average PSNR of the stego image is 70.44 dB, and the average MOS value is 4.8.

Keywords: Android; Compression; Encryption; LSB; LZW; Steganography; TEA (ID#:15-3749)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042607&isnumber=7042583
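
The embedding step can be illustrated with a byte-level LSB sketch: each bit of the (already encrypted and compressed) message replaces the least-significant bit of one cover byte. The TEA encryption and LZW compression stages of the paper are omitted here, and the cover bytes stand in for image pixel data.

```python
def embed_lsb(cover: bytes, message: bytes) -> bytearray:
    """Hide message bits in the least-significant bit of successive cover bytes."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small - the capacity limit compression works around")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # clear the LSB, then set it to the message bit
    return stego

def extract_lsb(stego: bytes, n_bytes: int) -> bytes:
    bits = [b & 1 for b in stego[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k * 8:(k + 1) * 8]))
        for k in range(n_bytes)
    )

cover = bytes(range(256)) * 4   # stand-in for pixel data
secret = b"meet at noon"        # in the paper: TEA-encrypted, LZW-compressed text
assert extract_lsb(embed_lsb(cover, secret), len(secret)) == secret
```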

Shao Shuai; Dong Guowei; Guo Tao; Yang Tianchang; Shi Chenjie, "Analysis on Password Protection in Android Applications," P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), 2014 Ninth International Conference on, pp.504,507, 8-10 Nov. 2014. doi: 10.1109/3PGCIC.2014.102 Although there has been much research on the leakage of sensitive data in Android applications, most existing research focuses on how to detect malware or adware that intentionally collects user privacy. There is not much research on analyzing the vulnerabilities of apps that may cause the leakage of privacy. In this paper, we present a vulnerability analysis method which combines taint analysis and cryptography misuse detection. The four steps of this method are decompilation, taint analysis, API call recording, and cryptography misuse analysis, all of which except taint analysis can be executed by existing tools. We develop a prototype tool, PW Exam, to analyze how passwords are handled and whether an app is vulnerable to password leakage. Our experiment shows that a third of apps are vulnerable to leaking their users' passwords.

Keywords: cryptography; data privacy; mobile computing; smart phones; API call record; Android applications; PW Exam; cryptography misuse analysis; cryptography misuse detection; decompile step; password leakage; password protection; taint analysis; user privacy; vulnerability analyzing method; Androids; Encryption; Humanoid robots; Privacy; Smart phones; Android apps; leakage; password; vulnerability (ID#:15-3750)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024636&isnumber=7024297

Rastogi, V.; Yan Chen; Xuxian Jiang, "Catch Me If You Can: Evaluating Android Anti-Malware Against Transformation Attacks," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 1, pp. 99, 108, Jan. 2014. doi: 10.1109/TIFS.2013.2290431 Mobile malware threats (e.g., on Android) have recently become a real concern. In this paper, we evaluate the state-of-the-art commercial mobile anti-malware products for Android and test how resistant they are against various common obfuscation techniques (even with known malware). Such an evaluation is important for not only measuring the available defense against mobile malware threats, but also proposing effective, next-generation solutions. We developed DroidChameleon, a systematic framework with various transformation techniques, and used it for our study. Our results on 10 popular commercial anti-malware applications for Android are worrisome: none of these tools is resistant against common malware transformation techniques. In addition, a majority of them can be trivially defeated by applying slight transformation over known malware with little effort for malware authors. Finally, in light of our results, we propose possible remedies for improving the current state of malware detection on mobile devices.

Keywords: invasive software; mobile computing; mobile handsets; operating systems (computers);Android antimalware; DroidChameleon; commercial mobile antimalware products; malware authors; malware detection; malware transformation; mobile devices; mobile malware threats; next-generation solutions; obfuscation techniques; transformation attacks; Androids; Encryption; Humanoid robots; Malware; Mobile communication; Android; Mobile; anti-malware; malware (ID#:15-3751)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6661334&isnumber=6684617
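
A toy illustration of why the evaluated products fail: detectors that match byte-level signatures are defeated by any transformation that changes the bytes while preserving behavior. The snippet below fakes this with a hash-based signature database and a trivial renaming transform; it is a conceptual sketch, not DroidChameleon itself, and the byte strings are invented.

```python
import hashlib

original_dex = b"dex\n035\x00...class com/mal/Payload..."      # stand-in for app bytecode
transformed = original_dex.replace(b"com/mal", b"org/benign")  # trivial repackaging transform

signature_db = {hashlib.sha256(original_dex).hexdigest()}      # naive "anti-malware" signatures

def flagged(sample: bytes) -> bool:
    return hashlib.sha256(sample).hexdigest() in signature_db

print(flagged(original_dex))  # True  - the known sample is detected
print(flagged(transformed))   # False - a behavior-preserving rename evades the signature
```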

Adibi, S., "Comparative Mobile Platforms Security Solutions," Electrical and Computer Engineering (CCECE), 2014 IEEE 27th Canadian Conference on, pp.1,6, 4-7 May 2014. doi: 10.1109/CCECE.2014.6900963 Mobile platform security solutions have become especially important for mobile computing paradigms, due to the fact that increasing amounts of private and sensitive information are being stored on smartphones' on-device memory or MicroSD/SD cards. This paper takes a comparative approach to the security aspects of current smartphone systems, including iOS, Android, BlackBerry (QNX), and Windows Phone.

Keywords: mobile computing; security of data; Android; BlackBerry; QNX; Windows Phone; comparative mobile platforms; iOS; mobile computing paradigm; mobile platform security solution; private information; sensitive information; smart phone; Androids; Encryption; Kernel; Mobile communication; Smart phones (ID#:15-3752)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900963&isnumber=6900900

Shao Shuai; Dong Guowei; Guo Tao; Yang Tianchang; Shi Chenjie, "Modelling Analysis and Auto-detection of Cryptographic Misuse in Android Applications," Dependable, Autonomic and Secure Computing (DASC), 2014 IEEE 12th International Conference on, pp. 75, 80, 24-27 Aug. 2014. doi: 10.1109/DASC.2014.22 Cryptographic misuse affects a sizeable portion of Android applications, yet only an empirical study has so far been made of the problem. In this paper, we perform a systematic analysis of cryptographic misuse, build a cryptographic misuse vulnerability model, and implement a prototype tool, the Crypto Misuse Analyser (CMA). The CMA performs static analysis on Android apps and selects the branches that invoke the cryptographic API. It then runs the app along the target branch and records the cryptographic API calls. Finally, the CMA identifies cryptographic API misuse vulnerabilities from the records based on the pre-defined model. We analyze dozens of Android apps with the help of the CMA and find that more than half are affected by such vulnerabilities.

Keywords: Android (operating system);application program interfaces; cryptography; program diagnostics; Android application; CMA; cryptographic API; cryptographic misuse autodetection; cryptographic misuse vulnerability model; prototype tool crypto misuse analyser; static analysis; Analytical models; Androids; Encryption; Humanoid robots; Runtime; Android; Cryptographic Misuse; Modelling Analysis; Vulnerability (ID#:15-3753)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6945307&isnumber=6945641
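
The model's static side can be approximated by pattern-matching decompiled sources for well-known misuse idioms, as in the sketch below. The patterns are illustrative assumptions covering three classic mistakes (note that Cipher.getInstance("AES") defaults to ECB mode on Android); CMA's real analysis also executes the app and inspects the recorded API calls.

```python
import re

MISUSE_PATTERNS = {
    "ECB mode": re.compile(r'Cipher\.getInstance\("AES("|/ECB/)'),
    "static IV": re.compile(r'new IvParameterSpec\("[^"]*"\.getBytes'),
    "hardcoded key": re.compile(r'new SecretKeySpec\("[^"]*"\.getBytes'),
}

def scan(decompiled_source: str) -> list:
    """Return the names of misuse patterns found in a decompiled source string."""
    return [name for name, pat in MISUSE_PATTERNS.items() if pat.search(decompiled_source)]

snippet = ('Cipher c = Cipher.getInstance("AES");'
           'SecretKey k = new SecretKeySpec("12345678".getBytes(), "AES");')
print(scan(snippet))  # -> ['ECB mode', 'hardcoded key']
```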

Marghescu, A.; Teseleanu, G.; Svasta, P., "Cryptographic Key Generator Candidates Based On Smartphone Built-In Sensors," Design and Technology in Electronic Packaging (SIITME), 2014 IEEE 20th International Symposium for, vol., no., pp.239,243, 23-26 Oct. 2014. doi: 10.1109/SIITME.2014.6967037 Random numbers represent one of the most sensitive parts of a cryptographic system, since cryptographic keys must be based entirely on them. The security of a communication relies on the key established between two users: if an attacker is able to deduce that key, the communication is compromised. This is why key generation must rely completely on random number generators, so that nobody can deduce the key. This paper describes a set of public and free Random Number Generators (RNG) within Android-based smartphones that exploit different sensors, along with the way this is achieved, and presents conclusive tests and results for them.

Keywords: Android (operating system);cryptography; random number generation; smart phones; Android-based smartphones; RNG; cryptographic key generator candidates; cryptographic system; random number generators; smartphone built-in sensors; Ciphers; Encryption; Generators; Random sequences; Sensors; Cryptography; RNG; Random Number Generators; Sensors; Smartphone (ID#:15-3754)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6967037&isnumber=6966980
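
A common construction for such sensor-based generators is to hash a window of noisy readings into uniform output bytes. The sketch below assumes hypothetical accelerometer values; a real design would read them from Android's SensorManager and estimate how much min-entropy the samples actually carry, which is what the paper's tests probe.

```python
import hashlib
import struct

def extract_random_bytes(sensor_samples) -> bytes:
    """Condition raw sensor noise into 32 uniform-looking bytes with a hash extractor.

    Raw accelerometer readings are biased and correlated, so they are never
    used directly; hashing many samples concentrates their unpredictability.
    """
    raw = b"".join(struct.pack("<d", s) for s in sensor_samples)
    return hashlib.sha256(raw).digest()

# Hypothetical readings; on Android these would come from SensorManager.
samples = [0.0123, -0.0457, 9.8102, 0.0131, -0.0448, 9.8097]
print(extract_random_bytes(samples).hex())
```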

Luchian, E.; Terebes, R.; Cremene, M., "Design and implementation of a mobile VoIP system on Android," Electronics and Telecommunications (ISETC), 2014 11th International Symposium on , vol., no., pp.1,4, 14-15 Nov. 2014. doi: 10.1109/ISETC.2014.7010772 The paper presents a secure solution that provides VoIP service for mobile users, handling both pre-call and mid-call mobility. Pre-call mobility is implemented using a presence server that acts as a DNS for the moving users. Our approach also detects any change in the attachment point of the moving users and transmits it to the peer entity by in band signaling using socket communications. For true mid-call mobility we also employ buffering techniques that store packets for the duration of the signaling procedure. The solution was implemented for Android devices and it uses ASP technology for the server part.

Keywords: Android (operating system);Internet telephony; mobility management (mobile radio);peer-to-peer computing; ASP technology; Android devices; DNS; VoIP service; buffering techniques; in band signaling; mobile VoIP system; mobile users; moving users; peer entity; pre-call mobility; signaling procedure; socket communications; true mid-call mobility; Androids; Cryptography; Graphical user interfaces; IP networks; Protocols; Servers; Smart phones; Android; VoIP; encryption; mobility; sockets (ID#:15-3755)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7010772&isnumber=7010721


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Anonymity and Privacy in Wireless Networks


 
SoS Newsletter Logo

Anonymity & Privacy in Wireless Networks

 

Minimizing privacy risk is one of the major problems in the development of social media and hand-held smart phone technologies, vehicle ad hoc networks, and wireless sensor networks. These research articles were presented in 2014.  

 

Kazemi, M.; Azmi, R., "Privacy Preserving And Anonymity In Multi Sinks Wireless Sensor Networks With Master Sink," Computing, Communication and Networking Technologies (ICCCNT), 2014 International Conference on, pp. 1,7, 11-13 July 2014. doi: 10.1109/ICCCNT.2014.6963107

Abstract: Wireless networks have grown larger than in the past, so in recent years networks with multiple sinks have become more useful, and anonymity and privacy in such networks are now a challenge. In this paper, we propose a new method for anonymity in multi-sink wireless sensor networks. In this method we use layered encryption to provide source and event privacy, and a label-switching routing method to provide sink anonymity in each cluster. A master sink, which is a powerful base station, is used to connect the sinks to each other.

Keywords: telecommunication network routing; telecommunication security; wireless sensor networks; layer encryption; master sink; multisinks wireless sensor networks; powerful base station; privacy anonymity; privacy preserving; sink anonymity; switching routing method; Encryption; Network topology; Privacy; Protocols; Wireless sensor networks; Anonymity; Label switching; Layer encryption; Multi sinks; Privacy   (ID#:15-3954)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6963107&isnumber=6962988
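
The layered-encryption idea can be sketched in an onion-routing style: the source wraps an event report once per hop, and each relay on the way to the sink strips exactly one layer, so no single node sees both the source and the plaintext. The snippet uses the cryptography package's Fernet recipe and an assumed three-hop path; the paper's label-switching component is omitted.

```python
from cryptography.fernet import Fernet  # pip install cryptography

hop_keys = [Fernet.generate_key() for _ in range(3)]  # one key per hop (assumed path)

def wrap_layers(event: bytes, keys) -> bytes:
    """Encrypt innermost-first, so the first hop can strip the outermost layer."""
    for key in reversed(keys):
        event = Fernet(key).encrypt(event)
    return event

packet = wrap_layers(b"event: asset seen at node 42", hop_keys)
for key in hop_keys:  # each relay in turn removes its own layer
    packet = Fernet(key).decrypt(packet)
print(packet)  # the sink recovers the plaintext report
```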

 

Shinganjude, R.D.; Theng, D.P., "Inspecting the Ways of Source Anonymity in Wireless Sensor Network," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp.705, 707, 7-9 April 2014. doi: 10.1109/CSNT.2014.148

Abstract: Sensor networks are mainly deployed to monitor and report real events, and thus it is very difficult and expensive to achieve event source anonymity for them, as sensor networks are very limited in resources. Data obscurity, i.e. the source anonymity problem, implies that an unauthorized observer must be unable to detect the origin of events by analyzing the network traffic; this problem has emerged as an important topic in the security of wireless sensor networks. This work inspects the different approaches taken to attaining source anonymity in wireless sensor networks, with a variety of techniques based on different adversarial assumptions. The approach achieving the best result in source anonymity is proposed for further improvement of source location privacy. The paper suggests implementing the prominent and effective LSB steganography technique for this improvement.

Keywords: steganography; telecommunication traffic; wireless sensor networks; LSB steganography technique; adversarial assumptions; event source anonymity; network traffic; source location privacy; wireless sensor networks; Communication systems; Wireless sensor network; anonymity; coding theory; persistent dummy traffic; statistical test; steganography   (ID#:15-3955)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821490&isnumber=6821334

 

Manjula, R.; Datta, R., "An Energy-Efficient Routing Technique For Privacy Preservation Of Assets Monitored With WSN," Students' Technology Symposium (TechSym), 2014 IEEE, pp.325,330, Feb. 28 2014-March 2 2014. doi: 10.1109/TechSym.2014.6808069

Abstract: Wireless Sensor Networks (WSNs) are deployed to monitor assets (endangered species) and report the locations of these assets to the Base Station (BS), also known as the Sink. The hunter (adversary) attacks the network at one or two hops away from the Sink, eavesdrops on the wireless communication links, and traces back to the location of the assets to capture them. Existing solutions proposed to preserve the privacy of the assets lack energy efficiency, as they rely on random-walk routing and fake packet injection to prevent the hunter from locating the assets. In this paper we present an energy-efficient privacy-preserving routing algorithm where the event (i.e., asset) detecting nodes, called source nodes, report the events' location information to the Base Station using the phantom source (also known as phantom node) concept and the α-angle anonymity concept. Routing is done using an existing greedy routing protocol. Comparison through simulations shows that our solution reduces energy consumption and delay while maintaining the same level of privacy as two existing popular techniques.

Keywords: data privacy; energy conservation; routing protocols; telecommunication power management; telecommunication security; wireless sensor networks; WSN; asset monitoring; base station; endangered species monitoring; energy efficient routing technique; fake packet injection technique; phantom node; phantom source; privacy preservation; wireless sensor network; Base stations; Delays; Monitoring; Phantoms; Privacy; Routing; Routing protocols   (ID#:15-3956)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6808069&isnumber=6807895

 

Xiaoguang Niu; Chuanbo Wei; Weijiang Feng; Qianyuan Chen, "OSAP: Optimal-Cluster-Based Source Anonymity Protocol In Delay-Sensitive Wireless Sensor Networks," Wireless Communications and Networking Conference (WCNC), 2014 IEEE, pp.2880,2885, 6-9 April 2014. doi: 10.1109/WCNC.2014.6952906

Abstract: For wireless sensor networks deployed to monitor and report real events, event source-location privacy (SLP) is a critical security property. Previous work has proposed schemes based on fake packet injection such as FitProbRate and TFS, to realize event source anonymity for sensor networks under a challenging attack model where a global attacker is able to monitor the traffic in the entire network. Although these schemes can well protect the SLP, there exists imbalance in traffic or delay. In this paper, we propose an Optimal-cluster-based Source Anonymity Protocol (OSAP), which can achieve a tradeoff between network traffic and real event report latency through adjusting the transmission rate and the radius of unequal clusters, to reduce the network traffic. The simulation results demonstrate that OSAP can significantly reduce the network traffic and the delay meets the system requirement.

Keywords: data privacy; protocols; telecommunication security; wireless sensor networks; OSAP; challenging attack model; delay sensitive wireless sensor networks; event source anonymity; event source location privacy; fake packet injection; global attacker; network traffic; optimal cluster based source anonymity protocol; real event report latency; Base stations; Delays; Mobile communication; Mobile computing; Security; Telecommunication traffic; Wireless networks; cluster-based wireless sensor network; fake packet injection; global attacker; network traffic reduction; source anonymity   (ID#:15-3957)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6952906&isnumber=6951847

 

Chang-Ji Wang; Dong-Yuan Shi; Xi-Lei Xu, "Pseudonym-Based Cryptography and Its Application in Vehicular Ad Hoc Networks," Broadband and Wireless Computing, Communication and Applications (BWCCA), 2014 Ninth International Conference on, pp.253, 260, 8-10 Nov. 2014. doi: 10.1109/BWCCA.2014.72

Abstract: As the cornerstone of the future intelligent transportation system, vehicular ad hoc networks (VANETs) have attracted intensive attention from the academic and industrial research communities in recent years. For widespread deployment of VANETs, security and privacy issues must be addressed properly. In this paper, we introduce the notion of pseudonym-based cryptography, and present a provable secure pseudonym-based cryptosystems with a trusted authority that includes a pseudonym-based multi-receiver encryption scheme, a pseudonym-based signature scheme, and a pseudonym-based key establishment protocol. We then propose a secure and efficient data access scheme for VANETs based on cooperative caching technology and our proposed pseudonym-based cryptosystems. On the one hand, the efficiency of data access are greatly improved by allowing the sharing and coordination of cached data among multiple vehicles. On the other hand, anonymity of the vehicles, data confidentiality, integrity and non-repudiation are guaranteed by employing our proposed pseudonym-based cryptosystems. Simulation results have shown that our proposed pseudonym-based cryptosystems are suitable to the VANETs environment.

Keywords: cryptographic protocols; vehicular ad hoc networks; VANET; cooperative caching technology; data access scheme; provable secure pseudonym-based cryptosystems; pseudonym-based key establishment protocol; pseudonym-based multi-receiver encryption scheme; trusted authority; vehicular ad hoc networks; Encryption; Privacy; Protocols; Vehicles; Vehicular ad hoc networks; cooperative caching; onion packet; pseudonym-based key establishment protocol; pseudonym-based multi-receiver encryption scheme; pseudonym-based signature scheme; vehicular ad-hoc networks   (ID#:15-3958)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7016077&isnumber=7015998

 

Shahare, P.C.; Chavhan, N.A., "An Approach to Secure Sink Node's Location Privacy in Wireless Sensor Networks," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp.748,751, 7-9 April 2014. doi: 10.1109/CSNT.2014.157

Abstract: Wireless Sensor Networks have a wide range of applications, including environmental monitoring and data gathering in hostile environments. This kind of network is prone to various external and internal attacks because of its open nature. The sink node is a receiving and collection point that gathers data from the sensor nodes present in the network, thus forming the bridge between the sensors and the user. A complete sensor network can be made useless if this sink node is attacked, so to ensure continuous usage it is very important to preserve the location privacy of sink nodes. An approach for securing the location privacy of the sink node is proposed in this paper. The proposed scheme modifies the traditional Blast technique by adding a shortest-path algorithm and an efficient clustering mechanism to the network, and tries to minimize energy consumption and packet delay.

Keywords: delays; power consumption; wireless sensor networks; Blast technique; clustering mechanism; continuous usage; data gathering; energy consumption; environmental monitoring; external attacks; hostile environments; internal attacks; location privacy; packet delay; secure sink node; shortest path algorithm; wireless sensor networks; Base stations; Clustering algorithms; Computer science; Privacy; Receivers; Security; Wireless sensor networks; Anonymity; Sink node location privacy; Wireless sensor network   (ID#:15-3959)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821499&isnumber=6821334

 

Tomandl, A.; Herrmann, D.; Fuchs, K.-P.; Federrath, H.; Scheuer, F., "VANETsim: An Open Source Simulator For Security And Privacy Concepts In VANETs," High Performance Computing & Simulation (HPCS), 2014 International Conference on, pp.543,550, 21-25 July 2014. doi: 10.1109/HPCSim.2014.6903733

Abstract: Aside from massive advantages in safety and convenience on the road, Vehicular Ad Hoc Networks (VANETs) introduce security risks to the users. Proposals of new security concepts to counter these risks are challenging to verify because of missing real world implementations of VANETs. To fill this gap, we introduce VANETsim, an event-driven simulation platform, specifically designed to investigate application-level privacy and security implications in vehicular communications. VANETsim focuses on realistic vehicular movement on real road networks and communication between the moving nodes. A powerful graphical user interface and an experimentation environment supports the user when setting up or carrying out experiments.

Keywords: data privacy; discrete event simulation; graphical user interfaces; mobile computing; public domain software; vehicular ad hoc networks; VANETsim; application-level privacy implications; application-level security implications; event-driven simulation platform; graphical user interface; open source simulator; road safety; vehicular ad hoc networks; vehicular communications; Analytical models ;Graphical user interfaces; Privacy; Roads; Security; Vehicles; Vehicular ad hoc networks; Anonymity; Car2Car;Intrusion and Attack Detection; Privacy; Privacy-Enhancing Technology; Security; Security in Mobile and Wireless Networks; Simulator; VANET; Vehicular Communication   (ID#:15-3960)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903733&isnumber=6903651

 

Ward, J.R.; Younis, M., "A Metric for Evaluating Base Station Anonymity in Acknowledgement-Based Wireless Sensor Networks," Military Communications Conference (MILCOM), 2014 IEEE, pp. 216, 221, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.41

Abstract: In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities with applications ranging from industrial automation and product tracking to intrusion detection at a hostile border. A typical WSN topology allows sensors to act as data sources that forward their measurements to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary that desires to achieve the most impactful attack possible against a WSN. An adversary may employ traffic analysis techniques to identify the BS based on network traffic flow even when the WSN implements conventional security mechanisms. This motivates a need for WSN operators to achieve improved BS anonymity to protect the identity, role, and location of the BS. Although a variety of countermeasures have been proposed to improve BS anonymity, those techniques are typically evaluated based on a WSN that does not employ acknowledgements. In this paper we propose an enhanced evidence theory metric called Acknowledgement-Aware Evidence Theory (AAET) that more accurately characterizes BS anonymity in WSNs employing acknowledgements. We demonstrate AAET's improved robustness to a variety of configurations through simulation.

Keywords: telecommunication security; telecommunication traffic; wireless sensor networks; WSN topology; acknowledgement aware evidence theory; acknowledgement based wireless sensor networks; base station anonymity; enhanced evidence theory metric; network traffic flow; traffic analysis technique; Correlation; Measurement; Media Access Protocol; Sensors; Standards; Wireless sensor networks; acknowledged communication ;anonymity; location privacy; wireless sensor network   (ID#:15-3961)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956762&isnumber=6956719
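
Evidence theory metrics of this kind rest on Dempster's rule of combination. The sketch below combines two bodies of evidence about which node is the base station, restricted to singleton hypotheses for simplicity; the paper's AAET metric is more elaborate, and the numbers here are invented.

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule over singleton hypotheses (candidate base-station nodes)."""
    combined, conflict = {}, 0.0
    for h1, p1 in m1.items():
        for h2, p2 in m2.items():
            if h1 == h2:
                combined[h1] = combined.get(h1, 0.0) + p1 * p2
            else:
                conflict += p1 * p2  # mass assigned to incompatible hypotheses
    return {h: p / (1.0 - conflict) for h, p in combined.items()}

# Invented evidence: one body from traffic volume, one from acknowledgement timing.
traffic_evidence = {"node_3": 0.6, "node_7": 0.3, "node_9": 0.1}
ack_evidence = {"node_3": 0.5, "node_7": 0.4, "node_9": 0.1}
print(dempster_combine(traffic_evidence, ack_evidence))  # node_3's belief rises to ~0.70
```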

 

Tomandl, A.; Herrmann, D.; Federrath, H., "PADAVAN: Privacy-Aware Data Accumulation for Vehicular Ad-hoc Networks," Wireless and Mobile Computing, Networking and Communications (WiMob), 2014 IEEE 10th International Conference on, pp.487, 493, 8-10 Oct. 2014. doi: 10.1109/WiMOB.2014.6962215

Abstract: In this paper we introduce PADAVAN, a novel anonymous data collection scheme for Vehicular Ad Hoc Networks (VANETs). PADAVAN allows users to submit data anonymously to a data consumer while preventing adversaries from submitting large amounts of bogus data. PADAVAN is comprised of an n-times anonymous authentication scheme, mix cascades and various principles to protect the privacy of the submitted data itself. Furthermore, we evaluate the effectiveness of limiting an adversary to a fixed amount of messages.

Keywords: data privacy; telecommunication security; vehicular ad hoc networks; PADAVAN; VANET; anonymous authentication scheme; anonymous data collection scheme; data consumer; privacy-aware data accumulation; submitted data privacy protection; vehicular ad-hoc networks; Authentication; Data collection; Data privacy; Junctions; Privacy; Sensors; Vehicles; Anonymity; Data Collection; Privacy; Security; VANET; Vehicular Communication   (ID#:15-3962)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6962215&isnumber=6962120

 

Shin-Ming Cheng; Cheng-Han Ho; Shannon Chen; Shih-Hao Chang, "Distributed Anonymous Authentication In Heterogeneous Networks," Wireless Communications and Mobile Computing Conference (IWCMC), 2014 International, pp.505,510, 4-8 Aug. 2014. doi: 10.1109/IWCMC.2014.6906408

Abstract: Nowadays, the design of a secure access authentication protocol in heterogeneous networks achieving seamless roaming across radio access technologies for mobile users (MUs) is a major technical challenge. This paper proposes a Distributed Anonymous Authentication (DAA) protocol to resolve the problems of heavy signaling overhead and long signaling delay when authentication is executed in a centralized manner. By enrolling MUs and points of attachment (PoAs) as group members, the adopted group signature algorithms provide identity verification directly, without sharing secrets in advance, which significantly reduces signaling overhead. Moreover, MUs sign messages on behalf of the group, so that anonymity and unlinkability against PoAs are provided and privacy is thus preserved. Performance analysis confirms the advantages of DAA over existing solutions.

Keywords: message authentication; next generation networks; protocols; radio access networks; telecommunication security; telecommunication signaling; DAA protocol; adopted group signature algorithms; distributed anonymous authentication protocol; group members; heterogeneous networks; identity verification; mobile users; radio access technologies; seamless roaming; secure access authentication protocol; signaling delay; signaling overheads; Educational institutions; Handover; anonymous authentication; group signature; heterogeneous networks   (ID#:15-3963)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906408&isnumber=6906315

 

Ding Wang; Ping Wang; Jing Liu, "Improved Privacy-Preserving Authentication Scheme For Roaming Service In Mobile Networks," Wireless Communications and Networking Conference (WCNC), 2014 IEEE, pp.3136,3141, 6-9 April 2014. doi: 10.1109/WCNC.2014.6953015

Abstract: User authentication is an important security mechanism that allows mobile users to be granted access to roaming service offered by the foreign agent with assistance of the home agent in mobile networks. While security-related issues have been well studied, how to preserve user privacy in this type of protocols still remains an open problem. In this paper, we revisit the privacy-preserving two-factor authentication scheme presented by Li et al. at WCNC 2013. We show that, despite being armed with a formal security proof, this scheme actually cannot achieve the claimed feature of user anonymity and is insecure against offline password guessing attacks, and thus, it is not recommended for practical applications. Then, we figure out how to fix these identified drawbacks, and suggest an enhanced scheme with better security and reasonable efficiency. Further, we conjecture that under the non-tamper-resistant assumption of the smart cards, only symmetric-key techniques are intrinsically insufficient to attain user anonymity.

Keywords: cryptography; message authentication; mobile radio; telecommunication security; improved privacy-preserving two-factor authentication scheme; mobile networks; mobile users  nontamper-resistant assumption; offline password guessing attacks ;roaming service; security mechanism; security-related issues; smart cards; symmetric-key techniques; user anonymity; user authentication; Authentication; Mobile communication; Mobile computing; Protocols; Roaming; Smart cards; Mobile networks; Password authentication; Roaming service; Smart card; User anonymity   (ID#:15-3964)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6953015&isnumber=6951847

 

Jagdale, B.N.; Bakal, J.W., "Synergetic Cloaking Technique In Wireless Network For Location Privacy," Industrial and Information Systems (ICIIS), 2014 9th International Conference on, pp.1, 6, 15-17 Dec. 2014. doi: 10.1109/ICIINFS.2014.7036480

Abstract: Mobile users access location services from a location-based server, and while doing so, the user's privacy is at risk: the server has access to all details about the user, for example recently visited places and the type of information he accesses. We present a synergetic technique to safeguard the location privacy of users accessing location-based services via mobile devices. Mobile devices can form ad-hoc networks to hide a user's identity and position. The user who requires the service is the query originator, and the user who requests the service on behalf of the query originator is the query sender. The query originator selects the query sender with equal probability, which leads to anonymity in the network. The location revealed to the location service provider is a rectangle instead of an exact coordinate. In this paper we simulate the mobile network and show results for cloaking area sizes and performance against variation in the density of users.

Keywords: data privacy; mobile ad hoc networks; mobility management (mobile radio);probability; telecommunication security; telecommunication services; ad-hoc networks; cloaking area sizes;location based server; location privacy; location service provider ;location-based services;mobile devices;mobile network; mobile users; query originator; query sender; synergetic cloaking technique;user privacy; wireless network; Ad hoc networks; Cryptography; Databases; Educational institutions; Mobile communication; Privacy ;Servers; Cloaking; Collaboration; Location Privacy; Mobile Networks; Performance   (ID#:15-3965)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036480&isnumber=7036459
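
The rectangle-instead-of-coordinate idea corresponds to spatial k-anonymity, which can be sketched as computing the bounding box of the requester and its k-1 nearest collaborators. The coordinates and k below are invented for illustration.

```python
def cloaking_rectangle(user, peers, k: int):
    """Smallest axis-aligned rectangle covering the user and the k-1 nearest peers.

    Revealing this rectangle instead of an exact coordinate leaves the
    requester indistinguishable among at least k users inside it.
    """
    dist_sq = lambda p: (p[0] - user[0]) ** 2 + (p[1] - user[1]) ** 2
    group = [user] + sorted(peers, key=dist_sq)[: k - 1]
    xs, ys = zip(*group)
    return (min(xs), min(ys), max(xs), max(ys))

peers = [(2.0, 1.0), (1.5, 2.5), (8.0, 9.0), (2.2, 1.8)]
print(cloaking_rectangle((2.1, 1.4), peers, k=3))  # -> (2.0, 1.0, 2.2, 1.8)
```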

 

Ming Chen; Wenzhong Li; Zhuo Li; Sanglu Lu; Daoxu Chen, "Preserving Location Privacy Based On Distributed Cache Pushing," Wireless Communications and Networking Conference (WCNC), 2014 IEEE, pp.3456,3461, 6-9 April 2014. doi: 10.1109/WCNC.2014.6953141

Abstract: Location privacy preservation has become an important issue in providing location based services (LBSs). When the mobile users report their locations to the LBS server or the third-party servers, they risk the leak of their location information if such servers are compromised. To address this issue, we propose a Location Privacy Preservation Scheme (LPPS) based on distributed cache pushing which is based on Markov Chain. The LPPS deploys distributed cache proxies in the most frequently visited areas to store the most popular location-related data and pushes them to mobile users passing by. In the way that the mobile users receive the popular location-related data from the cache proxies without reporting their real locations, the users' location privacy is well preserved, which is shown to achieve k-anonymity. Extensive experiments illustrate that the proposed LPPS achieve decent service coverage ratio and cache hit ratio with low communication overhead.

Keywords: Markov processes; cache storage; data privacy; mobile computing; mobility management (mobile radio);Markov chain; distributed cache pushing ;location based service; location privacy; location privacy preservation scheme; mobile users; Computer architecture; Distributed databases; Markov processes; Mobile communication; Privacy; Servers; Trajectory   (ID#:15-3970)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6953141&isnumber=6951847

 

Ward, J.R.; Younis, M., "Examining the Effect of Wireless Sensor Network Synchronization on Base Station Anonymity," Military Communications Conference (MILCOM), 2014 IEEE, pp.204,209, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.39

Abstract: In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities with applications ranging from industrial control on a factory floor to reconnaissance of a hostile border. A typical WSN topology that applies to most applications allows sensors to act as data sources that forward their measurements to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary that desires to achieve the most impactful attack possible against a WSN. An adversary may employ traffic analysis techniques such as evidence theory to identify the BS based on network traffic flow even when the WSN implements conventional security mechanisms. This motivates a need for WSN operators to achieve improved BS anonymity to protect the identity, role, and location of the BS. Many traffic analysis countermeasures have been proposed in literature, but are typically evaluated based on data traffic only, without considering the effects of network synchronization on anonymity performance. In this paper we use evidence theory analysis to examine the effects of WSN synchronization on BS anonymity by studying two commonly used protocols, Reference Broadcast Synchronization (RBS) and Timing-synch Protocol for Sensor Networks (TPSN).

Keywords: protocols; synchronisation; telecommunication network topology; telecommunication security; telecommunication traffic; wireless sensor networks; BS anonymity improvement; RBS; TPSN; WSN topology; base station anonymity; data sources; evidence theory analysis; network traffic flow; reference broadcast synchronization; security mechanisms; timing-synch protocol for sensor networks; traffic analysis techniques; wireless sensor network synchronization; Protocols; Receivers; Sensors; Synchronization; Wireless communication; Wireless sensor networks; RBS; TPSN; anonymity; location privacy; synchronization; wireless sensor network   (ID#:15-3971)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956760&isnumber=6956719

 

Banerjee, D.; Bo Dong; Biswas, S.; Taghizadeh, M., "Privacy-Preserving Channel Access Using Blindfolded Packet Transmissions," Communication Systems and Networks (COMSNETS), 2014 Sixth International Conference on, pp.1,8, 6-10 Jan. 2014. doi: 10.1109/COMSNETS.2014.6734887

Abstract: This paper proposes a novel wireless MAC-layer approach towards achieving channel access anonymity. Nodes autonomously select periodic TDMA-like time-slots for channel access by employing a novel channel sensing strategy, and they do so without explicitly sharing any identity information with other nodes in the network. An add-on hardware module for the proposed channel sensing has been developed and the proposed protocol has been implemented in Tinyos-2.x. Extensive evaluation has been done on a test-bed consisting of Mica2 hardware, where we have studied the protocol's functionality and convergence characteristics. The functionality results collected at a sniffer node using RSSI traces validate the syntax and semantics of the protocol. Experimentally evaluated convergence characteristics from the Tinyos test-bed were also found to be satisfactory.

Keywords: data privacy; time division multiple access; wireless channels; wireless sensor networks;Mica2 hardware;RSSI;Tinyos-2x test-bed implementation; add-on hardware module; blindfolded packet transmission; channel sensing strategy; periodic TDMA-Iike time-slot; privacy-preserving channel access anonymity; protocol; wireless MAC-layer approach; Convergence; Cryptography;Equations;Google;Heating;Interference;Noise;Anonymity;MAC protocols; Privacy; TDMA   (ID#:15-3972)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6734887&isnumber=6734849

 

Umam, E.G.; Sriramb, E.G., "Robust Encryption Algorithm Based SHT In Wireless Sensor Networks," Information Communication and Embedded Systems (ICICES), 2014 International Conference on,  pp.1,5, 27-28 Feb. 2014.  doi: 10.1109/ICICES.2014.7034145

Abstract: In certain applications, the locations of events reported by a sensor network need to remain anonymous; that is, unauthorized observers must be unable to detect the origin of such events by analyzing the network traffic. The authors analyze two kinds of problems: communication overhead and computational load. In this paper, the authors give a new framework for modeling, analyzing, and evaluating anonymity in sensor networks. The novelty of the proposed framework is twofold: first, it introduces the notion of "interval indistinguishability" and provides a quantitative measure to model anonymity in wireless sensor networks; second, it maps source anonymity to a statistical problem. The authors show that existing approaches for designing statistically anonymous systems introduce correlation in real intervals while fake intervals are uncorrelated. They show how mapping source anonymity to sequential hypothesis testing with nuisance parameters turns the problem of exposing private source information into finding an appropriate data transformation that removes or minimizes the effect of the nuisance information, using a robust encryption algorithm. By doing so, the authors transform the problem of analyzing real-valued sample points into one over binary codes, which opens the door for coding theory to be incorporated into the study of anonymous networks. Existing work is unable to detect an unauthorized observer in network traffic; this work mainly focuses on enhancing source anonymity against correlation tests, since the main goal of source location privacy is to hide the existence of real events.

Keywords: cryptography; wireless sensor networks; SHT; communication overhead; sensor network anonymity; hypothesis testing; interval indistinguishability; location privacy; computational load; network traffic; robust encryption algorithm; Computer hacking; Correlation; Encryption; Privacy; Telecommunication traffic; Testing; Wireless sensor networks; anonymity; sequential hypothesis testing; nuisance parameters; coding theory; privacy; source location   (ID#:15-3973)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7034145&isnumber=7033740

 

Vijayan, A.; Thomas, T., "Anonymity, Unlinkability And Unobservability In Mobile Ad Hoc Networks," Communications and Signal Processing (ICCSP), 2014 International Conference on, pp. 1880-1884, 3-5 April 2014.

doi: 10.1109/ICCSP.2014.6950171

Abstract: Mobile ad hoc networks have the features of open medium, dynamic topology, cooperative algorithms, and lack of centralized monitoring. Because of these, mobile ad hoc networks are much more vulnerable to security attacks than wired networks. Various routing protocols have been developed to cope with the limitations imposed by ad hoc networks, but none of these routing schemes provides complete unlinkability and unobservability. In this paper we survey anonymous routing and secure communication in mobile ad hoc networks. Different routing protocols are analyzed based on public/private key pairs and cryptosystems; among them, USOR can well protect user privacy against both inside and outside attackers. It is a combination of a group signature scheme and an ID-based encryption scheme, which are run during the route discovery process. We implement USOR on ns2 and compare its performance with AODV.

Keywords: cooperative communication; mobile ad hoc networks; private key cryptography; public key cryptography; routing protocols; telecommunication network topology; telecommunication security; AODV; ID based encryption scheme; USOR; anonymous routing; centralized monitoring; cooperative algorithms; cryptosystems; dynamic topology; group signature scheme; mobile ad hoc networks; ns2; public-private key pairs; route discovery process; routing protocols; routing schemes; secure communications; security attacks; user privacy; wired networks; Ad hoc networks; Communication system security; Cryptography; Routing; Routing protocols; Wireless communication; Anonymity; routing protocols; security; unlinkability; unobservability   (ID#:15-3974)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950171&isnumber=6949766

 

Rahman, S.M.M.; Kamruzzaman, S.M.; Almogren, A.; Alelaiwi, A.; Alamri, A.; Alghamdi, A., "Anonymous and Secure Communication Protocol for Cognitive Radio Ad Hoc Networks," Multimedia (ISM), 2014 IEEE International Symposium on, pp. 393-398, 10-12 Dec. 2014.

doi: 10.1109/ISM.2014.85

Abstract: Cognitive radio (CR) networks are becoming an increasingly important part of the wireless networking landscape due to the ever-increasing scarcity of spectrum resources throughout the world. Nowadays CR media is becoming a popular wireless communication medium for disaster recovery communication networks. Although the operational aspects of CR are being explored vigorously, its security aspects have gained less attention from the research community. Existing research on CR networks mainly focuses on spectrum sensing and allocation, energy efficiency, high throughput, end-to-end delay, and other aspects of the network technology; very few works focus on security, and almost none on secure anonymous communication in CR networks (CRNs). This research article focuses on secure anonymous communication in CR ad hoc networks (CRANs). It proposes a secure anonymous routing scheme for CRANs based on pairing-based cryptography, which provides source node, destination node, and location anonymity, and protects against the different attacks that are feasible on CRANs.

Keywords: ad hoc networks; cognitive radio; cryptographic protocols; routing protocols; telecommunication security; CR ad hoc networks; CR media; CRAN; anonymous-secure communication protocol; cognitive radio ad hoc networks; destination node; disaster recovery communication network; end-to-end delay; energy efficiency; location anonymity; network throughput; operational aspect; pairing-based cryptography; secure anonymous routing; security aspect; source node; spectrum allocation; spectrum resource scarcity; spectrum sensing; wireless communication media; Ad hoc networks; Cognitive radio; Cryptography; Privacy; Protocols; Routing; anonymous routing; cognitive radio (CR) networks; pairing-based cryptography; secure communication   (ID#:15-3975)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7033057&isnumber=7032934

 

Tianyu Zhao; Chang Chen; Lingbo Wei; Mengke Yu, "An Anonymous Payment System To Protect The Privacy Of Electric Vehicles," Wireless Communications and Signal Processing (WCSP), 2014 Sixth International Conference on, pp. 1-6, 23-25 Oct. 2014.

doi: 10.1109/WCSP.2014.6992208

Abstract: An electric vehicle is an automobile powered by electrical energy stored in batteries. Due to frequent recharging, vehicles need to be connected to the recharging infrastructure while they are parked. This may disclose drivers' private information, such as their location, which drivers may want to keep secret. In this paper, we propose a scheme to enhance drivers' privacy using an anonymous credential technique and a Trusted Platform Module (TPM). We use the anonymous credential technique to achieve the anonymity of vehicles, such that drivers can anonymously and unlinkably recharge their vehicles. We add attributes to the credential, such as the type of battery in the vehicle, in case the prices of different batteries differ. We use the TPM to omit a blacklist, so that the company that offers the recharging service (the Energy Provider Company, EPC) does not need to conduct double-spending detection.

Keywords: battery powered vehicles; cryptography; data privacy; driver information systems; financial management; secondary cells; trusted computing; EPC; Energy Provider Company; TPM; anonymous credential technique; anonymous payment system; automobile; battery; double spending detection; driver privacy; electric vehicles; electrical energy; privacy protection; recharging infrastructure; recharging service; trusted platform module; Authentication; Batteries; Privacy; Protocols; Registers; Servers; Vehicles   (ID#:15-3976)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6992208&isnumber=6992003

 

Wiesner, K.; Feld, S.; Dorfmeister, F.; Linnhoff-Popien, C., "Right To Silence: Establishing Map-Based Silent Zones For Participatory Sensing," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2014 IEEE Ninth International Conference on, pp. 1-6, 21-24 April 2014.

doi: 10.1109/ISSNIP.2014.6827657

Abstract: Participatory sensing tries to create cost-effective, large-scale sensing systems by leveraging sensors embedded in mobile devices. One major challenge in these systems is to protect the users' privacy, since users will not contribute data if their privacy is jeopardized. Especially location data needs to be protected if it is likely to reveal information about the users' identities. A common solution is the blinding out approach that creates so-called ban zones in which location data is not published. Thereby, a user's important places, e.g., her home or workplace, can be concealed. However, ban zones of a fixed size are not able to guarantee any particular level of privacy. For instance, a ban zone that is large enough to conceal a user's home in a large city might be too small in a less populated area. For this reason, we propose an approach for dynamic map-based blinding out: The boundaries of our privacy zones, called Silent Zones, are determined in such a way that at least k buildings are located within this zone. Thus, our approach adapts to the habitat density and we can guarantee k-anonymity in terms of surrounding buildings. In this paper, we present two new algorithms for creating Silent Zones and evaluate their performance. Our results show that especially in worst case scenarios, i.e., in sparsely populated areas, our approach outperforms standard ban zones and guarantees the specified privacy level.

Keywords: cartography; data privacy; mobile computing; performance evaluation; security of data; wireless sensor networks; ban zones; dynamic map-based blinding out; embedded sensors; habitat density; k-anonymity; large-scale sensing systems; location data; map-based silent zones; mobile devices; mobile phones; participatory sensing; performance evaluation; privacy level; user privacy protection; Buildings; Cities and towns; Data privacy; Mobile communication; Mobile handsets; Privacy; Sensors   (ID#:15-3977)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827657&isnumber=6827478
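
The k-building guarantee at the heart of this paper is easy to illustrate. The sketch below grows a circular Silent Zone around a sensitive location until at least k buildings fall inside, so the zone naturally widens in sparsely built-up areas; the building list, growth step, and circular shape are illustrative assumptions (the paper works on real map data and presents two dedicated algorithms).

```python
import math

def silent_zone_radius(center, buildings, k, step=10.0, max_radius=5000.0):
    """Grow a circular zone around `center` until >= k buildings lie inside.

    center and buildings are (x, y) coordinates in metres; returns the
    radius that achieves k-anonymity in terms of surrounding buildings.
    """
    radius = step
    while radius <= max_radius:
        inside = sum(
            1 for (bx, by) in buildings
            if math.hypot(bx - center[0], by - center[1]) <= radius
        )
        if inside >= k:
            return radius
        radius += step
    raise ValueError("max_radius reached before k buildings were enclosed")

# Toy map: a sparsely built-up area forces a larger zone than a dense one.
dense = [(i * 15.0, j * 15.0) for i in range(10) for j in range(10)]
sparse = [(i * 120.0, j * 120.0) for i in range(10) for j in range(10)]
print(silent_zone_radius((75.0, 75.0), dense, k=20))
print(silent_zone_radius((75.0, 75.0), sparse, k=20))
```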


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Attribution (2014 Year in Review) Part 1

 

 
SoS Newsletter Logo

Attribution
(2014 Year in Review)
Part 1

 

Attribution of the source of an attack or the author of malware is a continuing problem in computer forensics. The research presented here, published in 2014, addresses a number of issues in each of these contexts.

 

 

Ya Zhang; Yi Wei; Jianbiao Ren, "Multi-touch Attribution in Online Advertising with Survival Theory," Data Mining (ICDM), 2014 IEEE International Conference on, pp. 687-696, 14-17 Dec. 2014. doi: 10.1109/ICDM.2014.130

Abstract: Multi-touch attribution, which distributes credit to all related advertisements based on their corresponding contributions, has recently become an important research topic in digital advertising. Traditionally, rule-based attribution models have been used in practice. The drawback of such rule-based models lies in the fact that the rules are not derived from the data but based only on simple intuition. With the ever-enhanced capability to track advertisements and users' interactions with them, data-driven multi-touch attribution models, which attempt to infer the contribution from user interaction data, have become an important research direction. We propose a new data-driven attribution model based on survival theory. By adopting a probabilistic framework, one key advantage of the proposed model is that it is able to remove the presentation biases inherent in most other attribution models. In addition to modeling attribution, the proposed model is also able to predict a user's 'conversion' probability. We validate the proposed method with a real-world data set obtained from an operational commercial advertising monitoring company. Experimental results show that the proposed method is promising in both conversion prediction and attribution.

Keywords: Internet; advertising data processing; data handling; probability; commercial advertising monitoring company; data-driven multitouch attribution models; digital advertising; online advertising; probabilistic framework; rule-based attribution models; survival theory; user conversion probability prediction; user interaction data; Advertising; Data models; Gold; Hazards; Hidden Markov models; Kernel; Predictive models; Multi-touch attribution; Online Advertising; Survival theory   (ID#:15-3978)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023386&isnumber=7023305
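
As a rough illustration of survival-style multi-touch attribution, the sketch below uses an additive hazard with exponential time decay: each ad touchpoint contributes hazard that fades before the conversion, and its credit is its normalized contribution. The decay rate and per-channel coefficients are invented for illustration; the paper learns such parameters from data within a probabilistic framework.

```python
import math
from collections import defaultdict

# Assumed (not learned) per-channel hazard coefficients and decay rate.
BETA = {"display": 0.2, "search": 0.9, "email": 0.4}
OMEGA = 0.05  # hazard decay per hour

def attribute(touchpoints, conversion_time):
    """Split conversion credit across touchpoints by decayed hazard share.

    touchpoints: list of (channel, time_in_hours) seen before conversion.
    Returns {channel: credit}, with credits summing to 1.
    """
    contrib = [
        (ch, BETA[ch] * math.exp(-OMEGA * (conversion_time - t)))
        for ch, t in touchpoints
    ]
    total = sum(c for _, c in contrib)
    credit = defaultdict(float)
    for ch, c in contrib:
        credit[ch] += c / total
    return dict(credit)

path = [("display", 0.0), ("email", 24.0), ("search", 47.0)]
print(attribute(path, conversion_time=48.0))
```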

 

Rivera, J.; Hare, F., "The Deployment Of Attribution Agnostic Cyberdefense Constructs And Internally Based Cyberthreat Countermeasures," Cyber Conflict (CyCon 2014), 2014 6th International Conference On, pp. 99-116, 3-6 June 2014. doi: 10.1109/CYCON.2014.6916398

Abstract: Conducting active cyberdefense requires the acceptance of a proactive framework that acknowledges the lack of predictable symmetries between malicious actors and their capabilities and intent. Unlike physical weapons such as firearms, naval vessels, and piloted aircraft (all of which risk physical exposure when engaged in direct combat), cyberweapons can be deployed (often without their victims' awareness) under the protection of the anonymity inherent in cyberspace. Furthermore, it is difficult in the cyber domain to determine with accuracy what a malicious actor may target and what type of cyberweapon the actor may wield. These aspects imply an advantage for malicious actors in cyberspace that is greater than for those in any other domain, as the malicious cyberactor, under current international constructs and norms, has the ability to choose the time, place, and weapon of engagement. This being said, if defenders are to successfully repel attempted intrusions, then they must conduct an active cyberdefense within a framework that proactively engages threatening actions independent of a requirement to achieve attribution. This paper proposes that private business, government personnel, and cyberdefenders must develop a threat identification framework that does not depend upon attribution of the malicious actor, i.e., an attribution agnostic cyberdefense construct. Furthermore, upon developing this framework, network defenders must deploy internally based cyberthreat countermeasures that take advantage of defensive network environmental variables and alter the calculus of nefarious individuals in cyberspace. Only by accomplishing these two objectives can the defenders of cyberspace actively combat malicious agents within the virtual realm.

Keywords: security of data; active cyberdefense; anonymity protection; attribution agnostic cyberdefense constructs; cyber domain; cyberdefenders; cyberweapons; government personnel; internally based cyberthreat countermeasures; international constructs; international norms; malicious actor; physical weapons; private business; proactive framework; threat identification framework; Computer security; Cyberspace; Educational institutions; Government; Internet; Law; active defense; attribution agnostic cyberdefense construct; internally based cyberthreat countermeasures   (ID#:15-3979)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916398&isnumber=6916383

 

Dirik, A.E.; Sencar, H.T.; Memon, N., "Analysis of Seam-Carving-Based Anonymization of Images Against PRNU Noise Pattern-Based Source Attribution," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 12, pp. 2277-2290, Dec. 2014. doi: 10.1109/TIFS.2014.2361200

Abstract: The availability of sophisticated source attribution techniques raises new concerns about the privacy and anonymity of photographers, activists, and human rights defenders who need to stay anonymous while spreading their images and videos. Recently, the use of seam-carving, a content-aware resizing method, has been proposed to anonymize the source camera of images against the well-known photoresponse nonuniformity (PRNU)-based source attribution technique. In this paper, we provide an analysis of the seam-carving-based source camera anonymization method, determining the limits of its performance by introducing two adversarial models. Our analysis shows that the effectiveness of the deanonymization attacks depends on various factors, including the parameters of the seam-carving method, the strength of the PRNU noise pattern of the camera, and an adversary's ability to identify uncarved image blocks in a seam-carved image. Our results show that, for the general case, there should not be many uncarved blocks larger than 50 × 50 pixels for successful anonymization of the source camera.

Keywords: image coding; image denoising; PRNU noise pattern-based source attribution; content-aware resizing method; deanonymization attacks; image anonymization; photoresponse nonuniformity; seam-carving method; seam-carving-based anonymization; source attribution techniques; Cameras; Correlation; Image quality; Noise; Videos; PRNU noise pattern; anonymization; counter-forensics; de-anonymization attacks; seam-carving; source attribution   (ID#:15-3980)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6914598&isnumber=6953163
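
The PRNU side of this analysis reduces to correlating an image's noise residual against a camera fingerprint. The sketch below shows that core test with synthetic data; the wavelet denoiser, the fingerprint estimation from many flat-field images, and the detection threshold used in real forensic pipelines are replaced here by simplified stand-ins.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
fingerprint = rng.normal(0, 1, (256, 256))        # stand-in PRNU pattern

# Residual of an image taken with this camera: fingerprint plus other noise.
residual_same = 0.1 * fingerprint + rng.normal(0, 1, (256, 256))
residual_other = rng.normal(0, 1, (256, 256))     # unrelated camera

THRESHOLD = 0.05  # illustrative decision threshold
for name, res in [("same camera", residual_same), ("other camera", residual_other)]:
    c = ncc(res, fingerprint)
    print(f"{name}: corr={c:.4f} -> {'match' if c > THRESHOLD else 'no match'}")
```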

 

Tennyson, M.F.; Mitropoulos, F.J., "Choosing a Profile Length in the SCAP Method of Source Code Authorship Attribution," SOUTHEASTCON 2014, IEEE, pp. 1-6, 13-16 March 2014. doi: 10.1109/SECON.2014.6950705

Abstract: Source code authorship attribution is the task of determining the author of source code whose author is not explicitly known. One specific method of source code authorship attribution that has been shown to be extremely effective is the SCAP method. This method, however, relies on a parameter L that has heretofore been quite nebulous. In the SCAP method, each candidate author's known work is represented as a profile of that author, where the parameter L defines the profile's maximum length. In this study, alternative approaches for selecting a value for L were investigated. Several alternative approaches were found to perform better than the baseline approach used in the SCAP method. The approach that performed the best was empirically shown to improve the performance from 91.0% to 97.2% measured as a percentage of documents correctly attributed using a data set consisting of 7,231 programs written in Java and C++.

Keywords: C++ language; Java; source code (software); C++ language; Java language; SCAP method; data set; profile length; source code authorship attribution; Frequency control; Frequency measurement; RNA; authorship attribution; information retrieval; plagiarism detection; software forensics   (ID#:15-3981)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950705&isnumber=6950640
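
For readers unfamiliar with SCAP: the method builds, for each candidate author, a profile of the L most frequent byte-level n-grams of their known code, and attributes a new document to the author whose profile shares the most n-grams with it (the Simplified Profile Intersection). The parameter L studied in this paper is exactly that profile length. A minimal sketch, with n, L, and the toy corpus invented for illustration:

```python
from collections import Counter

def ngrams(text, n=6):
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def profile(texts, L, n=6):
    """Top-L most frequent n-grams over an author's known documents."""
    counts = Counter()
    for t in texts:
        counts.update(ngrams(t, n))
    return {g for g, _ in counts.most_common(L)}

def attribute(unknown, author_texts, L=500, n=6):
    """Return the author whose profile intersects the unknown doc the most."""
    doc = set(ngrams(unknown, n))
    return max(
        author_texts,
        key=lambda a: len(profile(author_texts[a], L, n) & doc),
    )

# Hypothetical corpus of known programs per author.
corpus = {
    "alice": ["for(int i=0;i<n;i++){sum+=a[i];}", "while(x>0){x--;}"],
    "bob":   ["for (int i = 0; i < n; ++i) { sum += a[i]; }"],
}
print(attribute("for(int j=0;j<m;j++){p+=b[j];}", corpus))
```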

 

Shaobu Wang; Shuai Lu; Ning Zhou; Guang Lin; Elizondo, M.; Pai, M.A., "Dynamic-Feature Extraction, Attribution, and Reconstruction (DEAR) Method for Power System Model Reduction," Power Systems, IEEE Transactions on, vol. 29, no. 5, pp. 2049-2059, Sept. 2014. doi: 10.1109/TPWRS.2014.2301032

Abstract: In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest (i.e., study area) to reduce the computational cost associated with transient stability studies. This paper presents a method of deriving the reduced dynamic model of the external area based on dynamic response measurements. The method consists of three steps, namely dynamic-feature extraction, attribution, and reconstruction (DEAR). In this method, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step for matching the extracted dynamic features with the highest similarity, forming a suboptimal “basis” of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated with a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method yields better reduction ratio and response errors than the traditional coherency based reduction methods.

Keywords: IEEE standards; cost reduction; dynamic response; electric generators; feature extraction; power system dynamic stability; power system interconnection; power system transient stability; reduced order systems; DEAR Method; IEEE standard; characteristic generator state variable; computational cost reduction; dynamic feature extraction, attribution, and reconstruction method; dynamic response measurement; power system interconnection; power system model reduction; quasi-nonlinear reduced model; transient stability; Computational modeling; Feature extraction; Generators; Power system dynamics; Power system stability; Reduced order systems; Rotors; Dynamic response; feature extraction; model reduction; orthogonal decomposition; power systems   (ID#:15-3982)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6730699&isnumber=6879345
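
The three DEAR steps map naturally onto a few lines of linear algebra. The sketch below runs a toy version under invented dynamics: SVD for feature extraction, correlation for attributing features to characteristic generators, and least squares for reconstruction. Everything numeric here is synthetic; the real method operates on measured post-disturbance generator responses and leaves the network model unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for measured dynamics: 20 external generators driven
# by 3 latent oscillatory modes (rows = time samples, cols = generators).
t = np.linspace(0, 10, 400)
modes = np.stack([np.sin(1.3 * t),
                  np.exp(-0.2 * t) * np.cos(3.1 * t),
                  np.sin(0.4 * t)])
mix = rng.normal(size=(3, 20))
X = modes.T @ mix + 0.01 * rng.normal(size=(400, 20))

# 1) Feature extraction: dominant dynamic features via SVD.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 3
features = U[:, :r] * s[:r]                       # time courses of top modes

# 2) Attribution: for each feature, pick the most similar generator.
def best_match(f):
    sims = [abs(np.corrcoef(f, X[:, g])[0, 1]) for g in range(X.shape[1])]
    return int(np.argmax(sims))

characteristic = sorted({best_match(features[:, i]) for i in range(r)})

# 3) Reconstruction: express every generator as a linear combination
#    of the characteristic generators (least squares).
B = X[:, characteristic]
coeffs, *_ = np.linalg.lstsq(B, X, rcond=None)
err = np.linalg.norm(X - B @ coeffs) / np.linalg.norm(X)
print("characteristic generators:", characteristic, f"relative error: {err:.3e}")
```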

 

Pratanwanich, N.; Lio, P., "Who Wrote This? Textual Modeling with Authorship Attribution in Big Data," Data Mining Workshop (ICDMW), 2014 IEEE International Conference on, pp. 645-652, 14 Dec. 2014. doi: 10.1109/ICDMW.2014.140

Abstract: By representing large corpora with concise and meaningful elements, topic-based generative models aim to reduce the dimension of documents and understand their content. Those techniques originally analyze the words in documents, but their extensions now accommodate meta-data such as authorship information, which has proved useful for textual modeling. The importance of learning authorship lies in extracting author interests and assigning authors to anonymous texts. The Author-Topic (AT) model, an unsupervised learning technique, successfully exploits authorship information to model both documents and author interests using topic representations. However, the AT model assumes, as a simplification, that each author contributes equally to a multiple-author document. To overcome this limitation, we assume that authors make different degrees of contribution to a document by using a Dirichlet distribution. This automatically transforms the unsupervised AT model into the Supervised Author-Topic (SAT) model, which brings the novelty of authorship prediction on anonymous texts. The SAT model outperforms the AT model at identifying authors of documents written by either single or multiple authors, with a better Receiver Operating Characteristic (ROC) curve and a significantly higher Area Under Curve (AUC). The SAT model not only achieves performance competitive with state-of-the-art techniques, e.g., random forests, but also retains the characteristics of unsupervised models for information discovery, i.e., word distributions of topics, author interests, and author contributions.

Keywords: Big Data; meta data; text analysis; unsupervised learning; AUC; Big Data; Dirichlet distribution; ROC curve; SAT model; area under curve; author-topic model; authorship attribution; authorship learning; authorship prediction; dimension reduction; information discovery; meta-data; multiple-author documents; receiver operating characteristic curve; supervised author-topic model; textual modeling; topic representations; topic-based generative models; unsupervised AT model; unsupervised learning technique; Analytical models; Computational modeling; Data models; Mathematical model; Predictive models; Training; Vectors; Authorship attribution; Bayesian inference; High dimensional textual data; Information discovery; Probabilistic topic models   (ID#:15-3983)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7022657&isnumber=7022545
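
The key modeling change, unequal author contributions drawn from a Dirichlet distribution, can be shown in a tiny generative sketch. All counts, names, and hyperparameters below are invented; the paper infers these quantities from data rather than sampling them forward.

```python
import numpy as np

rng = np.random.default_rng(7)
K, V = 4, 12                                  # topics, vocabulary size
topics = rng.dirichlet(np.ones(V) * 0.5, K)   # word distribution per topic

# Per-author interest over topics (theta in an author-topic model).
authors = {a: rng.dirichlet(np.ones(K) * 0.3) for a in ["ann", "bo", "cy"]}

def generate(doc_authors, n_words=50, alpha=1.0):
    """Generate a document whose authors contribute unequally."""
    # SAT's departure from AT: contribution weights ~ Dirichlet, not uniform.
    weights = rng.dirichlet(np.ones(len(doc_authors)) * alpha)
    theta = sum(w * authors[a] for w, a in zip(weights, doc_authors))
    words = [rng.choice(V, p=topics[rng.choice(K, p=theta)])
             for _ in range(n_words)]
    return weights, words

w, doc = generate(["ann", "bo"])
print("author contributions:", dict(zip(["ann", "bo"], np.round(w, 2))))
```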

 

Marukatat, R.; Somkiadcharoen, R.; Nalintasnai, R.; Aramboonpong, T., "Authorship Attribution Analysis of Thai Online Messages," Information Science and Applications (ICISA), 2014 International Conference on, pp. 1-4, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847369

Abstract: This paper presents a framework to identify the authors of Thai online messages. The identification is based on 53 writing attributes, and the selected algorithms are a support vector machine (SVM) and a C4.5 decision tree. Experimental results indicate that the overall accuracies achieved by the SVM and the C4.5 were 79% and 75%, respectively. This difference was not statistically significant (at a 95% confidence interval). As for the performance of identifying individual authors, in some cases the SVM was clearly better than the C4.5, but there were also other cases where neither of them could distinguish one author from another.

Keywords: decision trees; natural language processing; support vector machines; C4.5 decision tree; SVM; Thai online messages; author identification; authorship attribution analysis; support vector machine; writing attributes; Accuracy; Decision trees; Kernel; Support vector machines; Training; Training data; Writing   (ID#:15-3984)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847369&isnumber=6847317

 

Okuno, S.; Asai, H.; Yamana, H., "A Challenge Of Authorship Identification For Ten-Thousand-Scale Microblog Users," Big Data (Big Data), 2014 IEEE International Conference on, pp. 52-54, 27-30 Oct. 2014. doi: 10.1109/BigData.2014.7004491

Abstract: Internet security issues require authorship identification for all kinds of Internet content; however, authorship identification for microblog users is much harder than for other documents because microblog texts are too short. Moreover, when the number of candidates becomes large, i.e., big data, identification takes a long time. Our proposed method solves these problems. The experimental results show that our method successfully identifies authorship with 53.2% precision out of 10,000 microblog users, in almost half the execution time of the previous method.

Keywords: Big Data; security of data; social networking (online); Internet security issues; authorship identification; big data; microblog texts; ten-thousand-scale microblog users; Big data; Blogs; Computers; Distance measurement; Internet; Security; Training; Twitter; authorship attribution; authorship detection; authorship identification; microblog   (ID#:15-3985)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004491&isnumber=7004197

 

Yuying Wang; Xingshe Zhou, "Spatio-Temporal Semantic Enhancements For Event Model Of Cyber-Physical Systems," Signal Processing, Communications and Computing (ICSPCC), 2014 IEEE International Conference on, pp. 813-818, 5-8 Aug. 2014. doi: 10.1109/ICSPCC.2014.6986310

Abstract: Newly emerging cyber-physical systems (CPS) discover events from multiple, distributed sources with multiple levels of detail and heterogeneous data formats, which may be difficult to compare and integrate, and therefore hard to combine into a determination for action. While existing efforts have mainly focused on investigating a uniform CPS event representation with spatio-temporal attributes, in this paper we propose a new event model with a two-layer structure: the Basic Event Model (BEM) and the Extended Information Set (EIS). A BEM can be extended with an EIS by a semantic adaptor for spatio-temporal and other attribution enhancement. In particular, we define event processing functions, such as event attribution extraction and composition determination, for CPS action triggering, exploiting the Complex Event Processing (CEP) engine Esper. Examples show that this event model provides several advantages in terms of extensibility, flexibility, and support for heterogeneity, and lays the foundation of event-based system design in CPS.

Keywords: embedded systems; programming language semantics; BEM; CEP engine Esper; CPS; CPS event representation; EIS; attribution enhancement; basic event model; complex event process; composition determination; cyber-physical systems; event attribution extraction; event process functions; extended information set; multilevel heterogeneous embedded system; semantic adaptor; spatio-temporal attributes; spatio-temporal semantic enhancements; Adaptation models; Computational modeling; Data models; Observers; Semantics; Sensor phenomena and characterization; Complex Event Process; Cyber-physical systems; event modeling; event semantic; spatio-temporal event   (ID#:15-3986)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6986310&isnumber=6986138
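
The two-layer structure lends itself to a compact sketch. Below, a BEM-like core is enriched by a semantic adaptor with a spatio-temporal EIS; the class and field names are illustrative guesses at the paper's structures, not its actual definitions, and the Esper-based event processing functions are omitted.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class BasicEvent:                      # BEM: minimal, source-agnostic core
    source: str
    kind: str
    payload: Any

@dataclass
class ExtendedEvent(BasicEvent):       # BEM plus EIS after adaptation
    info: Dict[str, Any] = field(default_factory=dict)

def spatiotemporal_adaptor(ev: BasicEvent, lat, lon, t) -> ExtendedEvent:
    """Semantic adaptor: enrich a basic event with spatio-temporal EIS."""
    return ExtendedEvent(ev.source, ev.kind, ev.payload,
                         info={"lat": lat, "lon": lon, "time": t})

e = spatiotemporal_adaptor(BasicEvent("sensor-7", "temp", 21.5),
                           34.05, -118.25, "2014-08-05T10:00:00Z")
print(e)
```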

 

Balakrishnan, R.; Parekh, R., "Learning to Predict Subject-Line Opens For Large-Scale Email Marketing," Big Data (Big Data), 2014 IEEE International Conference on, pp. 579-584, 27-30 Oct. 2014. doi: 10.1109/BigData.2014.7004277

Abstract: Billions of dollars of services and goods are sold through email marketing. Subject lines have a strong influence on the open rates of e-mails, as consumers often open e-mails based on the subject. Traditionally, e-mail subject lines have been compiled based on the best assessment of human editors. We propose a method to help the editors by predicting subject-line open rates, learning from past subject lines. The method derives different types of features from subject lines based on keywords, the performance of past subject lines, and syntax. Furthermore, we evaluate the contribution of individual subject-line keywords to overall open rates based on an iterative method, namely Attribution Scoring, and use this for improved predictions. A random-forest-based model is trained to combine these features to predict performance. We use a dataset of more than a hundred thousand different subject lines with many billions of impressions to train and test the method. The proposed method shows significant improvement in prediction accuracy over the baselines for both new and already used subject lines.

Keywords: electronic mail; learning (artificial intelligence); marketing data processing; attribution scoring iterative method; human editors; large-scale e-mail marketing; open e-mail rates; performance prediction accuracy improvement; random forest based model training; subject line performance; subject line syntax; subject-line keywords; subject-line open rate prediction learning; Accuracy; Business; Electronic mail; Feature extraction; Postal services; Predictive models; Weight measurement; deals; email; learning; subject   (ID#:15-3987)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004277&isnumber=7004197
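
The keyword-level idea, crediting each subject-line keyword with a share of the observed open rate and iterating, might look like the sketch below. The update rule, the normalization, and the toy data are all assumptions: the abstract only summarizes Attribution Scoring, so this is one plausible reading, not the authors' algorithm.

```python
# Toy history: (subject line, observed open rate).
HISTORY = [
    ("free shipping today", 0.21),
    ("free gift inside", 0.18),
    ("your order update", 0.09),
    ("gift card inside", 0.15),
]

def keyword_scores(history, iters=20):
    """Iteratively split each line's open rate among its keywords."""
    vocab = {w for line, _ in history for w in line.split()}
    score = {w: 0.1 for w in vocab}            # uniform initial credit
    for _ in range(iters):
        new = {w: [] for w in vocab}
        for line, rate in history:
            words = line.split()
            total = sum(score[w] for w in words)
            for w in words:
                # Each keyword receives credit proportional to its share.
                new[w].append(rate * score[w] / total)
        score = {w: sum(v) / len(v) for w, v in new.items() if v}
    return score

def predict(line, score):
    """Predict a new subject line's open rate from its keywords' credit."""
    words = [w for w in line.split() if w in score]
    return sum(score[w] for w in words) if words else None

s = keyword_scores(HISTORY)
print(sorted(s.items(), key=lambda kv: -kv[1])[:3])
print(predict("free gift card", s))
```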

 

Alsaleh, M.N.; Al-Shaer, E.A., "Security Configuration Analytics Using Video Games," Communications and Network Security (CNS), 2014 IEEE Conference on, pp. 256-264, 29-31 Oct. 2014. doi: 10.1109/CNS.2014.6997493

Abstract: Computing systems today have a large number of security configuration settings that enforce security properties. However, vulnerabilities and incorrect configuration increase the potential for attacks. Provable verification and simulation tools have been introduced to eliminate configuration conflicts and weaknesses, which can increase system robustness against attacks. Most of these tools require special knowledge in formal methods and precise specification for requirements in special languages, in addition to their excessive need for computing resources. Video games have been utilized by researchers to make educational software more attractive and engaging. Publishing these games for crowdsourcing can also stimulate competition between players and increase the game educational value. In this paper we introduce a game interface, called NetMaze, that represents the network configuration verification problem as a video game and allows for attack analysis. We aim to make the security analysis and hardening usable and accurately achievable, using the power of video games and the wisdom of crowdsourcing. Players can easily discover weaknesses in network configuration and investigate new attack scenarios. In addition, the gameplay scenarios can also be used to analyze and learn attack attribution considering human factors. In this paper, we present a provable mapping from the network configuration to 3D game objects.

Keywords: computer games; courseware; formal verification; human factors; security of data; specification languages; user interfaces; 3D game object; NetMaze; attack analysis; attack attribution; computing systems; configuration conflict; crowdsourcing; educational software; formal methods; game educational value; game interface; gameplay scenario; human factor; network configuration verification problem; provable mapping; provable verification; security analysis; security configuration analytics; security configuration settings; security property; simulation tool; special languages; system robustness; video games; vulnerability; Communication networks; Computational modeling; Conferences; Games; Network topology; Security; Topology   (ID#:15-3988)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6997493&isnumber=6997445

 

Xiong Xu; Yanfei Zhong; Liangpei Zhang, "Adaptive Subpixel Mapping Based on a Multiagent System for Remote-Sensing Imagery," Geoscience and Remote Sensing, IEEE Transactions on, vol. 52, no. 2, pp. 787-804, Feb. 2014. doi: 10.1109/TGRS.2013.2244095

Abstract: The existence of mixed pixels is a major problem in remote-sensing image classification. Although soft classification and spectral unmixing techniques can obtain the abundances of the different classes in a pixel to solve the mixed pixel problem, the subpixel spatial attribution of the pixel will still be unknown. The subpixel mapping technique can effectively solve this problem by providing a fine-resolution map of class labels from coarser spectrally unmixed fraction images. However, most traditional subpixel mapping algorithms treat all mixed pixels as an identical type, either boundary-mixed pixel or linear subpixel, leading to incomplete and inaccurate results. To improve the subpixel mapping accuracy, this paper proposes an adaptive subpixel mapping framework based on a multiagent system for remote-sensing imagery. In the proposed multiagent subpixel mapping framework, three kinds of agents, namely, feature detection agents, subpixel mapping agents, and decision agents, are designed to solve the subpixel mapping problem. Experiments with artificial images and synthetic remote-sensing images were performed to evaluate the performance of the proposed subpixel mapping algorithm in comparison with the hard classification method and other subpixel mapping algorithms: subpixel mapping based on a back-propagation neural network and the spatial attraction model. The experimental results indicate that the proposed algorithm outperforms the other two subpixel mapping algorithms in reconstructing the different structures in mixed pixels.

Keywords: geophysical image processing; image classification; multi-agent systems; neural nets; remote sensing; adaptive subpixel mapping framework; adaptive subpixel mapping technique; artificial images; back-propagation neural network; boundary-mixed pixel; class abundance; class labels; coarser spectrally unmixed fraction images; decision agents; feature detection agent kinds; fine-resolution map; hard classification method; identical mixed pixel type; linear subpixel; mixed pixel problem; mixed pixel structure reconstruction; multiagent subpixel mapping framework; multiagent system; remote-sensing image classification; remote-sensing imagery; soft classification; spatial attraction model; spectral unmixing techniques; subpixel mapping accuracy; subpixel mapping agents; subpixel mapping algorithm performance; subpixel mapping problem; subpixel spatial attribution; synthetic remote-sensing images; traditional subpixel mapping algorithms; Algorithm design and analysis; Feature extraction; Image reconstruction; Multi-agent systems; Optimization; Remote sensing; Multiagent system; remote sensing; resolution enhancement; subpixel mapping; super-resolution mapping   (ID#:15-3989)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6479297&isnumber=6680673

 

Liu, J.N.K.; Yanxing Hu; You, J.J.; Yulin He, "An Advancing Investigation On Reduct And Consistency For Decision Tables In Variable Precision Rough Set Models," Fuzzy Systems (FUZZ-IEEE), 2014 IEEE International Conference on, pp. 1496-1503, 6-11 July 2014. doi: 10.1109/FUZZ-IEEE.2014.6891766

Abstract: The Variable Precision Rough Set (VPRS) model is one of the most important extensions of classical Rough Set (RS) theory. It employs a majority inclusion relation mechanism in order to make the classical RS model more fault tolerant, thereby improving the generalization of the model. This paper can be viewed as an extension of previous investigations on the attribute reduction problem in the VPRS model. In our investigation, we illustrate with examples that the previously proposed reduct definitions may spoil the hidden classification ability of a knowledge system by ignoring certain essential attributes in some circumstances. Consequently, by proposing a new β-consistent notion, we analyze the relationship between the structure of the Decision Table (DT) and different definitions of reduct in the VPRS model. We then give a new notion of β-complement reduct that avoids the defects of the reduct notions defined in the previous literature. We also supply a method to obtain the β-complement reduct using a decision table splitting algorithm, and finally demonstrate the feasibility of our approach with sample instances.

Keywords: data integrity; data reduction; decision tables; pattern classification; rough set theory; β-complement reduct; β-consistent notion; VPRS model; attribute reduction problem; classical RS model; classical rough set theory; decision table splitting algorithm; decision table structures; hidden classification ability; majority inclusion relation mechanism; variable precision rough set model; Analytical models; Computational modeling; Educational institutions; Electronic mail; Fault tolerance; Fault tolerant systems; Mathematical model   (ID#:15-3990)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6891766&isnumber=6891523
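
The β-machinery in VPRS boils down to inclusion degrees of condition classes in decision classes. Below is a minimal sketch of checking the β-consistency of a decision table, with the table and β invented for illustration; the paper's β-complement reduct and table-splitting algorithm go well beyond this check.

```python
from collections import defaultdict

# Decision table: rows = (condition attribute values, decision value).
TABLE = [
    (("sunny", "hot"), "no"),
    (("sunny", "hot"), "no"),
    (("sunny", "hot"), "yes"),   # minority decision in its condition class
    (("rain", "mild"), "yes"),
    (("rain", "mild"), "yes"),
]

def beta_consistent(table, beta):
    """True if every condition class has a majority decision >= beta.

    beta in (0.5, 1.0] is the VPRS majority-inclusion threshold.
    """
    classes = defaultdict(list)
    for cond, dec in table:
        classes[cond].append(dec)
    for cond, decs in classes.items():
        best = max(decs.count(d) for d in set(decs)) / len(decs)
        if best < beta:
            return False
    return True

print(beta_consistent(TABLE, beta=0.6))   # True: a 2/3 majority suffices
print(beta_consistent(TABLE, beta=0.8))   # False: the sunny/hot class fails
```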

 

Hauger, W.K.; Olivier, M.S., "The Role Of Triggers In Database Forensics," Information Security for South Africa (ISSA), 2014, pp. 1-7, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950506

Abstract: An aspect of database forensics that has not received much attention in the academic research community yet is the presence of database triggers. Database triggers and their implementations have not yet been thoroughly analysed to establish what possible impact they could have on digital forensic analysis methods and processes. Conventional database triggers are defined to perform automatic actions based on changes in the database. These changes can be on the data level or the data definition level. Digital forensic investigators might thus feel that database triggers do not have an impact on their work. They are simply interrogating the data and metadata without making any changes. This paper attempts to establish if the presence of triggers in a database could potentially disrupt, manipulate or even thwart forensic investigations. The database triggers as defined in the SQL standard were studied together with a number of database trigger implementations. This was done in order to establish what aspects might have an impact on digital forensic analysis. It is demonstrated in this paper that some of the current database forensic analysis methods are impacted by the possible presence of certain types of triggers in a database. Furthermore, it finds that the forensic interpretation and attribution processes should be extended to include the handling and analysis of database triggers if they are present in a database.

Keywords: SQL; digital forensics; meta data; SQL standard; attribution processes; data definition level; database forensics; database trigger analysis; database trigger handling; database triggers; digital forensic analysis methods; forensic interpretation; metadata; Databases; Dictionaries; Forensics; Irrigation; Monitoring; Reliability; database forensics; database triggers; digital forensic analysis; methods; processes   (ID#:15-3991)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950506&isnumber=6950479
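
The forensic concern is easy to reproduce: a trigger can silently change data in ways that the user's own statements never requested, so interpretation and attribution must account for it. A minimal SQLite illustration (the trigger, tables, and scenario are invented; the paper analyses triggers as defined in the SQL standard across several implementations):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE audit (account INTEGER, old REAL, new REAL);

    -- A conventional trigger: fires automatically on data-level changes.
    CREATE TRIGGER log_update AFTER UPDATE ON accounts
    BEGIN
        INSERT INTO audit VALUES (OLD.id, OLD.balance, NEW.balance);
        -- A malicious trigger could instead rewrite other rows here,
        -- producing changes no user statement directly requested.
    END;

    INSERT INTO accounts VALUES (1, 100.0);
    UPDATE accounts SET balance = 250.0 WHERE id = 1;
""")

# The single UPDATE a user issued explains only part of what happened:
print(con.execute("SELECT * FROM audit").fetchall())   # [(1, 100.0, 250.0)]
```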

 

Jantsch, A.; Tammemae, K., "A Framework Of Awareness For Artificial Subjects," Hardware/Software Codesign and System Synthesis (CODES+ISSS), 2014 International Conference on, pp. 1-3, 12-17 Oct. 2014. doi: 10.1145/2656075.2661644

Abstract: We review the concepts of environment-and self-models, semantic interpretation, semantic attribution, history, goals and expectations, prediction, and self-inspection, how they contribute to awareness and self-awareness, and how they contribute to improved robustness and sensibility of behavior. Researchers have for some time realized that a sense of “awareness” of many embedded systems' own situation is a facilitator for robust and dependable behaviour even under radical environmental changes and drastically diminished capabilities. This insight has recently led to a proliferation of work on self-awareness and other system properties such as self-organization, self-configuration, self-optimization, self-protection, self-healing, etc., which are sometimes subsumed under the term “self-*”.

Keywords: artificial intelligence; embedded systems; fault tolerant computing; optimisation; artificial subject awareness; embedded systems; environment model; self-awareness; self-healing; self-model; self-optimization; semantic attribution; semantic interpretation; Educational institutions; Engines; History; Monitoring; Predictive models; Robustness; Semantics   (ID#:15-3992)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6971836&isnumber=6971816

 

Lin Chen; Lu Zhou; Chunxue Liu; Quan Sun; Xiaobo Lu, "Occlusive Vehicle Tracking Via Processing Blocks In Markov Random Field," Progress in Informatics and Computing (PIC), 2014 International Conference on, pp. 294-298, 16-18 May 2014. doi: 10.1109/PIC.2014.6972344

Abstract: The technology of vehicle video detection and tracking has played an important role in the ITS (Intelligent Transportation Systems) field in recent years. The occlusion phenomenon among vehicles is one of the most difficult problems in vehicle tracking. In order to handle occlusion, this paper proposes an effective solution that applies a Markov Random Field (MRF) to the traffic images. The contour of the vehicle is first detected using background subtraction; then a number of blocks carrying the vehicle's texture and motion information are filled inside each vehicle. We extract several kinds of information from each block for the subsequent tracking. For each occlusive block, two groups of clique functions in the MRF model are defined, representing spatial correlation and motion coherence respectively. By calculating each occlusive block's total energy function, we solve the attribution problem of occlusive blocks. The experimental results show that our method can handle occlusion problems effectively and track each vehicle continuously.

Keywords: Markov processes; image motion analysis; image texture; intelligent transportation systems; object detection; object tracking; video signal processing; ITS; MRF model; Markov random field; attribution problem; background subtraction; clique functions; information extraction; intelligent transportation systems; motion coherence; occlusion handling; occlusion phenomenon; occlusive block total energy function; occlusive vehicle tracking; processing blocks; spatial correlation; traffic images; vehicle contour; vehicle motion information; vehicle texture information; vehicle video detection; Image resolution; Markov random fields; Robustness; Tracking; Vectors; Vehicle detection; Vehicles; Markov Random Field (MRF); occlusion; vehicle detection; vehicle tracking   (ID#:15-3993)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6972344&isnumber=6972283
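
The attribution step can be pictured as assigning each occlusive block to the vehicle that minimizes a two-term energy. The sketch below uses invented clique functions (Euclidean distance for spatial correlation, velocity mismatch for motion coherence), an assumed weight, and toy data; the paper defines its own cliques over the MRF.

```python
import numpy as np

VEHICLES = {                       # per-vehicle centroid and velocity
    "A": {"pos": np.array([10.0, 5.0]), "vel": np.array([2.0, 0.0])},
    "B": {"pos": np.array([14.0, 6.0]), "vel": np.array([-1.5, 0.2])},
}
LAMBDA = 0.5                       # assumed weight between the two cliques

def energy(block_pos, block_vel, vehicle):
    """Total energy = spatial clique + weighted motion-coherence clique."""
    e_spatial = np.linalg.norm(block_pos - vehicle["pos"])
    e_motion = np.linalg.norm(block_vel - vehicle["vel"])
    return e_spatial + LAMBDA * e_motion

def attribute(block_pos, block_vel):
    """Assign an occlusive block to the vehicle with minimal total energy."""
    return min(VEHICLES, key=lambda v: energy(block_pos, block_vel, VEHICLES[v]))

print(attribute(np.array([11.0, 5.2]), np.array([1.8, 0.1])))   # -> 'A'
```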


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Attribution (2014 Year in Review) Part 2

 

 
SoS Newsletter Logo

Attribution
(2014 Year in Review)
Part 2

 

Attribution of the source of an attack or the author of malware is a continuing problem in computer forensics. The research presented here, published in 2014, addresses a number of issues in each of these contexts.

 

Yun Shen; Thonnard, O., "MR-TRIAGE: Scalable Multi-Criteria Clustering For Big Data Security Intelligence Applications," Big Data (Big Data), 2014 IEEE International Conference on, pp. 627-635, 27-30 Oct. 2014. doi: 10.1109/BigData.2014.7004285

Abstract: Security companies have recently realised that mining massive amounts of security data can help generate actionable intelligence and improve their understanding of Internet attacks. In particular, attack attribution and situational understanding are considered critical aspects of effectively dealing with emerging, increasingly sophisticated Internet attacks. This requires highly scalable analysis tools to help analysts classify, correlate, and prioritise security events, depending on their likely impact and threat level. However, this security data mining process typically involves a considerable number of features interacting in non-obvious ways, which makes it inherently complex. To deal with this challenge, we introduce MR-TRIAGE, a set of distributed algorithms built on MapReduce that can perform scalable multi-criteria data clustering on large security data sets and identify complex relationships hidden in massive datasets. The MR-TRIAGE workflow consists of a scalable data summarisation followed by scalable graph clustering algorithms into which we integrate multi-criteria evaluation techniques. The theoretical computational complexity of the proposed parallel algorithms is discussed and analysed. The experimental results demonstrate that the algorithms scale well and efficiently process large security datasets on commodity hardware. Our approach can effectively cluster any type of security events (e.g., spam emails, spear-phishing attacks, etc.) that share at least some commonalities among a number of predefined features.

Keywords: Big Data; computer crime; data mining; graph theory; parallel algorithms; pattern clustering; Big Data security intelligence applications; Internet attacks; MR-TRIAGE workflow; MapReduce; attack attribution; commodity hardware; computational complexity; distributed algorithms; large security data sets; large security datasets; multicriteria evaluation techniques; parallel algorithms; scalable data summarisation; scalable graph clustering algorithms; scalable multicriteria data clustering; security companies; security data mining; security events; situational understanding; threat level; Algorithm design and analysis; Clustering algorithms; Data mining; Electronic mail; Open wireless architecture; Prototypes; Security   (ID#:15-3994)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004285&isnumber=7004197
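
The multi-criteria step, aggregating per-feature similarities before graph clustering, can be shown compactly. The sketch below computes a weighted aggregate similarity over several event features and links events above a threshold into connected components; the features, weights, and threshold are invented, and this single-machine toy stands in for the paper's MapReduce pipeline and its more sophisticated aggregation.

```python
import itertools

EVENTS = {   # toy spam events with three features
    1: {"subject": "cheap meds", "sender_net": "10.1.0", "lang": "en"},
    2: {"subject": "cheap meds", "sender_net": "10.1.0", "lang": "en"},
    3: {"subject": "win prize",  "sender_net": "10.1.0", "lang": "en"},
    4: {"subject": "hello",      "sender_net": "172.16.9", "lang": "de"},
}
WEIGHTS = {"subject": 0.5, "sender_net": 0.3, "lang": 0.2}  # assumed
THRESHOLD = 0.45

def similarity(a, b):
    """Weighted multi-criteria similarity over exact feature matches."""
    return sum(w for f, w in WEIGHTS.items() if EVENTS[a][f] == EVENTS[b][f])

def cluster():
    """Link events whose aggregate similarity exceeds the threshold."""
    parent = {e: e for e in EVENTS}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for a, b in itertools.combinations(EVENTS, 2):
        if similarity(a, b) >= THRESHOLD:
            parent[find(a)] = find(b)
    groups = {}
    for e in EVENTS:
        groups.setdefault(find(e), []).append(e)
    return list(groups.values())

print(cluster())   # events 1-3 share infrastructure; event 4 stands alone
```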

 

Jing Li; Ming Chen, "On-Road Multiple Obstacles Detection in Dynamical Background," Intelligent Human-Machine Systems and Cybernetics (IHMSC), 2014 Sixth International Conference on, vol. 1, pp. 102-105, 26-27 Aug. 2014. doi: 10.1109/IHMSC.2014.33

Abstract: In this paper, we focus on both road vehicle and pedestrian detection, namely obstacle detection, and propose a new obstacle detection and classification technique for dynamic backgrounds. Obstacle detection is based on inverse perspective mapping (IPM) and homography; obstacle classification is based on a fuzzy neural network. The estimation of the vanishing point relies on a feature extraction strategy, which segments the lane markings of the images by combining histogram-based segmentation with temporal filtering. The vanishing point of each image is then stabilized by means of temporal filtering over the estimates from previous images. The IPM image is computed based on the stabilized vanishing point. The method exploits the geometrical relations between the elements in the scene so that obstacles can be detected. The estimated homography of the road plane between successive images is used for image alignment. A new fuzzy decision fusion method with fuzzy attribution for obstacle detection and classification is described. The fuzzy decision function modifies its parameters with an auto-adapted algorithm to obtain better classification probability. It is shown that the method achieves better classification results.

Keywords: fuzzy neural nets; image classification; object detection; pedestrians; IPM image; auto-adapted algorithm; dynamical background; feature extraction strategy; fuzzy attribution; fuzzy decision function; fuzzy decision fusion method; fuzzy neural network; histogram-based segmentation; homography; image alignment; inverse perspective mapping; lane markings; obstacle classification probability; on-road multiple obstacle detection; pedestrians detection; road plane; road vehicle; stabilized vanishing point; temporal filtering; Cameras; Computer vision; Feature extraction; Fuzzy neural networks; Radar; Roads; Vehicles; Inverse perspective mapping; fuzzy neural network; homography; image alignment   (ID#:15-3995)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6917316&isnumber=6917283

 

André, N.S.; Louchet, H.; Habel, K.; Richter, A., "Analytical Formulation for SNR Prediction in DMDD OFDM-Based Access Systems," Photonics Technology Letters, IEEE, vol. 26, no. 12, pp. 1255-1258, June 15, 2014. doi: 10.1109/LPT.2014.2320825

Abstract: In multicarrier direct modulation direct detection systems, interaction between laser chirp and fiber group velocity dispersion induces subcarrier-to-subcarrier intermixing interferences (SSII) after detection. Such SSII become a major impairment in orthogonal frequency division multiplexing-based access systems, where a high modulation index, leading to large chirp, is required to maximize the system power budget. In this letter, we present and experimentally verify an analytical formulation to predict the level of signal and SSII and estimate the signal to noise ratio of each subcarrier, enabling improved bit-and-power loading and subcarrier attribution. The reported model is compact, and only requires the knowledge of basic link characteristics and laser parameters that can easily be measured.

Keywords: OFDM modulation; chirp modulation; optical fibre communication; optical fibre dispersion; DMDD OFDM-based access system; SNR prediction; SSII; fiber group velocity dispersion; high modulation index; improved bit-and-power loading; laser chirp; multicarrier direct modulation direct detection system; orthogonal frequency division multiplexing-based access system; subcarrier attribution; subcarrier-to-subcarrier intermixing interference; Chirp; Frequency modulation; Laser modes; OFDM; Optical fibers; Signal to noise ratio; Chirp; OFDM; chromatic dispersion; intensity modulation; optical fiber communication   (ID#:15-3996)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6807719&isnumber=6814330

 

Khosmood, F.; Nico, P.L.; Woolery, J., "User Identification Through Command History Analysis," Computational Intelligence in Cyber Security (CICS), 2014 IEEE Symposium on, pp. 1-7, 9-12 Dec. 2014. doi: 10.1109/CICYBS.2014.7013363

Abstract: As any veteran of the editor wars can attest, Unix users can be fiercely and irrationally attached to the commands they use and the manner in which they use them. In this work, we investigate the problem of identifying users out of a large set of candidates (25-97) through their command-line histories. Using standard algorithms and feature sets inspired by natural language authorship attribution literature, we demonstrate conclusively that individual users can be identified with a high degree of accuracy through their command-line behavior. Further, we report on the best performing feature combinations, from the many thousands that are possible, both in terms of accuracy and generality. We validate our work by experimenting on three user corpora comprising data gathered over three decades at three distinct locations. These are the Greenberg user profile corpus (168 users), Schonlau masquerading corpus (50 users) and Cal Poly command history corpus (97 users). The first two are well known corpora published in 1991 and 2001 respectively. The last is developed by the authors in a year-long study in 2014 and represents the most recent corpus of its kind. For a 50 user configuration, we find feature sets that can successfully identify users with over 90% accuracy on the Cal Poly, Greenberg and one variant of the Schonlau corpus, and over 87% on the other Schonlau variant.

Keywords: Unix; information analysis; learning (artificial intelligence); natural language processing; Cal Poly command history corpus; Schonlau corpus; Schonlau masquerading corpus; Schonlau variant; Unix user; command history analysis; command-line behavior; command-line history; editor war; feature set; natural language authorship attribution literature; standard algorithm; user configuration; user corpora; user identification; user profile corpus; Accuracy; Computer science; Decision trees; Entropy; Feature extraction; History; Semantics   (ID#:15-3997)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013363&isnumber=7013356
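
A reduced version of this experiment fits in a few lines: build a per-user frequency profile of command tokens and attribute an unseen session to the nearest profile by cosine similarity. The paper's feature sets are far richer (drawn from authorship attribution literature) and its corpora are real; everything below is toy data.

```python
import math
from collections import Counter

HISTORIES = {   # toy per-user training command streams
    "u1": "ls cd ls vim make ./a.out vim make git commit git push".split(),
    "u2": "emacs gcc gdb emacs gcc ./test emacs git diff".split(),
    "u3": "ls ls ls cat grep awk sed cat grep ls".split(),
}

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def identify(session):
    """Attribute a command session to the user with the closest profile."""
    probe = Counter(session)
    profiles = {u: Counter(h) for u, h in HISTORIES.items()}
    return max(profiles, key=lambda u: cosine(profiles[u], probe))

print(identify("vim make git status git push ls".split()))   # -> 'u1'
```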

 

Skoberne, N.; Maennel, O.; Phillips, I.; Bush, R.; Zorz, J.; Ciglaric, M., "IPv4 Address Sharing Mechanism Classification and Tradeoff Analysis," Networking, IEEE/ACM Transactions on, vol. 22, no. 2, pp. 391-404, April 2014. doi: 10.1109/TNET.2013.2256147

Abstract: The growth of the Internet has made IPv4 addresses a scarce resource. Due to slow IPv6 deployment, IANA-level IPv4 address exhaustion was reached before the world could transition to an IPv6-only Internet. The continuing need for IPv4 reachability will only be supported by IPv4 address sharing. This paper reviews ISP-level address sharing mechanisms, which allow Internet service providers to connect multiple customers who share a single IPv4 address. Some mechanisms come with severe and unpredicted consequences, and all of them come with tradeoffs. We propose a novel classification, which we apply to existing mechanisms such as NAT444 and DS-Lite and proposals such as 4rd, MAP, etc. Our tradeoff analysis reveals insights into many problems including: abuse attribution, performance degradation, address and port usage efficiency, direct intercustomer communication, and availability.

Keywords: IP networks; Internet; DS-Lite; IANA-level IPv4 address exhaustion; IPv4 address sharing mechanism classification; IPv4 reachability; IPv6 deployment; IPv6-only Internet; ISP-level address sharing mechanisms; Internet service providers; NAT444; abuse attribution; address efficiency; direct intercustomer communication; performance degradation; port usage efficiency; Address family translation; IPv4 address sharing; IPv6 transition; address plus port (A+P); carrier grade NAT (CGN); network address translation (NAT)  (ID#:15-3998)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6504560&isnumber=6799946

 

Pandey, A.K.; Agrawal, C.P., "Analytical Network Process based Model To Estimate The Quality Of Software Components," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp. 678-682, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781361

Abstract: Software components are software units designed to interact with other independently developed software components. These components are assembled by third parties into software applications. The success of the final software application largely depends upon the selection of appropriate, easy-to-fit components according to the needs of the customer. It is a primary requirement to evaluate the quality of components before using them in the final software application system. All quality characteristics may not be of the same significance for a particular software application in a specific domain. Therefore, it is necessary to identify only those characteristics/sub-characteristics which may have higher importance than the others. The Analytical Network Process (ANP) is used to solve decision problems where the attributes of decision parameters form dependency networks. The objective of this paper is to propose an ANP-based model to prioritize the characteristics/sub-characteristics of quality and to estimate the numeric value of software quality.

Keywords: analytic hierarchy process; decision theory; object-oriented programming; software quality; ANP based model; analytical network process based model; decision parameter attribution; decision problem; dependency networks; final software application system; software component quality estimation; software quality numeric value estimation; software units; Interoperability; Measurement; Software reliability; Stability analysis; Usability; ANP; Software component; prioritization and software application; quality   (ID#:15-3998)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781361&isnumber=6781240

 

Biswas, A.R.; Giaffreda, R., "IoT and Cloud Convergence: Opportunities And Challenges," Internet of Things (WF-IoT), 2014 IEEE World Forum on, pp.375, 376, 6-8 March 2014. doi: 10.1109/WF-IoT.2014.6803194

Abstract: The success of the IoT world requires service provision characterized by ubiquity, reliability, high performance, efficiency, and scalability. To accomplish this, the future business and research vision is to merge the Cloud Computing and IoT concepts, i.e., enable an “Everything as a Service” model: specifically, a Cloud ecosystem, encompassing novel functionality and cognitive-IoT capabilities, will be provided. Hence the paper describes an innovative IoT-centric Cloud smart infrastructure addressing individual IoT and Cloud Computing challenges.

Keywords: Internet of Things; cloud computing; Internet of Things; IoT centric cloud smart infrastructure; cloud computing; cloud convergence; cloud ecosystem; cognitive-IoT capabilities; everything as a service model; Cloud computing; Convergence; Data handling; Data storage systems; Information management; Reliability; Cloud Computing; Convergence; Internet of Things   (ID#:15-3999)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803194&isnumber=6803102

 

Watney, M., "Challenges Pertaining To Cyber War Under International Law," Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on, pp. 1,5, April 29 2014-May 1 2014. doi: 10.1109/CyberSec.2014.6913962

Abstract: State-level intrusion in the cyberspace of another country seriously threatens a state's peace and security. Consequently many types of cyberspace intrusion are being referred to as cyber war with scant regard to the legal position under international law. This is but one of the challenges facing state-level cyber intrusion. The current rules of international law prohibit certain types of intrusion. However, international law does not define which intrusions fall within the prohibited category nor when the threshold of intrusion is surpassed. International lawyers have to determine the type of intrusion and threshold on a case-by-case basis. The Tallinn Manual may serve as a guideline in this assessment, but determination of the type of intrusion and attribution to a specific state is not easily established. The current rules of international law do not prohibit all intrusion which at state level may be highly invasive and destructive. Unrestrained cyber intrusion may result in cyberspace becoming a battle space in which states with strong cyber abilities dominate cyberspace, resulting in resentment and fear among other states. The latter may be prevented on an international level by involving all states in an equal and transparent manner in cyberspace governance.

Keywords: law; security of data; Tallinn Manual; cyber war; cyberspace governance; cyberspace intrusion; international law; legal position; state-level cyber intrusion; Computer crime; Cyberspace; Force; Law; Manuals; Cyber war; Estonia; Stuxnet; challenges; cyberspace governance; cyberspace state-level intrusion; international law   (ID#:15-4000)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913962&isnumber=6913961

 

Honghui Dong; Xiaoqing Ding; Mingchao Wu; Yan Shi; Limin Jia; Yong Qin; Lianyu Chu, "Urban Traffic Commuting Analysis Based On Mobile Phone Data," Intelligent Transportation Systems (ITSC), 2014 IEEE 17th International Conference on, pp. 611, 616, 8-11 Oct. 2014. doi: 10.1109/ITSC.2014.6957757

Abstract: With the development of urban traffic planning and management, analyzing and estimating origin-destination (OD) data in the city is a highly considerable issue. The traditional method of acquiring OD information usually uses household surveys, which are inefficient and expensive. This paper proposes a new methodology that uses mobile phone data to analyze the mechanisms of trip generation and trip attraction and the OD information. The mobile phone data acquisition is introduced, and a pilot study is implemented on Beijing by using the new method; much important traffic information can be extracted from the mobile phone data. We use the K-means clustering algorithm to divide the traffic zones. The attribution of each traffic zone is identified using the mobile phone data. Then the OD distribution and the commuting travel are analyzed. At last, an experiment analyzing the "traffic tide phenomenon" in Beijing is done to verify the applicability of the mobile phone data. The results of the experiments in this paper show a great correspondence to the actual situation. The validated results reveal that mobile phone data has tremendous potential for OD analysis.
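
The zone-division step can be sketched with an off-the-shelf k-means call; the points below are synthetic stand-ins for anonymized phone locations, and the zone count is an assumption rather than a value from the study.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic (lon, lat) points standing in for phone-observed positions.
points = np.vstack([rng.normal((116.30, 39.90), 0.01, (200, 2)),
                    rng.normal((116.45, 39.95), 0.01, (200, 2))])

# Each cluster approximates one traffic analysis zone.
zones = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print("zone centroids:", zones.cluster_centers_)
print("first 10 zone labels:", zones.labels_[:10])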

Keywords: data acquisition; feature extraction; mobile computing; pattern clustering; traffic information systems; OD information; k-means clustering algorithm; mobile phone data acquisition; traffic information extraction; trip attraction; trip generation mechanism; urban traffic commuting analysis; Base stations; Cities and towns; Mobile communication; Mobile handsets; Real-time systems; Sociology; Statistics    (ID#:15-4001)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957757&isnumber=6957655

 

Wenqun Xiu; Xiaoming Li, "The Design of Cybercrime Spatial Analysis System," Information Science and Technology (ICIST), 2014 4th IEEE International Conference on, pp. 132, 135, 26-28 April 2014. doi: 10.1109/ICIST.2014.6920348

Abstract: Artificial monitoring is no longer able to match the rapid growth of cybercrime; there is great need to develop a new spatial analysis technology which allows emergency events to be rapidly and accurately located in the real environment and, furthermore, to establish a correlative analysis model for cybercrime prevention strategy. On the other hand, geographic information systems have changed virtually in data structure, coordinate system and analysis model due to the "uncertainty and hyper-dimension" characteristics of network objects and behavior. In this paper, the spatial rules of typical cybercrime are explored on the basis of GIS with Internet searching and IP tracking technology: (1) set up a spatial database through IP searching based on criminal evidence; (2) extend GIS data structures and spatial models, adding a network dimension and virtual attribution to realize a dynamic connection between cyber and real space; (3) design a cybercrime monitoring and prevention system to discover the cyberspace logics based on spatial analysis.

Keywords: Internet; geographic information systems; monitoring; security of data; GIS data-structure; IP tracking technology; Internet searching; correlative analysis model; cybercrime monitoring design; cybercrime prevention strategy; geographic information systems; spatial analysis system; Analytical models; Computer crime; Data models; Geographic information systems; IP networks; Internet; Spatial databases; Cybercrime; GIS; Spatial analysis   (ID#:15-4002)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6920348&isnumber=6920317

 

Bou-Harb, E.; Debbabi, M.; Assi, C., "Cyber Scanning: A Comprehensive Survey," Communications Surveys & Tutorials, IEEE, vol. 16, no.3, pp.1496, 1519, Third Quarter 2014. doi: 10.1109/SURV.2013.102913.00020

Abstract: Cyber scanning refers to the task of probing enterprise networks or Internet wide services, searching for vulnerabilities or ways to infiltrate IT assets. This misdemeanor is often the primary methodology adopted by attackers prior to launching a targeted cyber attack. Hence, it is of paramount importance to research and adopt methods for the detection and attribution of cyber scanning. Nevertheless, with the surge of complex offered services on one side and the proliferation of hackers' refined, advanced, and sophisticated techniques on the other side, the task of containing cyber scanning poses serious issues and challenges. Furthermore, recently there has been a flourishing of a cyber phenomenon dubbed cyber scanning campaigns - scanning techniques that are highly distributed, possess composite stealth capabilities and high coordination - rendering almost all current detection techniques unfeasible. This paper presents a comprehensive survey of the entire cyber scanning topic. It categorizes cyber scanning by elaborating on its nature, strategies and approaches. It also provides the reader with a classification and an exhaustive review of its techniques. Moreover, it offers a taxonomy of the current literature by focusing on distributed cyber scanning detection methods. To tackle cyber scanning campaigns, this paper uniquely reports on the analysis of two recent cyber scanning incidents. Finally, several concluding remarks are discussed.

Keywords: Internet; security of data; Internet wide services; cyber scanning technique; distributed cyber scanning detection method; enterprise networks; targeted cyber attack; Cyberspace; Internet; Monitoring; Ports (Computers); Probes; Protocols; Servers; Cyber scanning; Network reconnaissance; Probing; Probing campaigns; Scanning events   (ID#:15-4003)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6657498&isnumber=6880447

 

Caso, J.S., "The Rules Of Engagement for Cyber-Warfare and the Tallinn Manual: A Case Study," Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2014 IEEE 4th Annual International Conference on, pp. 252, 257, 4-7 June 2014. doi: 10.1109/CYBER.2014.6917470

Abstract: Documents such as the Geneva (1949) and Hague Conventions (1899 and 1907) that have clearly outlined the rules of engagement for warfare find themselves challenged by the presence of a new arena: cyber. Considering the potential nature of these offenses, operations taking place in the realm of cyber cannot simply be generalized as “cyber-warfare,” as they may also be acts of cyber-espionage, cyber-terrorism, cyber-sabotage, etc. Cyber-attacks, such as those on Estonia in 2007, have begun to test the limits of NATO's Article 5 and the UN Charter's Article 2(4) against the use of force. What defines “force” as it relates to cyber, and what kind of response is merited in the case of uncertainty regarding attribution? In 2009, NATO's Cooperative Cyber Defence Centre of Excellence commissioned a group of experts to publish a study on the application of international law to cyber-warfare. This document, the Tallinn Manual, was published in 2013 as a non-binding exercise to stimulate discussion on the codification of international law on the subject. After analysis, this paper concludes that the Tallinn Manual classifies the 2010 Stuxnet attack on Iran's nuclear program as an illegal act of force. The purpose of this paper is the following: (1) to analyze the historical and technical background of cyber-warfare, (2) to evaluate the Tallinn Manual as it relates to the justification of cyber-warfare, and (3) to examine the applicability of the Tallinn Manual in a case study of a historical example of a cyber-attack.

Keywords: law; security of data; Cooperative Cyber Defence Centre of Excellence; Tallinn Manual; cyber-attacks; cyber-espionage; cyber-sabotage; cyber-terrorism; cyber-warfare; international law; Computer crime; Computers; Force; Manuals; Organizations; Protocols; Standards   (ID#:15-4004)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6917470&isnumber=6917419

 

Barbosa de Carvalho, M.; Pereira Esteves, R.; da Cunha Rodrigues, G.; Cassales Marquezan, C.; Zambenedetti Granville, L.; Rockenbach Tarouco, L.M., "Efficient Configuration Of Monitoring Slices For Cloud Platform Administrators," Computers and Communication (ISCC), 2014 IEEE Symposium on,  pp. 1, 7, 23-26 June 2014. doi: 10.1109/ISCC.2014.6912568

Abstract: Monitoring is an important issue in cloud environments because it assures that acquired cloud slices meet the user's expectations. However, these environments are multi-tenant and dynamic, requiring automation techniques to offload cloud administrators. In a previous work, we proposed FlexACMS: a framework to automate monitoring configuration related to cloud slices using multiple monitoring solutions. In this work, we enhanced FlexACMS to allow dynamic and automatic attribution of monitoring configuration tasks to servers without administrator intervention, which was not available in the previous version. FlexACMS also considers the monitoring server load when attributing configuration tasks, which allows load balancing between monitoring servers. The evaluation showed that the enhancements reduced FlexACMS response time by up to 60% in comparison to the previous version. The scalability evaluation of the enhanced version demonstrated the feasibility of our approach in large scale cloud environments.
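
The load-aware attribution of configuration tasks described above can be sketched as a least-loaded-server assignment; the server names, loads, and per-task cost are illustrative assumptions, not details from FlexACMS itself.

import heapq

# (load, name) pairs for the monitoring servers; a heap keeps the
# least-loaded server at the front.
servers = [(0.2, "mon-1"), (0.5, "mon-2"), (0.1, "mon-3")]
heapq.heapify(servers)

def assign(slice_id, cost=0.05):
    """Attribute one slice's configuration task to the least-loaded server."""
    load, name = heapq.heappop(servers)
    heapq.heappush(servers, (load + cost, name))   # account for the new task
    return name

for slice_id in range(5):
    print(f"slice-{slice_id} -> {assign(slice_id)}")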

Keywords: cloud computing; system monitoring; FlexACMS response time; IaaS; automation techniques; cloud computing; cloud environments; cloud slices; infrastructure-as-a-service; load balancing; monitoring server load; Indium phosphide; Measurement; Monitoring; Scalability; Servers; Time factors; Web services; Cloud computing; monitoring configuration   (ID#:15-4005)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912568&isnumber=6912451

 

Boukhtouta, A.; Lakhdari, N.-E.; Debbabi, M., "Inferring Malware Family through Application Protocol Sequences Signature," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp. 1, 5, March 30 2014-April 2 2014.doi: 10.1109/NTMS.2014.6814026

Abstract: The dazzling emergence of cyber-threats strains today's cyberspace, which needs practical and efficient capabilities for malware traffic detection. In this paper, we propose an extension to an initial research effort towards fingerprinting malicious traffic, putting an emphasis on the attribution of maliciousness to malware families. The technique proposed in the previous work establishes a synergy between automatic dynamic analysis of malware and machine learning to fingerprint badness in network traffic. Machine learning algorithms are used with features that exploit only high-level properties of traffic packets (e.g. packet headers). Besides the detection of malicious packets, we want to enhance the fingerprinting capability with the identification of the malware families responsible for the generation of malicious packets. The identification of the underlying malware family is derived from a sequence of application protocols, which is used as a signature for the family in question. Furthermore, our results show that our technique achieves a promising malware family identification rate with low false positives.
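
A minimal sketch of attribution by application-protocol sequences, assuming each family is summarized by the protocol n-grams observed in its traffic; the families, sequences, and n-gram size are invented for illustration and do not reproduce the authors' feature set.

# Represent each family by the set of protocol bigrams seen in its traffic,
# then attribute an unknown flow to the family with the best overlap.
def ngrams(seq, n=2):
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

signatures = {
    "family-A": ngrams(["DNS", "HTTP", "HTTP", "IRC"]),
    "family-B": ngrams(["DNS", "TLS", "HTTP", "SMTP"]),
}

def attribute(flow):
    grams = ngrams(flow)
    return max(signatures, key=lambda fam: len(grams & signatures[fam]))

print(attribute(["DNS", "HTTP", "IRC"]))   # closest overlap: family-A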

Keywords: computer network security; invasive software; learning (artificial intelligence); application protocol sequences signature; cyber-threats; machine learning algorithm; malicious packets detection; malware automatic dynamic analysis; malware traffic detection; network traffic; Cryptography; Databases; Engines; Feeds; Malware; Protocols   (ID#:15-4006)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814026&isnumber=6813963

 

Gimenez, A.; Gamblin, T.; Rountree, B.; Bhatele, A.; Jusufi, I.; Bremer, P.-T.; Hamann, B., "Dissecting On-Node Memory Access Performance: A Semantic Approach," High Performance Computing, Networking, Storage and Analysis, SC14: International Conference for, pp.166,176, 16-21 Nov. 2014. doi: 10.1109/SC.2014.19

Abstract: Optimizing memory access is critical for performance and power efficiency. CPU manufacturers have developed sampling-based performance measurement units (PMUs) that report precise costs of memory accesses at specific addresses. However, this data is too low-level to be meaningfully interpreted and contains an excessive amount of irrelevant or uninteresting information. We have developed a method to gather fine-grained memory access performance data for specific data objects and regions of code with low overhead and attribute semantic information to the sampled memory accesses. This information provides the context necessary to more effectively interpret the data. We have developed a tool that performs this sampling and attribution and used the tool to discover and diagnose performance problems in real-world applications. Our techniques provide useful insight into the memory behaviour of applications and allow programmers to understand the performance ramifications of key design decisions: domain decomposition, multi-threading, and data motion within distributed memory systems.
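
The core attribution idea — mapping sampled access addresses back to the data objects whose ranges contain them, then aggregating cost per object — can be sketched in a few lines; the address ranges, object names, and latencies below are synthetic.

import bisect

# (start, end, name) address ranges for known data objects, sorted by start.
objects = [(0x1000, 0x1FFF, "mesh"), (0x2000, 0x2FFF, "halo"),
           (0x3000, 0x3FFF, "rhs")]
starts = [o[0] for o in objects]

def attribute(addr):
    """Map one sampled address to the data object containing it."""
    i = bisect.bisect_right(starts, addr) - 1
    if i >= 0 and addr <= objects[i][1]:
        return objects[i][2]
    return "unknown"

samples = [(0x10A0, 300), (0x2F00, 120), (0x3004, 90)]   # (addr, latency)
cost = {}
for addr, latency in samples:
    name = attribute(addr)
    cost[name] = cost.get(name, 0) + latency
print(cost)   # {'mesh': 300, 'halo': 120, 'rhs': 90}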

Keywords: distributed memory systems; multi-threading; storage management; CPU manufacturers; PMU; attribute semantic information; code regions; data motion; data objects; design decisions; distributed memory systems; domain decomposition; fine-grained memory access performance data; memory access optimization; memory behaviour; multithreading; on-node memory access performance; performance ramifications; power efficiency; sampled memory accesses; sampling-based performance measurement units; semantic approach; Context; Hardware; Kernel; Libraries; Program processors; Semantics; Topology   (ID#:15-4007)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013001&isnumber=7012182

 

Hessami, A., "A Framework For Characterisation Of Complex Systems And System of Systems," World Automation Congress (WAC), 2014, pp. 346, 354, 3-7 Aug. 2014. doi: 10.1109/WAC.2014.6935936

Abstract: The objective of this paper is to explore the current notions of systems and “System of Systems” and establish the case for quantitative characterization of their structural, behavioural and contextual facets that will pave the way for further formal development (mathematical formulation). This is partly driven by stakeholder needs and perspectives and also in response to the necessity to attribute and communicate the properties of a system more succinctly, meaningfully and efficiently. The systematic quantitative characterization framework proposed will endeavor to extend the notion of emergence that allows the definition of appropriate metrics in the context of a number of systems ontologies. The general characteristic and information content of the ontologies relevant to system and system of system will be specified but not developed at this stage. The current supra-system, system and sub-system hierarchy is also explored for the formalisation of a standard notation in order to depict a relative scale and order and avoid the seemingly arbitrary attributions.

Keywords: Unified Modeling Language; ontologies (artificial intelligence); programming; complex systems characterisation; emergence notion; formal development; ontologies; quantitative characterization; system-of-systems characterisation; Aggregates; Collaboration; Complexity theory; Indexes; Measurement; Rail transportation; Systems engineering and theory; Complexity; Metrics; Ontology; System of Systems; Systems   (ID#:15-4008)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6935936&isnumber=6935633


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Authentication and Authorization (2014 Year in Review) Part 1

 

 
SoS Newsletter Logo

Authentication & Authorization
(2014 Year in Review)
Part 1

 

Authorization and authentication are cornerstones of computer security. As systems become larger, faster and more complex, authorization and authentication methods and protocols are proving to have limits and challenges. The research cited here explores new methods and techniques for improving security in cloud environments, efficient cryptographic computations, and exascale storage systems.  The work presented here was published in 2014. 

 

Kreutz, D.; Bessani, A.; Feitosa, E.; Cunha, H., "Towards Secure and Dependable Authentication and Authorization Infrastructures," Dependable Computing (PRDC), 2014 IEEE 20th Pacific Rim International Symposium on, pp. 43, 52, 18-21 Nov. 2014. doi: 10.1109/PRDC.2014.14

Abstract: We propose a resilience architecture for improving the security and dependability of authentication and authorization infrastructures, in particular the ones based on RADIUS and OpenID. This architecture employs intrusion-tolerant replication, trusted components and untrusted gateways to provide survivable services, ensuring compatibility with standard protocols. The architecture was instantiated in two prototypes, one implementing RADIUS and another implementing OpenID. These prototypes were evaluated in fault-free executions, under faults, under attack, and in diverse computing environments. The results show that, beyond being more secure and dependable, our prototypes are capable of achieving the performance requirements of enterprise environments, such as IT infrastructures with more than 400k users.

Keywords: authorisation; software fault tolerance; IT infrastructures; OpenID; RADIUS; authentication dependability; authentication infrastructures; authentication security; authorization infrastructures; diverse computing environments; enterprise environments; fault-free executions; intrusion-tolerant replication; resilience architecture; trusted components; untrusted gateways; Authentication; Logic gates; Protocols; Public key; Servers; OpenID; RADIUS; authentication and authorization services; dependability; intrusion tolerance; security   (ID#:15-4045)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6974750&isnumber=6974735

 

Hummen, R.; Shafagh, H.; Raza, S.; Voigt, T.; Wehrle, K., "Delegation-based Authentication and Authorization for the IP-based Internet of Things," Sensing, Communication, and Networking (SECON), 2014 Eleventh Annual IEEE International Conference on, pp. 284, 292, June 30 2014-July 3 2014. doi: 10.1109/SAHCN.2014.6990364

Abstract: IP technology for resource-constrained devices enables transparent end-to-end connections between a vast variety of devices and services in the Internet of Things (IoT). To protect these connections, several variants of traditional IP security protocols have recently been proposed for standardization, most notably the DTLS protocol. In this paper, we identify significant resource requirements for the DTLS handshake when employing public-key cryptography for peer authentication and key agreement purposes. These overheads particularly hamper secure communication for memory-constrained devices. To alleviate these limitations, we propose a delegation architecture that offloads the expensive DTLS connection establishment to a delegation server. By handing over the established security context to the constrained device, our delegation architecture significantly reduces the resource requirements of DTLS-protected communication for constrained devices. Additionally, our delegation architecture naturally provides authorization functionality when leveraging the central role of the delegation server in the initial connection establishment. Hence, in this paper, we present a comprehensive, yet compact solution for authentication, authorization, and secure data transmission in the IP-based IoT. The evaluation results show that compared to a public-key-based DTLS handshake our delegation architecture reduces the memory overhead by 64%, computations by 97%, and network transmissions by 68%.
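
A conceptual sketch of the handoff step (not the DTLS wire protocol): the delegation server completes the expensive handshake and transfers the established session context to the constrained device under a pre-shared symmetric key. Field names and key sizes are assumptions for illustration.

import os, json, hmac, hashlib

device_psk = os.urandom(32)          # pre-shared between server and device

def handoff(session_keys):
    """Delegation server: wrap the negotiated context for the device."""
    blob = json.dumps(session_keys).encode()
    tag = hmac.new(device_psk, blob, hashlib.sha256).hexdigest()
    return blob, tag

def accept(blob, tag):
    """Constrained device: verify and adopt the session context."""
    expected = hmac.new(device_psk, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("handoff rejected")
    return json.loads(blob)

blob, tag = handoff({"client_write_key": os.urandom(16).hex()})
print(accept(blob, tag))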

Keywords: IP networks; Internet of Things; cryptographic protocols; public key cryptography; DTLS connection; DTLS protocol; IP security protocols; IP-based Internet of Things; authorization functionality; delegation server; delegation-based authentication; key agreement purposes; memory-constrained devices ;peer authentication; public-key cryptography; Context; Protocols; Public key cryptography; Random access memory; Servers   (ID#:15-4046)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6990364&isnumber=6990316

 

Durmus, Y.; Langendoen, K., "Wifi Authentication Through Social Networks — A Decentralized And Context-Aware Approach," Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on, pp. 532, 538, 24-28 March 2014. doi: 10.1109/PerComW.2014.6815263

Abstract: With the proliferation of WiFi-enabled devices, people expect to be able to use them everywhere, be it at work, while commuting, or when visiting friends. In the latter case, home owners are confronted with the burden of controlling the access to their WiFi router, and usually resort to simply sharing the password. Although convenient, this solution breaches basic security principles, and puts the burden on the friends who have to enter the password in each and every of their devices. The use of social networks, specifying the trust relations between people and devices, provides for a more secure and more friendly authentication mechanism. In this paper, we progress the state-of-the-art by abandoning the centralized solution to embed social networks in WiFi authentication; we introduce EAP-SocTLS, a decentralized approach for authentication and authorization of WiFi access points and other devices, exploiting the embedded trust relations. In particular, we address the (quadratic) search complexity when indirect trust relations, like the smartphone of a friend's kid, are involved. We show that the simple heuristic of limiting the search to friends and devices in physical proximity makes for a scalable solution. Our prototype implementation, which is based on WebID and EAP-TLS, uses WiFi probe requests to determine the pool of neighboring devices and was shown to reduce the search time from 1 minute for the naive policy down to 11 seconds in the case of granting access over an indirect friend.
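
The proximity heuristic can be sketched as a breadth-first search over the trust graph that grants access only to devices both reachable through friends and currently nearby; the graph, device names, and depth limit are illustrative assumptions.

# Search the social/device trust graph from the router, limited in depth,
# and require the requesting device to also be in physical proximity
# (e.g. observed via WiFi probe requests).
trust = {
    "router": ["alice", "bob"],
    "alice": ["alice-phone"],
    "bob": ["bob-kid", "bob-phone"],
    "bob-kid": ["kid-phone"],
}
nearby = {"alice-phone", "bob-phone", "kid-phone"}

def authorized(device, max_depth=2):
    frontier, depth = ["router"], 0
    while frontier and depth <= max_depth:
        nxt = []
        for node in frontier:
            for peer in trust.get(node, []):
                if peer == device:
                    return peer in nearby    # trusted AND in proximity
                nxt.append(peer)
        frontier, depth = nxt, depth + 1
    return False

print(authorized("kid-phone"))   # True: reached via bob -> bob-kid, and nearby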

Keywords: authorisation; message authentication; search problems; social networking (online); telecommunication security; trusted computing; ubiquitous computing; wireless LAN; EAP-SocTLS; EAP-TLS; WebID; WiFi authentication; WiFi router; WiFi-enabled devices; authentication mechanism; authorization; context-aware approach; decentralized approach; embedded trust relations; heuristic; password; physical proximity; quadratic search complexity; search time reduction; security principles; smartphone; social networks; Authentication; Authorization; IEEE 802.11 Standards; Probes; Protocols; Servers; Social network services; EAP-SocTLS; EAP-TLS; Social Devices; WebID; WiFi Authentication and Authorization   (ID#:15-4047)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815263&isnumber=6815123

 

Ben Ameur, S.; Zarai, F.; Smaoui, S.; Obaidat, M.S.; Hsiao, K.F., "A Lightweight Mutual Authentication Mechanism For Improving Fast PMIPV6-Based Network Mobility Scheme," Network Infrastructure and Digital Content (IC-NIDC), 2014 4th IEEE International Conference on, pp.61,68, 19-21 Sept. 2014. doi: 10.1109/ICNIDC.2014.7000266

Abstract: In the last decade, the demand for Internet access in heterogeneous environments has kept growing, principally on mobile platforms such as buses, airplanes and trains. Consequently, several extensions and schemes have been introduced to achieve seamless handoff of mobile networks from one subnet to another. Even with these enhancements, the problems of maintaining security and availability have not been resolved yet, especially the absence of an authentication mechanism between network entities to avoid vulnerability to attacks. To eliminate the threats on the interface between the mobile access gateway (MAG) and the mobile router (MR) in the improving fast PMIPv6-based network mobility (IFP-NEMO) protocol, we propose a lightweight mutual authentication mechanism in an improving fast PMIPv6-based network mobility scheme (LMAIFPNEMO). This scheme uses authentication, authorization and accounting (AAA) servers to enhance the security of the protocol IFP-NEMO, which allows the integration of improved fast proxy mobile IPv6 (PMIPv6) in network mobility (NEMO). We use only symmetric cryptography, generated nonces, and hash operation primitives to ensure a secure authentication procedure. Then, we analyze the security aspects of the proposed scheme and evaluate it using the automated validation of internet security protocols and applications (AVISPA) software, which has proved that the authentication goals are achieved.
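
In the spirit of the scheme's symmetric primitives, here is a minimal nonce-and-HMAC mutual authentication sketch between MR and MAG; the message layout and key handling are assumptions, not the LMAIFPNEMO message flow itself.

import os, hmac, hashlib

shared_key = os.urandom(32)                    # pre-established MR/MAG key
n_mr, n_mag = os.urandom(16), os.urandom(16)   # fresh nonces, one per side

def prove(role, key, na, nb):
    # Each party proves key possession with an HMAC over both nonces.
    return hmac.new(key, role + na + nb, hashlib.sha256).digest()

tag_mag = prove(b"MAG", shared_key, n_mr, n_mag)   # MAG -> MR
tag_mr = prove(b"MR", shared_key, n_mr, n_mag)     # MR -> MAG

assert hmac.compare_digest(tag_mag, prove(b"MAG", shared_key, n_mr, n_mag))
assert hmac.compare_digest(tag_mr, prove(b"MR", shared_key, n_mr, n_mag))
print("mutual authentication succeeded")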

Keywords: mobility management (mobile radio); protocols; telecommunication security; AAA servers; AVISPA software; IFP-NEMO protocol; Internet access; LMAIFPNEMO; MAG; MR; NEMO; PMIPV6 based network mobility scheme; authentication authorization and accounting; automated validation of internet security protocols and applications; lightweight mutual authentication mechanism; mobile access gateway; mobile platforms; mobile router; network mobility; secure authentication procedure; Authentication; Handover; Mobile communication; Mobile computing; Protocols; AVISPA; authentication; network mobility; proxy mobile IPv6; security   (ID#:15-4048)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7000266&isnumber=7000253

 

Hyun-Suk Chai; Jun-Dong Cho; Jongpil Jeong, "On Security-Effective and Global Mobility Management for FPMIPv6 Networks," Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2014 Eighth International Conference on, pp. 247, 253, 2-4 July 2014. doi: 10.1109/IMIS.2014.91

Abstract: In PMIPv6-based networks, mobile nodes can be made smaller and lighter because the network nodes perform the mobility management-related functions on behalf of the mobile nodes. One of these protocols, Fast Handovers for Proxy Mobile IPv6 (FPMIPv6) [1], was studied by the Internet Engineering Task Force (IETF). Since FPMIPv6 adopts the entities and concepts of Fast Handovers for Mobile IPv6 (FMIPv6) in Proxy Mobile IPv6 (PMIPv6), it reduces packet loss. A conventional scheme has been proposed to cooperate with an Authentication, Authorization and Accounting (AAA) infrastructure for authentication of a mobile node in PMIPv6. Although this approach results in the best efficiency, without secured signaling messages PMIPv6 is vulnerable to various security threats, and it does not support global mobility. In this paper, the authors analyze the Kang-Park & ESS-FH scheme and propose an Enhanced Security scheme for FPMIPv6 (ESS-FP). Based on the CGA method and public key cryptography, ESS-FP provides strong key exchange and key independence in addition to improving the weaknesses of FPMIPv6; its handover latency was analyzed and compared with that of the Kang-Park scheme & ESS-FH.

Keywords: cryptographic protocols; mobility management (mobile radio); public key cryptography; CGA method; FPMIPv6 networks; IETF; Internet Engineering Task Force; Kang-Park-ESS-FH scheme; authentication-authorization-accounting infrastructure; enhanced security scheme; fast handover-proxy mobile IPv6; global mobility management; handover latency; mobile node authentication; network node; packet loss reduction; protocols; public key cryptography; security threats; security-effective mobility management; Authentication; Handover; Manganese; Public key cryptography; AAA; CGA; ESS-FP; FPMIPv6; Security Analysis   (ID#:15-4049)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975471&isnumber=6975399

 

Memon, A.S.; Jensen, J.; Cernivec, A.; Benedyczak, K.; Riedel, M., "Federated Authentication and Credential Translation in the EUDAT Collaborative Data Infrastructure," Utility and Cloud Computing (UCC), 2014 IEEE/ACM 7th International Conference on, pp. 726, 731, 8-11 Dec. 2014. doi: 10.1109/UCC.2014.118

Abstract: One of the challenges in a distributed data infrastructure is how users authenticate to the infrastructure, and how their authorisations are tracked. Each user community comes with its own established practices, all different, and users are put off if they need to use new, difficult tools. From the perspective of the infrastructure project, the level of assurance must be high enough, and it should not be necessary to reimplement an authentication and authorisation infrastructure (AAI). In the EUDAT project, we chose to implement a mostly loosely coupled approach based on the outcome of the Contrail and Unicore projects. We have preferred a practical approach, combining the outcome of several projects who have contributed parts of the puzzle. The present paper aims to describe the experiences with the integration of these parts. Eventually, we aim to have a full framework which will enable us to easily integrate new user communities and new services.

Keywords: authorisation; groupware; AAI; Contrail project; EUDAT collaborative data infrastructure; Unicore project; authentication and authorisation infrastructure; credential translation; distributed data infrastructure; federated authentication; Authentication; Authorization; Bridges; Communities; Portals; Servers; EUDAT; OAuth; Open ID; PKI; SAML; federated identity management   (ID#:15-4050)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7027585&isnumber=7027326

 

Toseef, U.; Zaalouk, A.; Rothe, T.; Broadbent, M.; Pentikousis, K., "C-BAS: Certificate-Based AAA for SDN Experimental Facilities," Software Defined Networks (EWSDN), 2014 Third European Workshop on,  pp.91,96, 1-3 Sept. 2014. doi: 10.1109/EWSDN.2014.41

Abstract: Efficient authentication, authorization, and accounting (AAA) management mechanisms will be key for the widespread adoption of SDN experimentation facilities beyond the confines of academic labs. In particular, we are interested in a robust AAA infrastructure to identify experimenters, police their actions based on the associated roles, facilitate secure resource sharing, and provide for detailed accountability. Currently, however, said facilities are forced to employ a patchy AAA infrastructure which lacks several of the aforementioned features. This paper proposes a certificate-based AAA architecture for SDN experimental facilities, which is by design both secure and flexible. As this work is implementation-driven and aims for a short deployment cycle in current facilities, we also outline a credible migration path which we are currently pursuing actively.

Keywords: authorisation; computer network management; software defined networking; C-BAS; SDN experimentation facilities; authentication authorization and accounting management mechanisms; certificate-based AAA architecture; patchy AAA infrastructure; robust AAA infrastructure; Aggregates; Authentication; Authorization; Computer architecture; Databases; Public key; Servers   (ID#:15-4051)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984058&isnumber=6984033

 

Sah, S.K.; Shakya, S.; Dhungana, H., "A Security Management For Cloud Based Applications And Services with Diameter-AAA," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp.6,11, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781243

Abstract: Cloud computing offers various services and web based applications over the internet. With the tremendous growth in the development of cloud based services, the security issue is the main challenge and today's concern for cloud service providers. This paper describes the management of security issues based on Diameter AAA mechanisms for authentication, authorization and accounting (AAA) demanded by cloud service providers. This paper focuses on the integration of Diameter AAA into cloud system architecture.

Keywords: authorisation; cloud computing; Internet; Web based applications; authentication, authorization and accounting; cloud based applications; cloud based services; cloud computing; cloud service providers; cloud system architecture; diameter AAA mechanisms; security management; Authentication; Availability; Browsers; Computational modeling; Protocols; Servers; Cloud Computing; Cloud Security; Diameter-AAA   (ID#:15-4052)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781243&isnumber=6781240

 

Toukabri, T.; Said, A.M.; Abd-Elrahman, E.; Afifi, H., "Cellular Vehicular Networks (CVN): ProSe-Based ITS in Advanced 4G Networks," Mobile Ad Hoc and Sensor Systems (MASS), 2014 IEEE 11th International Conference on, pp. 527, 528, 28-30 Oct. 2014. doi: 10.1109/MASS.2014.100

Abstract: LTE-based Device-to-Device (D2D) communications have been envisioned as a new key feature for short range wireless communications in advanced and beyond 4G networks. We propose in this work to exploit this novel concept of D2D as a new alternative for Intelligent Transportation Systems (ITS) Vehicle-to-Vehicle/Infrastructure (V2X) communications in next generation cellular networks. A 3GPP standard architecture has been recently defined to support Proximity Services (ProSe) in the LTE core network. Taking into account the limitations of the latter and the requirements of ITS services and V2X communications, we propose the CVN solution as an enhancement to the ProSe architecture in order to support hyper-local ITS services. CVN provides a reliable and scalable LTE-assisted opportunistic model for V2X communications through a distributed ProSe architecture. Using a hybrid clustering approach, vehicles are organized into dynamic clusters that are formed and managed by ProSe Cluster Heads which are elected centrally by the CVN core network. ITS services are deemed as Proximity Services and benefit from the basic ProSe discovery, authorization and authentication mechanisms. The CVN solution enhances V2V communication delays and overhead by reducing the need for multi-hop geo-routing. Preliminary simulation results show that the CVN solution provides short setup times and improves ITS communication delays.

Keywords: 4G mobile communication; cellular radio; intelligent transportation systems; CVN; CVN core network; D2D communications; ITS; LTE based device-to-device; LTE core network; ProSe; V2X communications; advanced 4G networks; cellular vehicular networks; distributed ProSe architecture; dynamic clusters; intelligent transportation systems; next generation cellular networks; proximity services; vehicle-to-vehicle/infrastructure; Authorization; Clustering algorithms; Delays; Logic gates; Protocols; Radio access networks; Vehicles; D2D; ITS; LTE; ProSe; clustering   (ID#:15-4053)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7035735&isnumber=7035647

 

Patil, A.; Pandit, R.; Patel, S., "Implementation of Security Framework For Multiple Web Applications," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1, 7, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921787

Abstract: Single sign-on (SSO) is an identity management technique that provides users the ability to use multiple Web services with one set of credentials. However, when the authentication server is down or unavailable, users cannot access Web services, even if the services are operating normally. Therefore, enabling continuous use is important in single sign-on. In this paper, we present a security framework to overcome the credential problems of accessing multiple web applications. We explain system functionality with authorization and authentication. We consider these methods from the viewpoints of continuity, security, and efficiency, which together make the framework highly secure.

Keywords: Web services; security of data; Web applications; Web services; authentication server; credential problems; identity management technique; security framework implementation; single sign-on; Authentication; Authorization; Computers; Encryption; Informatics; Servers; Identity Management System; MD5; OpenID; proxy signature; single sign-on   (ID#:15-4054)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921787&isnumber=6921705

 

Friedman, A.; Hu, V.C., "Attribute Assurance For Attribute Based Access Control," IT Professional Conference (IT Pro), 2014, pp. 1, 3, 22-22 May 2014. doi: 10.1109/ITPRO.2014.7029296

Abstract: In recent years, Attribute Based Access Control (ABAC) has evolved as the preferred logical access control methodology in the Department of Defense and Intelligence Community, as well as many other agencies across the federal government. Gartner recently predicted that “by 2020, 70% of enterprises will use attribute-based access control (ABAC) as the dominant mechanism to protect critical assets, up from less than 5% today.” A definition and introduction to ABAC can be found in NIST Special Publication 800-162, Guide to Attribute Based Access Control (ABAC) Definition and Considerations and Intelligence Community Policy Guidance (ICPG) 500.2, Attribute-Based Authorization and Access Management. Within ABAC, attributes are used to make critical access control decisions, yet standards for attribute assurance have just started to be researched and documented. This presentation outlines factors influencing attributes that an authoritative body must address when standardizing attribute assurance and proposes some notional implementation suggestions for consideration. Attribute Assurance brings a level of confidence to attributes that is similar to levels of assurance for authentication (e.g., guidelines specified in NIST SP 800-63 and OMB M-04-04). There are three principal areas of interest when considering factors related to Attribute Assurance. Accuracy establishes the policy and technical underpinnings for semantically and syntactically correct descriptions of Subjects, Objects, or Environmental conditions. Interoperability considers different standards and protocols used for secure sharing of attributes between systems in order to avoid compromising the integrity and confidentiality of the attributes or exposing vulnerabilities in provider or relying systems or entities. Availability ensures that the update and retrieval of attributes satisfy the application to which the ABAC system is applied. In addition, the security and backup capability of attribute repositories need to be considered. Similar to a Level of Assurance (LOA), a Level of Attribute Assurance (LOAA) assures a relying party that the attribute value received from an Attribute Provider (AP) is accurately associated with the subject, resource, or environmental condition to which it applies. An Attribute Provider (AP) is any person or system that provides subject, object (or resource), or environmental attributes to relying parties regardless of transmission method. The AP may be the original, authoritative source (e.g., an Applicant). The AP may also receive information from an authoritative source for repacking or store-and-forward (e.g., an employee database) to relying parties or they may derive the attributes from formulas (e.g., a credit score). Regardless of the source of the AP's attributes, the same standards should apply to determining the LOAA. As ABAC is implemented throughout government, attribute assurance will be a critical, limiting factor in its acceptance. With this presentation, we hope to encourage dialog between attribute relying parties, attribute providers, and federal agencies that will be defining standards for ABAC in the immediate future.
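
A minimal sketch of how a Level of Attribute Assurance could gate an ABAC decision: an attribute satisfies a rule only if its value matches and its provider's assurance level meets the policy's minimum. The attributes, values, and levels below are invented for illustration.

# subject attribute -> (value, LOAA of the attribute's provider)
subject = {
    "clearance": ("secret", 3),
    "department": ("research", 1),
}

# policy rule -> (attribute, required value, minimum LOAA)
policy = [
    ("clearance", "secret", 3),
    ("department", "research", 2),
]

def decide(subject, policy):
    for attr, want, min_loaa in policy:
        value, loaa = subject.get(attr, (None, 0))
        if value != want or loaa < min_loaa:
            return f"deny ({attr}: value or assurance insufficient)"
    return "permit"

print(decide(subject, policy))   # deny: department attested only at LOAA 1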

Keywords: authorisation; open systems; ABAC; AP; Department of Defense and Intelligence Community; ICPG; Intelligence Community Policy Guidance; LOAA; access management; attribute assurance; attribute based access control; attribute confidentiality; attribute integrity; attribute provider; attribute repositories; attribute retrieval; attribute update; attribute-based authorization; critical assets protection; environmental attributes; interoperability; level of attribute assurance; object attributes; subject attributes; Access control; Communities; Educational institutions; NIST; National security   (ID#:15-4055)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7029296&isnumber=7029273

 

Fatemi Moghaddam, F.; Varnosfaderani, S.D.; Mobedi, S.; Ghavam, I.; Khaleghparast, R., "GD2SA: Geo Detection And Digital Signature Authorization For Secure Accessing To Cloud Computing Environments," Computer Applications and Industrial Electronics (ISCAIE), 2014 IEEE Symposium on, pp. 39, 42, 7-8 April 2014. doi: 10.1109/ISCAIE.2014.7010206

Abstract: Cloud computing is a new paradigm and emerging technology for hosting and delivering resources over a network such as the internet by using the concepts of virtualization, processing power and storage. However, many challenging issues are still unclear in cloud-based environments and decrease the rate of reliability and efficiency for service providers and users. User authentication is one of the most challenging issues in cloud-based environments; to address this issue, this paper proposes an efficient user authentication model that involves both of the defined phases during registration and accessing processes. Geo Detection and Digital Signature Authorization (GD2SA) is a user authentication tool for provisional access permission in cloud computing environments. The main aim of GD2SA is to compare the location of an unregistered device with the location of the user by using his belonging devices (e.g. smart phone). In addition, this authentication algorithm uses the digital signature of the account owner to verify the identity of the applicant. This model has been evaluated in this paper according to three main parameters: efficiency, scalability, and security. Overall, the theoretical analysis of the proposed model showed that it can increase the rate of efficiency and reliability in cloud computing as an emerging technology.
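
The two GD2SA checks can be sketched as a haversine proximity test plus a signature verification; Ed25519 from the Python "cryptography" package stands in for the owner's signature scheme, and the distance threshold is an assumption.

import math
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def km_apart(a, b):
    # Haversine distance between two (lat, lon) points in kilometres.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

owner_key = Ed25519PrivateKey.generate()
request = b"login-request:device-42"
signature = owner_key.sign(request)            # signed by the account owner

# Check 1: unregistered device near a device the user already owns (~1 km).
near = km_apart((52.52, 13.40), (52.53, 13.41)) < 1.0
# Check 2: owner's signature verifies (raises InvalidSignature otherwise).
owner_key.public_key().verify(signature, request)
print("access granted" if near else "second verification required")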

Keywords: authorisation; cloud computing; digital signatures; virtualisation; GD2SA; Internet; cloud computing; digital signature authorization; geo detection; secure access; user authentication; virtualization; Authentication; Authorization; Cloud computing; Computational modeling; Digital signatures; Reliability; Cloud Computing; Geo-Detection; Second Verification; Security; User Authentication   (ID#:15-4056)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7010206&isnumber=7010190

 

Balamurugan, B; Krishna, P.Venkata; Ninnala Devi, M; Meenakshi, R; Ahinaya, V, "Enhanced Framework For Verifying User Authorization And Data Correctness Using Token Management System In The Cloud," Circuit, Power and Computing Technologies (ICCPCT), 2014 International Conference on, pp. 1443, 1447, 20-21 March 2014. doi: 10.1109/ICCPCT.2014.7054925

Abstract: Cloud computing is an application and set of services delivered through the internet. It is an emerging technology for shared infrastructure, but it lacks access rights and security mechanisms. Because of these security issues for cloud users, our system focuses on the security provided through a token management system. Cloud computing is based on the internet, where computing is done through virtual shared servers providing infrastructure, software, platform and security as services, and security plays an important role in the cloud service. Hence, security is provided through three types of services: mutual authentication, directory services, and token granting for the resources. Since the existing token issuing mechanism does not provide scalability to large data sets and also increases memory overhead between the client and the server, our proposed work focuses on providing tokens to the users in a way that addresses the problems of scalability and memory overhead. The proposed framework of the token management system monitors the entire operations of the cloud, thereby managing the entire cloud infrastructure. Our model comes under the new category of cloud model known as "Security as a Service". This paper provides the security framework as an architectural model to verify user authorization and the data correctness of the resources stored, thereby providing a guarantee to the data owner for their resources stored in the cloud. This framework also describes the storage of tokens in a secured manner and facilitates the search and usage of tokens for auditing purposes and supervision of the users.

Keywords: Authentication; Cloud computing; Computers; Databases; Educational institutions; Servers; Access control; Token Management System   (ID#:15-4057)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7054925&isnumber=7054731

 

Cherkaoui, A.; Bossuet, L.; Seitz, L.; Selander, G.; Borgaonkar, R., "New Paradigms For Access Control In Constrained Environments," Reconfigurable and Communication-Centric Systems-on-Chip (ReCoSoC), 2014 9th International Symposium on, pp. 1, 4, 26-28 May 2014. doi: 10.1109/ReCoSoC.2014.6861362

Abstract: The Internet of Things (IoT) is here: more than 10 billion units are already connected, and five times more devices are expected to be deployed in the next five years. Technological standardization and the management and fostering of rapid innovation by governments are among the main challenges of the IoT. However, security and privacy are the key to making the IoT reliable and trusted. Security mechanisms for the IoT should provide features such as scalability, interoperability and lightness. This paper addresses authentication and access control in the frame of the IoT. It presents Physical Unclonable Functions (PUF), which can provide cheap, secure, tamper-proof secret keys to authenticate constrained M2M devices. To be successfully used in the IoT context, this technology needs to be embedded in a standardized identity and access management framework. On the other hand, the Embedded Subscriber Identity Module (eSIM) can provide cellular connectivity with scalability, interoperability and standard compliant security protocols. The paper discusses an authorization scheme for a constrained resource server taking advantage of PUF and eSIM features. Concrete IoT use cases are discussed (SCADA and building automation).
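
A toy model of PUF-based challenge-response authentication, with the silicon PUF simulated by a keyed hash: the verifier enrolls challenge-response pairs while it has trusted access and later replays an unused challenge. Everything here is a simulation sketch, not a hardware PUF.

import os, hmac, hashlib

device_secret = os.urandom(32)     # stands in for the chip's physical variation

def puf(challenge):
    # A real PUF derives this response from hardware; we simulate it.
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

# Enrollment: verifier records challenge-response pairs (CRPs).
crps = {c: puf(c) for c in (os.urandom(16) for _ in range(4))}

# Authentication: replay one stored challenge and discard it after use.
challenge, expected = crps.popitem()
print("authenticated:", hmac.compare_digest(puf(challenge), expected))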

Keywords: Internet of Things; authorisation; message authentication; mobile computing; open systems; private key cryptography; Internet of Things; IoT; PUF; SCADA; access control; access management framework; authentication; authorization scheme; building automation; cellular connectivity; constrained M2M devices; constrained resource server; eSIM; embedded subscriber identity module; identity management framework; interoperability; physical unclonable functions; standard compliant security protocols; tamper-proof secret keys; Authentication; Authorization; Field programmable gate arrays; Oscillators; Reliability; Servers   (ID#:15-4058)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861362&isnumber=6860682

 

Gerdes, S.; Bergmann, O.; Bormann, C., "Delegated Authenticated Authorization for Constrained Environments," Network Protocols (ICNP), 2014 IEEE 22nd International Conference on, pp. 654, 659, 21-24 Oct. 2014. doi: 10.1109/ICNP.2014.104

Abstract: Smart objects are small devices with limited system resources, typically made to fulfill a single simple task. By connecting smart objects and thus forming an Internet of Things, the devices can interact with each other and their users and support a new range of applications. Due to the limitations of smart objects, common security mechanisms are not easily applicable. Small message sizes and the lack of processing power severely limit the devices' ability to perform cryptographic operations. This paper introduces a protocol for delegating client authentication and authorization in a constrained environment. The protocol describes how to establish a secure channel based on symmetric cryptography between resource-constrained nodes in a cross-domain setting. A resource-constrained node can use this protocol to delegate authentication of communication peers and management of authorization information to a trusted host with less severe limitations regarding processing power and memory.

Keywords: Internet of Things; cryptographic protocols; Internet of Things; client authentication; constrained environments; cross-domain setting; delegated authenticated authorization; protocol; resource-constrained node; smart objects; symmetric cryptography; trusted host; Authentication; Authorization; Face; Peer-to-peer computing; Performance evaluation; Protocols   (ID#:15-4059)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980443&isnumber=6980338

 

Guoqing Lu; Lingling Zhao; Kuihe Yang, "The Design Of The Secure Transmission And Authorization Management System Based on RBAC," Machine Learning and Cybernetics (ICMLC), 2014 International Conference on, vol. 1, pp. 103, 108, 13-16 July 2014. doi: 10.1109/ICMLC.2014.7009100

Abstract: This paper designs a secure transmission and authorization management system based on the principles of Public Key Infrastructure and Role-Based Access Control. It can solve the problems of identity authentication, secure transmission and access control on the internet. In the first place, according to PKI principles, a certificate authority system is implemented. It can issue and revoke server-side and client-side digital certificates. Data secure transmission is achieved through the combination of digital certificates and the SSL protocol. In addition, this paper analyses the access control mechanism and the RBAC model. The structure of the RBAC model has been improved: the principle of group authority is added into the model, and a combination of centralized authority and distributed authority management is adopted, so the model becomes more flexible.
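
The group-authority extension can be sketched as one extra level of indirection in a standard RBAC check: users belong to groups, groups hold roles, and roles carry permissions. The users, groups, and permissions below are illustrative.

# permissions attach to roles, roles attach to groups, users inherit
# through their groups, so authority can be granted centrally per group.
role_perms = {"clerk": {"read_doc"}, "officer": {"read_doc", "sign_doc"}}
group_roles = {"registry": {"clerk"}, "approvers": {"officer"}}
user_groups = {"wang": {"registry"}, "li": {"registry", "approvers"}}

def permissions(user):
    perms = set()
    for group in user_groups.get(user, ()):
        for role in group_roles.get(group, ()):
            perms |= role_perms.get(role, set())
    return perms

def check(user, perm):
    return perm in permissions(user)

print(check("wang", "sign_doc"))   # False: registry group grants read only
print(check("li", "sign_doc"))     # True via the approvers group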

Keywords: Internet; authorisation; public key cryptography; Internet; PKI principles; RBAC model; role-based access control; SSL protocol; authorization management system; centralized authority; certificate authority system; client-side digital certificate; data secure transmission; distributed authority management; group authority; identity authentication; public key infrastructure; server-side digital certificate; Abstracts; Authorization; Electronic government; Internet; Aspect-oriented programming; Digital certificate; E-Government; MVC model; PKI; RBAC   (ID#:15-4060)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7009100&isnumber=7009076

 

Mercy, S.S.; Srikanth, G.U., "An Efficient Data Security System For Group Data Sharing In Cloud System Environment," Information Communication and Embedded Systems (ICICES), 2014 International Conference on, pp. 1, 4, 27-28 Feb. 2014. doi: 10.1109/ICICES.2014.7033956

Abstract: Cloud computing delivers services to users over a reliable internet connection. In the secure cloud, services are stored and shared by multiple users because of lower cost and easier data maintenance. Sharing data is the vital intention of cloud data centres; on the other hand, storing sensitive information raises the privacy concerns of the cloud. The cloud service provider has to protect the stored clients' documents and applications in the cloud by encrypting the data to provide data integrity. Designing proficient document sharing among group members in the cloud is a difficult task because of group user membership changes and the need to conserve document and group user identity confidentiality. To provide a fortified data sharing scheme in a secret manner with efficient group revocation, the Advanced Encryption Standard scheme is used. The proposed system contributes efficient group authorization, authentication, confidentiality, access control and document security. To provide more data security, the Advanced Encryption Standard algorithm is used to encrypt the document. By asserting security and confidentiality, this proficient method securely shares documents among multiple cloud users.
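
A minimal sketch of AES-based group sharing with revocation by rekeying, using AESGCM from the Python "cryptography" package: documents are encrypted under a group key that is wrapped per member and rotated when a member is revoked. Member names and key handling are assumptions; re-encrypting existing documents under the new key is omitted for brevity.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

members = {"alice": os.urandom(32), "bob": os.urandom(32)}   # per-member keys
group_key = os.urandom(32)

def wrap_for(member_key, key):
    # Wrap the group key for one member (nonce prepended to ciphertext).
    nonce = os.urandom(12)
    return nonce + AESGCM(member_key).encrypt(nonce, key, b"group-key")

def encrypt_doc(doc):
    nonce = os.urandom(12)
    return nonce + AESGCM(group_key).encrypt(nonce, doc, None)

envelopes = {m: wrap_for(k, group_key) for m, k in members.items()}
blob = encrypt_doc(b"quarterly report")

# Revocation: drop the member, rotate the group key, re-wrap for the rest.
del members["bob"]
group_key = os.urandom(32)
envelopes = {m: wrap_for(k, group_key) for m, k in members.items()}
print(sorted(envelopes))   # ['alice']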

Keywords: authorisation; cloud computing; cryptography; data privacy; document handling; software maintenance; software reliability; Internet connection reliability; access control; authentication; authorization; cloud computing; cloud data centres; cloud system environment; confidentiality; data encryption; data security advanced encryption standard algorithm; document conservation; document security; efficient data security system; group data sharing; group revocation advanced encryption standard scheme; group user identity confidentiality; group user membership change; privacy concern; proficient document sharing; sensitive information storage; Authorization; Cloud computing; Encryption; Servers; Cloud Computing; Document Sharing; Dynamic Group; Group Authorization   (ID#:15-4061)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7033956&isnumber=7033740
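
The abstract names AES as the document cipher. A minimal sketch of AES-protected document sharing under a group key follows, assuming the third-party Python "cryptography" package; key distribution and the revocation workflow the paper focuses on are out of scope here.

```python
# Minimal AES-GCM sketch for group document sharing. On a membership
# change, a real system would regenerate group_key and re-encrypt.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

group_key = AESGCM.generate_key(bit_length=256)  # shared by current members

def encrypt_document(doc: bytes, group_id: bytes) -> bytes:
    nonce = os.urandom(12)                       # unique per document
    ct = AESGCM(group_key).encrypt(nonce, doc, group_id)
    return nonce + ct                            # store nonce alongside

def decrypt_document(blob: bytes, group_id: bytes) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(group_key).decrypt(nonce, ct, group_id)

blob = encrypt_document(b"quarterly report", b"group-42")
assert decrypt_document(blob, b"group-42") == b"quarterly report"
```

Binding the group identifier in as associated data means a share encrypted for one group cannot be silently replayed to another, one simple way to support the paper's confidentiality goal.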

 

Pawlowski, M.P.; Jara, A.J.; Ogorzalek, M.J., "Extending Extensible Authentication Protocol over IEEE 802.15.4 Networks," Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2014 Eighth International Conference on, pp. 340, 345, 2-4 July 2014. doi: 10.1109/IMIS.2014.93

Abstract: The Internet of Things is extending the Internet into our physical world and making it present everywhere. This evolution also raises challenges in issues such as privacy and security. For that reason, this work focuses on the integration and lightweight adaptation of existing authentication protocols that are also able to offer authorization and access control functionalities. In particular, it focuses on the Extensible Authentication Protocol (EAP), a widely used protocol for access control in wireless (802.11) and wired (802.3) local area networks. This work presents an integration of EAP frames into IEEE 802.15.4 frames, demonstrating that the EAP protocol and some of its mechanisms are feasible for constrained devices, such as the devices populating IoT networks.

Keywords: Internet; Zigbee; authorisation; computer network security; cryptographic protocols; wireless LAN; EAP; IEEE 802.15.4 networks; Internet; IoT networks; access control functionality; authorization; extensible authentication protocol; local area networks; Authentication; IEEE 802.15 Standards; Internet; Payloads; Protocols; Servers; 802.1X; Authentication; EAP; IEEE 802.15.4; Internet of Things; Security   (ID#:15-4062)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975486&isnumber=6975399
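
The core difficulty the paper addresses is fitting EAP packets into the small 802.15.4 payload. The sketch below packs a standard EAP header (RFC 3748) and fragments it; the one-byte fragment header is our own illustrative assumption, not the authors' encapsulation format.

```python
# Hedged sketch: packing an EAP packet into IEEE 802.15.4-sized payloads.
import struct

EAP_REQUEST, EAP_IDENTITY = 1, 1
MAX_15_4_PAYLOAD = 102           # conservative room left after MAC headers

def build_eap(code, identifier, eap_type, data: bytes) -> bytes:
    length = 5 + len(data)       # Code, Id, Length(2), Type + payload
    return struct.pack("!BBHB", code, identifier, length, eap_type) + data

def fragment(eap: bytes):
    # One-byte hypothetical fragment header: more-fragments flag + index.
    chunks = [eap[i:i + MAX_15_4_PAYLOAD - 1]
              for i in range(0, len(eap), MAX_15_4_PAYLOAD - 1)]
    for i, chunk in enumerate(chunks):
        more = 1 if i < len(chunks) - 1 else 0
        yield struct.pack("!B", (more << 7) | i) + chunk

pkt = build_eap(EAP_REQUEST, 1, EAP_IDENTITY, b"")
frames = list(fragment(pkt))
```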

 

Chakaravarthi, S.; Selvamani, K.; Kanimozhi, S.; Arya, P.K., "An Intelligent Agent Based Privacy Preserving Model For Web Service Security," Electrical and Computer Engineering (CCECE), 2014 IEEE 27th Canadian Conference on, pp. 1, 5, 4-7 May 2014. doi: 10.1109/CCECE.2014.6901164

Abstract: Web Services (WS) play an important role in today's world by providing effective services for humans, and these web services are built with the SOAP, WSDL, and UDDI standards. This technology enables various service providers to register and offer their services over the internet through pre-established networks. Access to these services needs to be secured and protected from various types of attacks in the network environment. Exchanging data between two applications over a secure channel is a challenging issue in today's communication world. Traditional security mechanisms such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), and Internet Protocol Security (IPSec) resolve this problem only partially; hence this research paper proposes a privacy-preserving protocol, named HTTPI, to secure the communication more efficiently. The HTTPI protocol satisfies QoS requirements such as authentication, authorization, integrity, and confidentiality at various levels of the OSI layers. This work also ensures QoS covering non-functional characteristics such as performance (throughput), response time, security, reliability, and capacity. The proposed intelligent-agent-based model results in excellent throughput and good response time while meeting the QoS requirements.

Keywords: Web services; data privacy; electronic data interchange; multi-agent systems; quality of service; security of data; HTTPI protocol; IP Sec; Internet; Internet Protocol Security; OSI layers; QoS requirements; SOAP; SSL; TLS; Transport Layer Security; UDDI; WSDL; Web service security; data exchange; intelligent agent based privacy preserving model; secure channel; secured socket layer; Cryptography; Protocols; Quality of service; Simple object access protocol; XML; intelligent agent; privacy preserving; quality of services; uddi; web services   (ID#:15-4063)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901164&isnumber=6900900


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Authentication and Authorization (2014 Year in Review) Part 2

 

 
SoS Newsletter Logo

Authentication & Authorization
(2014 Year in Review)
Part 2

 

Authorization and authentication are cornerstones of computer security. As systems become larger, faster and more complex, authorization and authentication methods and protocols are proving to have limits and challenges. The research cited here explores new methods and techniques for improving security in cloud environments, efficient cryptographic computations, and exascale storage systems.  The work presented here was published in 2014.

 

Miao Yingkai; Chen Jia, "A Kind of Identity Authentication under Cloud Computing Environment," Intelligent Computation Technology and Automation (ICICTA), 2014 7th International Conference on, pp. 12, 15, 25-26 Oct. 2014. doi: 10.1109/ICICTA.2014.10

Abstract: An identity authentication scheme is proposed that combines biometric encryption, homomorphic public key cryptography, and predicate encryption technology for the cloud computing environment. The scheme is based on voice and homomorphism technology and is divided into four stages: a registration and training-template stage, a voice login and authentication stage, an authorization stage, and an audit stage. The results show that the scheme has certain advantages in all four aspects.

Keywords: authorisation; cloud computing; public key cryptography; audit stage; authorization stage; biometric encryption; cloud computing environment; encryption technology; homomorphism technology; identity authentication scheme; public key cryptography; register and training template stage; voice login and authentication stage; voice technology; Authentication; Cloud computing; Encryption; Servers; Spectrogram; Training; cloud computing; homomorphism; identity authentication   (ID#:15-4064)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7003473&isnumber=7003462

 

Jen Ho Yang; Pei Yu Lin, "An ID-Based User Authentication Scheme for Cloud Computing," Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), 2014 Tenth International Conference on, pp. 98, 101, 27-29 Aug. 2014. doi: 10.1109/IIH-MSP.2014.31

Abstract: In cloud computing environments, the user authentication scheme is an important security tool because it provides authentication, authorization, and accounting for cloud users. Many user authentication schemes for cloud computing have therefore been proposed in recent years. However, we find that most of the previous schemes have security problems, and some cannot be practically implemented in cloud computing. To solve these problems, we propose a new ID-based user authentication scheme for cloud computing. Compared with the related works, the proposed scheme has higher security levels and lower computation costs, and it can be easily applied to cloud computing environments. The proposed scheme is therefore more efficient and practical than the related works.

Keywords: authorisation; cloud computing; ID-based user authentication scheme; authorization; cloud computing environments; Authentication; Cloud computing; Cryptography; Law; Nickel; Servers; ID-based scheme; anonymity; cloud computing; cryptography; mobile devices; user authentication   (ID#:15-4065)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6998277&isnumber=6998244

 

Singh, S.; Sharma, S., "Improving Security Mechanism To Access HDFS Data By Mobile Consumers Using Middleware-Layer Framework," Computing, Communication and Networking Technologies (ICCCNT), 2014 International Conference on, pp. 1, 7, 11-13 July 2014. doi: 10.1109/ICCCNT.2014.6963051

Abstract: Revolution in the field of technology has led to the development of cloud computing, which delivers on-demand, easy access to large shared pools of online stored data, software, and applications. It has changed the way IT resources are utilized, but at the cost of security breaches such as phishing attacks, impersonation, and loss of confidentiality and integrity. This research work therefore deals with the core problem of providing absolute security to the mobile consumers of a public cloud, improving user mobility by letting users access data stored on the public cloud securely using tokens, without depending on a third party to generate them. This paper presents an approach that simplifies authenticating and authorizing mobile users by implementing a middleware-centric framework, called the MiLAMob model, in front of the large online data storage system HDFS. It allows consumers to access HDFS data via mobile devices or through social networking sites such as Facebook, Gmail, and Yahoo using the OAuth 2.0 protocol. For authentication, tokens are generated using a one-time password generation technique and then encrypted using AES. By implementing flexible user-based policies and standards, the model improves the authorization process.

Keywords: authorisation; cloud computing; cryptography; information retrieval; middleware; mobile computing; protocols; social networking (online); storage management; AES method; Facebook; Gmail; HDFS data access; IT resources; MiLAMob model; OAuth 2.0 protocol; Yahoo; authorization process; cloud computing; encryption; flexible user based policies; middleware-centric framework; middleware-layer framework; mobile consumers; one-time password generation technique; online data storage system; online stored data; public cloud; security mechanism; social networking sites; tokens; Authentication; Cloud computing; Data models; Mobile communication; Permission; Social network services; Authentication; Authorization; Computing; HDFS; MiLAMob; OAuth 2.0;Security; Token   (ID#:15-4066)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6963051&isnumber=6962988
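
For readers unfamiliar with the OAuth 2.0 flow the abstract relies on, here is a minimal sketch of a client exchanging an authorization code for a bearer token and using it against a middleware endpoint. The URLs, scopes, and parameter values are hypothetical placeholders, not MiLAMob's actual interface; the HTTP calls use the third-party "requests" package.

```python
# Hedged sketch of the OAuth 2.0 authorization-code token exchange.
import requests

TOKEN_URL = "https://auth.example.org/oauth2/token"      # hypothetical
HDFS_PROXY = "https://middleware.example.org/hdfs/file"  # hypothetical

def fetch_token(client_id, client_secret, code, redirect_uri):
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["access_token"]

def read_file(token, path):
    # Bearer token authorizes the mobile consumer at the middleware layer.
    resp = requests.get(HDFS_PROXY, params={"path": path},
                        headers={"Authorization": f"Bearer {token}"},
                        timeout=10)
    resp.raise_for_status()
    return resp.content
```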

 

Kun-Lin Tsai; Jiu-Soon Tan; Fang-Yie Leu; Yi-Li Huang, "A Group File Encryption Method using Dynamic System Environment Key," Network-Based Information Systems (NBiS), 2014 17th International Conference on, pp. 476, 483, 10-12 Sept. 2014. doi: 10.1109/NBiS.2014.22

Abstract: File encryption is an effective way for an enterprise to prevent its data from being lost. However, data may still be deliberately or inadvertently leaked by insiders or customers, and when sensitive data are leaked, the result is often huge monetary damage and loss of credibility. In this paper, we propose a novel group file encryption/decryption method, named the Group File Encryption Method using Dynamic System Environment Key (GEMS for short), which provides users with auto-crypt, authentication, authorization, and auditing security schemes by utilizing a group key and a system environment key. In GEMS, the important parameters are hidden and stored in different devices to prevent them from being cracked easily. Besides, it can resist known-key and eavesdropping attacks to achieve a very high security level, which is practically useful in securing an enterprise's or a government's private data.

Keywords: authorisation; business data processing; cryptography; file organisation; message authentication; GEMS; auditing security scheme; authentication; authorization; autocrypt; decryption method; dynamic system environment key; eavesdropping attack; group file encryption; security level; Authentication; Cloud computing; Computers; Encryption; Servers; DRM; group file encryption; security; system environment key   (ID#:15-4067)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023997&isnumber=7023898
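
The abstract's central idea, combining a group key with a key derived from the system environment, can be sketched with a standard key-derivation function. The environment attributes below (hostname, MAC-derived node id) are our illustrative assumptions; the paper hides several parameters across different devices. Assumes the third-party "cryptography" package.

```python
# Hedged sketch: derive a file key from group key + system environment key.
import hashlib, os, platform, uuid
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def environment_key() -> bytes:
    # Illustrative binding to machine identity; a file copied to another
    # host derives a different key and fails to decrypt.
    fingerprint = f"{platform.node()}|{uuid.getnode()}"
    return hashlib.sha256(fingerprint.encode()).digest()

def file_key(group_key: bytes, salt: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=salt,
                info=b"GEMS-style-file-key").derive(group_key + environment_key())

key = file_key(os.urandom(32), os.urandom(16))  # feeds an AES file cipher
```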

 

Demchenko, Y.; Canh Ngo; de Laat, C.; Lee, C., "Federated Access Control in Heterogeneous Intercloud Environment: Basic Models and Architecture Patterns," Cloud Engineering (IC2E), 2014 IEEE International Conference on, pp. 439, 445, 11-14 March 2014. doi: 10.1109/IC2E.2014.84

Abstract: This paper presents on-going research to define the basic models and architecture patterns for federated access control in heterogeneous (multi-provider) multi-cloud and inter-cloud environment. The proposed research contributes to the further definition of Intercloud Federation Framework (ICFF) which is a part of the general Intercloud Architecture Framework (ICAF) proposed by authors in earlier works. ICFF attempts to address the interoperability and integration issues in provisioning on-demand multi-provider multi-domain heterogeneous cloud infrastructure services. The paper describes the major inter-cloud federation scenarios that in general involve two types of federations: customer-side federation that includes federation between cloud based services and customer campus or enterprise infrastructure, and provider-side federation that is created by a group of cloud providers to outsource or broker their resources when provisioning services to customers. The proposed federated access control model uses Federated Identity Management (FIDM) model that can be also supported by the trusted third party entities such as Cloud Service Broker (CSB) and/or trust broker to establish dynamic trust relations between entities without previously existing trust. The research analyses different federated identity management scenarios, defines the basic architecture patterns and the main components of the distributed federated multi-domain Authentication and Authorisation infrastructure.

Keywords: authorisation; cloud computing; operating systems (computers); outsourcing; software architecture; trusted computing; CSB; FIDM model; ICAF; ICFF; architecture patterns; authorisation infrastructure; cloud based services; cloud service broker; customer campus; customer-side federation; distributed federated multidomain authentication; dynamic trust relations; enterprise infrastructure; federated access control model; federated identity management model; federated identity management scenarios; heterogeneous intercloud environment; heterogeneous multiprovider intercloud environment; heterogeneous multiprovider multicloud environment; integration issue; intercloud architecture framework; intercloud federation framework; intercloud federation scenarios; interoperability issue; on-demand multiprovider multidomain heterogeneous cloud infrastructure services; provider-side federation; resource brokering; resource outsourcing; trusted third party entities; Authorization; Cloud computing; Computer architecture; Dynamic scheduling; Organizations; Authorisation; Cloud Security infrastructure; Federated Identity Management; Federated Intercloud Access Control Infrastructure; Intercloud Architecture Framework; Intercloud Federations Framework   (ID#:15-4068)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903508&isnumber=6903436

 

Tekeni, L.; Thomson, K.-L.; Botha, R.A., "Concerns Regarding Service Authorization By IP Address Using Eduroam," Information Security for South Africa (ISSA), 2014, pp. 1, 6, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950495

Abstract: Eduroam is a secure WLAN roaming service between academic and research institutions around the globe. It gives users from participating institutions secure Internet access at any other participating institution using their home credentials. The authentication credentials are verified by the home institution, while authorization is done by the visited institution. The user receives an IP address in the range of the visited institution and accesses the Internet through the firewall and proxy servers of the visited institution. However, access granted to services that authorize via an IP address of the visited institution may include services that are not allowed at the home institution due to legal agreements. This paper looks at typical legal agreements with service providers and explores the risks and countermeasures that need to be considered when using eduroam.

Keywords: IP networks; Internet; authorisation; firewalls; home networks; wireless LAN; IP address; authentication credentials; eduroam; firewall; home credentials; home institution; legal agreements; proxy servers; secure Internet access; secure WLAN roaming service; service authorization; visited institution; IEEE Xplore; Servers; Authorization; IP-Based; Service Level Agreement; eduroam   (ID#:15-4069)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950495&isnumber=6950479
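
The risk the paper discusses is easy to see in code: a service that authorizes purely by source IP range cannot distinguish a visited institution's own users from eduroam visitors. A minimal sketch, with an example RFC 5737 address range standing in for the visited institution:

```python
# Sketch of IP-range authorization and why it over-grants to roaming users.
from ipaddress import ip_address, ip_network

LICENSED_RANGES = [ip_network("192.0.2.0/24")]   # visited institution (example)

def authorized_by_ip(client_ip: str) -> bool:
    addr = ip_address(client_ip)
    return any(addr in net for net in LICENSED_RANGES)

# A visiting eduroam user holding a visited-institution address passes,
# even if their home institution has no licence for the service.
assert authorized_by_ip("192.0.2.77")
```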

 

van Thuan, D.; Butkus, P.; van Thanh, D., "A User Centric Identity Management for Internet of Things," IT Convergence and Security (ICITCS), 2014 International Conference on, pp. 1, 4, 28-30 Oct. 2014. doi: 10.1109/ICITCS.2014.7021724

Abstract: In the future Internet of Things, it is envisioned that things will collaborate to serve people. Unfortunately, this vision cannot be realised without relations between things and people. To solve this problem, the paper proposes a user-centric identity management system that incorporates user identity, device identity, and the relations between them. The proposed IDM system is user centric and allows device authentication and authorization based on the user identity. A typical, compelling use case of the proposed solution is also given.

Keywords: Internet of Things; authorisation; IDM system; Internet of Things; authorization; device authentication; device identity; user centric identity management; user identity; Authentication; Identity management systems; Internet of Things; Medical services; Mobile handsets   (ID#:15-4070)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7021724&isnumber=7021698

 

Matias, J.; Garay, J.; Mendiola, A.; Toledo, N.; Jacob, E., "FlowNAC: Flow-based Network Access Control," Software Defined Networks (EWSDN), 2014 Third European Workshop on, pp. 79, 84, 1-3 Sept. 2014. doi: 10.1109/EWSDN.2014.39

Abstract: This paper presents FlowNAC, a Flow-based Network Access Control solution that grants users the rights to access the network depending on the target service requested. Each service, defined univocally as a set of flows, can be independently requested, and multiple services can be authorized simultaneously. Building this proposal on SDN principles has several benefits: SDN adds the appropriate granularity (fine- or coarse-grained) depending on the target scenario, and the flexibility to dynamically identify services at the data plane as sets of flows and enforce the adequate policy. FlowNAC uses a modified version of IEEE 802.1X (a novel EAPoL-in-EAPoL encapsulation) to authenticate users (without the need for a captive portal) and service-level access control based on proactive deployment of flows (instead of reactive). An explicit service request avoids misidentifying the target service, as could happen when analyzing traffic (e.g. private services). The proposal is evaluated in a challenging scenario (concurrent authentication and authorization processes) with promising results.

Keywords: authorisation; computer network security; cryptographic protocols; EAPoL-in-EAPoL encapsulation; FlowNAC; IEEE 802.1X; authentication; authorization; flow-based network access control; Authentication; Authorization; Ports (Computers); Protocols; Servers; Standards; Network Access Control; Security; Software Defined Networking   (ID#:15-4071)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984056&isnumber=6984033
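
FlowNAC's key abstraction, a service defined univocally as a set of flows that are proactively installed on authorization, can be sketched as follows. The install_flow() call is a placeholder for a real SDN controller API, and the match fields are illustrative assumptions.

```python
# Hedged sketch: services as flow sets, authorized by proactive flow install.

WEB_SERVICE = [  # each tuple: (proto, dst_ip, dst_port)
    ("tcp", "203.0.113.10", 80),
    ("tcp", "203.0.113.10", 443),
]

def install_flow(dpid, match, action):       # stand-in for a controller call
    print(f"switch {dpid}: {match} -> {action}")

def authorize(dpid, user_mac, service):
    # Proactive deployment: push one permit rule per flow in the service,
    # so no packet of an unauthorized flow ever reaches the controller.
    for proto, dst_ip, dst_port in service:
        match = {"eth_src": user_mac, "ip_proto": proto,
                 "ipv4_dst": dst_ip, "tp_dst": dst_port}
        install_flow(dpid, match, "output:uplink")

authorize("s1", "00:11:22:33:44:55", WEB_SERVICE)
```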

 

Gopejenko, V.; Bobrovskis, S., "Robust Security Network Association Adjusted Hybrid Authentication Schema," Application of Information and Communication Technologies (AICT), 2014 IEEE 8th International Conference on, pp. 1, 5, 15-17 Oct. 2014. doi: 10.1109/ICAICT.2014.7035907

Abstract: Wireless networks, whether ad hoc or enterprise-level, are vulnerable due to the open medium and to typically weak authentication, authorization, encryption, monitoring, and accounting mechanisms. Various wireless vulnerability situations are examined, as well as the minimal features required to protect, monitor, account, authenticate, and authorize nodes, users, and computers on the network. Aspects of several IEEE security standards, both ratified and still in draft, are also described.

Keywords: IEEE standards; authorisation; cryptography; message authentication; radio networks; telecommunication security; IEEE security standard; accounting mechanism; authorization; encryption; hybrid authentication schema; monitoring mechanism; robust security network association; weak authentication; wireless network; wireless vulnerability situation; Authentication; Communication system security; Cryptography; Robustness; Servers; Wireless communication; 802.11 standards; 802.1X framework; Authentication; Encryption; Extensible Authentication Protocol; Network Access Protection; Robust Secure Network; Wired Equivalent Privacy; Wireless Intrusion Detection System; Wireless Intrusion Prevention System   (ID#:15-4072)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7035907&isnumber=7035893

 

Sindhu, S.M.; Kanchana, R., "Security Solutions For Web Service Attacks In A Dynamic Composition Scenario," Advanced Communication Control and Computing Technologies (ICACCCT), 2014 International Conference on, pp. 624, 628, 8-10 May 2014. doi: 10.1109/ICACCCT.2014.7019163

Abstract: Web Services can be invoked from anywhere through the internet without much knowledge of the implementation details. In some cases a single service cannot accomplish the user's needs, and one or more services must be composed to satisfy them together. Security is therefore the most important concern, not only at the single-service level but also at the composition level. Several attacks are possible on the SOAP messages communicated among Web Services because of their standardized interfaces; examples include oversize payload, SOAPAction spoofing, XML injection, and WS-Addressing spoofing. Most existing works provide solutions ensuring basic Web Service security features such as confidentiality, integrity, authentication, authorization, and non-repudiation. Very few provide solutions such as schema validation and schema hardening, and those do not provide attack-specific solutions for the SOAP messages exchanged between Web Services. Hence, it is proposed to provide solutions for two of the prevailing Web Service attacks. Since new types of Web Service attacks evolve over time, the proposed security solutions are implemented as APIs that are pluggable in any server where the Web Service is deployed.

Keywords: Web services; application program interfaces; authorisation; data integrity; protocols; service-oriented architecture; API; Internet; SOA; SOAP messages; SOAPAction spoofing; WS-Addressing spoofing; Web service attacks; XML injection; authentication; authorization; confidentiality; dynamic composition scenario; integrity; nonrepudiation; schema hardening; schema validation; security solutions; service oriented architecture; simple object access protocol; Electronic publishing; Information services; Lead; Security; Simple object access protocol; Standards; SAS API; SOAP; UDDI; WSAS API; WSDL; Web Services   (ID#:15-4073)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019163&isnumber=7019129
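
One attack-specific check of the kind the paper argues for can be sketched directly: reject a request whose HTTP SOAPAction header disagrees with the first operation element in the SOAP body (SOAPAction spoofing), and cap the body size (oversize payload). The size limit and the header-matching heuristic are our assumptions, not the paper's API.

```python
# Hedged sketch of SOAPAction-spoofing and oversize-payload checks.
import xml.etree.ElementTree as ET

SOAP_NS = "{http://schemas.xmlsoap.org/soap/envelope/}"
MAX_BODY = 64 * 1024          # crude oversize-payload guard (assumption)

def validate_soap(raw: bytes, soap_action: str) -> bool:
    if len(raw) > MAX_BODY:                       # oversize payload
        return False
    body = ET.fromstring(raw).find(f"{SOAP_NS}Body")
    if body is None or len(body) == 0:
        return False
    operation = body[0].tag.split("}")[-1]        # local name of first child
    # SOAPAction must name the operation actually carried in the body.
    return soap_action.strip('"').endswith(operation)
```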

 

Albino Pereira, A.; Bosco M. Sobral, J.; Merkle Westphall, C., "Towards Scalability for Federated Identity Systems for Cloud-Based Environments," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp. 1, 5, March 30, 2014-April 2, 2014. doi: 10.1109/NTMS.2014.6814055

Abstract: As multi-tenant authorization and federated identity management systems for cloud computing mature, provisioning services using this paradigm enables maximum efficiency for businesses that require access control. However, regarding scalability support, mainly horizontal, some characteristics of those approaches based on central authentication protocols are problematic. The objective of this work is to address these issues by providing an adapted sticky-session mechanism for a Shibboleth architecture using CAS. Compared with the recommended shared-memory approach, this alternative showed improved efficiency and lower overall infrastructure complexity.

Keywords: authorisation; cloud computing; cryptographic protocols; CAS; Shibboleth architecture; central authentication protocols; central authentication service; cloud based environments; cloud computing; federated identity management systems; federated identity system scalability; multitenant authorization; sticky session mechanism; Authentication; Cloud computing; Proposals; Scalability; Servers; Virtual machining   (ID#:15-4074)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814055&isnumber=6813963

 

Yi-Hui Chen; Chi-Shiang Chan; Po-Yu Hsu; Wei-Lin Huang, "Tagged Visual Cryptography With Access Control," Multimedia and Expo Workshops (ICMEW), 2014 IEEE International Conference on, pp. 1, 5, 14-18 July 2014. doi: 10.1109/ICMEW.2014.6890648

Abstract: Visual cryptography is a way to encrypt a secret image into several meaningless share images; no information can be obtained unless all of the shares are collected, and stacking the share images retrieves the secret image. Because the share images are meaningless to their owners, they are difficult to manage. Tagged visual cryptography is a technique for printing a pattern onto the meaningless share images, after which users can easily manage their own shares according to the printed pattern. Access control, another popular topic, allows a user or a group to see only their own authorizations. In this paper, a self-authentication mechanism with lossless construction ability for an image secret sharing scheme is proposed. Experiments provide positive data showing the feasibility of the proposed scheme.

Keywords: authorisation; cryptography; image coding; message authentication; access control; authorization; image secret sharing scheme; lossless construction ability; meaningless share images; printed pattern; secret image; self-authentication mechanism; tagged visual cryptography; Authentication; Encryption; Equations; Pattern recognition; Stacking; Visualization; Visual cryptography; access control; secret sharing; tagged visual cryptography   (ID#:15-4075)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890648&isnumber=6890528
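
For readers new to visual cryptography, the classic (2,2) construction underlying schemes like this can be shown in a few lines: each secret pixel expands into subpixel pairs, identical pairs across the two shares reconstruct white, complementary pairs stack to all-black. This is the textbook construction, not the paper's tagged variant.

```python
# Minimal (2,2) visual cryptography sketch over one image row.
import random

PATTERNS = [(0, 1), (1, 0)]          # the two subpixel patterns (0 = black)

def make_shares(secret_row):
    s1, s2 = [], []
    for pixel in secret_row:         # 1 = white, 0 = black
        p = random.choice(PATTERNS)
        s1.extend(p)
        s2.extend(p if pixel else (p[1], p[0]))   # complement encodes black
    return s1, s2

def stack(a, b):                     # physical overlay: black dominates
    return [x & y for x, y in zip(a, b)]

s1, s2 = make_shares([1, 0, 1])
print(stack(s1, s2))   # white pixels keep one white subpixel; black go dark
```

Each share on its own is a uniformly random pattern, which is why a single share leaks nothing, and also why untagged shares are hard for owners to tell apart.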

 

Raut, R.D.; Kulkarni, S.; Gharat, N.N., "Biometric Authentication Using Kekre's Wavelet Transform," Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference on, pp. 99, 104, 9-11 Jan. 2014. doi: 10.1109/ICESC.2014.22

Abstract: This paper proposes an enhanced method for personal authentication based on the finger-knuckle print (FKP) using Kekre's wavelet transform (KWT). The FKP comprises the inherent skin patterns of the outer surface around the phalangeal joint of a finger; it is highly discriminable and unique, which makes it a promising emerging biometric identifier. Kekre's wavelet transform is constructed from Kekre's transform. The proposed system is evaluated on a prepared FKP database of 500 samples covering all FKP categories. The paper examines different image enhancement techniques for pre-processing the captured images. The proposed algorithm is tested on 350 training and 150 testing samples, showing that the quality of the database and the pre-processing techniques play an important role in recognizing individuals. The experiments report performance parameters such as the false acceptance rate (FAR), false rejection rate (FRR), true acceptance rate (TAR), and true rejection rate (TRR). The results demonstrate an improvement in the equal error rate (EER), which is very important for authentication, and show that finger knuckle recognition using Kekre's algorithm together with image enhancement outperforms the conventional method.

Keywords: authorisation; biometrics (access control); image enhancement; image recognition; skin; wavelet transforms; EER; FAR; FKP database; FRR; KWT; Kekre wavelet transform; TAR; TRR; biometric authentication; error equal rate; false acceptance rate; false rejection rate; finger knuckle print; finger knuckle recognition rate; image enhancement; personal authentication; phalangeal joint; true acceptance rate; true rejection rate; Authentication; Databases; Feature extraction; Thumb; Wavelet transforms; Biometric; EER; Finger knuckle print; Kekre's Transform; Kekre's wavelet Transform   (ID#:15-4076)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6745354&isnumber=6745317
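
The performance parameters the paper reports are computed from genuine and impostor matching scores at a decision threshold; the EER is where FAR and FRR cross. The sketch below uses made-up placeholder scores, not the paper's data.

```python
# Hedged sketch of FAR/FRR computation over matching scores.
def far_frr(genuine, impostor, threshold):
    # Policy: accept when score >= threshold.
    frr = sum(s < threshold for s in genuine) / len(genuine)    # false reject
    far = sum(s >= threshold for s in impostor) / len(impostor) # false accept
    return far, frr

genuine  = [0.91, 0.85, 0.78, 0.88, 0.60]   # placeholder scores
impostor = [0.30, 0.55, 0.42, 0.71, 0.25]

for t in (0.5, 0.6, 0.7):
    far, frr = far_frr(genuine, impostor, t)
    print(f"t={t:.1f}  FAR={far:.2f}  FRR={frr:.2f}")
# The EER is the common rate at the threshold where FAR and FRR are equal.
```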

 

Izu, T.; Sakemi, Y.; Takenaka, M.; Torii, N., "A Spoofing Attack against a Cancelable Biometric Authentication Scheme," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp. 234, 239, 13-16 May 2014. doi: 10.1109/AINA.2014.33

Abstract: ID/password-based authentication is commonly used in network services. Some users set different ID/password pairs for different services, but others reuse one ID/password pair across services. Such recycling allows the list attack, in which an adversary tries to spoof a target user by using a list of IDs and passwords obtained from another system by some means (an insider attack, malware, or even a DB leakage). As a countermeasure against the list attack, biometric authentication is attracting more attention than before. In 2012, Hattori et al. proposed a cancelable biometric authentication scheme (the fundamental scheme) based on homomorphic encryption algorithms. In the scheme, registered biometric information (the template) and the biometric information to compare are encrypted, and the similarity between them is computed while kept encrypted. Only the privileged entity (a decryption centre), which has the corresponding decryption key, can obtain the similarity by decrypting the encrypted similarity and judge whether they are the same or not. Hirano et al. then showed a replay attack against this scheme and proposed two enhanced authentication schemes. In this paper, we propose a spoofing attack against the fundamental scheme when the feature vector, obtained by digitalizing the analogue biometric information, is represented as a binary coding such as IrisCode and Competitive Code. The proposed attack uses an unexpected vector as input, whose distance to all possible binary vectors is constant. Since the proposed attack is independent of the replay attack, it also applies to the two revised schemes by Hirano et al. Moreover, this paper discusses possible countermeasures to the proposed spoofing attack; in fact, it proposes a countermeasure based on detecting such unexpected vectors.

Keywords: authorisation; biometrics (access control); cryptography; ID-password-based authentication; IrisCode; analogue biometric information; binary coding; biometric information; cancelable biometric authentication scheme; competitive code; decryption key; feature vector; homomorphic encryption algorithms; list attack; network services; privileged entity; registered biometric information; replay attack; spoofing attack; unexpected vector; Authentication; Encryption; Public key; Servers; Vectors   (ID#:15-4077)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838670&isnumber=6838626
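
The attack's key observation can be verified with a short worked example: the vector of all 0.5s has the same squared Euclidean distance to every n-bit binary vector, so a similarity check computed over encrypted values can be satisfied blindly. The demo below is our illustration of that observation, not the paper's code.

```python
# Worked check: constant distance from the all-0.5 probe to every codeword.
from itertools import product

n = 4
probe = [0.5] * n
for v in product((0, 1), repeat=n):
    d2 = sum((a - b) ** 2 for a, b in zip(probe, v))
    assert d2 == n * 0.25      # each coordinate contributes exactly 0.25
print("squared distance to every binary codeword:", n * 0.25)
# Countermeasure (as the paper suggests): reject inputs that are not
# well-formed binary codewords before the encrypted comparison.
```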

 

Buranasaksee, U.; Porkaew, K.; Supasitthimethee, U., "AccAuth: Accounting System for OAuth Protocol," Applications of Digital Information and Web Technologies (ICADIWT), 2014 Fifth International Conference on the, pp. 8, 13, 17-19 Feb. 2014. doi: 10.1109/ICADIWT.2014.6814698

Abstract: When a user accesses a resource, the accounting process at the server side keeps track of the resource usage so as to charge the user. In cloud computing, a user may use more than one service provider and need two independent service providers to work together. In this user-centric context, the user is the owner of the information and has the right to authorize a third-party application to access the protected resource on the user's behalf. Therefore, the user also needs to monitor the authorized resource usage granted to third-party applications. However, existing accounting protocols were proposed to monitor resource usage in terms of how the user consumes resources from the service provider. This paper proposes a user-centric accounting model, called AccAuth, which adds an accounting layer to the OAuth protocol. A prototype was implemented, and the proposed model was evaluated against the standard requirements. The results showed that AccAuth passed all the requirements.

Keywords: accounting; authorisation; cloud computing; protocols; AccAuth; OAuth protocol; accounting layer; accounting process; accounting protocols; authorized resource usage; cloud computing protected resource access; resource usage monitor; service provider; third party application; user-centric accounting model; Authentication; Authorization; Computer architecture; Context; Protocols; Servers; Standards; accounting; authorized usage; cloud computing; delegation; three-party communication protocol   (ID#:15-4078)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814698&isnumber=6814661
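
The idea of an accounting layer over OAuth-protected resources can be sketched as a wrapper that logs every access against the token that authorized it, giving the resource owner an auditable usage record. All names below are illustrative assumptions, not AccAuth's design.

```python
# Hedged sketch: per-token accounting around OAuth-protected handlers.
import time
from collections import defaultdict

usage_log = defaultdict(list)        # token -> list of (timestamp, resource)

def accounted(resource_name):
    def wrap(handler):
        def inner(token, *args, **kwargs):
            usage_log[token].append((time.time(), resource_name))
            return handler(token, *args, **kwargs)
        return inner
    return wrap

@accounted("photos.read")
def read_photos(token):
    return ["img1.jpg", "img2.jpg"]  # placeholder protected resource

read_photos("token-abc")
print(usage_log["token-abc"])        # owner-visible accounting record
```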

 

Kumari, S.; Om, H., "Remote Login Password Authentication Scheme Based on Cuboid Using Biometric," Information Technology (ICIT), 2014 International Conference on, pp. 190, 194, 22-24 Dec. 2014. doi: 10.1109/ICIT.2014.48

Abstract: In this paper, we propose a remote password authentication scheme based on 3-D geometry with a biometric value of the user. It is simple and practically useful, and a legitimate user can freely choose and change his password using a smart card that contains some information. The security of the system depends on points on the diagonal of a cuboid in a 3D environment. Using a biometric value makes the points more secure, because the characteristics of body parts cannot be copied or stolen.

Keywords: authorisation; biometrics (access control); computational geometry; smart cards; 3-D geometry; 3D environment; biometric value; cuboid diagonal; remote login password authentication scheme; smart card; system security; Authentication; Bismuth; Computers; Fingerprint recognition; Servers; Smart cards; 3-D geometry; Authentication; Biometric value; Cuboid; One way function; Password   (ID#:15-4079)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7033320&isnumber=7033273
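
The geometric core of such a scheme, verifying that a derived point lies on the main diagonal of a cuboid, is a one-line parameterization check. How the point is derived from the password and biometric value is the paper's contribution and is not reproduced here; the sketch below only illustrates the geometry.

```python
# Hedged sketch: is (x, y, z) on the main diagonal of a cuboid (a, b, c)?
def on_diagonal(point, dims, eps=1e-9):
    a, b, c = dims
    x, y, z = point
    t = x / a                        # diagonal is (t*a, t*b, t*c), 0 <= t <= 1
    return 0 <= t <= 1 and abs(y - t * b) < eps and abs(z - t * c) < eps

assert on_diagonal((2.0, 3.0, 4.0), (4.0, 6.0, 8.0))      # t = 0.5
assert not on_diagonal((2.0, 3.0, 5.0), (4.0, 6.0, 8.0))  # off the diagonal
```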

 

Liew Tze Hui; Bashier, H.K.; Lau Siong Hoe; Michael, G.K.O.; Wee Kouk Kwee, "Conceptual Framework For High-End Graphical Password," Information and Communication Technology (ICoICT), 2014 2nd International Conference on, pp. 64, 68, 28-30 May 2014. doi: 10.1109/ICoICT.2014.6914041

Abstract: User authentication depends largely on the concept of passwords. However, users find it difficult to remember alphanumerical passwords over time, and when required to choose a secure password they tend to choose one that is easy, short, and insecure. The graphical password method has been proposed as an alternative to text-based alphanumerical passwords, the rationale being that the human brain is better at recognizing and memorizing pictures than traditional alphanumerical strings. In this paper, we therefore propose a conceptual framework to better understand user performance for a new high-end graphical password method. Our proposed framework is based on a hybrid approach combining different features into one. Experimental analysis of user performance points to the effectiveness of the proposed framework.

Keywords: authorisation; graphical user interfaces; human factors; graphical password method; high-end graphical password; secure password; text-based alphanumerical passwords; user authentication; user performance experimental analysis; Authentication; Communications technology; Complexity theory; Databases; Face; Proposals; Graphical password; authentication; usability   (ID#:15-4080)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6914041&isnumber=6914027

 

Arimura, S.; Fujita, M.; Kobayashi, S.; Kani, J.; Nishigaki, M.; Shiba, A., "i/k-Contact: A Context-Aware User Authentication Using Physical Social Trust," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp. 407, 413, 23-24 July 2014. doi: 10.1109/PST.2014.6890968

Abstract: In recent years, with growing demands towards big data application, various research on context-awareness has once again become active. This paper proposes a new type of context-aware user authentication that controls the authentication level of users, using the context of “physical trust relationship” that is built between users by visual contact. In our proposal, the authentication control is carried out by two mechanisms; “i-Contact” and “k-Contact”. i-Contact is the mechanism that visually confirms the user (owner of a mobile device) using the surrounding users' eyes. The authenticity of users can be reliably assessed by the people (witnesses), even when the user exhibits ambiguous behavior. k-Contact is the mechanism that dynamically changes the authentication level of each user using the context information collected through i-Contact. Once a user is authenticated by eyewitness reports, the user is no longer prompted for a password to unlock his/her mobile device and/or to access confidential resources. Thus, by leveraging the proposed authentication system, the usability for only trusted users can be securely enhanced. At the same time, our proposal anticipates the promotion of physical social communication as face-to-face communication between users is triggered by the proposed authentication system.

Keywords: authorisation; trusted computing; ubiquitous computing; Big Data application; authentication control; authentication system; context-aware user authentication; i-k Contact mechanism; physical social trust; physical trust relationship; visual contact; Authentication; Companies; Context; Mobile handsets; Servers; Visualization; Context-aware security; Mobile-device-management (MDM); Physical communication; Social trust; User authentication; Visual contact   (ID#:15-4081)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890968&isnumber=6890911

 

Jan, M.A.; Nanda, P.; Xiangjian He; Zhiyuan Tan; Ren Ping Liu, "A Robust Authentication Scheme for Observing Resources in the Internet of Things Environment," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 205, 211, 24-26 Sept. 2014. doi: 10.1109/TrustCom.2014.31

Abstract: The Internet of Things is a vision that broadens the scope of the internet by incorporating physical objects that identify themselves to the participating entities. This innovative concept enables a physical device to represent itself in the digital world. There are many speculations and forecasts about Internet of Things devices; however, most are vendor specific and lack a unified standard, which hinders their seamless integration and interoperable operation. Another major concern is the lack of security features in these devices and their corresponding products. Most of them are resource-starved and unable to support computationally complex, resource-consuming security algorithms. In this paper, we propose a lightweight mutual authentication scheme that validates the identities of the participating devices before engaging them in communication for resource observation. Our scheme incurs less connection overhead and provides a robust defence against various types of attacks.

Keywords: Internet of Things; authorisation; Internet of things environment; computationally complex algorithms; digital world; interoperable operations; participating entities; physical objects; resource consuming secure algorithms; robust authentication scheme; seamless integration; security features; Authentication; Cryptography; Internet; Payloads; Robustness; Servers; Authentication; CoAP; Conditional Option; Internet of Things (IoT); Resource Observation   (ID#:15-4082)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011252&isnumber=7011202

 

Uymatiao, M.L.T.; Yu, W.E.S., "Time-based OTP Authentication Via Secure Tunnel (TOAST): A Mobile TOTP Scheme Using TLS Seed Exchange And Encrypted Offline Keystore," Information Science and Technology (ICIST), 2014 4th IEEE International Conference on, pp. 225, 229, 26-28 April 2014. doi: 10.1109/ICIST.2014.6920371

Abstract: The main objective of this research is to build upon existing cryptographic standards and web protocols to design an alternative multi-factor authentication cryptosystem for the web. It involves seed exchange to a software-based token through a login-protected Transport Layer Security (TLS/SSL) tunnel, encrypted local storage through a password-protected keystore (BC UBER) with a strong key derivation function (PBEWithSHAANDTwofish-CBC), and offline generation of one-time passwords through the TOTP algorithm (IETF RFC 6238). Authentication occurs through the use of a shared secret (the seed) to verify the correctness of the one-time password used to authenticate. With the traditional use of username and password no longer wholly adequate for protecting online accounts, and with regulators worldwide toughening up security requirements (e.g. BSP 808, FFIEC), this research hopes to increase research effort on the further development of cryptosystems involving multi-factor authentication.

Keywords: authorisation; cryptography; BC UBER keystore; IETF RFC 6239 standard; PBEWithSHAANDTwofish-CBC function; TLS seed exchange; TOAST scheme; TOTP algorithm; Web protocols; cryptographic standards; cryptosystems development; encrypted offline keystore; mobile TOTP scheme; multifactor authentication; multifactor authentication cryptosystem; one-time password; password-protected keystore; secure tunnel; security requirements; software-based token; strong key derivation function; time-based OTP authentication; transport layer security; Authentication; Cryptography; Google; Mobile communication; Radiation detectors; Servers   (ID#:15-4083)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6920371&isnumber=6920317
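
The offline generation step the scheme builds on is the standard TOTP algorithm (RFC 6238, whose truncation step comes from HOTP, RFC 4226). A minimal stdlib implementation, checked against the published RFC 6238 test vector; the seed exchange over TLS and the encrypted keystore are out of scope here.

```python
# Minimal RFC 6238 TOTP sketch (HMAC-SHA1 variant).
import hmac, hashlib, struct, time

def totp(secret: bytes, period=30, digits=6, t=None):
    counter = int((time.time() if t is None else t) // period)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII seed "12345678901234567890", T = 59 s
assert totp(b"12345678901234567890", digits=8, t=59) == "94287082"
```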

 

Min Li; Xin Lv; Wei Song; Wenhuan Zhou; Rongzhi Qi; Huaizhi Su, "A Novel Identity Authentication Scheme of Wireless Mesh Network Based on Improved Kerberos Protocol," Distributed Computing and Applications to Business, Engineering and Science (DCABES), 2014 13th International Symposium on, pp. 190, 194, 24-27 Nov. 2014. doi: 10.1109/DCABES.2014.41

Abstract: The traditional Kerberos protocol has limitations in achieving clock synchronization and key storage, and it is vulnerable to password-guessing attacks and attacks carried out by malicious software. In this paper, a new authentication scheme is proposed for wireless mesh networks. By utilizing public key encryption techniques, the security of the proposed scheme is enhanced. Besides, the timestamp in the traditional protocol is replaced by random numbers to reduce implementation cost. The analysis shows that the improved authentication protocol is well suited to wireless mesh networks and can make identity authentication more secure and efficient.

Keywords: cryptographic protocols; public key cryptography; synchronisation; wireless mesh networks; authentication protocol; clock synchronization; identity authentication scheme; improved Kerberos protocol; malicious software; password guessing attack; public key encryption; random numbers; storing key; wireless mesh network; Authentication; Authorization; Protocols; Public key; Servers; Wireless mesh networks; Kerberos protocol; Wireless Mesh network; identity Authentication; public key encryption   (ID#:15-4084)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6999085&isnumber=6999036
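
The timestamp-to-random-number substitution the abstract describes is essentially a nonce challenge-response, which removes the clock-synchronization requirement. A minimal sketch under that reading (our framing, not the paper's exact protocol), using an HMAC to prove possession of the shared key:

```python
# Hedged sketch: nonce challenge-response in place of Kerberos timestamps.
import hmac, hashlib, os

def server_challenge() -> bytes:
    return os.urandom(16)                       # random number, not a timestamp

def client_response(shared_key: bytes, nonce: bytes) -> bytes:
    return hmac.new(shared_key, nonce, hashlib.sha256).digest()

def server_verify(shared_key, nonce, response) -> bool:
    expected = hmac.new(shared_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

k = os.urandom(32)
n = server_challenge()
assert server_verify(k, n, client_response(k, n))
# Replaying an old response fails because each login gets a fresh nonce.
```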


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Automated Response Actions (2014 Year in Review)

 

 
SoS Newsletter Logo

Automated Response Actions
(2014 Year in Review)

 

A recurring problem in cybersecurity is the need to automate systems to reduce human effort and error and to be able to react rapidly and accurately to an intrusion or insertion. The nine articles cited here describe a number of interesting approaches and a novel study using sunglass reflections to reconstruct keypad use on cellphones and other mobile devices. 

 

Zonouz, S.A.; Khurana, H.; Sanders, W.H.; Yardley, T.M., "RRE: A Game-Theoretic Intrusion Response and Recovery Engine," Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no. 2, pp. 395, 406, Feb. 2014. doi: 10.1109/TPDS.2013.211

Abstract: Preserving the availability and integrity of networked computing systems in the face of fast-spreading intrusions requires advances not only in detection algorithms, but also in automated response techniques. In this paper, we propose a new approach to automated response called the response and recovery engine (RRE). Our engine employs a game-theoretic response strategy against adversaries modeled as opponents in a two-player Stackelberg stochastic game. The RRE applies attack-response trees (ART) to analyze undesired system-level security events within host computers and their countermeasures using Boolean logic to combine lower level attack consequences. In addition, the RRE accounts for uncertainties in intrusion detection alert notifications. The RRE then chooses optimal response actions by solving a partially observable competitive Markov decision process that is automatically derived from attack-response trees. To support network-level multiobjective response selection and consider possibly conflicting network security properties, we employ fuzzy logic theory to calculate the network-level security metric values, i.e., security levels of the system's current and potentially future states in each stage of the game. In particular, inputs to the network-level game-theoretic response selection engine, are first fed into the fuzzy system that is in charge of a nonlinear inference and quantitative ranking of the possible actions using its previously defined fuzzy rule set. Consequently, the optimal network-level response actions are chosen through a game-theoretic optimization process. Experimental results show that the RRE, using Snort's alerts, can protect large networks for which attack-response trees have more than 500 nodes.

Keywords: Boolean functions; Markov processes; computer network security; decision theory; fuzzy set theory; stochastic games; trees (mathematics); ART; Boolean logic; RRE; Snort alerts; attack-response trees; automated response techniques; detection algorithms; fuzzy logic theory; fuzzy rule set; fuzzy system; game-theoretic intrusion response and recovery engine strategy; game-theoretic optimization process; intrusion detection; lower level attack consequences; network level game-theoretic response selection engine; network security property; network-level multiobjective response selection; network-level security metric values; networked computing systems; nonlinear inference; optimal network-level response actions; partially observable competitive Markov decision process; system-level security events; two-player Stackelberg stochastic game; Computers; Engines; Games; Markov processes; Security; Subspace constraints; Uncertainty; Intrusion response systems; Markov decision processes; fuzzy logic and control; network state estimation; stochastic games   (ID#:15-4009)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6583161&isnumber=6689796
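
RRE solves a partially observable competitive Markov decision process; as a toy illustration only (not the authors' algorithm), the sketch below picks the response that maximizes expected security benefit minus cost as alert confidence grows, the kind of trade-off the engine automates at far larger scale. All numbers are invented.

```python
# Toy expected-utility response selection under alert uncertainty.
responses = {                 # response -> (benefit if attack is real, cost)
    "do_nothing":   (0.0, 0.0),
    "rate_limit":   (4.0, 1.0),
    "isolate_host": (9.0, 5.0),
}

def best_response(p_attack):
    # expected utility = p * benefit - cost (cost is paid regardless)
    return max(responses,
               key=lambda r: p_attack * responses[r][0] - responses[r][1])

for p in (0.1, 0.4, 0.9):
    print(p, best_response(p))   # escalates as the alert grows more credible
```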

 

Ling-Xi Peng; Tian-Wei Chen, "Automated Intrusion Response System Algorithm with Danger Theory," Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), 2014 International Conference on, pp.31,34, 13-15 Oct. 2014. doi: 10.1109/CyberC.2014.16

Abstract: Intrusion response is a new generation of technology based on the idea of active defence, and it has prominent significance for the protection of network security. However, existing automatic intrusion response systems have difficulty judging the real "danger" of an invasion or attack. In this study, an immune-inspired adaptive automated intrusion response system model, named AIAIM, is given. With descriptions of self, non-self, memory detectors, mature detectors, and immature detectors of network transactions, real-time danger evaluation equations for the host and the network are built up. Automated response policies are then taken or adjusted according to the real-time danger and attack intensity, which not only solves the problem that current automated response models cannot detect true intrusions or attack actions, but also greatly reduces response times and response costs. Theoretical analysis and experimental results prove that AIAIM provides a positive and active network security method, which helps overcome the limitations of traditional passive network security systems.

Keywords: artificial immune systems; computer network security; adaptive automated intrusion response system; artificial immune system; danger theory; immature detector; memory detector; network security; real-time network danger evaluation equation; Communication networks; Detectors; Distributed computing; Knowledge discovery; Mathematical model; Real-time systems; Security; artificial immune; automated intrusion response system; danger evaluation   (ID#:15-4010)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984277&isnumber=6984259

 

de Oliveira Saraiva, F.; Nobuhiro Asada, E., "Multi-Agent Systems Applied To Topological Reconfiguration Of Smart Power Distribution Systems," Neural Networks (IJCNN), 2014 International Joint Conference on, pp. 2812, 2819, 6-11 July 2014. doi: 10.1109/IJCNN.2014.6889791

Abstract: One of the various features expected from a smart power distribution system - a smart grid at the power distribution level - is the possibility of fully automated operation for certain control actions. Although much anticipated, this requires various logic, sensor, and actuator technologies in a system which, historically, has a low level of automation. One of the most analyzed problems for the distribution system is topology reconfiguration. Reconfiguration has been applied to various objectives: minimization of power losses, voltage regulation, and load balancing, to name a few. The solution method in most cases is centralized, and its application is not in real time. From the new perspective of advanced distribution systems, fast and adaptive responses of the control actions are required, especially in the presence of alternative generation sources and electric vehicles. In this context, a multi-agent system that embeds the necessary control actions and decision making is proposed for topology reconfiguration aimed at loss reduction. The concept of a multi-agent system for the distribution system is proposed, and two case studies, with an 11-bus and a 16-bus system, are presented.

Keywords: decision making; multi-agent systems; power distribution control; smart power grids; 11-Bus system; 16-Bus system; alternative generation sources; control action adaptive response; decision making; electrical vehicles; load balancing; multiagent systems; power loss minimization; power loss reduction; smart grid; smart power distribution systems; topology reconfiguration; voltage regulation; Decision making; Minimization; Multi-agent systems; Power distribution; Smart grids; Substations; Topology   (ID#:15-4011)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889791&isnumber=6889358

 

Yanfei Guo; Lama, P.; Changjun Jiang; Xiaobo Zhou, "Automated and Agile Server Parameter Tuning by Coordinated Learning and Control," Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no. 4, pp. 876, 886, April 2014. doi: 10.1109/TPDS.2013.115

Abstract: Automated server parameter tuning is crucial to performance and availability of Internet applications hosted in cloud environments. It is challenging due to high dynamics and burstiness of workloads, multi-tier service architecture, and virtualized server infrastructure. In this paper, we investigate automated and agile server parameter tuning for maximizing effective throughput of multi-tier Internet applications. A recent study proposed a reinforcement learning based server parameter tuning approach for minimizing average response time of multi-tier applications. Reinforcement learning is a decision making process determining the parameter tuning direction based on trial-and-error, instead of quantitative values for agile parameter tuning. It relies on a predefined adjustment value for each tuning action. However it is nontrivial or even infeasible to find an optimal value under highly dynamic and bursty workloads. We design a neural fuzzy control based approach that combines the strengths of fast online learning and self-adaptiveness of neural networks and fuzzy control. Due to the model independence, it is robust to highly dynamic and bursty workloads. It is agile in server parameter tuning due to its quantitative control outputs. We implemented the new approach on a testbed of virtualized data center hosting RUBiS and WikiBench benchmark applications. Experimental results demonstrate that the new approach significantly outperforms the reinforcement learning based approach for both improving effective system throughput and minimizing average response time.

Keywords: Internet; control engineering computing; fault tolerant computing; fuzzy control; learning (artificial intelligence); neurocontrollers; self-adjusting systems; telecommunication computing; virtualisation; WikiBench benchmark application; agile parameter tuning; agile server parameter tuning; automated server parameter tuning; average response time; bursty workloads; cloud environments; coordinated learning and control; decision making process; effective throughput; model independence; multitier Internet applications; multitier applications; multitier service architecture; neural fuzzy control; neural networks; online learning; parameter tuning direction; predefined adjustment value; quantitative control output; reinforcement learning based server parameter tuning approach; self-adaptiveness; system throughput; trial-and-error; virtualized data center hosting RUBiS; virtualized server infrastructure; Fuzzy control; Internet; Neurons; Servers; Throughput; Time factors; Tuning; Automated server parameter tuning; autonomic computing; internet applications; neural fuzzy control   (ID#:15-4012)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6497051&isnumber=6750096

 

Shahgoshtasbi, D.; Jamshidi, M.M., "A New Intelligent Neuro–Fuzzy Paradigm for Energy-Efficient Homes," Systems Journal, IEEE, vol. 8, no. 2, pp.664, 673, June 2014. doi: 10.1109/JSYST.2013.2291943

Abstract: Demand response (DR), the action voluntarily taken by a consumer to adjust the amount or timing of its energy consumption, has an important role in improving energy efficiency. With DR, electrical load can be shifted from peak demand times to other periods based on changes in the price signal. At the residential level, automated energy management systems (EMS) have been developed to assist users in responding to price changes in dynamic pricing systems. In this paper, a new intelligent EMS (iEMS) for a smart house is presented. It consists of two parts: a fuzzy subsystem and an intelligent lookup table. The fuzzy subsystem is based on fuzzy rules and inputs that produce the proper output for the intelligent lookup table. The second part, whose core is a new model of an associative neural network, is able to map inputs to desired outputs. The structure of the associative neural network is presented and discussed. The intelligent lookup table takes three types of inputs that come from the fuzzy subsystem, outside sensors, and feedback outputs. The lookup table is trained on different scenarios under different conditions, so the system is able to find the most energy-efficient scenario in each situation.

Keywords: energy management systems; fuzzy set theory; home automation; neural nets; power engineering computing; table lookup; DR; associative neural network; automated energy management systems; demand response; energy-efficient homes; fuzzy rules; fuzzy subsystem; iEMS; intelligent EMS; intelligent lookup table; intelligent neuro-fuzzy paradigm; smart house; Energy consumption; Energy management; Home appliances; Neural networks; Neurons; Pricing; Smart grids; Demand response (DR); energy efficiency; fuzzy logic; neural networks; smart grid   (ID#:15-4013)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6705637&isnumber=6819870

 

Bande, V.; Pop, S.; Pitica, D., "Smart Diagnose Procedure For Data Acquisition Systems Inside Dams," Design and Technology in Electronic Packaging (SIITME), 2014 IEEE 20th International Symposium for, pp. 179-182, 23-26 Oct. 2014. doi: 10.1109/SIITME.2014.6967022

Abstract: This paper presents an intelligent data acquisition system for dam monitoring and diagnosis. The system is built around the RS485 communication standard and uses its own communication protocol [2]. Its aim is to monitor all signal levels on the communication bus and to detect out-of-action data loggers. The diagnostic test extracts the following functional parameters: the supply voltage, and the absolute value and common-mode value of the differential signals used in data transmission (denoted "A" and "B"). By analyzing this information, it is possible to find short-circuits or open-circuits across the communication bus. The measurement and signal-processing functions for detecting flaws are implemented in the system's central processing unit. The next testing step, finding the out-of-action data loggers, is performed by attempting to communicate with every data logger in the network. The lack of any response from a data logger is interpreted as an error, and using the code of the data logger's microcontroller, it is possible to find its exact position inside the dam infrastructure. The novelty of this procedure is that it completely automates the diagnostic process, which until now was carried out visually by checking every data logger.

Keywords: dams; data acquisition; data loggers; field buses; microcontrollers; protocols; signal processing; structural engineering computing; RS485 communication protocol standard; communication bus; dam monitoring; data acquisition system; data logger; data transmission; differential signal; microcontroller; open-circuit; short-circuit; signal processing; smart diagnose procedure; Central Processing Unit; Electronics packaging; Protocols; Temperature measurement; Testing; Transducers; Voltage measurement; dam; diagnose; protocol; sensor; system   (ID#:15-4014)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6967022&isnumber=6966980
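The bus protocol referenced as [2] is the authors' own, so the frame format, device addresses, and port name below are placeholders. The sketch, which assumes the pyserial package is installed, only illustrates the step the paper automates: poll every logger and flag the silent ones.

    # Hypothetical sketch of the "find out-of-action loggers" step: poll each
    # logger address on the RS485 bus and flag the ones that never answer.
    import serial  # pip install pyserial

    def find_dead_loggers(port="/dev/ttyUSB0", addresses=range(1, 65)):
        dead = []
        with serial.Serial(port, baudrate=9600, timeout=0.5) as bus:
            for addr in addresses:
                bus.reset_input_buffer()
                bus.write(bytes([0x02, addr, 0x03]))   # assumed query frame
                if not bus.read(8):                    # no reply within timeout
                    dead.append(addr)                  # logger is out of action
        return dead  # addresses map to physical positions inside the dam

    if __name__ == "__main__":
        print("Unresponsive loggers:", find_dead_loggers())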

 

Popli, N.; Ilic, M.D., "Storage Devices for Automated Frequency Regulation and Stabilization," PES General Meeting | Conference & Exposition, 2014 IEEE, pp. 1-5, 27-31 July 2014. doi: 10.1109/PESGM.2014.6939861

Abstract: In this paper we propose a framework for automating feedback control to balance hard-to-predict wind power variations. The power imbalance is a result of non-zero mean error around the wind power forecast. Our proposed framework is aimed at achieving the objective of frequency stabilization and regulation through one control action. A case-study for a real-world system on Flores island in Portugal is provided. Using a battery-based storage on the island, we illustrate the proposed control framework.

Keywords: battery storage plants; feedback; frequency control; frequency stability; wind power plants; Flores island; automated frequency regulation; automated frequency stabilization; battery-based storage ;feedback control; hard-to-predict wind power variations; non-zero mean error; power imbalance; storage devices; wind power forecast; Automatic generation control; Batteries; Frequency control; Generators; Jacobian matrices; Wind forecasting; Wind power generation; Automatic Generation Control (AGC); Battery; Frequency Regulation; Frequency Stabilization; Governor Response; Singular Power Flow Jacobian; Slack Bus   (ID#:15-4015)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6939861&isnumber=6938773
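As a rough illustration of a single control action serving both regulation and stabilization, the sketch below combines a term proportional to the frequency error with a term proportional to its rate of change. The gains, the 50 Hz nominal frequency, and the battery rating are invented values, not the paper's tuned Flores-island model.

    # Illustrative single control action: regulation (proportional to frequency
    # error) plus stabilization (proportional to its derivative), clipped to
    # the battery rating. All constants are assumptions.

    F_NOM = 50.0                 # assumed nominal frequency (Hz)
    K_REG, K_STAB = 40.0, 8.0    # assumed gains (kW/Hz, kW*s/Hz)
    P_MAX = 200.0                # assumed battery power limit (kW)

    def battery_command(f_meas, f_prev, dt):
        """Return battery charge(-)/discharge(+) power for one control step."""
        err = F_NOM - f_meas              # under-frequency -> positive error
        derr = (f_prev - f_meas) / dt     # falling frequency -> positive term
        p = K_REG * err + K_STAB * derr   # one combined control action
        return max(-P_MAX, min(P_MAX, p))

    # Example: frequency sagging from 49.95 Hz to 49.90 Hz over 0.5 s.
    print(battery_command(49.90, 49.95, 0.5))  # 4.8 kW discharge supports the grid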

 

Kurian, N.A.; Thomas, A.; George, B., "Automated Fault Diagnosis in Multiple Inductive Loop Detectors," India Conference (INDICON), 2014 Annual IEEE, pp. 1-5, 11-13 Dec. 2014. doi: 10.1109/INDICON.2014.7030431

Abstract: Multiple Inductive Loop Detectors are advanced inductive loop sensors that can measure traffic flow parameters even in conditions where the traffic is heterogeneous and does not conform to lanes. This sensor consists of many inductive loops in series, with each loop having a parallel capacitor across it. These inductive and capacitive elements may undergo open- or short-circuit faults during operation, and such faults lead to erroneous interpretation of the data acquired from the loops. Conventional methods for fault diagnosis in inductive loop detectors consume time and effort, as they require experienced technicians and involve extracting loops from the saw-cut slots on the road. This also means that traffic flow parameters cannot be measured until the sensor system becomes functional again, and the repair activities disturb traffic flow. This paper presents a method for automating fault diagnosis in series-connected Multiple Inductive Loop Detectors, based on an impulse test. The system helps diagnose open/short faults associated with the inductive and capacitive elements of the sensor structure by conveniently displaying the fault status. Since both the fault location and the fault type can be precisely identified with this method, the repair actions are also localised. The proposed system thereby yields significant savings in both repair time and repair costs. An embedded system was developed to realize this scheme, and it was tested on a loop prototype.

Keywords: embedded systems; fault location; inductive sensors; automated fault diagnosis; embedded system; fault location; series-connected multiple inductive loop detectors; traffic flow detectors; Circuit faults; Detectors; Fault diagnosis; Frequency response; Resonant frequency; Vehicles; Embedded System; Fault Diagnosis; Multiple Inductive Loop Detectors; Transfer Function   (ID#:15-4016)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7030431&isnumber=7030354
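The impulse-test idea reduces to simple arithmetic: each healthy loop-capacitor stage rings near its resonant frequency f = 1/(2*pi*sqrt(L*C)), so a missing or shifted resonance reveals the fault and its location. The component values and tolerance in this sketch are assumptions for illustration.

    # Sketch of the fault logic behind an impulse test: classify each stage
    # from its observed ringing frequency. Values and thresholds are assumed.
    import math

    def resonant_freq(L, C):
        return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

    def diagnose_stage(f_measured, L=1e-3, C=1e-6, tol=0.05):
        f_expected = resonant_freq(L, C)          # ~5.03 kHz for these values
        if f_measured is None:
            return "open fault (no ringing observed)"
        if abs(f_measured - f_expected) / f_expected <= tol:
            return "healthy"
        return "short/parameter fault (resonance shifted to %.0f Hz)" % f_measured

    for stage, f in enumerate([5030.0, None, 7100.0], start=1):
        print("stage %d: %s" % (stage, diagnose_stage(f)))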


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Big Data Security Issues (2014 Year in Review)

 

 
SoS Newsletter Logo

Big Data Security Issues
(2014 Year in Review)

 

Big data security is a growing area of interest for researchers. The work presented here ranges from cyber-threat detection in critical infrastructures to privacy protection. This work was presented and published in 2014.

 

Mittal, D.; Kaur, D.; Aggarwal, A., "Secure Data Mining in Cloud Using Homomorphic Encryption," Cloud Computing in Emerging Markets (CCEM), 2014 IEEE International Conference on, pp. 1-7, 15-17 Oct. 2014. doi: 10.1109/CCEM.2014.7015496

Abstract: With the advancement of technology, industry, e-commerce, and research, a large amount of complex and pervasive digital data is being generated, increasing at an exponential rate and often termed big data. Traditional data storage systems are not able to handle big data, and analyzing it is beyond the reach of traditional analytic tools. Cloud computing can resolve the problems of handling, storing, and analyzing big data, as it distributes the data within its cloudlets. While cloud computing is arguably the best available answer to the problem of big data storage and analysis, there is always a potential risk to the security of big data stored in the cloud, which needs to be addressed. Data privacy is one of the major issues when storing big data in a cloud environment. Data-mining-based attacks, a major threat to the data, allow an adversary or an unauthorized user to infer valuable and sensitive information by analyzing the results generated from computations performed on the raw data. This paper proposes a secure k-means data mining approach that assumes the data is distributed among different hosts and preserves the privacy of the data. The approach maintains the correctness and validity of the existing k-means algorithm in generating the final results, even in the distributed environment.

Keywords: Big Data; cloud computing; cryptography; data analysis; data mining; data privacy; Big Data; cloud computing; data analysis; data mining security; data privacy; data storage systems; homomorphic encryption; k-means data mining approach; Cloud computing; Data privacy; Databases; Encryption   (ID#:15-4017)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7015496&isnumber=7015466
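A minimal sketch of the kind of privacy-preserving aggregation such a scheme relies on, written with the open-source phe library (Paillier, additively homomorphic): hosts reveal only encrypted partial sums, which are combined under encryption before a single decryption. This illustrates the underlying idea, not the authors' exact protocol.

    # Privacy-preserving centroid update across hosts (illustrative sketch).
    from phe import paillier  # pip install phe

    public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

    # Each host holds private 1-D points assigned to one cluster.
    host_data = [[2.0, 4.0], [3.0, 7.0], [6.0]]

    # Hosts send only encrypted partial sums; the aggregator never sees raw data.
    enc_sum = public_key.encrypt(0.0)
    count = 0
    for points in host_data:
        enc_sum += public_key.encrypt(sum(points))  # ciphertext addition
        count += len(points)

    # Only the private-key holder learns the aggregated centroid.
    centroid = private_key.decrypt(enc_sum) / count
    print(centroid)  # (2+4+3+7+6)/5 = 4.4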

 

Miloslavskaya, N.; Senatorov, M.; Tolstoy, A.; Zapechnikov, S., "Information Security Maintenance Issues for Big Security-Related Data," Future Internet of Things and Cloud (FiCloud), 2014 International Conference on, pp. 361-366, 27-29 Aug. 2014. doi: 10.1109/FiCloud.2014.64

Abstract: The need to protect big data, particularly data relating to information security (IS) maintenance (ISM) of an enterprise's IT infrastructure, is shown. Worldwide experience of addressing big data ISM issues is briefly summarized, and a big data protection problem statement is formulated. An infrastructure for big data ISM is proposed. New application areas for big data IT, once ISM issues are addressed, are listed in the conclusion.

Keywords: Big Data; security of data; ISM; IT infrastructure; big data protection problem statement; big security-related data; information security maintenance; information security maintenance issues; Arrays; Big data; Data models; Data visualization; Distributed databases; Real-time systems; Security; big data; data visualization; information security; secure infrastructure; security-related data   (ID#:15-4018)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984220&isnumber=6984143

 

Kan Yang; Xiaohua Jia; Kui Ren; Ruitao Xie; Liusheng Huang, "Enabling Efficient Access Control With Dynamic Policy Updating For Big Data In The Cloud," INFOCOM, 2014 Proceedings IEEE, pp. 2013-2021, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6848142

Abstract: Due to the high volume and velocity of big data, storing it in the cloud is an effective option, because the cloud has the capacity to store big data and process high volumes of user access requests. Attribute-Based Encryption (ABE) is a promising technique for ensuring the end-to-end security of big data in the cloud. However, policy updating has always been a challenging issue when ABE is used to construct access control schemes. A trivial implementation is to let data owners retrieve the data, re-encrypt it under the new access policy, and then send it back to the cloud. This method incurs a high communication overhead and a heavy computation burden on data owners. In this paper, we propose a novel scheme that enables efficient access control with dynamic policy updating for big data in the cloud. We focus on developing an outsourced policy updating method for ABE systems. Our method avoids the transmission of encrypted data and minimizes the computation work of data owners by making use of data previously encrypted under old access policies. Moreover, we also design policy updating algorithms for different types of access policies. The analysis shows that our scheme is correct, complete, secure, and efficient.

Keywords: Big Data; authorisation; cloud computing; cryptography; ABE; Big Data; access control; access policy; attribute-based encryption; cloud; dynamic policy updating; end-to-end security; outsourced policy updating method; Access control; Big data; Encryption; Public key; Servers; ABE; Access Control; Big Data; Cloud; Policy Updating   (ID#:15-4019)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848142&isnumber=6847911

 

Marchal, S.; Xiuyan Jiang; State, R.; Engel, T., "A Big Data Architecture for Large Scale Security Monitoring," Big Data (BigData Congress), 2014 IEEE International Congress on, pp. 56-63, June 27 2014-July 2 2014. doi: 10.1109/BigData.Congress.2014.18

Abstract: Network traffic is a rich source of information for security monitoring. However, the increasing volume of data to process raises issues, rendering holistic analysis of network traffic difficult. In this paper we propose a solution to cope with the tremendous amount of data to analyse for security monitoring purposes. We introduce an architecture dedicated to security monitoring of local enterprise networks. The application domain of such a system is mainly network intrusion detection and prevention, but it can be used as well for forensic analysis. This architecture integrates two systems, one dedicated to scalable distributed data storage and management and the other dedicated to data exploitation. DNS data, NetFlow records, HTTP traffic, and honeypot data are mined and correlated in a distributed system that leverages state-of-the-art big data solutions. Data correlation schemes are proposed and their performance is evaluated against several well-known big data frameworks, including Hadoop and Spark.

Keywords: Big Data; computer network security; data mining; digital forensics; storage management; telecommunication traffic; transport protocols; Big Data architecture; DNS data; HTTP traffic; Hadoop; NetFlow records; Spark; data correlation schemes; data exploitation; distributed system; forensic analysis; honeypot data; large scale security monitoring; local enterprise networks; network intrusion detection; network intrusion prevention; network traffic; scalable distributed data management; scalable distributed data storage; Big data; Correlation; Distributed databases; IP networks; Monitoring; Security   (ID#:15-4020)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906761&isnumber=6906742
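One correlation such an architecture performs, tagging NetFlow records with the DNS names that resolved to their destination IPs, can be sketched in a few lines of PySpark. The schemas and sample values below are invented.

    # Toy DNS/NetFlow correlation in Spark (requires pyspark).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("security-monitoring").getOrCreate()

    netflow = spark.createDataFrame(
        [("10.0.0.5", "198.51.100.7", 443), ("10.0.0.9", "203.0.113.4", 80)],
        ["src_ip", "dst_ip", "dst_port"])
    dns = spark.createDataFrame(
        [("198.51.100.7", "update.example.com"), ("192.0.2.1", "cdn.example.org")],
        ["answer_ip", "qname"])

    # Flows whose destination never appeared in DNS answers are suspicious
    # (possible direct-to-IP command-and-control traffic).
    tagged = netflow.join(dns, netflow.dst_ip == dns.answer_ip, "left_outer")
    tagged.filter(tagged.qname.isNull()).show()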

 

Peng Li; Song Guo, "Load Balancing For Privacy-Preserving Access To Big Data In Cloud," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, pp. 524-528, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849286

Abstract: In the era of big data, many users and companies are starting to move their data to cloud storage to simplify data management and reduce data maintenance cost. However, security and privacy issues become major concerns because third-party cloud service providers are not always trustworthy. Although data contents can be protected by encryption, the access patterns, which contain important information, are still exposed to clouds or malicious attackers. In this paper, we apply the ORAM algorithm to enable privacy-preserving access to big data deployed in distributed file systems built upon hundreds or thousands of servers in a single or multiple geo-distributed cloud sites. Since the ORAM algorithm would lead to serious access load imbalance among storage servers, we study a data placement problem to achieve a load-balanced storage system with improved availability and responsiveness. Due to the NP-hardness of this problem, we propose a low-complexity algorithm that can deal with large-scale problem sizes with respect to big data. Extensive simulations show that our proposed algorithm finds results close to the optimal solution and significantly outperforms a random data placement algorithm.

Keywords: Big Data; cloud computing; computational complexity; data protection; distributed databases; file servers; information retrieval; random processes; resource allocation; storage management; Big Data; NP-hardness; ORAM algorithm; cloud storage; data availability; data content protection; data maintenance cost reduction; data management; data placement problem; data security; distributed file system; encryption; file server; geo-distributed cloud site; load balanced storage system; low-complexity algorithm; privacy preserving access; random data placement algorithm; responsiveness; storage server; Big data; Cloud computing; Conferences; Data privacy; Random access memory; Security; Servers   (ID#:15-4021)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849286&isnumber=6849127
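The load-balancing objective can be conveyed with a classic greedy heuristic: sort partitions by expected ORAM access load and repeatedly assign each to the least-loaded server. The authors' low-complexity algorithm differs in its details; this sketch only illustrates the goal.

    # Greedy load-balanced placement of data partitions (illustrative sketch).
    import heapq

    def place(partitions, n_servers):
        """partitions: list of (partition_id, expected_access_load)."""
        heap = [(0.0, s) for s in range(n_servers)]   # (current load, server)
        heapq.heapify(heap)
        assignment = {}
        # Largest loads first, a standard trick that tightens the greedy bound.
        for pid, load in sorted(partitions, key=lambda p: -p[1]):
            cur, server = heapq.heappop(heap)         # least-loaded server
            assignment[pid] = server
            heapq.heappush(heap, (cur + load, server))
        return assignment

    print(place([("p1", 9.0), ("p2", 7.0), ("p3", 4.0), ("p4", 3.0)], 2))
    # server loads end up 12.0 vs 11.0, i.e., nearly even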

 

Murthy, Praveen K., "Top Ten Challenges In Big Data Security And Privacy," Test Conference (ITC), 2014 IEEE International, p. 1, 20-23 Oct. 2014. doi: 10.1109/TEST.2014.7035307

Abstract: Security and privacy issues are magnified by the velocity, volume, and variety of Big Data, such as large-scale cloud infrastructures, diversity of data sources and formats, streaming nature of data acquisition and high volume inter-cloud migration. Therefore, traditional security mechanisms, which are tailored to securing small-scale, static (as opposed to streaming) data, are inadequate. In this talk we highlight the top ten Big Data security and privacy challenges. Highlighting the challenges will motivate increased focus on fortifying Big Data infrastructures.

Keywords:  (not provided)  (ID#:15-4022)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7035307&isnumber=7035243

 

Lei Xu; Pham Dang Khoa; Seung Hun Kim; Won Woo Ro; Weidong Shi, "LUT Based Secure Cloud Computing — An Implementation Using FPGAs," ReConFigurable Computing and FPGAs (ReConFig), 2014 International Conference on, pp. 1-6, 8-10 Dec. 2014. doi: 10.1109/ReConFig.2014.7032537

Abstract: Cloud computing is widely deployed to handle challenges such as big data processing and storage. Due to the outsourcing and sharing features of cloud computing, security is one of the main concerns that hinders end users from shifting their businesses to the cloud. Many cryptographic techniques have been proposed to alleviate data security issues in cloud computing, but most of these works focus on solving a specific security problem such as data sharing, comparison, or searching. At the same time, little effort has been devoted to program security and to formalizing the security requirements in the context of cloud computing. We propose a formal definition of the security of cloud computing, which captures the essence of the security requirements of both data and programs. Analysis of some existing technologies under the proposed definition shows the definition's effectiveness. We also give a simple look-up-table-based solution for secure cloud computing which satisfies the given definition. As FPGAs use the look-up table as their main computational component, they are a suitable hardware platform for the proposed secure cloud computing scheme. We therefore use FPGAs to implement the proposed solution for the k-means clustering algorithm, which shows the effectiveness of the proposed solution.

Keywords: Big Data; cloud computing; field programmable gate arrays; pattern clustering; security of data; table lookup; FPGA; LUT based secure cloud computing; big data processing; cryptographic techniques; data security problem; data sharing; formalization; k-means clustering algorithm; look-up table; program security; secure cloud computing scheme; security requirements; suitable hardware platform; Cloud computing; Encryption; Field programmable gate arrays; Games; Table lookup   (ID#:15-4023)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7032537&isnumber=7032472
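The look-up-table principle is easy to show in software: tabulate a function offline over a quantized domain, then replace computation with indexing, which is what FPGA LUT fabric does in hardware. The 8-bit quantization and the function chosen are illustrative assumptions.

    # LUT-based evaluation of a k-means building block (illustrative sketch).
    BITS = 8                     # 8-bit quantized inputs: domain 0..255
    SCALE = 255.0

    # Offline: tabulate a squared-distance contribution f(x) = (x/SCALE)**2.
    table = [(i / SCALE) ** 2 for i in range(2 ** BITS)]

    def f_lut(x):
        """Evaluate f by table lookup only; x is a float in [0, 1]."""
        idx = min(int(round(x * SCALE)), 2 ** BITS - 1)
        return table[idx]

    x = 0.42
    print(f_lut(x), x ** 2)      # lookup vs direct computation; small error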

 

Vaarandi, R.; Pihelgas, M., "Using Security Logs for Collecting and Reporting Technical Security Metrics," Military Communications Conference (MILCOM), 2014 IEEE, pp. 294-299, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.53

Abstract: During recent years, establishing proper metrics for measuring system security has received increasing attention. Security logs contain vast amounts of information which is essential for creating many security metrics. Unfortunately, security logs are known to be very large, making their analysis a difficult task. Furthermore, recent security metrics research has focused on generic concepts, and the issue of collecting security metrics with log analysis methods has not been well studied. In this paper, we first focus on using log analysis techniques for collecting technical security metrics from security logs of common types (e.g., network IDS alarm logs, workstation logs, and NetFlow data sets). We then describe a production framework for collecting and reporting technical security metrics which is based on novel open-source big data technologies.

Keywords: Big Data; computer network security; big data; log analysis methods; log analysis techniques; open source technology; security logs; technical security metric collection; technical security metric reporting; Correlation; Internet; Measurement; Monitoring; Peer-to-peer computing; Security; Workstations; security log analysis; security metrics   (ID#:15-4024)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956774&isnumber=6956719
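A sketch of one such technical metric, IDS alerts per source host per day, computed by a short log-analysis pass. The pipe-delimited log format is an assumption, not the format consumed by the authors' framework.

    # Counting IDS alerts per (host, day) from a toy alarm log.
    from collections import Counter

    SAMPLE_LOG = """\
    2014-10-06T09:12:01|10.0.0.5|ET SCAN Nmap
    2014-10-06T11:40:13|10.0.0.5|ET POLICY SSH brute force
    2014-10-07T02:01:55|10.0.0.9|ET TROJAN callback
    """

    def alerts_per_host_day(lines):
        counts = Counter()
        for line in lines:
            line = line.strip()
            if not line:
                continue
            ts, src, _sig = line.split("|", 2)
            counts[(src, ts[:10])] += 1        # key: (host, YYYY-MM-DD)
        return counts

    for (host, day), n in sorted(alerts_per_host_day(SAMPLE_LOG.splitlines()).items()):
        print(day, host, n)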

 

Hyejung Moon; Hyun Suk Cho; Seo Hwa Jeong; Jangho Park, "Policy Design Based on Risk at Big Data Era: Case Study of Privacy Invasion in South Korea," Big Data (BigData Congress), 2014 IEEE International Congress on, pp. 756-759, June 27 2014-July 2 2014. doi: 10.1109/BigData.Congress.2014.110

Abstract: This paper analyzes an accident case of data spillage to study policy issues for ICT security from a social science perspective focusing on risk. The results of the case analysis are as follows. First, ICT risk can be categorized as 'severe, strong, intensive, and individual' according to the levels of both probability and impact. Second, a risk management strategy can be designated as 'avoid, transfer, mitigate, or accept' by understanding the culture type of the relevant group, such as 'hierarchy, egalitarianism, fatalism, and individualism'. Third, personal data exhibits the characteristics of big data, namely 'volume, velocity, and variety', in each risk situation. Therefore, government needs to establish a standing organization responsible for ICT risk policy and management in the new big data era, and the policy for ICT risk management needs to balance 'technology, norms, laws, and market' considerations.

Keywords: Big Data; data privacy; risk management; Big Data characteristics; Big Data laws; Big Data market; Big Data norms; Big Data technology; ICT risk based policy design; ICT risk management; ICT security; South Korea; culture type; data spill accident case analysis; data variety; data velocity; data volume; egalitarianism group; fatalism group; hierarchy group; impact level; individual ICT risk; individualism group; intensive ICT risk; personal data; privacy invasion; probability level; risk acceptance; risk avoidance; risk mitigation; risk transfer; severe ICT risk; social science perspective; strong ICT risk; Accidents; Big data; Data privacy; Moon; Privacy; Risk management; Security; ICT policy; big data; cultural types; privacy invasion; technological risk   (ID#:15-4025)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906854&isnumber=6906742

 

Chandrasekaran, S.; Nandita, S.; Nikhil Arvind, R., "Social Network Security Management Model Using Unified Communications as a Service," Computer Applications and Information Systems (WCCAIS), 2014 World Congress on, pp. 1-5, 17-19 Jan. 2014. doi: 10.1109/WCCAIS.2014.6916652

Abstract: The objective of the paper is to propose a social network security management model for a multi-tenancy SaaS application using a Unified Communications as a Service (UCaaS) approach. Earlier security management models do not cover the issues that arise when data inadvertently becomes exposed to other users due to poor implementation of access management processes. When a single virtual machine moves or dissolves in the network, many separate machines may bypass the security conditions that had been implemented for their neighbors, which leads to vulnerability of the hosted services. When the services are multi-tenant, the issue becomes critical due to the lack of asynchronous, asymmetric communication between virtual machines as more applications and users are added to the network, creating big data issues and identity concerns. A TRAIN model for security management using the PC-FAST algorithm is proposed in order to detect and identify communication errors between the hosted services.

Keywords: cloud computing; security of data; social networking (online);virtual machines; PC-FAST algorithm; TRAIN model ;UCaaS approach; access management processes; asynchronous asymmetric communications; communication errors detection; communication errors identification; hosted services vulnerability; multitenancy SaaS application; multitenant services; security conditions; social network security management model; unified communications as a service; virtual machine; Authentication; Communities; Servers; Social network services; Software as a service; Switches; Software as a Service; UCaaS; multi-tenancy; security management; social networks; virtual machine   (ID#:15-4026)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916652&isnumber=6916540

 

Baek, J.; Vu, Q.; Liu, J.; Huang, X.; Xiang, Y., "A Secure Cloud Computing Based Framework For Big Data Information Management Of Smart Grid," Cloud Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 19 September 2014. doi: 10.1109/TCC.2014.2359460

Abstract: Smart grid is a technological innovation that improves the efficiency, reliability, economics, and sustainability of electricity services. It plays a crucial role in modern energy infrastructure. The main challenges of smart grids, however, are how to manage different types of front-end intelligent devices, such as power assets and smart meters, efficiently, and how to process the huge amount of data received from these devices. Cloud computing, a technology that provides computational resources on demand, is a good candidate to address these challenges, since it has several good properties such as energy saving, cost saving, agility, scalability, and flexibility. In this paper, we propose a secure cloud computing based framework for big data information management in smart grids, which we call "Smart-Frame." The main idea of our framework is to build a hierarchical structure of cloud computing centers to provide different types of computing services for information management and big data analysis. In addition to this structural framework, we present a security solution based on identity-based encryption, signatures, and proxy re-encryption to address critical security issues of the proposed framework.

Keywords: Cloud computing; Computer architecture; Identity-based encryption; Information management; Smart grids   (ID#:15-4027)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6905754&isnumber=6562694

 

Silva Ferraz, F.; Guimaraes Ferraz, C.A., "Smart City Security Issues: Depicting Information Security Issues in the Role of an Urban Environment," Utility and Cloud Computing (UCC), 2014 IEEE/ACM 7th International Conference on, pp. 842-847, 8-11 Dec. 2014. doi: 10.1109/UCC.2014.137

Abstract: For the first time in the history of humanity, more than half of the population now lives in big cities. This scenario has raised concerns about the systems that provide basic services to citizens. Moreover, those systems now have the responsibility to empower citizens with information that may aid daily decisions, such as those related to education, transport, and health. This environment creates a set of services that, interconnected, can yield a brand new range of solutions often referred to as a System of Systems. In the context of a smart city, new challenges related to information security arise; these concerns go beyond privacy issues, covering situations where the entire environment could be affected by issues other than a breach of data confidentiality. This paper discusses and proposes nine security issues that can be part of a smart city environment, exploring more than just violations of citizens' privacy.

Keywords: data privacy; security of data; smart cities; big cities; information security; privacy issues; smart city security; urban environment; Cities and towns; Cloud computing; Information security; Intelligent sensors; Servers; information security; security issues; smart city   (ID#:15-4028)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7027604&isnumber=7027326

 

Majumder, A.; Majumdar, A.; Podder, T.; Kar, N.; Sharma, M., "Secure Data Communication And Cryptography Based On DNA Based Message Encoding," Advanced Communication Control and Computing Technologies (ICACCCT), 2014 International Conference on, pp. 360-363, 8-10 May 2014. doi: 10.1109/ICACCCT.2014.7019464

Abstract: Secure data communication is the most important and essential issue in the area of message transmission over networks. Cryptography provides a way of making a secure message for confidential message transfer; it is the process of transforming the sender's message into a secret format, called cipher text, whose meaning only the intended receiver can understand. Various cryptographic and DNA-based encoding algorithms have been proposed to create secret messages for communication, but these DNA-based encryption algorithms are not secure enough to meet today's security requirements. In this paper, we propose an encryption technique that enhances message security. In the proposed algorithm, a new method of DNA-based encryption with a strong 256-bit key is used. Along with this large key, various other encoding tools are used as keys in the message encoding process, such as a random series of DNA bases and modified DNA base coding. Moreover, a new method of round key selection is also given to provide better security for the message. The cipher text contains extra bits of information, similar to DNA strands, providing enhanced security against an intruder's attack.

Keywords: cryptography; DNA based encryption algorithm; DNA based message encoding; cipher text; confidential message transfer; cryptography; data communication security; Cryptography; DNA; Digital audio players; Ciphertext; Coded message; DNA sequence; Encoding tools; Final Cipher   (ID#:15-4029)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019464&isnumber=7019129
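The encoding primitive underlying DNA-based schemes, two plaintext bits per nucleotide, can be sketched directly. The paper's cipher layers a 256-bit key, random base series, and round-key selection on top of this mapping, none of which is shown here.

    # Two-bits-per-base DNA encoding and its inverse (illustrative sketch).
    ENC = {"00": "A", "01": "C", "10": "G", "11": "T"}
    DEC = {v: k for k, v in ENC.items()}

    def to_dna(data: bytes) -> str:
        bits = "".join(format(b, "08b") for b in data)
        return "".join(ENC[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def from_dna(strand: str) -> bytes:
        bits = "".join(DEC[base] for base in strand)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    strand = to_dna(b"Hi")
    print(strand)               # CAGACGGC: 'H' = 01001000, 'i' = 01101001
    print(from_dna(strand))     # b'Hi'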

 

Haoliang Lou; Yunlong Ma; Feng Zhang; Min Liu; Weiming Shen, "Data Mining For Privacy Preserving Association Rules Based On Improved MASK Algorithm," Computer Supported Cooperative Work in Design (CSCWD), Proceedings of the 2014 IEEE 18th International Conference on, pp. 265-270, 21-23 May 2014. doi: 10.1109/CSCWD.2014.6846853

Abstract: With the arrival of the big data era, information privacy and security issues become even more crucial. The Mining Associations with Secrecy Konstraints (MASK) algorithm and its improved versions were proposed as data mining approaches for privacy-preserving association rules. The MASK algorithm adopts only a data perturbation strategy, which leads to a low degree of privacy preservation. Moreover, it is difficult to apply the MASK algorithm in practice because of its long execution time. This paper proposes a new algorithm based on data perturbation and query restriction (DPQR) to improve the degree of privacy preservation through multi-parameter perturbation. In order to improve time-efficiency, the calculation of the inverse matrix is simplified by dividing the matrix into blocks; meanwhile, a further optimization reduces the number of database scans using set theory. Both theoretical analyses and experimental results prove that the proposed DPQR algorithm has better performance.

Keywords: data mining; data privacy; matrix algebra; query processing; DPQR algorithm; data mining; data perturbation and query restriction; data perturbation strategy; improved MASK algorithm; information privacy; inverse matrix; mining associations with secrecy constraints; privacy preserving association rules; scanning database; security issues; Algorithm design and analysis; Association rules; Data privacy; Itemsets; Time complexity; Data mining; association rules; multi-parameters perturbation; privacy preservation   (ID#:15-4030)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846853&isnumber=6846800
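The single-parameter baseline that MASK-style schemes build on is easy to sketch: flip each Boolean item with probability 1-p, then invert the expected distortion to estimate the true support. DPQR's multi-parameter perturbation and block-wise matrix inversion are not reproduced here.

    # MASK-style Boolean perturbation and support estimation (illustrative sketch).
    import random

    def perturb_bit(b, p=0.9, rng=random):
        # keep the bit with probability p, flip it with probability 1-p
        return b if rng.random() < p else 1 - b

    def estimate_support(perturbed_col, p=0.9):
        observed = sum(perturbed_col) / len(perturbed_col)
        # E[observed] = (2p-1)*s + (1-p)  =>  s = (observed - (1-p)) / (2p-1)
        return (observed - (1 - p)) / (2 * p - 1)

    random.seed(7)
    true_col = [1] * 300 + [0] * 700               # true support = 0.30
    noisy = [perturb_bit(b) for b in true_col]
    print(round(estimate_support(noisy), 3))       # close to 0.30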

 

Hassan, S.; Abbas Kamboh, A.; Azam, F., "Analysis of Cloud Computing Performance, Scalability, Availability, & Security," Information Science and Applications (ICISA), 2014 International Conference on, pp. 1-5, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847363

Abstract: Cloud computing refers to connecting many computers through a communication channel such as the Internet. Through cloud computing we send, receive, and store data on the Internet. Cloud computing also gives us the opportunity for parallel computing using a large number of virtual machines. Nowadays, performance, scalability, availability, and security represent the big risks in cloud computing. In this paper we highlight the issues of security, availability, and scalability, and identify how to make a cloud computing based infrastructure more secure and more available. We also highlight the elastic behavior of cloud computing, and discuss some of the characteristics involved in achieving its high performance.

Keywords: cloud computing; parallel processing; security of data; virtual machines; Internet; cloud computing; parallel computing; scalability; security; virtual machine; Availability; Cloud computing; Computer hacking; Scalability   (ID#:15-4031)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847363&isnumber=6847317

 

Butun, I.; Morgera, S.D.; Sankar, R., "A Survey of Intrusion Detection Systems in Wireless Sensor Networks," Communications Surveys & Tutorials, IEEE, vol. 16, no. 1, pp. 266-282, First Quarter 2014. doi: 10.1109/SURV.2013.050113.00191

Abstract: Wireless Sensor Networking is one of the most promising technologies that have applications ranging from health care to tactical military. Although Wireless Sensor Networks (WSNs) have appealing features (e.g., low installation cost, unattended network operation), due to the lack of a physical line of defense (i.e., there are no gateways or switches to monitor the information flow), the security of such networks is a big concern, especially for the applications where confidentiality has prime importance. Therefore, in order to operate WSNs in a secure way, any kind of intrusions should be detected before attackers can harm the network (i.e., sensor nodes) and/or information destination (i.e., data sink or base station). In this article, a survey of the state-of-the-art in Intrusion Detection Systems (IDSs) that are proposed for WSNs is presented. Firstly, detailed information about IDSs is provided. Secondly, a brief survey of IDSs proposed for Mobile Ad-Hoc Networks (MANETs) is presented and applicability of those systems to WSNs are discussed. Thirdly, IDSs proposed for WSNs are presented. This is followed by the analysis and comparison of each scheme along with their advantages and disadvantages. Finally, guidelines on IDSs that are potentially applicable to WSNs are provided. Our survey is concluded by highlighting open research issues in the field.

Keywords: mobile ad hoc networks; telecommunication security; wireless sensor networks; IDS guidelines; MANET; intrusion detection systems; mobile ad hoc network; research issues; wireless sensor networks; Ad hoc networks; Intrusion detection; Mobile agents; Monitoring; Unified modeling language; Wireless sensor networks; IDS; MANET; WSN; intrusion detection; mobile ad hoc network; security; wireless sensor network   (ID#:15-4032)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6517052&isnumber=6734841

 

Miyoung Jang; Min Yoon; Jae-Woo Chang, "A Privacy-Aware Query Authentication Index For Database Outsourcing," Big Data and Smart Computing (BIGCOMP), 2014 International Conference on, pp. 72-76, 15-17 Jan. 2014. doi: 10.1109/BIGCOMP.2014.6741410

Abstract: Recently, cloud computing has been spotlighted as a new paradigm for database management systems. In this environment, databases are outsourced and deployed on a service provider in order to reduce the cost of data storage and maintenance. However, the service provider might not be trusted, so two data security issues, data confidentiality and query result integrity, become major concerns for users. Existing bucket-based data authentication methods have the problem that the original spatial data distribution can be disclosed from the data authentication index due to unsophisticated data grouping strategies; in addition, the transmission overhead of the verification object is high. In this paper, we propose a privacy-aware query authentication scheme which guarantees data confidentiality and query result integrity for users. A periodic-function-based data grouping scheme is designed to privately partition a spatial database into small groups and generate a signature for each group. The group signature is used to check the correctness and completeness of outsourced data when answering a range query for users. Performance evaluation shows that the proposed method outperforms the existing method in range query processing time by up to 3 times.

Keywords: cloud computing; data integrity; data privacy; database indexing; digital signatures; outsourcing; query processing; visual databases; bucket-based data authentication methods; cloud computing ;cost reduction; data confidentiality; data maintenance; data security; data storage; database management system; database outsourcing; group signature; periodic function-based data grouping scheme; privacy-aware query authentication index; query result integrity; range query answering; service provider; spatial data distribution; spatial database; unsophisticated data grouping strategy; verification object transmission overhead; Authentication; Encryption; Indexes; Query processing; Spatial databases; Data authentication index; Database outsourcing; Encrypted database; Query result integrity   (ID#:15-4033)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6741410&isnumber=6741395

 

Gaff, Brian M.; Sussman, Heather Egan; Geetter, Jennifer, "Privacy and Big Data," Computer, vol. 47, no. 6, pp. 7-9, June 2014. doi: 10.1109/MC.2014.161

Abstract: Big data's explosive growth has prompted the US government to release new reports that address the issues--particularly related to privacy--resulting from this growth. The Web extra at http://youtu.be/j49eoe5g8-c is an audio recording from the Computing and the Law column, in which authors Brian M. Gaff, Heather Egan Sussman, and Jennifer Geetter discuss how big data's explosive growth has prompted the US government to release new reports that address the issues--particularly related to privacy--resulting from this growth.

Keywords: Big data; Data integration; Data privacy; Government; Privacy; Public policy; anonymization; big data; data analysis; data collection; data retention; de-identification; privacy; security   (ID#:15-4035)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838869&isnumber=6838865

 

Sanger, J.; Richthammer, C.; Hassan, S.; Pernul, G., "Trust and Big Data: A Roadmap for Research," Database and Expert Systems Applications (DEXA), 2014 25th International Workshop on, pp. 278-282, 1-5 Sept. 2014. doi: 10.1109/DEXA.2014.63

Abstract: We are currently living in the age of Big Data coming along with the challenge to grasp the golden opportunities at hand. This mixed blessing also dominates the relation between Big Data and trust. On the one side, large amounts of trust-related data can be utilized to establish innovative data-driven approaches for reputation-based trust management. On the other side, this is intrinsically tied to the trust we can put in the origins and quality of the underlying data. In this paper, we address both sides of trust and Big Data by structuring the problem domain and presenting current research directions and inter-dependencies. Based on this, we define focal issues which serve as future research directions for the track to our vision of Next Generation Online Trust within the FORSEC project.

Keywords: Big Data; trusted computing; Big Data; FORSEC project; data-driven approaches; focal issues; next generation online trust; reputation-based trust management; trust-related data; Big data; Cloud computing; Computer science; Conferences; Context; Data mining; Security; Big Data; reputation; trust   (ID#:15-4036)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6974862&isnumber=6974758

 

Babaie, T.; Chawla, S.; Ardon, S.; Yue Yu, "A Unified Approach To Network Anomaly Detection," Big Data (Big Data), 2014 IEEE International Conference on, pp. 650-655, 27-30 Oct. 2014. doi: 10.1109/BigData.2014.7004288

Abstract: This paper presents a unified approach to the detection of network anomalies. Current state-of-the-art methods are often able to detect one class of anomalies at the cost of others. Our approach is based on using a Linear Dynamical System (LDS) to model network traffic. An LDS is equivalent to a Hidden Markov Model (HMM) for continuous-valued data and can be computed using incremental methods to manage the high throughput (volume) and velocity that characterize Big Data. Detailed experiments on synthetic and real network traces show a significant improvement in detection capability over competing approaches. In the process, we also address the issue of robustness of network anomaly detection systems in a principled fashion.

Keywords: Big Data; computer network security; hidden Markov models; Big Data; HMM; LDS; continuous-valued data; hidden Markov model; linear dynamical system; network anomaly detection; network traffic; Computer crime; Correlation; Hidden Markov models; IP networks; Kalman filters; Ports (Computers); Robustness   (ID#:15-4037)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004288&isnumber=7004197
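A scalar stand-in for the LDS idea: track a traffic series with a Kalman filter (the standard LDS inference engine) and flag samples whose innovation, the prediction residual, is improbably large. The noise variances and the 3-sigma threshold are assumptions.

    # Kalman-filter residual test for traffic anomalies (illustrative sketch).
    import numpy as np

    def kalman_anomalies(y, q=1.0, r=4.0, thresh=3.0):
        x, p = y[0], 1.0                        # state estimate and its variance
        flags = []
        for z in y[1:]:
            p_pred = p + q                      # predict (random-walk model)
            s = p_pred + r                      # innovation variance
            innov = z - x                       # innovation (prediction residual)
            flags.append(bool(abs(innov) / np.sqrt(s) > thresh))
            k = p_pred / s                      # Kalman gain
            x = x + k * innov                   # update state estimate
            p = (1 - k) * p_pred                # update estimate variance
        return flags

    traffic = np.array([50, 51, 49, 52, 50, 95, 51, 50], dtype=float)
    print(kalman_anomalies(traffic))  # the 95-volume burst, and the sharp
                                      # recovery after it, are both flagged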

 

Conghuan Ye; Zenggang Xiong; Yaoming Ding; Jiping Li; Guangwei Wang; Xuemin Zhang; Kaibing Zhang, "Secure Multimedia Big Data Sharing in Social Networks Using Fingerprinting and Encryption in the JPEG2000 Compressed Domain," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 616-621, 24-26 Sept. 2014. doi: 10.1109/TrustCom.2014.79

Abstract: With the advent of social networks and cloud computing, the amount of multimedia data produced and communicated within social networks is rapidly increasing. At the same time, social networking platforms based on cloud computing have made multimedia big data sharing in social networks easier and more efficient. The growth of social multimedia, as demonstrated by social networking sites such as Facebook and YouTube, combined with advances in multimedia content analysis, underscores potential risks for malicious use, such as illegal copying, piracy, plagiarism, and misappropriation. Therefore, secure multimedia sharing and traitor tracing have become critical and urgent issues in social networks. In this paper, we propose a scheme that implements the Tree-Structured Haar (TSH) transform in a homomorphic encrypted domain for fingerprinting, using social network analysis, with the purpose of protecting media distribution in social networks. The motivation is to map the hierarchical community structure of a social network onto the tree structure of the TSH transform for JPEG2000 coding, encryption, and fingerprinting. Firstly, the fingerprint code is produced using social network analysis. Secondly, the encrypted content is decomposed by the TSH transform. Thirdly, the content is fingerprinted in the TSH transform domain. Finally, the encrypted and fingerprinted contents are delivered to users via hybrid multicast-unicast. The use of fingerprinting along with encryption provides a double layer of protection for media sharing in social networks. Theoretical analysis and experimental results show the effectiveness of the proposed scheme.

Keywords: Big Data; cryptography; data compression; data protection; image coding; multimedia computing; social networking (online); transforms; JPEG2000 coding; JPEG2000 compressed domain; TSH transform; fingerprint code; fingerprinting; hierarchical community structure; homomorphic encryption; hybrid multicast-unicast; media distribution protection; secure multimedia big data sharing; social network analysis; tree-structured Haar transform; Communities; Encryption; Multimedia communication; Social network services; Transforms   (ID#:15-4038)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011303&isnumber=7011202
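The TSH transform generalizes the ordinary Haar transform to a tree shaped by the social network's community structure. The one-level 1-D Haar step it builds on is sketched below on plain (unencrypted) samples.

    # One-level 1-D Haar decomposition and its inverse (illustrative sketch).
    def haar_step(signal):
        """Return (averages, details) for one Haar decomposition level."""
        assert len(signal) % 2 == 0
        avgs = [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
        dets = [(signal[i] - signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
        return avgs, dets

    def haar_inverse(avgs, dets):
        out = []
        for a, d in zip(avgs, dets):
            out += [a + d, a - d]
        return out

    x = [9.0, 7.0, 3.0, 5.0]
    a, d = haar_step(x)
    print(a, d)                 # [8.0, 4.0] [1.0, -1.0]
    print(haar_inverse(a, d))   # reconstructs [9.0, 7.0, 3.0, 5.0]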

 

Rashad Al-Dhaqm, A.M.; Othman, S.H.; Abd Razak, S.; Ngadi, A., "Towards Adapting Metamodelling Technique For Database Forensics Investigation Domain," Biometrics and Security Technologies (ISBAST), 2014 International Symposium on, pp. 322-327, 26-27 Aug. 2014. doi: 10.1109/ISBAST.2014.7013142

Abstract: Threats from database insiders and outsiders pose a big challenge to the protection of integrity and confidentiality in many database systems. To overcome this situation, a new domain called Database Forensics (DBF) has been introduced to specifically investigate these dynamic threats, which have posed many problems in the Database Management Systems (DBMS) of many organizations. DBF is a process to identify, collect, preserve, analyse, reconstruct, and document all digital evidence related to such incidents. However, until today, this domain still lacks a standard and generic knowledge base for its forensic investigation methods and tools, due to the many issues and challenges in its complex processes. Therefore, this paper presents an approach adapted from the software engineering domain, called metamodelling, which unifies these complex DBF knowledge processes into one artifact, a metamodel (the DBF Metamodel). In the future, the DBF Metamodel could benefit many DBF investigation users, such as database investigators, stakeholders, and other forensic teams, by offering various possible solutions for their problem domain.

Keywords: data privacy; database management systems; digital forensics; DBF complex knowledge processes; DBF metamodel; DBMS; database forensics investigation domain; database management systems; database system confidentiality protection; database system integrity protection; metamodelling technique; software engineering domain; Complexity theory; Database systems; Forensics; Organizations; Security; Servers; Database forensic; Database forensic investigation; Metamodel; Metamodelling; Model   (ID#:15-4039)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013142&isnumber=7013076

 

Lei Xu; Chunxiao Jiang; Jian Wang; Jian Yuan; Yong Ren, "Information Security in Big Data: Privacy and Data Mining," Access, IEEE, vol. 2, pp. 1149-1176, 2014. doi: 10.1109/ACCESS.2014.2362522

Abstract: The growing popularity and development of data mining technologies bring serious threats to the security of individuals' sensitive information. An emerging research topic in data mining, known as privacy-preserving data mining (PPDM), has been extensively studied in recent years. The basic idea of PPDM is to modify the data in such a way that data mining algorithms can be performed effectively without compromising the security of the sensitive information contained in the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by data mining operations, while, in fact, unwanted disclosure of sensitive information may also happen in the processes of data collection, data publishing, and information (i.e., data mining result) delivery. In this paper, we view the privacy issues related to data mining from a wider perspective and investigate various approaches that can help protect sensitive information. In particular, we identify four different types of users involved in data mining applications, namely, the data provider, data collector, data miner, and decision maker. For each type of user, we discuss their privacy concerns and the methods that can be adopted to protect sensitive information. We briefly introduce the basics of related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions. Besides exploring the privacy-preserving approaches for each type of user, we also review game-theoretical approaches proposed for analyzing the interactions among different users in a data mining scenario, each of whom has his own valuation of the sensitive information. By differentiating the responsibilities of different users with respect to the security of sensitive information, we would like to provide some useful insights into the study of PPDM.

Keywords: Big Data; data acquisition; data mining; data protection; game theory; security of data; Big Data; PPDM; data collector; data miner; data provider; data publishing; decision maker; game theory; information protection; information security; privacy preserving data mining; Algorithm design and analysis; Computer security; Data mining; Data privacy; Game theory; Privacy; Tracking; Data mining; anonymization; anti-tracking; game theory; privacy auction; privacy-preserving data mining; provenance; sensitive information   (ID#:15-4040)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6919256&isnumber=6705689

 

Ming Xiang; Tauch, S.; Liu, W., "Dependability and Resource Optimation Analysis for Smart Grid Communication Networks," Big Data and Cloud Computing (BdCloud), 2014 IEEE Fourth International Conference on, pp. 676-681, 3-5 Dec. 2014. doi: 10.1109/BDCloud.2014.115

Abstract: Smart Grid is the trend in next-generation power distribution and network management, enabling two-way interactive communication and operation between consumers and suppliers so as to achieve intelligent resource management and optimization. Wireless mesh network technology is a promising infrastructure solution to support these smart functionalities, although it has some inherent vulnerabilities and cyber-attack risks to be addressed. As the Smart Grid relies heavily on the underlying communication networks, their security and dependability issues are critical to the entire smart grid technology. Several studies have been conducted in the field of Smart Grid security, but few works have focused on the dependability and associated resource analysis of control center networks. In this paper, we investigate dependability modeling and resource allocation in redundant communication networks by adopting two mathematical approaches, Reliability Block Diagrams (RBD) and Stochastic Petri Nets (SPNs), to analyze the dependability of control center networks in a Smart Grid environment. We apply the proposed modeling approach in an extensive case study to evaluate the availability of smart grid networks with different redundancy mechanisms. A combination of dependability models and reliability importance is used to analyze network availability according to the most important components. We also show the variation of network availability with Mean Time to Failure (MTTF) in different network architectures.

Keywords: Petri nets; power distribution reliability; power system security; redundancy; resource allocation; smart power grids; stochastic programming; telecommunication network reliability; telecommunication security; wireless mesh networks; MTTF; RBD; SPN; cyber-attack risk; dependability modeling; intelligent resource management; mean time to failure; network management; next generation power distribution; redundancy mechanism; reliability block diagrams; resource allocation; resource optimization analysis; smart grid communication network reliability; smart grid security; stochastic Petri net; two-way interactive communication; underlying communication network; wireless mesh network technology; Availability; Computer architecture; Logic gates; Markov processes; Smart grids; Topology; Smart Grid; availability; dependability analysis; reliability importance; resource allocation; stochastic petri nets; wireless mesh network   (ID#:15-4041)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7034859&isnumber=7034739
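The RBD arithmetic behind such an availability study is compact: A = MTTF / (MTTF + MTTR) for each component, availabilities multiply across series blocks, and unavailabilities multiply across redundant (parallel) blocks. The MTTF and MTTR numbers below are invented, not the paper's case-study values.

    # Series/parallel availability from MTTF and MTTR (illustrative sketch).
    def availability(mttf_h, mttr_h):
        return mttf_h / (mttf_h + mttr_h)

    def series(*avails):
        out = 1.0
        for a in avails:
            out *= a                 # all blocks must be up
        return out

    def parallel(*avails):
        unavail = 1.0
        for a in avails:
            unavail *= (1.0 - a)     # all redundant blocks must fail
        return 1.0 - unavail

    router = availability(mttf_h=8760.0, mttr_h=8.0)    # assumed values
    link = availability(mttf_h=4380.0, mttr_h=4.0)

    single_path = series(router, link)
    redundant = series(router, parallel(link, link))     # duplicated link
    print(round(single_path, 6), round(redundant, 6))    # redundancy helps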

 

Bertino, E.; Samanthula, B.K., "Security With Privacy - A Research Agenda," Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom), 2014 International Conference on, pp. 144-153, 22-25 Oct. 2014. doi: (not provided)

Abstract: Data is one of the most valuable assets for organizations. It can facilitate users or organizations in meeting their diverse goals, ranging from scientific advances to business intelligence. Due to the tremendous growth of data, the notion of big data has certainly gained momentum in recent years. Cloud computing is a key technology for storing, managing, and analyzing big data. However, such large, complex, and growing data, typically collected from various sources such as sensors and social media, can often contain personally identifiable information (PII), and thus the organizations collecting the big data may want to protect their outsourced data from the cloud. In this paper, we survey our research towards the development of efficient and effective privacy-enhancing (PE) techniques for the management and analysis of big data in cloud computing. We propose our initial approaches to address two important PE applications: (i) privacy-preserving data management and (ii) privacy-preserving data analysis in the cloud environment. Additionally, we point out research issues that still need to be addressed to develop comprehensive solutions to the problem of effective and efficient privacy-preserving use of data.

Keywords: Big Data; cloud computing; data privacy; security of data; PE applications; PE techniques; PII; big data analysis; business intelligence; cloud computing; cloud environment; data sources; outsourced data; personally identifiable information; privacy-enhancing techniques; privacy-preserving data analysis; privacy-preserving data management; research agenda; security; social media; Big data; Cancer; Electronic mail; Encryption; Media; Privacy   (ID#:15-4042)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7014559&isnumber=7011734

 

Miyachi, T.; Yamada, T., "Current Issues and Challenges on Cyber Security For Industrial Automation And Control Systems," SICE Annual Conference (SICE), 2014 Proceedings of the, pp. 821-826, 9-12 Sept. 2014. doi: 10.1109/SICE.2014.6935227

Abstract: This paper presents a survey of cyber security issues in current industrial automation and control systems, including observations and insights collected and distilled through a series of discussions by some of the major Japanese experts in this field. It also aims to provide a conceptual framework for those issues and a big picture of some ongoing projects that try to address them.

Keywords: industrial control; production engineering computing; security of data; IACS; cyber security; industrial automation and control systems; Control systems; IEC standards; Malware; Protocols; Cyber incident; cyber threat; security; vulnerability   (ID#:15-4043)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6935227&isnumber=6935176

 

Eckhoff, D.; Sommer, C., "Driving for Big Data? Privacy Concerns in Vehicular Networking," Security & Privacy, IEEE, vol. 12, no. 1, pp. 77-79, Jan.-Feb. 2014. doi: 10.1109/MSP.2014.2

Abstract: Communicating vehicles will change road traffic as we know it. With current versions of European and US standards in mind, the authors discuss privacy and traffic surveillance issues in vehicular network technology and outline research directions that could address these issues.

Keywords: automobiles; data privacy; road traffic; surveillance; telecommunication standards; telecommunication traffic; vehicular ad hoc networks; European standards; US standards; communicating vehicles; privacy concerns; road traffic change; traffic surveillance; vehicular network technology; vehicular networking; Intelligent vehicles; Road traffic; Safety; Surveillance; Telecommunication standards; Wireless communication; ETSI; ITS; WAVE; intelligent transport system; vehicular network   (ID#:15-4044)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6756784&isnumber=6756734


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Clean Slate (2014 Year in Review)

 

 
SoS Newsletter Logo

Clean Slate
(2014 Year in Review)

 

The "clean slate" approach looks at designing networks and internets from scratch, with security built in, in contrast to the evolved Internet in place. The research presented here covers a range of research topics, and includes a survey of those topics. These works were published or presented in  2014.

 

Yamanaka, H.; Kawai, E.; Ishii, S.; Shimojo, S., "AutoVFlow: Autonomous Virtualization for Wide-Area OpenFlow Networks," Software Defined Networks (EWSDN), 2014 Third European Workshop on, pp. 67, 72, 1-3 Sept. 2014. doi: 10.1109/EWSDN.2014.28

Abstract: It is expected that clean-slate network designs will be implemented for wide-area network applications. Multi-tenancy in OpenFlow networks is an effective method of supporting a clean-slate network design, because cost-effectiveness is improved by the sharing of substrate networks. To guarantee the programmability of OpenFlow for tenants, a complete flow space (i.e., header values of the data packets) virtualization is necessary. Wide-area substrate networks typically have multiple administrators. We therefore need to implement a flow space virtualization over multiple administration networks. In existing techniques, a third party is solely responsible for managing the mapping of header values for flow space virtualization for substrate network administrators and tenants, despite the severity of a third-party failure. In this paper, we propose an AutoVFlow mechanism that allows flow space virtualization in wide-area networks without the need for a third party. Substrate network administrators implement a flow space virtualization autonomously. They are responsible for virtualizing a flow space involving switches in their own substrate networks. Using a prototype of AutoVFlow, we measured the virtualization overhead, the results of which show a negligible amount of overhead.

 Keywords: virtualisation; wide area networks; AutoVFlow mechanism; autonomous virtualization; clean-slate network design; flow space virtualization; substrate network; wide-area OpenFlow networks; wide-area network applications; Aerospace electronics; Control systems; Delays; Ports (Computers);Substrates; Virtualization; clean-slate network design; flow space; virtualization; wide-area network (ID#: 15-3877) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984054&isnumber=6984033
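
To make the flow-space idea above concrete, the short Python sketch below shows one way each substrate administrator could autonomously rewrite tenant header values at its own borders, with no third-party mapping authority, in the spirit of AutoVFlow. The AdminDomain class, the reduction of "header values" to VLAN IDs, and the method names are our own illustrative assumptions, not the paper's implementation.

    # Hypothetical sketch: per-administrator flow-space virtualization with
    # no central mapping authority. Each domain manages its own header pool.
    class AdminDomain:
        def __init__(self, name):
            self.name = name
            self.mapping = {}        # (tenant, virtual_vlan) -> substrate_vlan
            self.next_free = 100     # substrate VLAN pool managed locally

        def ingress(self, tenant, virtual_vlan):
            """Map a tenant's virtual header value to a locally unique one."""
            key = (tenant, virtual_vlan)
            if key not in self.mapping:
                self.mapping[key] = self.next_free
                self.next_free += 1
            return self.mapping[key]

        def egress(self, substrate_vlan):
            """Restore the tenant's virtual header value at the domain border."""
            for (tenant, vvlan), svlan in self.mapping.items():
                if svlan == substrate_vlan:
                    return tenant, vvlan
            raise KeyError(substrate_vlan)

    # Two administrators virtualize the same tenant flow independently.
    a, b = AdminDomain("adminA"), AdminDomain("adminB")
    s1 = a.ingress("tenant1", virtual_vlan=10)   # substrate value inside adminA
    tenant, vvlan = a.egress(s1)                 # restored at adminA's border
    s2 = b.ingress(tenant, vvlan)                # re-mapped inside adminB
    print(s1, s2, b.egress(s2))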

 

Silva, F.; Castillo-Lema, J.; Neto, A.; Silva, F.; Rosa, P.; Corujo, D.; Guimaraes, C.; Aguiar, R., "Entity Title Architecture Extensions Towards Advanced Quality-Oriented Mobility Control Capabilities," Computers and Communication (ISCC), 2014 IEEE Symposium on, pp. 1, 6, 23-26 June 2014. doi: 10.1109/ISCC.2014.6912459

Abstract: The emergence of new technologies, together with the popularization of mobile devices and wireless communication systems, imposes a variety of requirements that the current Internet is not able to meet adequately. In this scenario, the innovative information-centric Entity Title Architecture (ETArch), a Future Internet (FI) clean slate approach, was designed to efficiently cope with the increasing demand for beyond-IP networking services. Nevertheless, despite all its capabilities, ETArch was not designed with reliable networking functions, which limits its operability in mobile multimedia networking and will seriously restrict its scope in Future Internet scenarios. Therefore, our work extends ETArch mobility control with advanced quality-oriented mobility functions, to deploy mobility prediction, Point of Attachment (PoA) decision, and handover setup meeting both the session quality requirements of active session flows and the current wireless quality conditions of neighbouring PoA candidates. The effectiveness of the proposed additions was confirmed through a preliminary evaluation carried out in MATLAB, in which we considered distinct application scenarios and showed that the additions were able to outperform the most relevant alternative solutions in terms of performance and quality of service.

Keywords: Internet; mobile computing; mobile handsets; mobility management (mobile radio);multimedia communication; quality of service; ETArch;FI; MATLAB; PoA; PoA candidates; active session flows; advanced quality-oriented mobility control capabilities; beyond-IP networking services; distinct applications scenario; entity title architecture extensions; future Internet clean slate approach; handover setup; information-centric entity title architecture; mobile devices; mobile multimedia networking; networking functions; point of attachment decision; quality of service; session quality requirements; wireless communication systems; wireless quality conditions; Delays; Handover; Manganese; Quality of service; Streaming media; Wireless communication (ID#: 15-3878) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912459&isnumber=6912451

 

Sourlas, V.; Tassiulas, L., "Replication Management And Cache-Aware Routing In Information-Centric Networks," Network Operations and Management Symposium (NOMS), 2014 IEEE, pp. 1, 7, 5-9 May 2014. doi: 10.1109/NOMS.2014.6838282

Abstract: Content distribution in the Internet places content providers in a dominant position, with delivery happening directly between two end-points, that is, from content providers to consumers. Information-Centrism has been proposed as a paradigm shift from the host-to-host Internet to a host-to-content one, or in other words from an end-to-end communication system to a native distribution network. This trend has attracted the attention of the research community, which has argued that content, instead of end-points, must be at the center stage of attention. Given this emergence of information-centric solutions, the relevant management needs in terms of performance have not been adequately addressed, yet they are absolutely essential for relevant network operations and crucial for the information-centric approaches to succeed. Performance management and traffic engineering approaches are also required to control routing, to configure the logic for replacement policies in caches, and to control decisions about where to cache, for instance. Therefore, there is an urgent need to manage information-centric resources and in fact to constitute their missing management and control plane, which is essential for their success as clean-slate technologies. In this thesis we aim to provide solutions to crucial problems that remain, such as the management of information-centric approaches, which has not yet been addressed, focusing on the key aspects of route and cache management.

Keywords: Internet; telecommunication network routing; telecommunication traffic; Internet; cache management; cache-aware routing; clean-slate technologies; content distribution; control plane; end-to-end communication system; host-to-host Internet; information-centric approaches; information-centric networks; Information-centric resources; information-centric solutions; information-centrism; missing management; native distribution network; performance management; replication management; route management; traffic engineering approaches; Computer architecture; Network topology; Planning; Routing; Servers; Subscriptions; Transportation (ID#: 15-3879) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838282&isnumber=6838210

 

Visala, K.; Keating, A.; Khan, R.H., "Models and Tools For The High-Level Simulation Of A Name-Based Interdomain Routing Architecture," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, pp. 55, 60, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849168

Abstract: The deployment and operation of global network architectures can exhibit complex, dynamic behavior and the comprehensive validation of their properties, without actually building and running the systems, can only be achieved with the help of simulations. Packet-level models are not feasible in the Internet scale, but we are still interested in the phenomena that emerge when the systems are run in their intended environment. We argue for the high-level simulation methodology and introduce a simulation environment based on aggregate models built on state-of-the-art datasets available while respecting invariants observed in measurements. The models developed are aimed at studying a clean slate name-based interdomain routing architecture and provide an abundance of parameters for sensitivity analysis and a modular design with a balanced level of detail in different aspects of the model. In addition to introducing several reusable models for traffic, topology, and deployment, we report our experiences in using the high-level simulation approach and potential pitfalls related to it.

Keywords: Internet; telecommunication network routing; telecommunication network topology; telecommunication traffic; aggregate models; clean slate name-based interdomain routing architecture; complex-dynamic behavior; global network architecture deployment; global network architecture operation; high-level simulation methodology; modular design; packet-level models; reusable deployment model; reusable topology model; reusable traffic model; sensitivity analysis; Aggregates; Approximation methods; Internet; Network topology; Peer-to-peer computing; Routing; Topology (ID#: 15-3880) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849168&isnumber=6849127

 

Bronzino, F.; Chao Han; Yang Chen; Nagaraja, K.; Xiaowei Yang; Seskar, I.; Raychaudhuri, D., "In-Network Compute Extensions for Rate-Adaptive Content Delivery in Mobile Networks," Network Protocols (ICNP), 2014 IEEE 22nd International Conference on, pp. 511, 517, 21-24 Oct. 2014. doi: 10.1109/ICNP.2014.81

Abstract: Traffic from mobile wireless networks has been growing at a fast pace in recent years and is expected to surpass wired traffic very soon. Service providers face significant challenges at such scales, including providing seamless mobility, efficient data delivery, security, and provisioning capacity at the wireless edge. In the MobilityFirst project, we have been exploring clean slate enhancements to the network protocols that can inherently provide support for at-scale mobility and trustworthiness in the Internet. An extensible data plane using pluggable compute-layer services is a key component of this architecture. We believe these extensions can be used to implement in-network services to enhance the mobile end-user experience by either off-loading work and/or traffic from mobile devices, or by enabling en-route service-adaptation through context-awareness (e.g., knowing contemporary access bandwidth). In this work we present details of the architectural support for in-network services within MobilityFirst, and propose protocol and service-API extensions to flexibly address these pluggable services from end-points. As a demonstrative example, we implement an in-network service that performs rate adaptation when delivering video streams to mobile devices that experience variable connection quality. We present details of our deployment and evaluation of the non-IP protocols along with compute-layer extensions on the GENI test bed, where we used a set of programmable nodes across 7 distributed sites to configure a MobilityFirst network with hosts, routers, and in-network compute services.

Keywords: mobile computing; mobility management (mobile radio);protocols; video streaming; GENI test bed; Internet; Mobility First project; at-scale mobility; clean slate enhancements; compute-layer extensions; context-awareness; data plane; en-route service-adaptation; in-network services; mobile devices; mobile end-user experience; mobile wireless networks; network protocols; non-IP protocols; offloading work; pluggable compute-layer services; programmable nodes; protocol extensions; rate adaptation; rate-adaptive content delivery; service providers; service-API extensions; trustworthiness; video streams; Bit rate; Computer architecture; Mobile communication; Mobile computing; Protocols; Servers; Streaming media; Internet architecture; cloud; in-network computing; mobility; rate adaptation; video streaming; video transcoding (ID#: 15-3881) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980420&isnumber=6980338

 

Lopes Alcantara Batista, B.; Lima de Campos, G.A.; Fernandez, M.P., "Flow-based Conflict Detection in OpenFlow Networks Using First-Order Logic," Computers and Communication (ISCC), 2014 IEEE Symposium on, pp. 1, 6, 23-26 June 2014. doi: 10.1109/ISCC.2014.6912577

Abstract: The OpenFlow architecture is a proposal from the Clean Slate initiative to define a new Internet architecture in which the network devices are simple, and the control and management planes are performed by a centralized controller. This simplicity and centralization make the architecture reliable and inexpensive. However, the architecture does not provide mechanisms to detect conflicts between flows, allowing unreachable flows to be configured in the network elements, so the network may not behave as expected. This paper proposes an approach to conflict detection that uses first-order logic to define possible antagonisms and employs an inference engine to detect conflicting flows before the OpenFlow controller installs them in the network elements.

Keywords: IP networks; computer network management; inference mechanisms; transport protocols; Clean Slate initiative; Internet architecture; OpenFlow controller; OpenFlow network architecture; centralization architecture; centralized controller; control plane; first-order logic; flow-based conflict detection; inference engine; management plane; network devices; network elements; unreachable flows; Control systems; IP networks; Indexes; Knowledge based systems; Media Access Protocol; Proposals (ID#: 15-3882) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912577&isnumber=6912451
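
Since the abstract describes checking logical conditions over flow rules before installation, a minimal sketch may help fix the idea: two predicates over match fields flag rules that are shadowed (unreachable) or that overlap with contradictory actions. The predicates, the dict-based match encoding, and the string-equality treatment of prefixes are simplifying assumptions of ours, far cruder than the paper's first-order-logic inference engine.

    # Hypothetical flow-conflict check before rule installation. A match is
    # a dict of header fields; a missing field acts as a wildcard. Prefixes
    # are compared as strings for simplicity.
    def overlaps(m1, m2):
        """True if some packet can satisfy both matches (shared fields agree)."""
        return all(m1[f] == m2[f] for f in m1.keys() & m2.keys())

    def shadows(hi, lo):
        """True if higher-priority match 'hi' covers every packet of 'lo'."""
        return all(f in lo and lo[f] == v for f, v in hi.items())

    def detect_conflicts(rules):
        """rules: list of (priority, match, action), highest priority first."""
        conflicts = []
        for i, (p1, m1, a1) in enumerate(rules):
            for p2, m2, a2 in rules[i + 1:]:
                if p1 > p2 and shadows(m1, m2):
                    conflicts.append(("unreachable", m2))
                elif p1 == p2 and overlaps(m1, m2) and a1 != a2:
                    conflicts.append(("contradictory", m1, m2))
        return conflicts

    rules = [
        (20, {"ip_dst": "10.0.0.0/24"}, "fwd:1"),
        (10, {"ip_dst": "10.0.0.0/24", "tcp_dst": 80}, "drop"),  # shadowed
    ]
    print(detect_conflicts(rules))   # reports the second rule as unreachable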

 

Coras, F.; Saucez, D.; Iannone, L.; Donnet, B., "On the Performance Of The LISP Beta Network," Networking Conference, 2014 IFIP, pp. 1, 9, 2-4 June 2014. doi: 10.1109/IFIPNetworking.2014.6857102

Abstract: The future Internet has been a hot topic during the past decade, and many approaches towards this future Internet, ranging from incremental evolution to complete clean-slate ones, have been proposed. One of these propositions, LISP, advocates the separation of the identifier and locator roles of IP addresses to reduce BGP churn and BGP table size. Up to now, however, most studies concerning LISP have been theoretical and, in fact, little is known about the actual LISP deployment performance. In this paper, we fill this gap through measurement campaigns carried out on the LISP Beta Network. More precisely, we evaluate the performance of the two key components of the infrastructure: the control plane (i.e., the mapping system) and the interworking mechanism (i.e., communication between LISP and non-LISP sites). Our measurements highlight that the performance offered by the LISP interworking infrastructure is strongly dependent on BGP routing policies. If we exclude misconfigured nodes, the mapping system typically provides reliable performance and relatively low median mapping resolution delays. Although the bias is not very important, control plane performance favors USA sites as a result of their larger LISP user base, but also because the European infrastructure appears to be less reliable.

Keywords: IP networks; Internet; computer network reliability; internetworking; routing protocols; BGP churn reduction; BGP routing policy; BGP table size reduction; European infrastructure; IP address; Internet; LISP Beta Network reliable performance; LISP interworking infrastructure; USA site; control plane; interworking mechanism; locator-identifier separation protocol; median mapping resolution delay; Databases; Delays; Europe; IP networks; Internet; Routing; Routing protocols (ID#: 15-3883) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6857102&isnumber=6857077

 

Riggio, R.; De Pellegrini, F.; Siracusa, D., "The Price Of Virtualization: Performance Isolation In Multi-Tenants Networks," Network Operations and Management Symposium (NOMS), 2014 IEEE, pp. 1, 7, 5-9 May 2014. doi: 10.1109/NOMS.2014.6838386

Abstract: Network virtualization sits firmly on the Internet's evolutionary path, allowing researchers to experiment with novel clean-slate designs over the production network and practitioners to manage multi-tenant infrastructures in a flexible and scalable manner. In such scenarios, isolation between virtual networks is often intended as purely logical: this is the case for address space isolation or flow space isolation. This approach neglects the effect that network virtualization has on resource allocation network-wide. In this work we investigate the price paid by a purely logical approach in terms of performance degradation. This performance loss is paid by the actual users of a multi-tenant datacenter network. We propose a solution to this problem leveraging a new network virtualization primitive, namely an online link utilization feedback mechanism. It provides each tenant with the necessary information to make efficient use of network resources. We evaluate our solution through a real implementation exploiting the OpenFlow protocol. Empirical results confirm that the proposed scheme is able to support tenants in exploiting virtualized network resources effectively.

Keywords: Internet; virtualisation; Internet evolutionary path; OpenFlow protocol; address space isolation; flow space isolation; multitenants datacenter network; network virtualization; online link utilization feedback mechanism; resource allocation network; Bandwidth; Labeling; Ports (Computers);Resource management; Servers; Substrates; Virtualization (ID#: 15-3884) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838386&isnumber=6838210
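
The core primitive in the abstract, an online link-utilization feedback, can be pictured in a few lines of Python: the infrastructure exposes per-link utilization, and a tenant steers traffic to the path whose busiest link is least loaded. The topology, capacities, and function names below are invented for the sketch and are not the paper's API.

    # Illustrative utilization-feedback loop between substrate and tenant.
    CAPACITY = {"l1": 10.0, "l2": 10.0}        # Gb/s per substrate link (assumed)

    def utilization_feedback(loads):
        """The primitive: report current utilization of every substrate link."""
        return {link: loads[link] / CAPACITY[link] for link in loads}

    def pick_path(paths, feedback):
        """Tenant-side reaction: choose the path whose worst link is emptiest."""
        return min(paths, key=lambda p: max(feedback[l] for l in p))

    loads = {"l1": 9.0, "l2": 2.0}             # l1 is nearly saturated
    fb = utilization_feedback(loads)
    print(fb)                                   # {'l1': 0.9, 'l2': 0.2}
    print(pick_path([["l1"], ["l2"]], fb))      # the tenant shifts to l2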

 

Qadir, J.; Hasan, O., "Applying Formal Methods to Networking: Theory, Techniques and Applications," Communications Surveys & Tutorials, IEEE, vol. PP, no. 99, pp. 1, 1, 07 August 2014. doi: 10.1109/COMST.2014.2345792

Abstract: Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, thus requiring for every new need a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean slate Internet design, especially the software defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking.

Keywords: Communities; Computers; Internet; Mathematics; Protocols; Software; Tutorials (ID#: 15-3886) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6873212&isnumber=5451756

 

Everspaugh, A.; Yan Zhai; Jellinek, R.; Ristenpart, T.; Swift, M., "Not-So-Random Numbers in Virtualized Linux and the Whirlwind RNG," Security and Privacy (SP), 2014 IEEE Symposium on, pp. 559, 574, 18-21 May 2014. doi: 10.1109/SP.2014.42

Abstract: Virtualized environments are widely thought to cause problems for software-based random number generators (RNGs), due to the use of virtual machine (VM) snapshots as well as fewer and believed-to-be lower quality entropy sources. Despite this, we are unaware of any published analysis of the security of critical RNGs when running in VMs. We fill this gap, using measurements of Linux's RNG systems (without the aid of hardware RNGs, the most common use case today) on Xen, VMware, and Amazon EC2. Despite CPU cycle counters providing a significant source of entropy, various deficiencies in the design of the Linux RNG make its first output vulnerable during VM boots and, more critically, make it suffer from catastrophic reset vulnerabilities. We show cases in which the RNG will output the exact same sequence of bits each time it is resumed from the same snapshot. This can compromise, for example, cryptographic secrets generated after resumption. We explore legacy-compatible countermeasures, as well as a clean-slate solution. The latter is a new RNG called Whirlwind that provides a simpler, more-secure solution for providing system randomness.

 Keywords: Linux; virtual machines; Linux RNG systems; VM boots; VM snapshots; Whirlwind RNG; cryptographic secrets; entropy sources; not-so-random numbers; software-based random number generators; virtual machine; virtualized Linux; virtualized environments; Cryptography; Entropy; Hardware; Instruments; Kernel; Linux; random number generator; virtualization (ID#: 15-3887) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956587&isnumber=6956545
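
The catastrophic-reset vulnerability is easy to reproduce in miniature: a deterministic generator restored from the same snapshot state emits the same bits on every resume, while mixing in any fresh post-resume input (here, a cycle-counter stand-in) breaks the repetition. The toy hash-based generator below is our own stand-in for illustration, not the Linux RNG or Whirlwind.

    import hashlib, time

    class ToyRng:
        """Tiny hash-chain PRNG; its whole state fits in a VM snapshot."""
        def __init__(self, seed):
            self.state = hashlib.sha256(seed).digest()

        def read(self, n):
            out = b""
            while len(out) < n:
                self.state = hashlib.sha256(self.state).digest()
                out += self.state
            return out[:n]

        def mix(self, fresh):
            self.state = hashlib.sha256(self.state + fresh).digest()

    snapshot = ToyRng(b"boot-entropy").state   # RNG state captured in the snapshot

    def first_output(state, fresh=None):
        """First bytes produced by a VM resumed with this saved RNG state."""
        rng = ToyRng(b"")
        rng.state = state
        if fresh is not None:
            rng.mix(fresh)                     # post-resume entropy injection
        return rng.read(8)

    # Two resumes from the same snapshot emit identical "random" bytes.
    print(first_output(snapshot) == first_output(snapshot))          # True

    # Mixing fresh input after resumption breaks the repetition.
    fresh = time.perf_counter_ns().to_bytes(8, "little")
    print(first_output(snapshot) == first_output(snapshot, fresh))   # False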

 

Petullo, W.M.; Wenyuan Fei; Solworth, J.A.; Gavlin, P., "Ethos' Deeply Integrated Distributed Types," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 167, 180, 17-18 May 2014. doi: 10.1109/SPW.2014.32

Abstract: Programming languages have long incorporated type safety, increasing their level of abstraction and thus aiding programmers. Type safety eliminates whole classes of security-sensitive bugs, replacing the tedious and error-prone search for such bugs in each application with verifying the correctness of the type system. Despite their benefits, these protections often end at the process boundary: type safety holds within a program but usually does not extend to the file system or to communication with other programs. Existing operating system approaches to bridge this gap require the use of a single programming language or a common language runtime. We describe the deep integration of type safety in Ethos, a clean-slate operating system which requires that all program input and output satisfy a recognizer before applications are permitted to further process it. Ethos types are multilingual and runtime-agnostic, and each has an automatically generated unique type identifier. Ethos bridges the type-safety gap between programs by (1) providing a convenient mechanism for specifying the types each program may produce or consume, (2) ensuring that each type has a single, distributed-system-wide recognizer implementation, and (3) inescapably enforcing these type constraints.

Keywords: operating systems (computers) ;program debugging; programming languages; safety-critical software; trusted computing; Ethos operating system; deeply integrated distributed types; language runtime; multilingual Ethos; operating system approach; programming languages; runtime-agnostic Ethos; security-sensitive bugs; type constraints; type safety; Kernel; Protocols; Robustness; Runtime; Safety; Security; Semantics; Operating system; language-theoretic security; type system (ID#: 15-3888) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957300&isnumber=6957265
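
A loose Python analogy to the mechanism described above: each type gets an identifier derived automatically from its specification, and input is handed to the application only after the type's single recognizer accepts it. The spec format, the registry, and the email recognizer are invented for illustration and deliberately ignore Ethos' kernel-level enforcement.

    import hashlib, json, re

    TYPES = {}

    def register(name, spec, recognizer):
        """Derive a unique type id from the spec; install one shared recognizer."""
        tid = hashlib.sha256(json.dumps([name, spec]).encode()).hexdigest()[:16]
        TYPES[tid] = recognizer
        return tid

    def deliver(tid, payload):
        """Enforce the type constraint before the application sees the data."""
        if tid not in TYPES or not TYPES[tid](payload):
            raise ValueError("input rejected by type recognizer")
        return payload

    EMAIL = register("Email", {"syntax": "addr-spec"},
                     lambda s: re.fullmatch(r"[^@\s]+@[^@\s]+", s) is not None)

    print(deliver(EMAIL, "alice@example.org"))   # accepted
    try:
        deliver(EMAIL, "not an address")         # inescapably rejected
    except ValueError as err:
        print(err)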

 

Manandhar, K.; Adcock, B.; Xiaojun Cao, "Preserving the Anonymity in MobilityFirst Networks," Computer Communication and Networks (ICCCN), 2014 23rd International Conference on, pp. 1, 6, 4-7 Aug. 2014. doi: 10.1109/ICCCN.2014.6911810

Abstract: A scheme for preserving privacy in the MobilityFirst (MF) clean-slate future Internet architecture is proposed in this paper. The proposed scheme, called Anonymity in MobilityFirst (AMF), utilizes a three-tiered approach to effectively exploit the inherent properties of the MF network, such as the Globally Unique Flat Identifier (GUID) and the Global Name Resolution Service (GNRS), to provide anonymity to users. While employing newly proposed schemes for exchanging keys between different tiers of routers to alleviate trust issues, the scheme uses multiple routers in each tier to prevent the routers in the three tiers from colluding to expose end users.

Keywords: Internet; authorisation; data privacy; mobile computing; telecommunication network routing; telecommunication security; trusted computing; AMF; GNRS; GUID; MF networks; MobilityFirst networks; anonymity in MobilityFirst; anonymity preservation; future Internet architecture; global name resolution service; globally unique flat identifier; privacy preservation; routers; three-tiered approach; trust issues; user anonymity; Computer science; Internet; Privacy; Public key; Routing; Routing protocols (ID#: 15-3889) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911810&isnumber=6911704

 

Di Renzo, M.; Haas, H.; Ghrayeb, A.; Sugiura, S.; Hanzo, L., "Spatial Modulation for Generalized MIMO: Challenges, Opportunities, and Implementation," Proceedings of the IEEE, vol. 102, no. 1, pp. 56, 103, Jan. 2014. doi: 10.1109/JPROC.2013.2287851

Abstract: A key challenge of future mobile communication research is to strike an attractive compromise between wireless networks' area spectral efficiency and energy efficiency. This necessitates a clean-slate approach to wireless system design, embracing the rich body of existing knowledge, especially on multiple-input-multiple-output (MIMO) technologies. This motivates the proposal of an emerging wireless communications concept conceived for single-radio-frequency (RF) large-scale MIMO communications, which is termed spatial modulation (SM). The concept of SM has established itself as a beneficial transmission paradigm, subsuming numerous members of the MIMO system family. The research of SM has reached sufficient maturity to motivate its comparison to state-of-the-art MIMO communications, as well as to inspire its application to other emerging wireless systems such as relay-aided, cooperative, small-cell, optical wireless, and power-efficient communications. Furthermore, it has received sufficient research attention to be implemented in testbeds, and it holds the promise of stimulating further vigorous interdisciplinary research in the years to come. This tutorial paper is intended to offer a comprehensive state-of-the-art survey on SM-MIMO research, to provide a critical appraisal of its potential advantages, and to promote the discussion of its beneficial application areas and their research challenges, leading to the analysis of the technological issues associated with the implementation of SM-MIMO. The paper is concluded with a description of the world's first experimental activities in this vibrant research field.

Keywords: MIMO communication; cellular radio; energy conservation; modulation; next generation networks; MIMO system; SM-MIMO research; area spectral efficiency; beneficial transmission paradigm; cooperative communications; energy efficiency; generalized MIMO; generalized multiple-input-multiple-output technologies; mobile communication research; mobile data traffic; next-generation cellular networks; optical wireless communications; power-efficient communications; relay-aided communications; single-radio-frequency large-scale MIMO communications; spatial modulation; wireless network; wireless system design; MIMO; Modulation; Spatial resolution; Tutorials; Green and sustainable wireless communications; heterogenous cellular networks; large-scale multiantenna systems; multiantenna wireless systems; network-coded cooperative wireless networks; relay-aided wireless communications; single-radio-frequency (RF) multiantenna systems; spatial modulation; testbed implementation; visible light communications (ID#: 15-3890) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6678765&isnumber=6685843
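
For readers new to SM, the mapping at its heart is compact enough to show directly: some information bits select the index of the single active transmit antenna, and the remaining bits select an ordinary constellation symbol. The antenna count and the Gray-mapped QPSK table below are arbitrary illustrative choices, not parameters from the survey.

    import math

    N_TX = 4                       # transmit antennas: log2(4) = 2 index bits
    QPSK = {                       # Gray-mapped QPSK constellation
        (0, 0): complex(1, 1), (0, 1): complex(-1, 1),
        (1, 1): complex(-1, -1), (1, 0): complex(1, -1),
    }

    def sm_map(bits):
        """Map 4 bits: first 2 choose the antenna, last 2 the QPSK symbol."""
        assert len(bits) == 4
        antenna = bits[0] * 2 + bits[1]
        assert antenna < N_TX
        symbol = QPSK[(bits[2], bits[3])] / math.sqrt(2)   # unit-energy symbol
        return antenna, symbol

    # 4 bits per channel use, yet only one RF chain is ever active.
    print(sm_map([1, 0, 0, 1]))    # (antenna 2, symbol (-1+1j)/sqrt(2))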

 

Mohamed, Abdelrahim; Onireti, Oluwakayode; Qi, Yinan; Imran, Ali; Imran, Muhammed; Tafazolli, Rahim, "Physical Layer Frame in Signalling-Data Separation Architecture: Overhead and Performance Evaluation," European Wireless 2014; 20th European Wireless Conference, Proceedings of, pp. 1, 6, 14-16 May 2014. doi: (not provided)

Abstract: Conventional cellular systems are dimensioned according to a worst-case scenario, and they are designed to ensure ubiquitous coverage with an always-present wireless channel, irrespective of the spatial and temporal demand for service. A more energy-conscious approach will require an adaptive system, with a minimum amount of overhead, that is available at all locations and at all times but becomes functional only when needed. This approach suggests a new clean slate system architecture with a logical separation between the ability to establish availability of the network and the ability to provide functionality or service. Focusing on the physical layer frame of such an architecture, this paper discusses and formulates the overhead reduction that can be achieved in next-generation cellular systems as compared with the Long Term Evolution (LTE). Considering channel estimation as a performance metric, while conforming to the time and frequency constraints of pilot spacing, we show that the overhead gain does not come at the expense of performance degradation.

Keywords:  (not provided) (ID#: 15-3891) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843062&isnumber=6843048

 

Khojastepour, M.A.; Aryafar, E.; Sundaresan, K.; Mahindra, R.; Rangarajan, S., "Exploring the Potential For Full-Duplex In Legacy LTE Systems," Sensing, Communication, and Networking (SECON), 2014 Eleventh Annual IEEE International Conference on, pp. 10, 18, June 30 2014-July 3 2014. doi: 10.1109/SAHCN.2014.6990322

Abstract: With the growing demand for increased spectral efficiencies, there has been renewed interest in enabling full-duplex communications. However, existing approaches to enable full-duplex require a clean-slate approach to address the key challenge in full-duplex, namely self-interference suppression. This serves as a big deterrent to enabling full-duplex in existing cellular networks. Towards our vision of enabling full-duplex in legacy cellular, specifically LTE, networks, with no modifications to existing hardware at the BS and client nor to technology-specific industry standards, we present the design of our experimental system FD-LTE, which incorporates a combination of passive SI cancellation schemes with legacy LTE half-duplex BS and client devices. We build a prototype of FD-LTE, integrate it with LTE's evolved packet core, and conduct over-the-air experiments to explore the feasibility and potential of full-duplex with legacy LTE networks. We report promising experimental results from FD-LTE, which currently applies to scenarios with the limited ranges typical of small cells.

Keywords: Long Term Evolution; interference suppression; radio spectrum management; FD-LTE; LTE systems; cellular networks; full-duplex communications; passive SI cancellation schemes; self-interference suppression; spectral efficiency; Base stations; Downlink; Frequency conversion; Long Term Evolution; Receiving antennas; Silicon; Uplink (ID#: 15-3892)  

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6990322&isnumber=6990316

 

Tombaz, S.; Sang-wook Han; Ki Won Sung; Zander, J., "Energy Efficient Network Deployment With Cell DTX," Communications Letters, IEEE, vol. 18, no. 6, pp. 977, 980, June 2014. doi: 10.1109/LCOMM.2014.2323960

Abstract: Cell discontinuous transmission (DTX) is a new feature that enables sleep mode operations at the base station (BS) side during transmission time intervals when there is no traffic. In this letter, we analyze the maximum achievable energy saving of cell DTX. We incorporate cell DTX with a clean-slate network deployment and obtain the optimal BS density for the lowest energy consumption satisfying a certain quality-of-service requirement, considering daily traffic variation. The numerical result indicates that the fast traffic adaptation capability of cell DTX favors dense network deployment with lightly loaded cells, which brings about considerable improvement in energy saving.

Keywords: cellular radio; telecommunication power management ;base station; cell DTX; cell discontinuous transmission; energy efficient network deployment; maximum achievable energy saving; quality of service; sleep mode operations; traffic variation; Energy consumption; Interference; Load modeling; Planning; Power demand; Quality of service; Vectors; Energy efficiency; cell DTX; cell load; network deployment; traffic profile (ID#: 15-3893) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815652&isnumber=6827269
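
A back-of-the-envelope model conveys the letter's mechanism: with cell DTX, empty transmission intervals are spent at a sleep power level rather than an idle level, so daily traffic variation translates into energy saving. All power figures and the traffic profile below are invented placeholders, not values from the paper.

    # Rough daily-energy comparison for a base station with and without DTX.
    P_ACTIVE = 130.0   # W while transmitting (assumed)
    P_SLEEP  = 15.0    # W in DTX sleep mode (assumed)
    P_IDLE   = 75.0    # W when idle without DTX (assumed)

    def daily_energy(load_profile, dtx):
        """load_profile: fraction of occupied TTIs for each hour of a day."""
        energy = 0.0
        for load in load_profile:
            empty = P_SLEEP if dtx else P_IDLE
            energy += load * P_ACTIVE + (1.0 - load) * empty   # Wh per hour
        return energy

    # Stylized daily traffic variation: quiet at night, busy in the evening.
    profile = [0.05] * 6 + [0.3] * 6 + [0.5] * 6 + [0.8] * 4 + [0.2] * 2

    base, with_dtx = daily_energy(profile, False), daily_energy(profile, True)
    print(f"saving with DTX: {100 * (base - with_dtx) / base:.1f}%")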

 

Gregr, M.; Veda, M., "Challenges with Transition and User Accounting in Next Generation Networks," Network Protocols (ICNP), 2014 IEEE 22nd International Conference on, pp. 501, 503, 21-24 Oct. 2014. doi: 10.1109/ICNP.2014.79

Abstract: Future networks may change the way network administrators monitor and account for their users. History shows that a completely new design (clean slate) is usually used to propose a new network architecture - e.g., Network Control Protocol to TCP/IP, IPv4 to IPv6, or IP to the Recursive InterNetwork Architecture. The incompatibility between these architectures changes the user accounting process, as network administrators have to use different information to identify a user. The paper presents a methodology for gathering all the information needed for a smooth transition between two incompatible architectures. The transition from IPv4 to IPv6 is used as a use case, but the same process should be applicable to any new networking architecture.

Keywords: IP networks; next generation networks; protocols; IPv4; IPv6; TCP/IP; network administrators; network control protocol; next generation networks; recursive inter network architecture; user accounting; Hardware; IP networks; Internet; Monitoring; Organizations; Probes; Protocols (ID#: 15-3894) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980418&isnumber=6980338


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

 

Cross Layer Security (2014 Year in Review)

 

 

 
SoS Newsletter Logo

Cross Layer Security
(2014 Year in Review)

 

Protocol architectures have traditionally followed strict layering principles to ensure interoperability, rapid deployment, and efficient implementation. But a lack of coordination between layers limits the performance of these architectures. More importantly, the lack of coordination may introduce security vulnerabilities and potential threat vectors. The literature cited here addresses the problems and opportunities available for cross-layer security. All were published in 2014.

 

Farag, M.M.; Azab, M.; Mokhtar, B., "Cross-layer Security Framework For Smart Grid: Physical Security Layer," Innovative Smart Grid Technologies Conference Europe (ISGT-Europe), 2014 IEEE PES, pp. 1, 7, 12-15 Oct. 2014. doi: 10.1109/ISGTEurope.2014.7028963

Abstract: Security is a major challenge preventing wide deployment of smart grid technology. Typically, the classical power grid is protected with a set of isolated security tools applied to individual grid components and layers, ignoring their cross-layer interaction. Such an approach does not address the smart grid security requirements, because intricate attacks are usually cross-layer, exploiting multiple vulnerabilities at various grid layers and domains. We advance a conceptual layering model of the smart grid and a high-level overview of a security framework, termed CyNetPhy, towards enabling cross-layer security of the smart grid. CyNetPhy tightly integrates and coordinates three interrelated and highly cooperative real-time security systems crossing various layers of the grid's cyber and physical domains to simultaneously address the grid's operational and security requirements. In this article, we present in detail the physical security layer (PSL) in CyNetPhy. We describe an attack scenario raising the emerging hardware Trojan threat in process control systems (PCSes) and its novel PSL resolution leveraging model predictive control principles. Initial simulation results illustrate the feasibility and effectiveness of the PSL.

Keywords: power system security; predictive control; process control; smart power grids; CyNetPhy PSL resolution; PCS; conceptual layering model; cooperative real-time security system; cross-layer security framework; hardware Trojan threat; isolated security tool; physical security layer; predictive control; process control system; smart power grid cyber technology; Control systems; Hardware; Hidden Markov models; Monitoring; Smart grids; Trojan horses; Cross-Layer Security; Physical Layer Security; Process Control Security; Smart Grid; Smart Grid Security  (ID#: 15-3839)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028963&isnumber=7028730

 

Jie Tang; Huan Huan Song; Fei Pan; Hong Wen; Bin Wu; Yixin Jiang; Xiaobin Guo; Zhen Chen, "A MIMO Cross-Layer Precoding Security Communication System," Communications and Network Security (CNS), 2014 IEEE Conference on, pp. 500, 501, 29-31 Oct. 2014. doi: 10.1109/CNS.2014.6997524

Abstract: This paper proposes a MIMO cross-layer precoding secure communication system in which the precoding pattern is controlled by higher-layer cryptography. In contrast to a pure physical-layer security system, the proposed scheme can enhance security in adverse situations that physical-layer security alone can hardly handle. Two typical situations are considered: one in which the attackers have ideal CSI, and another in which the eavesdropper's channel is highly correlated with the legitimate channel. Our scheme integrates upper-layer and physical-layer security to guarantee security in a real communication system. Extensive theoretical analysis and simulations are conducted to demonstrate its effectiveness. The proposed method can be extended to many other communication scenarios.

Keywords: MIMO communication; cryptography; precoding; telecommunication security; CSI;MIMO cross-layer precoding secure communications; MIMO cross-layer precoding security communication system; eavesdropper's channel; higher layer cryptography; physical layer security system; upper layer; Bit error rate; Educational institutions; MIMO; Modulation; Physical layer; Security; MIMO; physical layer security cross-layer security; precoding; random array  (ID#: 15-3840)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6997524&isnumber=6997445

 

Lixing Song; Shaoen Wu, "Cross-layer Wireless Information Security," Computer Communication and Networks (ICCCN), 2014 23rd International Conference on, pp. 1, 9, 4-7 Aug. 2014. doi: 10.1109/ICCCN.2014.6911744

Abstract: Wireless information security generates shared secret keys from reciprocal channel dynamics. Current solutions are mostly based on temporal per-frame channel measurements of signal strength and suffer from a low key generation rate (KGR), a large channel-probing budget, and poor secrecy if a channel does not vary significantly over time. This paper designs a cross-layer solution that measures noise-free per-symbol channel dynamics across both the time and frequency domains and derives keys from the highly fine-grained per-symbol reciprocal channel measurements. This solution has the following merits: (1) the per-symbol granularity improves the volume of available uncorrelated channel measurements by orders of magnitude over the per-frame granularity of conventional solutions, and so does the KGR; (2) the solution exploits subtle channel fluctuations in the frequency domain that do not force users to move to incur enough temporal variation, as conventional solutions require; and (3) it measures noise-free channel response, which suppresses key bit disagreement between trusted users. As a result, in every aspect, the proposed solution improves the security performance by orders of magnitude over conventional solutions. The performance has been evaluated both on a GNU SDR testbed in practice and on a local GNU Radio simulator. The cross-layer solution can generate a KGR of 24.07 bits per probing frame on the testbed, or 19 bits in simulation, whereas conventional optimal solutions have a KGR of at most one or two bits per probing frame. It also has a low key bit disagreement ratio while maintaining a high entropy rate. The derived keys show strong independence, with correlation coefficients mostly less than 0.05. Furthermore, it is empirically shown that any slight physical change, e.g., a small rotation of the antenna, results in fundamentally different cross-layer frequency measurements, which implies the strong secrecy and high efficiency of the proposed solution.

Keywords: cryptography; entropy; telecommunication security; wireless channels; GNU SDR testbed; GNU radio simulator; KGR; antenna rotation; bit per probing frame; channel probing; correlation coefficients; cross-layer wireless information security; fine-grained per-symbol reciprocal channel measurements; frequency domain; high entropy rate; key generate rate; low key bit disagreement ratio; noise-free channel response; noise-free per-symbol channel; per-frame channel measurements; poor secrecy; reciprocal channel dynamics; security performance; signal strength; subtle channel fluctuations; uncorrelated channel measurement volume; Communication system security; Frequency measurement; Information security; Noise measurement; OFDM; Pollution measurement; Wireless communication  (ID#: 15-3841)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911744&isnumber=6911704
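
The key-extraction step common to schemes of this kind can be sketched in a few lines: both ends quantize their (nearly identical, reciprocal) channel measurements into bits and compare the disagreement. The Gaussian channel model, the median threshold, and the noise level are our own simplifications; the paper's per-symbol, cross-domain measurement and its reconciliation steps are not modeled.

    import random, statistics

    def measure_channel(n, seed, noise):
        """Reciprocal channel gains as seen by one end, with measurement noise."""
        rng = random.Random(seed)                     # shared channel realization
        true_gain = [rng.gauss(0.0, 1.0) for _ in range(n)]
        my = random.Random()                          # independent receiver noise
        return [g + my.gauss(0.0, noise) for g in true_gain]

    def quantize(samples):
        """Threshold against the median: one key bit per channel sample."""
        med = statistics.median(samples)
        return [1 if s > med else 0 for s in samples]

    alice = quantize(measure_channel(256, seed=42, noise=0.05))
    bob   = quantize(measure_channel(256, seed=42, noise=0.05))
    disagree = sum(a != b for a, b in zip(alice, bob)) / len(alice)
    print(f"key bits: {len(alice)}, bit disagreement ratio: {disagree:.2%}")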

 

Yongle Hao; Yizhen Jia; Baojiang Cui; Wei Xin; Dehu Meng, "OpenSSL HeartBleed: Security Management of Implements of Basic Protocols," P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), 2014 Ninth International Conference on, pp. 520, 524, 8-10 Nov. 2014. doi: 10.1109/3PGCIC.2014.148

Abstract: With the rapid development of information technology, information security management is ever more important. The OpenSSL security incident showed that there are distinct disadvantages in the security management of the current hierarchical structure: software and hardware facilities need to enforce security management on their implementations of crucial basic protocols in order to ease, to a certain extent, the security threats against those facilities. This article expounds cross-layer security management and enumerates five contributory factors behind the core problems that such management faces.

Keywords: cryptographic protocols; OpenSSL HeartBleed; OpenSSL security; cross-layer security management; hardware facilities; hierarchical structure; information security management; information technology; protocols; secure socket layer; security threats; software facilities; Computers; Hardware; Heart beat; Information security; Protocols; Software  (ID#: 15-3842)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024639&isnumber=7024297

 

Mahmood, A.; Akbar, A.H., "Threats in End To End Commercial Deployments of Wireless Sensor Networks and Their Cross Layer Solution," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp. 15, 22, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861325

Abstract: Commercial Wireless Sensor Networks (WSNs) can be accessed through sensor web portals. However, the associated security implications and threats to (1) users/subscribers, (2) investors, and (3) third-party operators of sensor web portals have not been considered in their entirety; rather, contemporary work handles them in parts. In this paper, we discuss different kinds of security attacks and vulnerabilities at different layers affecting the users, the investors including Wireless Sensor Network Service Providers (WSNSPs), and the WSN itself, in relation to the two well-known documents of the "Department of Homeland Security" (DHS) and the "Department of Defense" (DOD), as these remain the standard security documents to date. Further, we propose a comprehensive cross-layer security solution, in the light of the guidelines given in the aforementioned documents, that is minimalist in implementation and achieves the purported security goals.

Keywords: telecommunication security; wireless sensor networks; Department of Defense; Department of Homeland Security; WSNSP; cross layer security solution; cross layer solution; end to end commercial deployments; security attacks; security goals; sensor web portals; standard security documents; wireless sensor network service providers; Availability; Mobile communication; Portals; Security; Web servers; Wireless sensor networks; Wireless sensor network; attacks; commercial; security; sensor portal; threats; web services  (ID#: 15-3843)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861325&isnumber=6861314

 

Datta, E.; Goyal, N., "Security Attack Mitigation Framework For The Cloud," Reliability and Maintainability Symposium (RAMS), 2014 Annual, pp. 1, 6, 27-30 Jan. 2014. doi: 10.1109/RAMS.2014.6798457

Abstract: Cloud computing brings many advantages for enterprise IT infrastructure; virtualization technology, which is the backbone of the cloud, provides easy consolidation of resources and reduction of cost, space, and management effort. However, the security of critical and private data is a major concern which still keeps many customers from switching over from their traditional in-house IT infrastructure to a cloud service. The existence of techniques to physically locate a virtual machine in the cloud, and the proliferation of software vulnerability exploits and cross-channel attacks between virtual machines, together increase the risk of business data leaks and privacy losses. This work proposes a framework to mitigate such risks and engineer customer trust towards enterprise cloud computing. Every day, new vulnerabilities are discovered even in well-engineered software products, and hacking techniques are getting more sophisticated over time. In this scenario, an absolute guarantee of security in an enterprise-wide information processing system seems a remote possibility; software systems in the cloud are vulnerable to security attacks. A practical solution to the security problem lies in a well-engineered attack mitigation plan. On the positive side, cloud computing has a collective infrastructure which can be effectively used to mitigate attacks if an appropriate defense framework is in place. We propose such an attack mitigation framework for the cloud. Software vulnerabilities in the cloud have different severities and different impacts on the security parameters (confidentiality, integrity, and availability). By using a Markov model, we continuously monitor and quantify the risk of compromise of the different security parameters (e.g., a change in the potential to compromise data confidentiality). Whenever there is a significant change in risk, our framework facilitates the tenants in calculating the Mean Time to Security Failure (MTTSF) of the cloud and allows them to adopt a dynamic mitigation plan. This framework is an add-on security layer in the cloud resource manager, and it could improve customer trust in enterprise cloud solutions.

Keywords: Markov processes; cloud computing; security of data; virtualisation; MTTSF cloud; Markov model; attack mitigation plan; availability parameter; business data leaks; cloud resource manager; cloud service; confidentiality parameter; cross-channel attacks; customer trust; enterprise IT infrastructure; enterprise cloud computing; enterprise cloud solutions; enterprise wide information processing system; hacking techniques; information technology; integrity parameter; mean time to security failure; privacy losses; private data security; resource consolidation; security attack mitigation framework; security guarantee; software products; software vulnerabilities; software vulnerability exploits; virtual machine; virtualization technology; Cloud computing; Companies; Security; Silicon; Virtual machining; Attack Graphs; Cloud computing; Markov Chain; Security; Security Administration  (ID#: 15-3844)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798457&isnumber=6798433
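
The MTTSF computation mentioned in the abstract reduces to the expected absorption time of a Markov chain, which the following small example works out. The three-state model and all transition probabilities are invented for illustration and are not from the paper.

    # Expected steps until the absorbing "security failed" state is reached.
    # States: 0 = healthy, 1 = partially compromised, 2 = failed (absorbing).
    P = [
        [0.90, 0.09, 0.01],
        [0.20, 0.60, 0.20],
        [0.00, 0.00, 1.00],
    ]

    def mttsf(P, transient=(0, 1), start=0, iters=1000):
        """Expected time to absorption: solve t = 1 + Q t by fixed-point iteration."""
        t = {s: 0.0 for s in transient}
        for _ in range(iters):
            t = {s: 1.0 + sum(P[s][j] * t[j] for j in transient)
                 for s in transient}
        return t[start]

    print(f"MTTSF from the healthy state: {mttsf(P):.1f} steps")  # about 22.3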

 

Rieke, R.; Repp, J.; Zhdanova, M.; Eichler, J., "Monitoring Security Compliance of Critical Processes," Parallel, Distributed and Network-Based Processing (PDP), 2014 22nd Euromicro International Conference on, pp. 552, 560, 12-14 Feb. 2014. doi: 10.1109/PDP.2014.106

Abstract: Enforcing security in process-aware information systems at runtime requires the monitoring of systems' operation using process information. Analysis of this information with respect to security and compliance aspects is growing in complexity with the increase in functionality, connectivity, and dynamics of process evolution. To tackle this complexity, the application of models is becoming standard practice. Considering today's frequent changes to processes, model-based support for security and compliance analysis is not only needed in pre-operational phases but also at runtime. This paper presents an approach to support evaluation of the security status of processes at runtime. The approach is based on operational formal models derived from process specifications and security policies comprising technical, organizational, regulatory and cross-layer aspects. A process behavior model is synchronized by events from the running process and utilizes prediction of expected close-future states to find possible security violations and allow early decisions on countermeasures. The applicability of the approach is exemplified by a misuse case scenario from a hydroelectric power plant.

Keywords: hydroelectric power stations; power system security; critical processes; hydroelectric power plant; model-based support; operational formal models; process behavior model; process specifications; process-aware information systems; security compliance; security policies; Automata; Business; Computational modeling; Monitoring; Predictive models; Runtime; Security; critical infrastructures; predictive security analysis; process behavior analysis; security information and event management; security modeling and simulation; security monitoring  (ID#: 15-3845)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6787328&isnumber=6787236

 

Wen, H.; Tang, J.; Wu, J.; Song, H.; Wu, T.; Wu, B.; Ho, P.; Lv, S.; Sun, L., "A Cross-layer Secure Communication Model Based on Discrete Fractional Fourier Transform (DFRFT)," Emerging Topics in Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1, 1, 06 November 2014. doi: 10.1109/TETC.2014.2367415

Abstract: Discrete fractional Fourier transform (DFRFT) is a generalization of the discrete Fourier transform. There are a number of DFRFT proposals, which are useful for various signal processing applications. This paper investigates practical solutions toward the construction of unconditionally secure communication systems based on DFRFT via a cross-layer approach. By introducing a distort signal parameter, the sender randomly flip-flops between the distort signal parameter and the general signal parameter to confuse the attacker. The advantages of the legitimate partners are guaranteed. We extend the advantages between legitimate partners by developing novel security codes on top of the proposed cross-layer DFRFT security communication model, aiming to achieve an error-free legitimate channel while preventing the eavesdropper from obtaining any useful information. Thus, a cross-layer strong mobile communication secure model is built.

Keywords: Constellation diagram; Discrete Fourier transforms; Distortion; Flip-flops; OFDM; Security; DFRFT; Physical layer security; crosslayer; security code  (ID#: 15-3846)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6949146&isnumber=6558478

 

Sabaliauskaite, G.; Mathur, A.P., "Countermeasures to Enhance Cyber-physical System Security and Safety," Computer Software and Applications Conference Workshops (COMPSACW), 2014 IEEE 38th International, pp. 13, 18, 21-25 July 2014. doi: 10.1109/COMPSACW.2014.6

Abstract: An application of two Cyber-Physical System (CPS) security countermeasures - Intelligent Checker (IC) and Cross-correlator - for enhancing CPS safety and achieving required CPS safety integrity level is presented. ICs are smart sensors aimed at detecting attacks in CPS and alerting the human operators. Cross-correlator is an anomaly detection technique for detecting deception attacks. We show how ICs could be implemented at three different CPS safety protection layers to maintain CPS in a safe state. In addition, we combine ICs with the cross-correlator technique to assure high probability of failure detection. Performance simulations show that a combination of these two security countermeasures is effective in detecting and mitigating CPS failures, including catastrophic failures.

Keywords: data integrity; fault diagnosis; security of data; CPS failure detection; CPS failure mitigation; CPS safety integrity level; CPS safety protection layers; CPS security countermeasures; IC; anomaly detection technique; catastrophic failures; cross-correlator; cyber-physical system safety; cyber-physical system security; deception attack detection; intelligent checker; smart sensors; Integrated circuits; Process control; Robot sensing systems; Safety; Security; ISA-84; cross-correlator; cyber-attacks; cyber-physical systems; intelligent checkers; safety; safety instrumented systems; security  (ID#: 15-3846)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903098&isnumber=6903069
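
The cross-correlator idea lends itself to a compact demonstration: two honest sensors observing the same process remain highly correlated, while a deception attack that substitutes a fabricated stream drives the correlation below an alarm threshold. The signal model and the threshold below are our assumptions for the sketch, not values from the paper.

    import math, random

    def pearson(x, y):
        """Pearson correlation coefficient between two equal-length streams."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    rng = random.Random(7)
    truth = [math.sin(t / 5.0) for t in range(200)]          # true process
    s1 = [v + rng.gauss(0, 0.05) for v in truth]             # honest sensor
    s2 = [v + rng.gauss(0, 0.05) for v in truth]             # honest sensor
    spoofed = [1.0 + rng.gauss(0, 0.05) for _ in truth]      # fabricated stream

    THRESHOLD = 0.8   # assumed alarm threshold
    print("honest pair ok: ", pearson(s1, s2) > THRESHOLD)       # True
    print("spoofed pair ok:", pearson(s1, spoofed) > THRESHOLD)  # False: alarm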

 

Syrivelis, Dimitris; Paschos, Georgios S.; Tassiulas, Leandros, "VirtueMAN: A Software-Defined Network Architecture For Wifi-Based Metropolitan Applications," Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), 2014 IEEE 19th International Workshop on, pp. 95, 99, 1-3 Dec. 2014. doi: 10.1109/CAMAD.2014.7033213

Abstract: Metropolitan-scale WiFi deployments face several challenges, including controllability and management, which prohibit the provision of seamless access, quality of service (QoS), and security to mobile users. Thus, they remain largely an untapped networking resource. In this work, an SDN-based network architecture is proposed; it comprises a distributed network-wide controller and a novel datapath for wireless access points. Virtualization of network functions is employed for configurable user access control as well as for supporting an IP-independent forwarding scheme. The proposed architecture is a flat network across the deployment area, providing seamless connectivity and reachability without the need for intermediary servers over the Internet, thus enabling a wide variety of localized applications, like for instance video surveillance. Also, the provided interface allows for transparent implementation of intra-network distributed cross-layer traffic control protocols that can optimize the multihop performance of the wireless network.

Keywords: Authentication; Heart beat; IEEE 802.11 Standards; Internet; Mobile communication; Protocols; Quality of service  (ID#: 15-3847)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7033213&isnumber=7033190

 

Ponti, C.; Pajewski, L.; Schettini, G., "Simulation of Scattering By Cylindrical Targets Hidden Behind A Layer," Ground Penetrating Radar (GPR), 2014 15th International Conference on, pp. 560, 564, June 30 2014-July 4 2014. doi: 10.1109/ICGPR.2014.6970486

Abstract: Through-wall sensing of hidden objects is a topic that is receiving wide interest in several application contexts, especially in the field of security. The success of object retrieval relies on accurate scattering models as well as on reliable inversion algorithms. In this paper, a contribution to the modeling of direct scattering for Through-Wall Imaging applications is given. The approach deals with hidden scatterers that are circular cross-section metallic cylinders placed below a dielectric layer, and it is based on an analytical-numerical technique implementing the Cylindrical Wave Approach. As the burial medium of the scatterers may be a dielectric of arbitrary permittivity, general problems of scattering by hidden objects may be considered. When the burial medium is filled with air, the technique can simulate objects concealed in a building interior. Otherwise, simulation of geophysical problems of targets buried in a layered soil can be performed. Numerical results for practical cases are reported in the paper, showing the potential of the technique for use in inversion algorithms.

Keywords: buried object detection; electromagnetic wave scattering; geophysical techniques; image processing; numerical analysis; analytical-numerical technique; buried targets; cylindrical targets; cylindrical wave approach; hidden objects; hidden scatterers; inversion algorithms; object retrieval; scattering models; through-wall imaging applications; through-wall sensing; Atmospheric modeling; Dielectrics; Electromagnetic scattering; Reliability; Slabs; buried objects; electromagnetic scattering; fourier analysis; through-wall scattering  (ID#: 15-3848)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970486&isnumber=6970371

 

Crisan, D.; Birke, R.; Barabash, K.; Cohen, R.; Gusat, M., "Datacenter Applications in Virtualized Networks: A Cross-Layer Performance Study," Selected Areas in Communications, IEEE Journal on, vol. 32, no. 1, pp. 77, 87, January 2014. doi: 10.1109/JSAC.2014.140108 Datacenter-based Cloud computing has induced new disruptive trends in networking, key among which is network virtualization. Software-Defined Networking overlays aim to improve the efficiency of the next generation multitenant datacenters. While early overlay prototypes are already available, they focus mainly on core functionality, with little being known yet about their impact on the system level performance. Using query completion time as our primary performance metric, we evaluate the overlay network impact on two representative datacenter workloads, Partition/Aggregate and 3-Tier. We measure how much performance is traded for overlay's benefits in manageability, security and policing. Finally, we aim to assist the datacenter architects by providing a detailed evaluation of the key overlay choices, all made possible by our accurate cross-layer hybrid/mesoscale simulation platform.

Keywords: cloud computing; computer centres; overlay networks; software radio; virtualisation; cloud computing; cross layer hybrid mesoscale simulation platform; cross layer performance study; datacenter applications; datacenter workloads; network virtualization; overlay network; software defined networking overlays; virtualized networks; Delays; Encapsulation; Hardware; IP networks; Protocols; Servers; Virtualization; datacenter networks; network virtualization; overlay networks; software-defined networking  (ID#: 15-3849)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6689485&isnumber=6689238

 

Mendes, L.D.P.; Rodrigues, J.J.P.C.; Lloret, J.; Sendra, S., "Cross-Layer Dynamic Admission Control for Cloud-Based Multimedia Sensor Networks," Systems Journal, IEEE, vol. 8, no. 1, pp. 235, 246, March 2014. doi: 10.1109/JSYST.2013.2260653 Cloud-based communication systems are now widely used in many application fields such as medicine, security, environmental protection, etc. Their use is being extended to the most demanding services, like multimedia delivery. However, there are a lot of constraints when cloud-based sensor networks use the standard IEEE 802.15.3 or IEEE 802.15.4 technologies. This paper proposes a channel characterization scheme combined with cross-layer admission control in dynamic cloud-based multimedia sensor networks to share the network resources among any two nodes. The analysis shows the behavior of two nodes using different network access technologies and the channel effects for each technology. Moreover, the existence of optimal node arrival rates that improve the usage of dynamic admission control when network resources are used is also shown. An extensive simulation study was performed to evaluate and validate the efficiency of the proposed dynamic admission control for cloud-based multimedia sensor networks.

Keywords: IEEE standards; Zigbee; channel allocation; cloud computing; control engineering computing; multimedia communication; telecommunication congestion control; wireless sensor networks; channel characterization scheme; channel effects; cloud-based communications system; cloud-based sensor networks; cross-layer admission control; cross-layer dynamic admission control; dynamic cloud-based multimedia sensor networks; extensive simulation study; multimedia delivery; network access technology; network resources; optimal node arrival rates; standard IEEE 802.15.3 technology; standard IEEE 802.15.4 technology; Admission control; cloud computing; cross-layer design; multimedia communications; sensor networks  (ID#: 15-3850)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6553353&isnumber=6740850

 

Jialing Mo; Qiang He; Weiping Hu, "An Adaptive Threshold De-Noising Method Based on EEMD," Signal Processing, Communications and Computing (ICSPCC), 2014 IEEE International Conference on, pp. 209, 214, 5-8 Aug. 2014. doi: 10.1109/ICSPCC.2014.6986184 In view of the difficulty in selecting the wavelet base and decomposition level for wavelet-based de-noising methods, this paper proposes an adaptive de-noising method based on Ensemble Empirical Mode Decomposition (EEMD). An autocorrelation and cross-correlation method is used to adaptively find the signal-to-noise boundary layer of the EEMD in this method. Then the noise dominant layer is filtered directly and the signal dominant layer is threshold de-noised. Finally, the de-noised signal is reconstructed from each de-noised layer component. This method solves the problem of mode mixing in Empirical Mode Decomposition (EMD) by using EEMD and combines the advantages of wavelet thresholding. In this paper, we focus on the analysis and verification of the correctness of the adaptive determination of the noise dominant layer. The simulation experiment results prove that this de-noising method is efficient and has good adaptability.

Keywords: correlation theory; filtering theory; signal denoising; signal reconstruction; wavelet transforms; EEMD; adaptive determination correctness analysis; adaptive determination correctness verification; adaptive threshold de-noising method; autocorrelation method; cross-correlation method; de-noised layer component; de-noising signal reconstruction; decomposition level selection; ensemble empirical mode decomposition; mode mixing problem; noise dominant layer filtering; signal-to-noise boundary layer; threshold de-noised signal dominant layer; wavelet base selection; wavelet threshold; wavelet-based de-noising method; Correlation; Empirical mode decomposition; Noise reduction; Signal to noise ratio; Speech; White noise; Adaptive; Ensemble Empirical Mode Decomposition; Threshold De-noising; Wavelet Analysis  (ID#: 15-3851)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6986184&isnumber=6986138
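
For readers who want to experiment with the idea above, the following is a minimal sketch of the autocorrelation-based boundary search and per-layer thresholding the abstract describes. It assumes the IMF layers have already been computed by some EEMD implementation (for example, the PyEMD package) and are ordered from highest to lowest frequency; the 0.5 correlation threshold and the universal soft threshold are illustrative choices, not the authors' exact parameters.

```python
import numpy as np

def noise_boundary(imfs, lag=1, r_min=0.5):
    """Return the index of the first signal-dominant IMF layer.
    White-noise-like layers have lag-1 autocorrelation near zero,
    while signal-dominant layers remain strongly correlated."""
    for i, imf in enumerate(imfs):
        x = imf - imf.mean()
        r1 = np.dot(x[:-lag], x[lag:]) / np.dot(x, x)
        if r1 > r_min:              # assumed threshold; tune per application
            return i
    return len(imfs)

def soft_threshold(imf):
    """Universal soft threshold on one IMF, sigma from the MAD estimate."""
    sigma = np.median(np.abs(imf)) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(imf)))
    return np.sign(imf) * np.maximum(np.abs(imf) - t, 0.0)

def eemd_denoise(imfs):
    """Discard noise-dominant layers, threshold the rest, reconstruct."""
    b = noise_boundary(imfs)
    kept = [soft_threshold(m) for m in imfs[b:]]
    return np.sum(kept, axis=0) if kept else np.zeros_like(imfs[0])
```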

 

Aiyetoro, G.; Takawira, F., "A Cross-layer Based Packet Scheduling Scheme for Multimedia Traffic in Satellite LTE Networks," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp. 1, 6, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6813994 This paper proposes a new cross-layer based packet scheduling scheme for multimedia traffic in satellite Long Term Evolution (LTE) network which adopts MIMO technology. The Satellite LTE air interface will provide global coverage and hence complement its terrestrial counterpart in the provision of mobile services (especially multimedia services) to users across the globe. A dynamic packet scheduling scheme is very important towards actualizing an effective utilization of the limited available resources in satellite LTE networks without compromise to the Quality of Service (QoS) demands of multimedia traffic. Hence, the need for an effective packet scheduling algorithm cannot be overemphasized. The aim of this paper is to propose a new scheduling algorithm tagged Cross-layer Based Queue-Aware (CBQA) Scheduler that will provide a good trade-off among QoS, fairness and throughput. The newly proposed scheduler is compared to existing ones through simulations and various performance indices have been used. A land mobile dual-polarized GEO satellite system has been considered for this work.

Keywords: Long Term Evolution; MIMO communication; artificial satellites; land mobile radio; mobile satellite communication; multimedia communication; packet radio networks; quality of service; telecommunication traffic; CBQA scheduler; Long Term Evolution; MIMO technology; cross-layer based packet scheduling algorithm; cross-layer based queue-aware scheduler; global coverage; land mobile dual-polarized GEO satellite system; mobile services; multimedia traffic QoS demands; quality of service; satellite LTE air interface; satellite LTE network; terrestrial counterpart; Delays; MIMO; Quality of service; Satellite broadcasting; Satellites; Scheduling algorithms; Throughput  (ID#: 15-3852)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6813994&isnumber=6813963

 

Sarikaya, Y.; Ercetin, O.; Koksal, C.E., "Confidentiality-Preserving Control of Uplink Cellular Wireless Networks Using Hybrid ARQ," Networking, IEEE/ACM Transactions on, vol. PP, no. 99, pp. 1, 1, 26 June 2014. doi: 10.1109/TNET.2014.2331077 We consider the problem of cross-layer resource allocation with information-theoretic secrecy for uplink transmissions in time-varying cellular wireless networks. Particularly, each node in an uplink cellular network injects two types of traffic, confidential and open, at rates chosen in order to maximize a global utility function while keeping the data queues stable and meeting a constraint on the secrecy outage probability. The transmitting node only knows the distribution of channel gains. Our scheme is based on Hybrid Automatic Repeat Request (HARQ) transmission with incremental redundancy. We prove that our scheme achieves a utility arbitrarily close to the maximum achievable. Numerical experiments are performed to verify the analytical results and to show the efficacy of the dynamic control algorithm.

Keywords: Automatic repeat request; Base stations; Decoding; Heuristic algorithms; Mutual information; Uplink; Wireless networks; Cross-layer optimization; hybrid automatic repeat request (ARQ); physical-layer security  (ID#: 15-3853)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844902&isnumber=4359146

 

Elwell, J.; Riley, R.; Abu-Ghazaleh, N.; Ponomarev, D., "A Non-Inclusive Memory Permissions Architecture For Protection Against Cross-Layer Attacks," High Performance Computer Architecture (HPCA), 2014 IEEE 20th International Symposium on, pp. 201, 212, 15-19 Feb. 2014. doi: 10.1109/HPCA.2014.6835931 Protecting modern computer systems and complex software stacks against the growing range of possible attacks is becoming increasingly difficult. The architecture of modern commodity systems allows attackers to subvert privileged system software often using a single exploit. Once the system is compromised, inclusive permissions used by current architectures and operating systems easily allow a compromised high-privileged software layer to perform arbitrary malicious activities, even on behalf of other software layers. This paper presents a hardware-supported page permission scheme for the physical pages that is based on the concept of non-inclusive sets of memory permissions for different layers of system software such as hypervisors, operating systems, and user-level applications. Instead of viewing privilege levels as an ordered hierarchy with each successive level being more privileged, we view them as distinct levels each with its own set of permissions. Such a permission mechanism, implemented as part of a processor architecture, provides a common framework for defending against a range of recent attacks. We demonstrate that such a protection can be achieved with negligible performance overhead, low hardware complexity and minimal changes to the commodity OS and hypervisor code.

Keywords: security of data; storage management; supervisory programs; arbitrary malicious activities; complex software stack protection; cross-layer attack protection; hardware complexity; hardware-supported page permission scheme; high-privileged software layer; hypervisor code; modern commodity systems; modern computer system protection; noninclusive memory permissions architecture; operating systems; ordered hierarchy; performance overhead; permission mechanism; privilege level; privileged system software; processor architecture; user-level applications; Hardware; Memory management; Permission; System software; Virtual machine monitors  (ID#: 15-3854)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6835931&isnumber=6835920
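
The core idea of the paper above, permissions held per layer rather than inherited down a privilege hierarchy, can be pictured with a small model. This is a conceptual sketch only; the page addresses and layer names are hypothetical and it does not represent the authors' hardware design.

```python
from enum import Flag, auto

class Perm(Flag):
    NONE = 0
    R = auto()
    W = auto()
    X = auto()

# Each software layer holds its own, independent permission set per
# physical page; no layer's set is implied by a "higher" privilege level.
page_perms = {
    0x1000: {"user": Perm.R | Perm.X, "os": Perm.NONE, "hv": Perm.NONE},
    0x2000: {"user": Perm.NONE, "os": Perm.R | Perm.W, "hv": Perm.NONE},
}

def check(page, layer, want):
    """Grant access only from the requesting layer's own entry, so a
    compromised high-privilege layer gains nothing through inclusion."""
    return want in page_perms.get(page, {}).get(layer, Perm.NONE)

assert check(0x1000, "user", Perm.X)      # app may execute its own code
assert not check(0x1000, "os", Perm.R)    # OS may not read the app page
```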

 

Juzi Zhao; Subramaniam, S.; Brandt-Pearce, M., "Intradomain and Interdomain QoT-aware RWA for Translucent Optical Networks," Optical Communications and Networking, IEEE/OSA Journal of, vol. 6, no. 6, pp. 536, 548, June 2014. doi: 10.1364/JOCN.6.000536 Physical impairments in long-haul optical networks mandate that optical signals be regenerated within the (so-called translucent) network. Being expensive devices, regenerators are expected to be allocated sparsely and must be judiciously utilized. Next-generation optical-transport networks will include multiple domains with diverse technologies, protocols, granularities, and carriers. Because of confidentiality and scalability concerns, the scope of network-state information (e.g., topology, wavelength availability) may be limited to within a domain. In such networks, the problem of routing and wavelength assignment (RWA) aims to find an adequate route and wavelength(s) for lightpaths carrying end-to-end service demands. Some state information may have to be explicitly exchanged among the domains to facilitate the RWA process. The challenge is to determine which information is the most critical and make a wise choice for the path and wavelength(s) using the limited information. Recently, a framework for multidomain path computation called backward-recursive path-computation (BRPC) was standardized by the Internet Engineering Task Force. In this paper, we consider the RWA problem for connections within a single domain and interdomain connections so that the quality of transmission (QoT) requirement of each connection is satisfied, and the network-level performance metric of blocking probability is minimized. Cross-layer heuristics that are based on dynamic programming to effectively allocate the sparse regenerators are developed, and extensive simulation results are presented to demonstrate their effectiveness.

Keywords: dynamic programming; multipath channels; probability; telecommunication network routing; telecommunication security; wavelength assignment; wavelength division multiplexing; BRPC; Internet Engineering Task Force; backward-recursive path-computation; blocking probability; confidentiality concerns; cross-layer heuristics; dynamic programming; end-to-end service demands; interdomain QoT-aware RWA; intradomain QoT-aware RWA; multidomain path computation; network-level performance metric minimization; network-state information; next-generation optical-transport networks; optical signal regeneration; physical impairments; quality-of-transmission requirement; routing-and-wavelength assignment problem; scalability concerns; translucent long-haul optical networks; wavelength division multiplexing-based optical networks; Availability; Bit error rate; Heuristic algorithms; Nonlinear optics; Optical fiber networks; Repeaters; Routing; Backward recursive path computation (BRPC); Cross-layer RWA; Dynamic programming; Multidomain; Physical impairments; Translucent optical networks  (ID#: 15-3855)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6837338&isnumber=6837333

 

Jia-Lun Tsai, "An Improved Cross-Layer Privacy-Preserving Authentication in WAVE-Enabled VANETs," Communications Letters, IEEE, vol. 18, no. 11, pp. 1931, 1934, Nov. 2014. doi: 10.1109/LCOMM.2014.2323291 In 2013, Biswas and Misic proposed a new privacy-preserving authentication scheme for WAVE-based vehicular ad hoc networks (VANETs), claiming that they used a variant of the Elliptic Curve Digital Signature Algorithm (ECDSA). However, our study has discovered that the authentication scheme proposed by them is vulnerable to a private key reveal attack. Any malicious receiving vehicle who receives a valid signature from a legal signing vehicle can gain access to the signing vehicle private key from the learned valid signature. Hence, the authentication scheme proposed by Biswas and Misic is insecure. We thus propose an improved version to overcome this weakness. The proposed improved scheme also supports identity revocation and trace. Based on this security property, the CA and a receiving entity (RSU or OBU) can check whether a received signature has been generated by a revoked vehicle. Security analysis is also conducted to evaluate the security strength of the proposed authentication scheme.

Keywords: data privacy; digital signatures; private key cryptography; public key cryptography; telecommunication security; vehicular ad hoc networks; ECDSA; WAVE-based vehicular ad hoc networks; WAVE-enabled VANET; elliptic curve digital signature algorithm; identity revocation; identity trace; improved cross-layer privacy-preserving authentication scheme; legal signing vehicle; malicious receiving vehicle; private key reveal attack; receiving entity; security analysis; security strength evaluation; valid signature; Authentication; Digital signatures; Elliptic curves; Law; Public key; Vehicles; Privacy-preserving; VANETs; authentication scheme; elliptic curve digital signature algorithm (ECDSA) (ID#: 15-3856)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814798&isnumber=6949702
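
For context, the signing primitive at issue is standard ECDSA. Below is a minimal sketch of ECDSA signing and verification using the Python cryptography package; it illustrates the primitive itself, not Biswas and Misic's scheme or the improved protocol, and the curve and message are illustrative choices. A sound scheme must ensure that no combination of published signature values lets an observer solve for the private key, which is exactly the property the attacked scheme lacked.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Hypothetical on-board-unit key pair; NIST P-256 chosen for illustration.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

message = b"safety beacon: position, speed, heading"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

try:
    # A receiving entity (RSU or OBU) checks the signature before trusting it.
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature rejected")
```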

 

Guyue Li; Aiqun Hu, "An Approach To Resist Blind Source Separation Attacks Of Speech Signals," Communications Security Conference (CSC 2014), 2014, pp. 1, 7, 22-24 May 2014. doi: 10.1049/cp.2014.0738 Recently, there has been great interest in the physical layer security technique which exploits artificial noise (AN) to enlarge the channel condition gap between the legitimate receiver and the eavesdropper. However, in certain communication scenarios, this strategy may suffer from some attacks from a signal processing perspective. In this paper, we consider speech signals and the scenario in which the eavesdropper has similar channel performance to the legitimate receiver. We design the optimal artificial noise (AN) to resist the attack of an eavesdropper who uses blind source separation (BSS) technology to reconstruct the secret information. The optimal AN is obtained by making a tradeoff between the results of direct eavesdropping and reconstruction. The simulation results show that the proposed AN performs better than white Gaussian AN in resisting BSS attacks.

Keywords: Physical layer security; artificial noise; blind source separation (BSS); cross correlation coefficient  (ID#: 15-3857)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6992231&isnumber=6919880


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Cross Site Scripting (2014 Year in Review)

SoS Newsletter Logo

Cross Site Scripting
(2014 Year in Review)

A type of computer security vulnerability typically found in Web applications, cross-site scripting (XSS) enables attackers to inject client-side script into Web pages viewed by other users. Attackers may use a cross-site scripting vulnerability to bypass access controls such as the same origin policy. Consequences may range from petty nuisance to significant security risk, depending on the value of the data handled by the vulnerable site and the nature of any security mitigation implemented by the site's owner. Because XSS is a frequent method of attack, research is being conducted on methods to prevent, detect, and mitigate such attacks. The articles cited here were published in 2014.
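
As a concrete illustration of the vulnerability class, the toy handler below reflects a query parameter into a page; HTML-escaping the value before insertion is the textbook mitigation. This is a generic sketch, not taken from any of the papers cited below.

```python
import html

def render_search_page(query: str) -> str:
    """Reflecting user input verbatim enables XSS; escaping the value
    before insertion neutralizes injected markup."""
    # Vulnerable pattern (commented out): the payload becomes live script.
    # return f"<p>Results for {query}</p>"
    return f"<p>Results for {html.escape(query)}</p>"

print(render_search_page('<script>alert(1)</script>'))
# <p>Results for &lt;script&gt;alert(1)&lt;/script&gt;</p>
```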

Gupta, M.K.; Govil, M.C.; Singh, G., "Static Analysis Approaches to Detect SQL Injection and Cross Site Scripting Vulnerabilities in Web Applications: A survey," Recent Advances and Innovations in Engineering (ICRAIE), 2014, pp. 1, 5, 9-11 May 2014. doi: 10.1109/ICRAIE.2014.6909173 Dependence on web applications is increasing very rapidly in recent time for social communications, health problem, financial transaction and many other purposes. Unfortunately, presence of security weaknesses in web applications allows malicious user's to exploit various security vulnerabilities and become the reason of their failure. Currently, SQL Injection (SQLI) and Cross-Site Scripting (XSS) vulnerabilities are most dangerous security vulnerabilities exploited in various popular web applications i.e. eBay, Google, Facebook, Twitter etc. Research on defensive programming, vulnerability detection and attack prevention techniques has been quite intensive in the past decade. Defensive programming is a set of coding guidelines to develop secure applications. But, mostly developers do not follow security guidelines and repeat same type of programming mistakes in their code. Attack prevention techniques protect the applications from attack during their execution in actual environment. The difficulties associated with accurate detection of SQLI and XSS vulnerabilities in coding phase of software development life cycle. This paper proposes a classification of software security approaches used to develop secure software in various phase of software development life cycle. It also presents a survey of static analysis based approaches to detect SQL Injection and cross-site scripting vulnerabilities in source code of web applications. The aim of these approaches is to identify the weaknesses in source code before their exploitation in actual environment. This paper would help researchers to note down future direction for securing legacy web applications in early phases of software development life cycle.

Keywords: Internet; SQL; program diagnostics; security of data; software maintenance; software reliability; source code (software); SQL injection; SQLI; Web applications; XSS; attack prevention; cross site scripting vulnerabilities; defensive programming; financial transaction; health problem; legacy Web applications; malicious users; programming mistakes; security vulnerabilities; security weaknesses; social communications; software development life cycle; source code; static analysis; vulnerability detection; Analytical models; Guidelines; Manuals; Programming; Servers; Software; Testing; SQL injection; cross site scripting; static analysis; vulnerabilities; web application (ID#: 15-3789)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6909173&isnumber=6909103

Gupta, M.K.; Govil, M.C.; Singh, G., "A Context-Sensitive Approach For Precise Detection Of Cross-Site Scripting Vulnerabilities," Innovations in Information Technology (INNOVATIONS), 2014 10th International Conference on, pp.7,12, 9-11 Nov. 2014. doi: 10.1109/INNOVATIONS.2014.6987553 Currently, dependence on web applications is increasing rapidly for social communication, health services, financial transactions and many other purposes. Unfortunately, the presence of cross-site scripting vulnerabilities in these applications allows malicious user to steals sensitive information, install malware, and performs various malicious operations. Researchers proposed various approaches and developed tools to detect XSS vulnerability from source code of web applications. However, existing approaches and tools are not free from false positive and false negative results. In this paper, we propose a taint analysis and defensive programming based HTML context-sensitive approach for precise detection of XSS vulnerability from source code of PHP web applications. It also provides automatic suggestions to improve the vulnerable source code. Preliminary experiments and results on test subjects show that proposed approach is more efficient than existing ones.

Keywords: Internet; hypermedia markup languages; invasive software; source code (software); Web application; XSS vulnerability; cross-site scripting vulnerability; defensive programming based HTML context-sensitive approach; financial transaction; health services; malicious operation; malicious user; malware; precise detection; sensitive information; social communication; source code; taint analysis; Browsers; Context; HTML; Security; Servers; Software; Standards; Cross-Site Scripting; Software Development Life Cycle; Taint Analysis; Vulnerability Detection; XSS Attacks (ID#: 15-3790)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6987553&isnumber=6985764

Rocha, T.S.; Souto, E., "ETSSDetector: A Tool to Automatically Detect Cross-Site Scripting Vulnerabilities," Network Computing and Applications (NCA), 2014 IEEE 13th International Symposium on, pp. 306, 309, 21-23 Aug. 2014. doi: 10.1109/NCA.2014.53 The inappropriate use of features intended to improve usability and interactivity of web applications has resulted in the emergence of various threats, including Cross-Site Scripting (XSS) attacks. In this work, we developed ETSSDetector, a generic and modular web vulnerability scanner that automatically analyzes web applications to find XSS vulnerabilities. ETSSDetector is able to identify and analyze all data entry points of the application and generate specific code injection tests for each one. The results show that correctly filling the input fields with only valid information ensures better test effectiveness, increasing the detection rate of XSS attacks.

Keywords: Internet; interactive systems; security of data; ETSS Detector; Web applications; XSS attacks; cross-site scripting vulnerabilities; interactivity; Browsers; Data mining; Databases; Filling; Qualifications; Security; Testing; Cross-Site Scripting; ETSSDetector; vulnerabilities (ID#: 15-3791)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6924244&isnumber=6924186
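
The scanning step that tools like the one above automate can be illustrated in a few lines: inject a marker payload into an entry point and check whether it is reflected unencoded. The payload, URL, and parameter name below are hypothetical, and this sketch omits the valid-input form filling that the paper identifies as key to test effectiveness; such probes should only be run against targets you are authorized to test.

```python
import requests

PAYLOAD = '<svg onload=alert("probe")>'   # hypothetical marker payload

def probe_reflected_xss(url: str, param: str) -> bool:
    """Send the payload in one query parameter and report whether it
    comes back unencoded in the response (a reflected-XSS indicator)."""
    resp = requests.get(url, params={param: PAYLOAD}, timeout=10)
    return PAYLOAD in resp.text

# Example against an assumed local, deliberately vulnerable test app:
# print(probe_reflected_xss("http://localhost:8080/search", "q"))
```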

Mewara, B.; Bairwa, S.; Gajrani, J.; Jain, V., "Enhanced Browser Defense For Reflected Cross-Site Scripting," Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2014 3rd International Conference on, pp. 1, 6, 8-10 Oct. 2014. doi: 10.1109/ICRITO.2014.7014761 Cross-Site Scripting (XSS) is a common attack technique that lets attackers insert code into the output of a web page, which is then delivered to the visitor's web browser, where the inserted code executes automatically and steals sensitive information. To protect users from XSS attacks, many client-side solutions have been implemented; most in use are filters that sanitize malicious input. However, many of these filters do not prevent newly designed sophisticated attacks such as multiple points of injection, injection into scripts, etc. This paper proposes and implements an approach based on encoding unfiltered reflections for detecting vulnerable web applications that can be exploited using the above-mentioned sophisticated attacks. Results prove that the proposed approach provides an accurate, higher detection rate of exploits. In addition, an implementation of blocking the execution of malicious scripts has been contributed to XSS-Me, an open source Mozilla Firefox security extension that detects reflected XSS vulnerabilities, which can be considered an effective solution if it is integrated inside the browser rather than being enforced as an extension.

Keywords: Web sites; online front-ends; search engines; security of data; Web browser; Web page; XSS attack; XSS-Me; client-side solution; enhanced browser defense; malicious input; malicious script; open source Mozilla Firefox security extension; reflected XSS vulnerability; reflected cross-site scripting; sensitive information; sophisticated attack; unfiltered reflection; vulnerable Web application; Browsers; HTML; Information filters; Security; Testing; Vectors; XSS; attack vectors; defense; filter; special characters (ID#: 15-3792)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7014761&isnumber=7014644

Mewara, B.; Bairwa, S.; Gajrani, J., "Browser's Defenses Against Reflected Cross-Site Scripting Attacks," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp. 662, 667, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884928 Due to the frequent use of online web applications for various day-to-day activities, web applications are becoming the most suitable targets for attackers. Cross-Site Scripting (XSS), one of the most prominent web-based attacks, can lead to compromise of the whole browser rather than just the actual web application from which the attack originated. Securing web applications using server-side solutions alone is not adequate, as developers are not necessarily security aware. Therefore, browser vendors have tried to evolve client-side filters to defend against these attacks. This paper shows that even the foremost prevailing XSS filters deployed by the latest versions of the most widely used web browsers do not provide appropriate defense. We evaluate three browsers (Internet Explorer 11, Google Chrome 32, and Mozilla Firefox 27) for reflected XSS attacks against different types of vulnerabilities. We find that none of the above is completely able to defend against all possible types of reflected XSS vulnerabilities. Further, we evaluate Firefox after installing an add-on named XSS-Me, which is widely used for testing reflected XSS vulnerabilities. Experimental results show that this client-side solution can shield against a greater percentage of vulnerabilities than the other browsers. It proves more effective if this add-on is integrated inside the browser instead of being enforced as an extension.

Keywords: online front-ends; security of data; Google Chrome 32; Internet Explorer 11; Mozilla Firefox 27; Web based attack; Web browsers; XSS attack; XSS filters; XSS-Me; online Web applications; reflected cross-site scripting attacks; Browsers; Security; Thyristors; JavaScript; Reflected XSS; XSS-Me; attacker; bypass; exploit; filter (ID#: 15-3793)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884928&isnumber=6884878

Guowei Dong; Yan Zhang; Xin Wang; Peng Wang; Liangkun Liu, "Detecting Cross Site Scripting Vulnerabilities Introduced by HTML5," Computer Science and Software Engineering (JCSSE), 2014 11th International Joint Conference on, pp. 319, 323, 14-16 May 2014. doi: 10.1109/JCSSE.2014.6841888 In recent years, HTML5 has been widely adopted in popular browsers. Unfortunately, as a new Web standard, HTML5 may expand the Cross Site Scripting (XSS) attack surface even as it improves the interactivity of the page. In this paper, we identified 14 XSS attack vectors related to HTML5 through a systematic analysis of new tags and attributes. Based on these vectors, an XSS test vector repository was constructed and a dynamic XSS vulnerability detection tool focusing on Webmail systems was implemented. By applying the tool to some popular Webmail systems, we found seven exploitable XSS vulnerabilities. The evaluation result shows that our tool can efficiently detect XSS vulnerabilities introduced by HTML5.

Keywords: Internet; Web sites; hypermedia markup languages; security of data; HTML5; Web standard; Webmail system; XSS attack surface; XSS attack vectors; XSS test vector repository; cross site scripting vulnerability detection; dynamic XSS vulnerability detection tool; systematic analysis; HTML5; attack surface; dynamic detection (ID#: 15-3794)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841888&isnumber=6841829
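
The paper identifies 14 HTML5-related vectors; those exact vectors are not reproduced here, but a few well-known public examples of the same style, new tags and event attributes that execute script without a script element, give the flavor:

```python
# Representative HTML5-era XSS test vectors (well-known public examples,
# not the specific 14 vectors identified in the paper above):
HTML5_XSS_VECTORS = [
    '<video><source onerror="alert(1)">',                 # media error handler
    '<input autofocus onfocus="alert(1)">',               # autofocus event firing
    '<form><button formaction="javascript:alert(1)">go',  # formaction attribute
]
```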

Abgrall, E.; Le Traon, Y.; Gombault, S.; Monperrus, M., "Empirical Investigation of the Web Browser Attack Surface under Cross-Site Scripting: An Urgent Need for Systematic Security Regression Testing," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, pp. 34, 41, March 31 2014-April 4 2014. doi: 10.1109/ICSTW.2014.63 One of the major threats against web applications is Cross-Site Scripting (XSS). The final target of XSS attacks is the client running a particular web browser. During this last decade, several competing web browsers (IE, Netscape, Chrome, Firefox) have evolved to support new features. In this paper, we explore whether the evolution of web browsers is done using systematic security regression testing. Beginning with an analysis of their current degree of exposure to XSS, we extend the empirical study to a decade of the most popular web browser versions. We use XSS attack vectors as unit test cases and we propose a new method supported by a tool to address this XSS vector testing issue. The analysis of a decade of releases of the most popular web browsers, including mobile ones, shows an urgent need for XSS regression testing. We advocate the use of a shared security testing benchmark as a good practice and propose a first set of publicly available XSS vectors as a basis to ensure that security is not sacrificed when a new version is delivered.

Keywords: online front-ends; regression analysis; security of data; Web applications; Web browser attack surface; XSS vector testing; cross-site scripting; systematic security regression testing; Browsers; HTML; Mobile communication; Payloads; Security; Testing; Vectors; XSS; browser; regression; security; testing; web (ID#: 15-3795)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825636&isnumber=6825623

Jinxin You; Fan Guo, "Improved CSRFGuard for CSRF Attacks Defense On Java EE Platform," Computer Science & Education (ICCSE), 2014 9th International Conference on, pp. 1115, 1120, 22-24 Aug. 2014. doi: 10.1109/ICCSE.2014.6926635 CSRFGuard is a tool running on the Java EE platform to defend against Cross-Site Request Forgery (CSRF) attacks, but it has some shortcomings: scripts must be inserted manually, dynamically created requests cannot be effectively handled, and the defense can be bypassed through Cross-Site Scripting (XSS). Corresponding improvements were made to address these shortcomings. A Servlet filter was used to intercept responses, and page source codes were stored by a custom response wrapper class that adds script tags, so that scripts are inserted automatically. The JavaScript event delegation mechanism was used to bind forms with onfocus and onsubmit events, so that dynamically created requests are handled effectively. Tokens dynamically added through triggered events effectively prevent the defense from being bypassed through XSS. The experimental results show that the improved CSRFGuard can effectively defend against CSRF attacks.

Keywords: Java; security of data; CSRF attack defense; CSRFGuard; Java EE platform; JavaScript event delegation mechanism; Servlet filter; XSS; cross-site request forgery attack; cross-site scripting; custom response wrapper; script tags; Browsers; Computers; HTML; Security; Welding; CSRFGuard; Cross-Site Scripting; Cross-site Request Forgery; Event Delegation; Java EE (ID#: 15-3796)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6926635&isnumber=6926406
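
The token mechanism CSRFGuard builds on can be sketched in a few lines: bind an unguessable value to the session and verify it on every state-changing request. This is a generic HMAC-based illustration in Python rather than the Java EE implementation the paper modifies; a forged cross-site request fails because the attacker's page cannot read the token, and XSS-resistant insertion of the token is the hard part the paper addresses.

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)   # per-deployment key (assumed setup)

def issue_token(session_id: str) -> str:
    """Derive a session-bound CSRF token the server can recompute."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_token(session_id: str, token: str) -> bool:
    """Constant-time comparison against the expected token."""
    return hmac.compare_digest(issue_token(session_id), token)

tok = issue_token("session-abc123")
assert verify_token("session-abc123", tok)
assert not verify_token("session-abc123", "forged-value")
```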

Bozic, J.; Wotawa, F., "Security Testing Based on Attack Patterns," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, pp. 4, 11, March 31 2014-April 4 2014. doi: 10.1109/ICSTW.2014.58 Testing for security-related issues is an important task of growing interest due to the vast number of applications and services available over the internet. In practice, security testing is often performed manually, with the consequences of higher costs and no integration of security testing into today's agile software development processes. To bring security testing into practice, many different approaches have been suggested, including fuzz testing and model-based testing. Most of these approaches rely on models of the system or the application domain. In this paper we suggest formalizing attack patterns from which test cases can be generated and even executed automatically. Hence, testing for known attacks can be easily integrated into software development processes where automated testing, e.g., for daily builds, is a requirement. The approach makes use of UML state charts. Besides discussing the approach, we illustrate it using a case study.

Keywords: Internet; Unified Modeling Language; program testing; security of data; software prototyping; Internet; UML state charts; agile software development processes; attack patterns; security testing; Adaptation models; Databases; HTML; Security; Software; Testing; Unified modeling language; Attack pattern; SQL injection; UML state machine; cross-site scripting; model-based testing; security testing (ID#: 15-3797)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825631&isnumber=6825623

Wenmin Xiao; Jianhua Sun; Hao Chen; Xianghua Xu, "Preventing Client Side XSS with Rewrite Based Dynamic Information Flow," Parallel Architectures, Algorithms and Programming (PAAP), 2014 Sixth International Symposium on, pp. 238, 243, 13-15 July 2014. doi: 10.1109/PAAP.2014.10 This paper presents the design and implementation of an information flow tracking framework based on code rewrite to prevent sensitive information leaks in browsers, combining the ideas of taint and information flow analysis. Our system has two main processes. First, it abstracts the semantics of JavaScript code and converts it to a general form of intermediate representation on the basis of the JavaScript abstract syntax tree. Second, the abstract intermediate representation is implemented as a special taint engine to analyze tainted information flow. Our approach can ensure fine-grained isolation for both confidentiality and integrity of information. We have implemented a proof-of-concept prototype, named JSTFlow, and have deployed it as a browser proxy to rewrite web applications at runtime. The experiment results show that JSTFlow can guarantee the security of sensitive data and detect XSS attacks with about 3x performance overhead. Because it does not involve any modifications to the target system, our system is readily deployable in practice.

Keywords: Internet; Java; data flow analysis; online front-ends; security of data; JSTFlow; JavaScript abstract syntax tree; JavaScript code; Web applications; XSS attacks; abstract intermediate representation; browser proxy; browsers; client side XSS; code rewrite; fine-grained isolation; information flow tracking framework; performance overhead; rewrite based dynamic information flow; sensitive information leaks; taint engine; tainted information flow; Abstracts; Browsers; Data models; Engines; Security; Semantics; Syntactics; JavaScript; cross-site scripting; information flow analysis; information security; taint model (ID#: 15-3798)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916471&isnumber=6916413
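
Taint tracking of the kind JSTFlow performs on JavaScript can be illustrated with a toy wrapper type: values from untrusted sources carry a taint mark that survives string operations, and sinks refuse tainted input. This is a conceptual Python sketch, not the paper's intermediate representation or taint engine.

```python
class Tainted(str):
    """A toy tainted string: concatenation propagates the taint."""
    def __add__(self, other):
        return Tainted(str(self) + str(other))
    def __radd__(self, other):
        return Tainted(str(other) + str(self))

def dom_sink(markup):
    """Model of a security-sensitive sink (e.g. document.write)."""
    if isinstance(markup, Tainted):
        raise ValueError("tainted data reached a DOM sink")
    print(markup)

user_input = Tainted('<img src=x onerror=alert(1)>')   # untrusted source
try:
    dom_sink("<div>" + user_input + "</div>")          # taint survives concat
except ValueError as err:
    print("blocked:", err)
```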

Sayed, B.; Traore, I., "Protection Against Web 2.0 Client-Side Web Attacks Using Information Flow Control," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp. 261, 268, 13-16 May 2014. doi: 10.1109/WAINA.2014.52 The dynamic nature of the Web 2.0 and the heavy obfuscation of web-based attacks complicate the job of the traditional protection systems such as Firewalls, Anti-virus solutions, and IDS systems. It has been witnessed that using ready-made toolkits, cyber-criminals can launch sophisticated attacks such as cross-site scripting (XSS), cross-site request forgery (CSRF) and botnets to name a few. In recent years, cyber-criminals have targeted legitimate websites and social networks to inject malicious scripts that compromise the security of the visitors of such websites. This involves performing actions using the victim browser without his/her permission. This poses the need to develop effective mechanisms for protecting against Web 2.0 attacks that mainly target the end-user. In this paper, we address the above challenges from information flow control perspective by developing a framework that restricts the flow of information on the client-side to legitimate channels. The proposed model tracks sensitive information flow and prevents information leakage from happening. The proposed model when applied to the context of client-side web-based attacks is expected to provide a more secure browsing environment for the end-user.

Keywords: Internet; computer crime; data protection; invasive software; IDS systems; Web 2.0 client-side Web attacks; antivirus solutions; botnets; cross-site request forgery; cross-site scripting; cyber-criminals; firewalls; information flow control; information leakage; legitimate Web sites; malicious script injection; protection systems; secure browsing environment; social networks; Browsers; Feature extraction; Security; Semantics; Servers; Web 2.0; Web pages; AJAX; Client-side web attacks; Information Flow Control; Web 2.0 (ID#: 15-3799)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844648&isnumber=6844560

Buja, G.; Bin Abd Jalil, K.; Bt Hj Mohd Ali, F.; Rahman, T.F.A., "Detection Model For SQL Injection Attack: An Approach For Preventing A Web Application From The SQL Injection Attack," Computer Applications and Industrial Electronics (ISCAIE), 2014 IEEE Symposium on, pp. 60, 64, 7-8 April 2014. doi: 10.1109/ISCAIE.2014.7010210 Over the past 20 years, use of the web in daily life has increased steadily and, with it, the use of web applications. Most web applications in existence today have some vulnerability that could be exploited by an unauthorized person. Some well-known web application vulnerabilities are Structured Query Language (SQL) Injection, Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF). By exploiting these vulnerabilities, a system cracker can gain information about users and damage the reputation of the respective organization. Usually the developers of web applications do not realize that their applications have vulnerabilities; they only realize it when there is an attack or manipulation of their code. This is understandable, as a web application may contain thousands of lines of code, so it is not easy to detect loopholes. Nowadays, as hacking tools and tutorials are easier to obtain, many new hackers emerge. Even though SQL injection is very easy to protect against, a large number of systems on the internet are still vulnerable to this type of attack because a few subtle conditions can go undetected. Therefore, in this paper we propose a detection model for detecting and recognizing SQL Injection vulnerabilities based on defined and identified criteria. In addition, the proposed detection model is able to generate a report regarding the vulnerability level of the web application. As a consequence, the proposed detection model should decrease the possibility of SQL Injection attacks launched against the web application.

Keywords: Internet; SQL; authorisation; computer crime; CSRF; SQL injection attack; Web application; Web application vulnerabilities; Web vulnerability detection model; XSS; cross-site request forgery; cross-site scripting; hacking tools; hacking tutorials; structured query language injection; system cracker; Computational modeling; Databases; Internet; Security; Testing; Uniform resource locators; Web pages; CSRF; SQL injection; XSS; vulnerabilities; web application (ID#: 15-3800)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7010210&isnumber=7010190
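
The vulnerability class the detection model above targets comes down to queries built by string concatenation. Below is a minimal sketch of the exploitable pattern and its parameterized fix, using Python's built-in sqlite3 module purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

name = "alice' OR '1'='1"   # classic injection attempt

# Vulnerable pattern (commented out): concatenation lets the payload
# rewrite the query and match every row.
# conn.execute("SELECT * FROM users WHERE name = '" + name + "'")

# Safe pattern: a bound parameter is treated as data, never as SQL syntax.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print(rows)   # [] (the payload matches no user)
```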

Blankstein, A.; Freedman, M.J., "Automating Isolation and Least Privilege in Web Services," Security and Privacy (SP), 2014 IEEE Symposium on, pp. 133, 148, 18-21 May 2014. doi: 10.1109/SP.2014.16 In many client-facing applications, a vulnerability in any part can compromise the entire application. This paper describes the design and implementation of Passe, a system that protects a data store from unintended data leaks and unauthorized writes even in the face of application compromise. Passe automatically splits (previously shared-memory-space) applications into sandboxed processes. Passe limits communication between those components and the types of accesses each component can make to shared storage, such as a backend database. In order to limit components to their least privilege, Passe uses dynamic analysis on developer-supplied end-to-end test cases to learn data and control-flow relationships between database queries and previous query results, and it then strongly enforces those relationships. Our prototype of Passe acts as a drop-in replacement for the Django web framework. By running eleven unmodified, off-the-shelf applications in Passe, we demonstrate its ability to provide strong security guarantees (Passe correctly enforced 96% of the applications' policies) with little additional overhead. Additionally, in the web-specific setting of the prototype, we also mitigate the cross-component effects of cross-site scripting (XSS) attacks by combining browser HTML5 sandboxing techniques with our automatic component separation.

Keywords: Web services; security of data; Django web framework; HTML5 sandboxing techniques; Passe system; Web services; XSS attack; client-facing applications; control-flow relationship; cross-site scripting attack; data-flow relationship; database queries; query results; sandboxed process; security guarantee; shared-memory-space application; Browsers; Databases; Libraries; Prototypes; Runtime; Security; Servers; capabilities; isolation; principle of least privilege; security policy inference; web security (ID#: 15-3801)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956561&isnumber=6956545

Coelho Martins da Fonseca, J.C.; Amorim Vieira, M.P., "A Practical Experience on the Impact of Plugins in Web Security," Reliable Distributed Systems (SRDS), 2014 IEEE 33rd International Symposium on, pp. 21, 30, 6-9 Oct. 2014. doi: 10.1109/SRDS.2014.20 In an attempt to support customization, many web applications allow the integration of third-party server-side plugins that offer diverse functionality, but also open an additional door for security vulnerabilities. In this paper we study the use of static code analysis tools to detect vulnerabilities in the plugins of the web application. The goal is twofold: 1) to study the effectiveness of static analysis on the detection of web application plugin vulnerabilities, and 2) to understand the potential impact of those plugins in the security of the core web application. We use two static code analyzers to evaluate a large number of plugins for a widely used Content Management System. Results show that many plugins that are currently deployed worldwide have dangerous Cross Site Scripting and SQL Injection vulnerabilities that can be easily exploited, and that even widely used static analysis tools may present disappointing vulnerability coverage and false positive rates.

Keywords: Internet; content management; program diagnostics; security of data; SQL injection vulnerabilities; Web application plugin vulnerabilities; Web security; content management system; cross site scripting; false positive rates; static code analysis tools; Content management; Databases; Manuals; Security; Testing; Web pages; Web applications; plugins; security; static analysis; vulnerabilities (ID#: 15-3802)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6983376&isnumber=6983362

Kumar, A.; Reddy, K., "Constructing Secure Web Applications With Proper Data Validations," Recent Advances and Innovations in Engineering (ICRAIE), 2014, pp. 1, 5, 9-11 May 2014. doi: 10.1109/ICRAIE.2014.6909304 With the advent of the World Wide Web, information sharing through the internet has increased drastically. Web application security is thus today's most significant battlefield between attackers and the resources of web services, and it is likely to remain so for the foreseeable future. Recent attacks show that major attacks on web applications have been carried out even when systems had significant network-level security. Poor input validation mechanisms used in web applications lead to the deployment of vulnerable applications that are easy to exploit at later stages. Critical web application vulnerabilities such as Cross Site Scripting (XSS) and injections (SQL, PHP, LDAP, SSL, XML, command, and code) happen because of weak base-level validation, which can be enough to modify a system in unauthorized ways or to exploit it. In this paper we present these issues in data validation strategies, to avoid the deployment of vulnerable web applications.

Keywords: Internet; computer network security; critical web application vulnerabilities; cross site scripting; data validations; injections; secure Web applications; Computational modeling; HTML; XML; injection; security; validation; vulnerability; xss (ID#: 15-3803)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6909304&isnumber=6909103
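
The data-validation discipline the paper argues for is usually implemented as allow-list checks at every input boundary. A generic sketch follows; the pattern and length limits are illustrative choices, not taken from the paper.

```python
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")   # allow-list pattern

def validate_username(value: str) -> str:
    """Accept only what the allow-list permits; deny-lists that try to
    enumerate bad characters are what attackers routinely bypass."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

validate_username("alice_01")     # passes
# validate_username("<script>")  # raises ValueError
```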

Aydin, A.; Alkhalaf, M.; Bultan, T., "Automated Test Generation from Vulnerability Signatures," Software Testing, Verification and Validation (ICST), 2014 IEEE Seventh International Conference on, pp. 193, 202, March 31 2014-April 4 2014. doi: 10.1109/ICST.2014.32 Web applications need to validate and sanitize user inputs in order to avoid attacks such as Cross Site Scripting (XSS) and SQL Injection. Writing string manipulation code for input validation and sanitization is an error-prone process leading to many vulnerabilities in real-world web applications. Automata-based static string analysis techniques can be used to automatically compute vulnerability signatures (represented as automata) that characterize all the inputs that can exploit a vulnerability. However, there are several factors that limit the applicability of static string analysis techniques in general: 1) undecidability of static string analysis requires the use of approximations, leading to false positives, 2) static string analysis tools do not handle all string operations, and 3) the dynamic nature of scripting languages makes static analysis difficult. In this paper, we show that vulnerability signatures computed for deliberately insecure web applications (developed for demonstrating different types of vulnerabilities) can be used to generate test cases for other applications. Given a vulnerability signature represented as an automaton, we present algorithms for test case generation based on state, transition, and path coverage. These automatically generated test cases can be used to test applications that are not analyzable statically, and to discover attack strings that demonstrate how the vulnerabilities can be exploited.

Keywords: Web services; authoring languages; automata theory; digital signatures; program diagnostics; program testing; attack string discovery; automata-based static string analysis techniques; automated test case generation; automatic vulnerability signature computation; insecure Web applications; path coverage; scripting languages; state; static string analysis undecidability; transition; Algorithm design and analysis; Approximation methods; Automata; Databases; HTML; Security; Testing; automata-based test generation; string analysis; validation and sanitization; vulnerability signatures (ID#: 15-3804)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823881&isnumber=6823846
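
The generation step can be pictured with a toy signature automaton: walk the automaton, take each transition at least once, and emit the accepted strings as test inputs. The three-state automaton below is hypothetical, and the sketch covers only the transition-coverage criterion among those the paper presents.

```python
# Hypothetical vulnerability-signature automaton: strings it accepts are
# candidate exploit inputs.
DFA = {
    ("q0", "<"): "q1",
    ("q1", "s"): "q2",
    ("q2", ">"): "q_acc",
}
ACCEPT = {"q_acc"}

def tests_by_transition_coverage(dfa, start="q0", accept=ACCEPT):
    """Depth-first walk taking each transition at most once, recording
    each accepted string reached along the way (transition coverage)."""
    covered, tests = set(), []

    def dfs(state, prefix):
        if state in accept and prefix:
            tests.append(prefix)
        for (src, ch), dst in dfa.items():
            if src == state and (src, ch) not in covered:
                covered.add((src, ch))
                dfs(dst, prefix + ch)

    dfs(start, "")
    return tests

print(tests_by_transition_coverage(DFA))   # ['<s>']
```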

SHAR, L.; Briand, L.; Tan, H., "Web Application Vulnerability Prediction using Hybrid Program Analysis and Machine Learning," Dependable and Secure Computing, IEEE Transactions on, vol. PP, no. 99, pp.1, 1, 20 November 2014. doi: 10.1109/TDSC.2014.2373377 Due to limited time and resources, web software engineers need support in identifying vulnerable code. A practical approach to predicting vulnerable code would enable them to prioritize security auditing efforts. In this paper, we propose using a set of hybrid (static+dynamic) code attributes that characterize input validation and input sanitization code patterns and are expected to be significant indicators of web application vulnerabilities. Because static and dynamic program analyses complement each other, both techniques are used to extract the proposed attributes in an accurate and scalable way. Current vulnerability prediction techniques rely on the availability of data labeled with vulnerability information for training. For many real world applications, past vulnerability data is often not available or at least not complete. Hence, to address both situations where labeled past data is fully available or not, we apply both supervised and semi-supervised learning when building vulnerability predictors based on hybrid code attributes. Given that semi-supervised learning is entirely unexplored in this domain, we describe how to use this learning scheme effectively for vulnerability prediction. We performed empirical case studies on seven open source projects where we built and evaluated supervised and semi-supervised models. When cross validated with fully available labeled data, the supervised models achieve an average of 77% recall and 5% probability of false alarm for predicting SQL injection, cross site scripting, remote code execution and file inclusion vulnerabilities. With a low amount of labeled data, when compared to the supervised model, the semi-supervised model showed an average improvement of 24% higher recall and 3% lower probability of false alarm, thus suggesting semi-supervised learning may be a preferable solution for many real world applications where vulnerability data is missing.

Keywords: Data models; HTML; Security; Semisupervised learning; Servers; Software; Training; Vulnerability prediction; empirical study; input validation and sanitization; program analysis; security measures (ID#: 15-3805)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6963442&isnumber=4358699
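
The semi-supervised setup the paper explores maps naturally onto off-the-shelf tooling: mark unlabeled samples with -1 and let a self-training wrapper pseudo-label them. The feature values below are hypothetical stand-ins for the paper's hybrid validation and sanitization attributes, and the model choice is an illustrative assumption, not the authors' configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

# Hypothetical attributes per sink: [validation ops, sanitization ops]
X = np.array([[3, 0], [0, 2], [4, 1], [0, 0], [2, 3], [1, 0]])
y = np.array([0, 0, 0, 1, -1, -1])   # -1 marks unlabeled samples

base = RandomForestClassifier(n_estimators=50, random_state=0)
model = SelfTrainingClassifier(base).fit(X, y)   # pseudo-labels the -1s
print(model.predict([[0, 1]]))   # predicted label for a new sink
```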

Quirolgico, Steve, "App Vetting Systems: Issues And Challenges," IT Professional Conference (IT Pro), 2014, pp. 1, 13, 22 May 2014. doi: 10.1109/ITPRO.2014.7029287 App vetting is the process of approving or rejecting an app prior to deployment on a mobile device. The decision to approve or reject an app is based on the organization's security requirements and the type and severity of security vulnerabilities found in the app. Security vulnerabilities including Cross Site Scripting (XSS), information leakage, authentication and authorization, session management, and SQL injection can be exploited to steal information or control a device.

Keywords: Computer security; Information technology; Laboratories; Mobile communication; Mobile handsets; NIST (ID#: 15-3806)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7029287&isnumber=7029273

Ferguson, B.; Tall, A.; Olsen, D., "National Cyber Range Overview," Military Communications Conference (MILCOM), 2014 IEEE, pp. 123, 128, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.27 The National Cyber Range (NCR) is an innovative Department of Defense (DoD) resource originally established by the Defense Advanced Research Projects Agency (DARPA) and now under the purview of the Test Resource Management Center (TRMC). It provides a unique environment for cyber security testing throughout the program development life cycle using unique methods to assess resiliency to advanced cyberspace security threats. This paper describes what a cyber security range is, how it might be employed, and the advantages a program manager (PM) can gain in applying the results of range events. Creating realism in a test environment isolated from the operational environment is a special challenge in cyberspace. Representing the scale and diversity of the complex DoD communications networks at a fidelity detailed enough to realistically portray current and anticipated attack strategies (e.g., Malware, distributed denial of service attacks, cross-site scripting) is complex. The NCR addresses this challenge by representing an Internet-like environment by employing a multitude of virtual machines and physical hardware augmented with traffic emulation, port/protocol/service vulnerability scanning, and data capture tools. Coupled with a structured test methodology, the PM can efficiently and effectively engage with the Range to gain cyberspace resiliency insights. The NCR capability, when applied, allows the DoD to incorporate cyber security early to avoid high cost integration at the end of the development life cycle. This paper provides an overview of the resources of the NCR which may be especially helpful for DoD PMs to find the best approach for testing the cyberspace resiliency of their systems under development.

Keywords: computer network security; virtual machines; Department of Defense; DoD communication networks; NCR; National Cyber Range; cyberspace resiliency testing; cyberspace security threats; traffic emulation; virtual machines; Cyberspace; Malware; Resource management; Testing; US Department of Defense; cyberspace; range; security; test (ID#: 15-3807)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956748&isnumber=6956719


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Moving Target Defense (2014 Year in Review)

 

 
SoS Newsletter Logo

Moving Target Defense
(2014 Year in Review)

 

One of the research thrusts outlined in the 2011 report Trustworthy Cyberspace: Strategic Plan for the Federal Cybersecurity Research and Development Program was Moving Target (MT) research and development that results in the presentation of a dynamic attack surface to an adversary, increasing the work factor necessary to successfully attack and exploit a cyber target. The subsequent Symposium on Moving Target Research brought together and published the work of the MT community to provide a basis for building on the current state of the art as of June 2012. The works cited here were published in 2014.  

 

Yue-Bin Luo; Bao-Sheng Wang; Gui-Lin Cai, "Effectiveness of Port Hopping as a Moving Target Defense," Security Technology (SecTech), 2014 7th International Conference on, pp.7,10, 20-23 Dec. 2014. doi: 10.1109/SecTech.2014.9 Port hopping is a typical moving target defense, which constantly changes the service port number to thwart reconnaissance attacks. It is effective in hiding service identities and confusing potential attackers, but it is still unknown how effective port hopping is and under what circumstances it is a viable proactive defense, because existing works are limited and usually discuss only a few parameters or give limited empirical studies. This paper introduces an urn model and quantifies the likelihood of attacker success in terms of the port pool size, number of probes, number of vulnerable services, and hopping frequency. Theoretical analysis shows that port hopping is an effective and promising proactive defense technology in thwarting network attacks.

Keywords: security of data; attacker success likelihood; moving target defense; network attacks; port hopping; proactive defense technology; reconnaissance attack; service identity hiding; urn model; Analytical models; Computers; Ports (Computers); Probes; Reconnaissance; Servers; moving target defense; port hopping; proactive defense; urn model (ID#: 15-3858) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023273&isnumber=7023263
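
The contrast the urn model formalizes can be sanity-checked with a few lines of probability code. This is a minimal sketch, not the paper's model: it assumes a scanner probing distinct ports of a static service (sampling without replacement) versus a hopping service that re-randomizes its port between probes (independent draws), with illustrative pool sizes.

from math import comb

def p_hit_static(pool, vulnerable, probes):
    # Static service: the scanner never re-probes a port, so success is
    # one minus the chance that all probed ports are invulnerable.
    if probes >= pool:
        return 1.0
    return 1.0 - comb(pool - vulnerable, probes) / comb(pool, probes)

def p_hit_hopping(pool, vulnerable, probes):
    # Hopping service: the port is re-drawn between probes, so every
    # probe is an independent Bernoulli trial.
    return 1.0 - (1.0 - vulnerable / pool) ** probes

for probes in (1024, 16384, 49152):
    print(probes,
          round(p_hit_static(65536, 1, probes), 4),
          round(p_hit_hopping(65536, 1, probes), 4))

After 49,152 probes the static service has been found with probability 0.75, while the hopping service is still found with probability below 0.53; how pool size and hopping frequency move this gap is what the urn analysis quantifies.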

 

Carroll, T.E.; Crouse, M.; Fulp, E.W.; Berenhaut, K.S., "Analysis of Network Address Shuffling As A Moving Target Defense," Communications (ICC), 2014 IEEE International Conference on,  pp. 701, 706, 10-14 June 2014. doi: 10.1109/ICC.2014.6883401 Address shuffling is a type of moving target defense that prevents an attacker from reliably contacting a system by periodically remapping network addresses. Although limited testing has demonstrated it to be effective, little research has been conducted to examine the theoretical limits of address shuffling. As a result, it is difficult to understand how effective shuffling is and under what circumstances it is a viable moving target defense. This paper introduces probabilistic models that can provide insight into the performance of address shuffling. These models quantify the probability of attacker success in terms of network size, quantity of addresses scanned, quantity of vulnerable systems, and the frequency of shuffling. Theoretical analysis shows that shuffling is an acceptable defense if there is a small population of vulnerable systems within a large network address space, however shuffling has a cost for legitimate users. These results will also be shown empirically using simulation and actual traffic traces.

Keywords: probability; security of data; moving target defense; network address remapping; network address shuffling; probabilistic models; Computational modeling; Computers; IP networks; Information systems; Probes; Reconnaissance (ID#: 15-3859) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883401&isnumber=6883277
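
A crude Monte Carlo version of the same question (all sizes hypothetical, and the scanner assumed to walk addresses in order) shows both the benefit and its limits:

import random

def p_scanner_success(space, vulnerable, shuffle_every, horizon, trials=1000):
    # Estimate the probability that a sequential scanner hits a vulnerable
    # address within `horizon` probes, when vulnerable hosts are remapped
    # to fresh random addresses every `shuffle_every` probes.
    hits = 0
    for _ in range(trials):
        targets = set(random.sample(range(space), vulnerable))
        for probe in range(horizon):
            if shuffle_every and probe and probe % shuffle_every == 0:
                targets = set(random.sample(range(space), vulnerable))
            if probe in targets:
                hits += 1
                break
    return hits / trials

print(p_scanner_success(4096, 1, 0, 4096))     # static: the full scan always wins
print(p_scanner_success(4096, 1, 256, 4096))   # shuffled: roughly 1 - (15/16)**16

The exhaustive scan that was certain against a static network succeeds only about 64% of the time once bindings move every 256 probes, matching the paper's conclusion that shuffling pays off when vulnerable hosts are sparse in a large address space, while legitimate users bear the cost of tracking the moves.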

 

Wei Peng; Feng Li; Chin-Tser Huang; Xukai Zou, "A Moving-Target Defense Strategy For Cloud-Based Services With Heterogeneous And Dynamic Attack Surfaces," Communications (ICC), 2014 IEEE International Conference on, pp. 804, 809, 10-14 June 2014. doi: 10.1109/ICC.2014.6883418 Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.

Keywords: cloud computing; probability; security of data; S-shaped generalized logistic function; VM migration-snapshotting; attack-surface heterogeneity-and-dynamics awareness; attacker intelligence; cloud infrastructures; cloud-based service security; cloud-specific features; configuration staticity; deep automation; diversity-compatibility; dynamic attack surfaces; dynamic probability; heterogeneous attack surfaces; homogeneity problem; moving-target defense strategy; probabilistic MTD service deployment; replacement pool; service attack surface; Equations; Information systems; Mathematical model; Probabilistic logic; Probes; Security; Uncertainty; moving-target defense; probabilistic algorithm; risk modeling; simulation (ID#: 15-3860) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883418&isnumber=6883277
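
The S-shaped compromise probability can be written down directly as a generalized logistic function of the attacker's accumulated intelligence. The shape parameters below are invented for illustration; the point is the flat left tail to which a migration or snapshot rollback returns the attacker.

import math

def p_compromise(t, K=1.0, B=0.9, M=6.0, Q=1.0, nu=1.0):
    # Generalized logistic curve: t is the attacker's accumulated
    # intelligence about the current attack surface.
    return K / (1.0 + Q * math.exp(-B * (t - M))) ** (1.0 / nu)

for t in range(0, 13, 2):
    print(t, round(p_compromise(t), 3))
# A VM migration that changes the attack surface resets t toward 0,
# dropping the compromise probability back onto the flat left tail.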

 

Morrell, Christopher; Ransbottom, J.Scot; Marchany, Randy; Tront, Joseph G., "Scaling IPv6 Address Bindings In Support Of A Moving Target Defense," Internet Technology and Secured Transactions (ICITST), 2014 9th International Conference for, pp. 440, 445, 8-10 Dec. 2014. doi: 10.1109/ICITST.2014.7038852 Moving target defense is an area of network security research in which machines are moved logically around a network in order to avoid detection. This is done by leveraging the immense size of the IPv6 address space and the statistical improbability of two machines selecting the same IPv6 address. This defensive technique forces a malicious actor to focus on the reconnaissance phase of their attack rather than focusing only on finding holes in a machine's static defenses. We have a current implementation of an IPv6 moving target defense entitled MT6D, which works well although it is limited to peer-to-peer scenarios. As we push our research forward into client-server networks, we must discover the limits of the client-server ratio. In our current implementation of a simple UDP echo server that binds large numbers of IPv6 addresses to the Ethernet interface, we discover limits both in the number of addresses that we can successfully bind to an interface and in the speed at which UDP requests can be successfully handled across a large number of bound interfaces.

Keywords: Internet; Kernel; Security; Servers; Sockets; Standards; Time factors; IPv6; Moving Target Defense; Networking; Sockets (ID#: 15-3861) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7038852&isnumber=7038754
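
The binding limit the authors measure can be probed with ordinary socket code. In the sketch below the prefix and port are made up, and each address must already be configured on the interface (for example with `ip -6 addr add`) before the kernel will accept the bind; the count of successful binds is the quantity of interest.

import socket

def bind_many(prefix, count, port=40000):
    # Bind one UDP socket per address prefix::1 .. prefix::count and
    # report how many bindings the kernel accepted before refusing.
    socks = []
    for suffix in range(1, count + 1):
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        try:
            s.bind(("%s::%x" % (prefix, suffix), port))
        except OSError:
            s.close()
            break
        socks.append(s)
    return socks

socks = bind_many("fd00:aaaa:bbbb:cccc", 1000)
print("bound %d addresses" % len(socks))
for s in socks:
    s.close()

A full MT6D-style experiment would then drive UDP echo traffic across the bound sockets (e.g., with select()) to find the second limit the abstract mentions, the sustainable request rate.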

 

Kampanakis, P.; Perros, H.; Beyene, T., "SDN-based solutions for Moving Target Defense network protection," A World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2014 IEEE 15th International Symposium on, pp.1,6, 19-19 June 2014. doi: 10.1109/WoWMoM.2014.6918979 Software-Defined Networking (SDN) allows network capabilities and services to be managed through a central control point. Moving Target Defense (MTD) on the other hand, introduces a constantly adapting environment in order to delay or prevent attacks on a system. MTD is a use case where SDN can be leveraged in order to provide attack surface obfuscation. In this paper, we investigate how SDN can be used in some network-based MTD techniques. We first describe the advantages and disadvantages of these techniques, the potential countermeasures attackers could take to circumvent them, and the overhead of implementing MTD using SDN. Subsequently, we study the performance of the SDN-based MTD methods using Cisco's One Platform Kit and we show that they significantly increase the attacker's overheads.

Keywords: computer network security; Cisco One Platform Kit; SDN-based MTD methods; SDN-based solutions; attack surface obfuscation; central control point; countermeasures attackers; moving target defense network protection; network-based MTD techniques; software-defined networking; Algorithm design and analysis; Delays; Payloads; Ports (Computers); Reconnaissance; Servers; Cisco onePK; MTD; Moving Target Defense; SDN; Software Defined Networks (ID#: 15-3862) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6918979&isnumber=6918912

 

Carvalho, M.; Ford, R., "Moving-Target Defenses for Computer Networks," Security & Privacy, IEEE, vol. 12, no.2, pp. 73, 76, Mar.-Apr. 2014. doi: 10.1109/MSP.2014.30 One of the criticisms of traditional security approaches is that they present a static target for attackers. Critics state, with good justification, that by allowing the attacker to reconnoiter a system at leisure to plan an attack, defenders are immediately disadvantaged. To address this, the concept of moving-target defense (MTD) has recently emerged as a new paradigm for protecting computer networks and systems.

Keywords: computer network security; MTD; computer network protection; moving-target defenses; security approach; static target; Complexity theory; Computer crime; Computer security; Cyberspace; Network security; Target tracking; MTD; attack; moving-target defense; system security; target (ID#: 15-3863) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798537&isnumber=6798534

 

Hong, J.B.; Dong Seong Kim, "Scalable Security Models for Assessing Effectiveness of Moving Target Defenses," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp. 515, 526, 23-26 June 2014. doi: 10.1109/DSN.2014.54 Moving Target Defense (MTD) changes the attack surface of a system that confuses intruders to thwart attacks. Various MTD techniques are developed to enhance the security of a networked system, but the effectiveness of these techniques is not well assessed. Security models (e.g., Attack Graphs (AGs)) provide formal methods of assessing security, but modeling the MTD techniques in security models has not been studied. In this paper, we incorporate the MTD techniques in security modeling and analysis using a scalable security model, namely Hierarchical Attack Representation Models (HARMs), to assess the effectiveness of the MTD techniques. In addition, we use importance measures (IMs) for scalable security analysis and deploying the MTD techniques in an effective manner. The performance comparison between the HARM and the AG is given. Also, we compare the performance of using the IMs and the exhaustive search method in simulations.

Keywords: graph theory; security of data; HARMs; IMs; MTD; attack graphs; effectiveness assessment; exhaustive search method; hierarchical attack representation models; importance measures; moving target defenses; networked system security; scalable security models; security assessment; Analytical models; Computational modeling; Diversity methods; Internet; Linux; Measurement; Security; Attack Representation Model; Importance Measures; Moving Target Defense; Security Analysis; Security Modeling Techniques (ID#: 15-3864) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903607&isnumber=6903544
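
A two-layer HARM can be miniaturized to show the mechanics: the lower layer scores each host as an OR over its vulnerabilities' success probabilities, and the upper layer multiplies along attack paths through the network. The topology and numbers below are invented, not taken from the paper.

def p_host(vuln_probs):
    # Lower layer: the host falls if any single vulnerability succeeds.
    p_safe = 1.0
    for p in vuln_probs:
        p_safe *= 1.0 - p
    return 1.0 - p_safe

def p_path(path, lower):
    # Upper layer: the attacker must compromise every host on the path.
    p = 1.0
    for host in path:
        p *= p_host(lower[host])
    return p

lower = {"web": [0.6, 0.3], "app": [0.4], "db": [0.5, 0.2]}
for path in (["web", "app", "db"], ["web", "db"]):
    print(path, round(p_path(path, lower), 3))

An importance measure in this toy model is simply how much the best path's score drops when one host is hardened, so candidate MTD placements can be ranked by re-running these two functions rather than exhaustively searching a flat attack graph.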

 

Thompson, M.; Evans, N.; Kisekka, V., "Multiple OS Rotational Environment An Implemented Moving Target Defense," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp.1,6, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900086 Cyber-attacks continue to pose a major threat to existing critical infrastructure. Although suggestions for defensive strategies abound, Moving Target Defense (MTD) has only recently gained attention as a possible solution for mitigating cyber-attacks. The current work proposes a MTD technique that provides enhanced security through a rotation of multiple operating systems. The MTD solution developed in this research utilizes existing technology to provide a feasible dynamic defense solution that can be deployed easily in a real networking environment. In addition, the system we developed was tested extensively for effectiveness using CORE Impact Pro (CORE), Nmap, and manual penetration tests. The test results showed that platform diversity and rotation offer improved security. In addition, the likelihood of a successful attack decreased proportionally with time between rotations.

Keywords: operating systems (computers);security of data; CORE; CORE Impact Pro; MTD technique; Nmap; cyber-attacks mitigation; defensive strategies; manual penetration test; moving target defense; multiple OS rotational environment; operating systems; Availability; Fingerprint recognition; IP networks; Operating systems; Security; Servers; Testing; insert (ID#: 15-3865) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900086&isnumber=6900080

 

Marttinen, A.; Wyglinski, A.M.; Jantti, R., "Moving-Target Defense Mechanisms Against Source-Selective Jamming Attacks In Tactical Cognitive Radio MANETs," Communications and Network Security (CNS), 2014 IEEE Conference on, pp.14,20, 29-31 Oct. 2014. doi: 10.1109/CNS.2014.6997460 In this paper, we propose techniques for combating source-selective jamming attacks in tactical cognitive MANETs. Secure, reliable and seamless communications are important for facilitating tactical operations. Selective jamming attacks pose a serious security threat to the operations of wireless tactical MANETs, since selective strategies possess the potential to completely isolate a portion of the network from other nodes without giving a clear indication of a problem. Our proposed mitigation techniques use the concept of address manipulation and differ from other techniques presented in the open literature in that they employ a decentralized architecture rather than a centralized framework and do not require any extra overhead. Experimental results show that the proposed techniques enable communications in the presence of source-selective jamming attacks. When the presence of a source-selective jammer blocks transmissions completely, implementing the proposed flipped address mechanism increases the expected number of required transmission attempts by only one. The probability that our second approach, random address assignment, fails to resolve the correct source MAC address can be as small as 10^-7 when using accurate parameter selection.

Keywords: cognitive radio; computer network security; interference suppression; jamming; military communication; mobile ad hoc networks; probability; telecommunication network reliability; address manipulation; flipped address mechanism; moving target defense mechanism; parameter selection; probability; random address assignment; reliable communication; seamless communication; secure communication; source MAC address; source selective jammer block transmission; source selective jamming attack combination; tactical cognitive radio MANET; Ad hoc networks; Communication system security; Delays; Jamming; Mobile computing; Wireless communication (ID#: 15-3866) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6997460&isnumber=6997445
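
The very small failure probability quoted for random address assignment has the flavor of a birthday-style collision bound. The sketch below is an illustration with assumed parameters, not the paper's analysis: it computes the chance that any two of n nodes independently draw the same address.

def p_collision(nodes, space):
    # Probability that at least two of `nodes` independent uniform
    # draws from an address space of size `space` coincide.
    p_distinct = 1.0
    for k in range(nodes):
        p_distinct *= (space - k) / space
    return 1.0 - p_distinct

print(p_collision(10, 2**24))   # ~2.7e-6 for a 24-bit address suffix
print(p_collision(10, 2**48))   # vanishing for the full 48-bit MAC space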

 

Yu Li; Rui Dai; Junjie Zhang, "Morphing Communications Of Cyber-Physical Systems Towards Moving-Target Defense," Communications (ICC), 2014 IEEE International Conference on, pp. 592, 598, 10-14 June 2014. doi: 10.1109/ICC.2014.6883383 Since the massive deployment of Cyber-Physical Systems (CPSs) calls for long-range and reliable communication services with manageable cost, it has been believed to be an inevitable trend to relay a significant portion of CPS traffic through existing networking infrastructures such as the Internet. Adversaries who have access to networking infrastructures can therefore eavesdrop on network traffic and then perform traffic analysis attacks in order to identify CPS sessions and subsequently launch various attacks. As we can hardly prevent all adversaries from accessing network infrastructures, thwarting traffic analysis attacks becomes indispensable. Traffic morphing serves as an effective means towards this direction. In this paper, a novel traffic morphing algorithm, CPSMorph, is proposed to protect CPS sessions. CPSMorph maintains a number of network sessions whose distributions of inter-packet delays are statistically indistinguishable from those of typical network sessions. A CPS message will be sent through one of these sessions with assured satisfaction of its time constraint. CPSMorph strives to minimize the overhead by dynamically adjusting the morphing process. It is characterized by low complexity as well as high adaptivity to changing dynamics of CPS sessions. Experimental results have shown that CPSMorph can effectively perform traffic morphing for real-time CPS messages with moderate overhead.

Keywords: Internet; computer network reliability; telecommunication traffic; CPS traffic; CPSMorph traffic morphing algorithm; Internet; cyber-physical systems; eavesdrop network traffic; inter-packet delays; long-range communication services; morphing communications; moving-target defense; network sessions; networking infrastructures; reliable communication services; thwarting traffic analysis attacks; traffic analysis attacks; Algorithm design and analysis; Delays; Information systems; Real-time systems; Security; Silicon; Time factors (ID#: 15-3867) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883383&isnumber=6883277
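
The central constraint in CPSMorph, imitating a cover distribution of inter-packet delays without missing a CPS deadline, can be sketched as a truncated draw. The cover distribution and time budget below are hypothetical, and the real algorithm additionally re-estimates the cover distribution as background traffic changes.

import random

def morphed_delay(cover, time_left):
    # Draw an inter-packet delay from the cover distribution, truncated
    # so the pending CPS message still meets its deadline.
    feasible = [(d, w) for d, w in cover if d <= time_left]
    if not feasible:
        return 0.0                      # deadline dominates: send now
    delays, weights = zip(*feasible)
    return random.choices(delays, weights=weights, k=1)[0]

cover = [(0.01, 5), (0.05, 3), (0.20, 2)]   # (delay in seconds, weight)
time_left, schedule = 0.30, []
while True:
    d = morphed_delay(cover, time_left)
    if d == 0.0:
        break
    schedule.append(d)
    time_left -= d
print(schedule)

The truncation is what keeps the morphed session statistically close to cover traffic while still honoring the message's time constraint.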

 

Fink, G.A.; Haack, J.N.; McKinnon, A.D.; Fulp, E.W., "Defense on the Move: Ant-Based Cyber Defense," Security & Privacy, IEEE, vol. 12, no. 2, pp.36,43, Mar.-Apr. 2014. doi: 10.1109/MSP.2014.21 Many common cyberdefenses (like firewalls and intrusion-detection systems) are static, giving attackers the freedom to probe them at will. Moving-target defense (MTD) adds dynamism, putting the systems to be defended in motion, potentially at great cost to the defender. An alternative approach is a mobile resilient defense that removes attackers' ability to rely on prior experience without requiring motion in the protected infrastructure. The defensive technology absorbs most of the cost of motion, is resilient to attack, and is unpredictable to attackers. The authors' mobile resilient defense, Ant-Based Cyber Defense (ABCD), is a set of roaming, bio-inspired, digital-ant agents working with stationary agents in a hierarchy headed by a human supervisor. ABCD provides a resilient, extensible, and flexible defense that can scale to large, multi-enterprise infrastructures such as the smart electric grid.

Keywords: optimisation; security of data; ant-based cyber defense; defended systems; mobile resilient defense; moving-target defense; protected infrastructure; Computer crime; Computer security; Cyberspace; Database systems; Detectors; Malware; Mobile communication; Particle swarm intelligence; Statistics; Target tracking; MTD; cybersecurity; digital ants; moving-target defense; swarm intelligence (ID#: 15-3868) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798536&isnumber=6798534

 

Quan Jia; Huangxin Wang; Fleck, D.; Fei Li; Stavrou, A.; Powell, W., "Catch Me If You Can: A Cloud-Enabled DDoS Defense," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.264,275, 23-26 June 2014. doi: 10.1109/DSN.2014.35 We introduce a cloud-enabled defense mechanism for Internet services against network and computational Distributed Denial-of-Service (DDoS) attacks. Our approach performs selective server replication and intelligent client re-assignment, turning victim servers into moving targets for attack isolation. We introduce a novel system architecture that leverages a "shuffling" mechanism to compute the optimal re-assignment strategy for clients on attacked servers, effectively separating benign clients from even sophisticated adversaries that persistently follow the moving targets. We introduce a family of algorithms to optimize the runtime client-to-server re-assignment plans and minimize the number of shuffles to achieve attack mitigation. The proposed shuffling-based moving target mechanism enables effective attack containment using fewer resources than attack dilution strategies using pure server expansion. Our simulations and proof-of-concept prototype using Amazon EC2 [1] demonstrate that we can successfully mitigate large-scale DDoS attacks in a small number of shuffles, each of which incurs a few seconds of user-perceived latency.

Keywords: client-server systems; cloud computing; computer network security; Amazon EC2; Internet services; attack dilution strategies; attack mitigation; client-to-server reassignment plans; cloud computing; cloud-enabled DDoS defense; computational distributed denial-of-service attacks; intelligent client reassignment; large-scale DDoS attacks; moving target mechanism; moving targets; network attacks; optimal reassignment strategy; shuffling mechanism; system architecture; turning victim servers; Cloud computing; Computer architecture; Computer crime; IP networks; Servers; Web and internet services; Cloud; DDoS; Moving Target Defense; Shuffling (ID#: 15-3869) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903585&isnumber=6903544
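
The shuffling mechanism can be simulated in miniature: clients landing on a replica that sees no attack are proven benign and set aside, and the remaining suspects are re-shuffled. The sizes below are arbitrary, and attacked replicas are assumed perfectly observable, which the real system only approximates.

import random

def rounds_to_isolate(n_clients, n_attackers, n_servers, max_rounds=50):
    # Count shuffles until only the true attackers remain suspect.
    attackers = set(random.sample(range(n_clients), n_attackers))
    suspects = set(range(n_clients))
    for rnd in range(1, max_rounds + 1):
        pools = [[] for _ in range(n_servers)]
        for c in suspects:
            pools[random.randrange(n_servers)].append(c)
        for pool in pools:
            if not attackers.intersection(pool):   # replica saw no attack
                suspects.difference_update(pool)   # its whole pool is benign
        if suspects == attackers:
            return rnd
    return max_rounds

print(rounds_to_isolate(n_clients=1000, n_attackers=5, n_servers=10))

A handful of rounds typically suffices, consistent with the paper's report of mitigation within a small number of shuffles, each costing the reassigned clients a few seconds of latency.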

 

Moody, W.C.; Hongxin Hu; Apon, A., "Defensive maneuver cyber platform modeling with Stochastic Petri Nets," Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom), 2014 International Conference on, pp.531, 538, 22-25 Oct. 2014. Distributed and parallel applications are critical information technology systems in multiple industries, including academia, military, government, financial, medical, and transportation. These applications present target-rich environments for malicious attackers seeking to disrupt the confidentiality, integrity and availability of these systems. Applying the military concept of defensive cyber maneuver to these systems can provide protection and defense mechanisms that allow survivability and operational continuity. Understanding the tradeoffs between information systems security and operational performance when applying maneuver principles is of interest to administrators, users, and researchers. To this end, we present a model of a defensive maneuver cyber platform using Stochastic Petri Nets. This model enables the understanding and evaluation of the costs and benefits of maneuverability in a distributed application environment, specifically focusing on moving target defense and deceptive defense strategies.

Keywords: Petri nets; security of data; stochastic processes; deceptive defense strategies; defensive maneuver cyber platform modeling; information systems security; malicious attackers; moving target defense strategies; stochastic Petri nets; Control systems; Cyberspace; Military computing; Petri nets; Security; Standards; Stochastic processes (ID#: 15-3870) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7014609&isnumber=7011734

 

Torrieri, D., "Cyber Maneuvers and Maneuver Keys," Military Communications Conference (MILCOM), 2014 IEEE, pp. 262, 267, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.48 This paper presents an overview of cyber maneuvers and their roles in cyber security. As the cyber war escalates, a strategy that preemptively limits and curtails attacks is required. Such a proactive strategy is called a cyber maneuver and is a refinement of the concept of a moving-target defense, which includes both reactive and proactive network changes. The major advantages of cyber maneuvers relative to other moving-target defenses are described. The use of maneuver keys in making cyber maneuvers much more feasible and affordable is explained. As specific examples, the applications of maneuver keys in encryption algorithms and as spread-spectrum keys are described. The integration of cyber maneuvers into a complete cyber security system with intrusion detection, identification of compromised nodes, and secure rekeying is presented. An example of secure rekeying despite the presence of compromised nodes is described.

Keywords: cryptography; cyber maneuvers; cyber security system; encryption algorithm; intrusion detection; maneuver keys; moving-target defenses; proactive network change; proactive strategy; reactive network change; secure rekeying; spread-spectrum key; Computer security; Encryption; Hardware; Intrusion detection; Jamming (ID#: 15-3871) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956769&isnumber=6956719

 

Tunc, C.; Fargo, F.; Al-Nashif, Y.; Hariri, S.; Hughes, J., "Autonomic Resilient Cloud Management (ARCM) Design and Evaluation," Cloud and Autonomic Computing (ICCAC), 2014 International Conference on, pp. 44, 49, 8-12 Sept. 2014. doi: 10.1109/ICCAC.2014.35 Cloud computing is emerging as a new paradigm that aims to deliver computing as a utility. For the cloud computing paradigm to be fully adopted and effectively used, it is critical that the security mechanisms are robust and resilient to faults and attacks. Securing cloud systems is extremely complex due to the many interdependent tasks such as application layer firewalls, alert monitoring and analysis, source code analysis, and user identity management. It is strongly believed that we cannot build cloud services that are immune to attacks. Resiliency to attacks is therefore becoming an important approach for addressing cyber-attacks and mitigating their impacts, and mission-critical systems demand even higher resiliency. In this paper, we present a methodology to develop Autonomic Resilient Cloud Management (ARCM) based on moving target defense, cloud service Behavior Obfuscation (BO), and autonomic computing. By continuously and randomly changing the cloud execution environments and platform types, it becomes difficult, especially for insider attackers, to figure out the current execution environment and its existing vulnerabilities, thus allowing the system to evade attacks. We show how to apply ARCM to one class of applications, Map/Reduce, and evaluate its performance and overhead.

Keywords: cloud computing; security of data; software fault tolerance; ARCM;BO; autonomic resilient cloud management; cloud computing; cloud service behavior obfuscation; cloud system security; moving target defense; Cloud computing; Conferences; Autonomic Resilient Cloud Management; behavior obfuscation; resiliency (ID#: 15-3872) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024043&isnumber=7024029

 

Azab, M., "Multidimensional Diversity Employment for Software Behavior Encryption," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1, 5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814033 Modern cyber systems and their integration with the infrastructure have an immense effect on productivity and quality of life. Their involvement in our daily life elevates the need for means to ensure their resilience against attacks and failure. One major threat is software monoculture. Recent research has demonstrated the danger of software monoculture and presented diversity as a way to reduce the attack surface. In this paper, we propose ChameleonSoft, a multidimensional software diversity employment that, in effect, induces spatiotemporal software behavior encryption and a moving target defense. ChameleonSoft introduces a loosely coupled, online programmable software-execution foundation separating logic, state, and physical resources. The elastic construction of the foundation enables ChameleonSoft to define running software as a set of behaviorally-mutated, functionally-equivalent code variants. ChameleonSoft intelligently shuffles these variants at runtime while changing their physical location, inducing enough untraceable confusion and diffusion to encrypt the execution behavior of the running software. ChameleonSoft is also equipped with an autonomic failure recovery mechanism for enhanced resilience. In order to test the applicability of the proposed approach, we present a prototype of the ChameleonSoft Behavior Encryption (CBE) and recovery mechanisms. Further, using analysis and simulation, we study the performance and security aspects of the proposed system. This study aims to assess the provisioned level of security by measuring the avalanche effect percentage and the induced confusion and diffusion levels to evaluate the strength of the CBE mechanism. Further, we compute the computational cost of security provisioning and of enhancing system resilience.

Keywords: computational complexity; cryptography; multidimensional systems; software fault tolerance; system recovery; CBE mechanism; ChameleonSoft Behavior Encryption; ChameleonSoft recovery mechanisms; autonomic failure recovery mechanism; avalanche effect percentage; behaviorally-mutated functionally-equivalent code variants; computational cost; confusion levels; diffusion levels; moving target defense; multidimensional software diversity employment; online programmable software-execution foundation separating logic; security level; security provisioning; software monoculture; spatiotemporal software behavior encryption; system resilience; Employment; Encryption; Resilience; Runtime; Software; Spatiotemporal phenomena (ID#: 15-3873) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814033&isnumber=6813963
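
The avalanche metric used to score the behavior encryption is easy to state in code: flip one input bit and count how many output bits change. The sketch below uses SHA-256 purely as a stand-in for the CBE mechanism under test.

import hashlib

def avalanche_percent(data, bit):
    # Flip one input bit, then count the differing output bits.
    mutated = bytearray(data)
    mutated[bit // 8] ^= 1 << (bit % 8)
    a = hashlib.sha256(data).digest()
    b = hashlib.sha256(bytes(mutated)).digest()
    changed = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return 100.0 * changed / (8 * len(a))

print(avalanche_percent(b"code variant execution trace", 0))
# ~50% is ideal: half the output bits flip per single-bit input change.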

 

Hang Shao; Japkowicz, N.; Abielmona, R.; Falcon, R., "Vessel Track Correlation And Association Using Fuzzy Logic and Echo State Networks," Evolutionary Computation (CEC), 2014 IEEE Congress on, pp.2322,2329, 6-11 July 2014. doi: 10.1109/CEC.2014.6900231 Tracking moving objects is a task of the utmost importance to the defence community. As this task requires high accuracy, rather than employing a single detector, it has become common to use multiple ones. In such cases, the tracks produced by these detectors need to be correlated (if they belong to the same sensing modality) or associated (if they were produced by different sensing modalities). In this work, we introduce Computational-Intelligence-based methods for correlating and associating various contacts and tracks pertaining to maritime vessels in an area of interest. Fuzzy k-Nearest Neighbours will be used to conduct track correlation and Fuzzy C-Means clustering will be applied for association. In that way, the uncertainty of the track correlation and association is handled through fuzzy logic. To better model the state of the moving target, the traditional Kalman Filter will be extended using an Echo State Network. Experimental results on five different types of sensing systems will be discussed to justify the choices made in the development of our approach. In particular, we will demonstrate the judiciousness of using Fuzzy k-Nearest Neighbours and Fuzzy C-Means on our tracking system and show how the extension of the traditional Kalman Filter by a recurrent neural network is superior to its extension by other methods.

Keywords: Kalman filters; correlation methods; fuzzy logic; fuzzy set theory; marine vehicles; naval engineering computing; object tracking; pattern clustering; recurrent neural nets; Kalman filter; computational-intelligence-based methods; defense community; echo state networks; fuzzy c-means clustering; fuzzy k-nearest neighbours; fuzzy logic; maritime vessels; moving object tracking; recurrent neural network; sensing modality; vessel track association; vessel track correlation; Correlation; Mathematical model; Radar tracking; Recurrent neural networks; Sensors; Target tracking; Vectors; Computational Intelligence; Data Fusion; Defence and Security; Fuzzy Logic; Maritime Domain Awareness; Neural Networks; Track Association; Track Correlation (ID#: 15-3874) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900231&isnumber=6900223

 

Jian Wu; Yongmei Jiang; Gangyao Kuang; Jun Lu; Zhiyong Li, "Parameter Estimation For SAR Moving Target Detection using Fractional Fourier Transform," Geoscience and Remote Sensing Symposium (IGARSS), 2014 IEEE International, pp. 596, 599, 13-18 July 2014. doi: 10.1109/IGARSS.2014.6946493 This paper proposes an algorithm for multi-channel SAR ground moving target detection and estimation using the Fractional Fourier Transform (FrFT). To detect moving targets with low speed, the clutter is first suppressed by Displaced Phase Center Antenna (DPCA) processing, which enhances the signal-to-clutter ratio. Once the clutter has been suppressed, the echo of the moving target remains and can be regarded as a chirp signal whose parameters can be estimated by the FrFT. The FrFT, one of the most widely used tools for time-frequency analysis, is utilized to estimate the Doppler parameters, from which the moving parameters, including the velocity and the acceleration, can be obtained. The effectiveness of the proposed method is validated by simulation.

Keywords: Doppler radar; Fourier transforms; geophysical techniques; parameter estimation; radar antennas; synthetic aperture radar; Doppler parameters; FrFT; SAR moving target detection; displace phase center antenna; fractional fourier transform; moving target echo; multichannel SAR ground algorithm; parameter estimation; target moving; time-frequency analysis; Acceleration; Apertures; Azimuth; Clutter; Doppler effect; Parameter estimation; Radar; Fractional Fourier Transform; GMTI; parameter estimation (ID#: 15-3875) 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6946493&isnumber=6946328
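
The estimation step can be illustrated without a full FrFT implementation: sweeping the fractional order is, for this purpose, equivalent to a grid search for the chirp rate that best concentrates the dechirped spectrum. The sampling rate, chirp, and search grid below are synthetic.

import numpy as np

def estimate_chirp_rate(echo, fs, rates):
    # Dechirp with each candidate rate; the true rate collapses the echo
    # to a single tone, maximizing the spectral peak.
    t = np.arange(len(echo)) / fs
    def peak(k):
        return np.abs(np.fft.fft(echo * np.exp(-1j * np.pi * k * t**2))).max()
    return max(rates, key=peak)

fs = 1000.0
t = np.arange(1024) / fs
echo = np.exp(1j * np.pi * (6.0 * t + 40.0 * t**2))   # tone plus chirp rate 40
print(estimate_chirp_rate(echo, fs, np.linspace(0.0, 80.0, 81)))  # -> 40.0

In the radar setting the recovered linear term maps to target velocity and the chirp rate to acceleration, the pair of Doppler parameters the paper extracts via the FrFT.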

 

Zhang Deping; Wang Quan; Wang Qingping; Wu WeiWei; Yuan NaiChang, "A Real Continuously Moving Target Simulation System Design Without Time Delay Error," Signal Processing, Communications and Computing (ICSPCC), 2014 IEEE International Conference on, pp.258, 261, 5-8 Aug. 2014. doi: 10.1109/ICSPCC.2014.6986194 The time delay of the echo generated by a moving target simulator based on digital delay techniques is discrete. There are therefore range and phase errors between the simulated target and a real target, and the simulated target moves discontinuously due to the discrete time delay. To solve this problem and generate a continuously moving target, this paper uses signal processing techniques to adjust the range and phase errors between the two targets. By adjusting the range gate, the time delay error is reduced to less than the sampling interval. According to the relationship between range and phase, the residual error within one range bin can be removed equivalently by phase compensation. The simulation results show that by adjusting the range gate the time delay errors are greatly reduced, and the residual errors can be removed by phase compensation. In other words, a real continuously moving target is generated and the problem is solved.

Keywords: delays; echo; radar signal processing; continuously moving target simulation system design; digital delay technique; discrete time delay; echo time delay; phase compensation; radar moving target simulator; signal processing technique; time delay error; Delay effects; Delay lines; Laser radar; Logic gates; Radar antennas; Radar cross-sections; moving target simulator; phase compensation; radar simulator; time delay error adjustment (ID#: 15-3876)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6986194&isnumber=6986138


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Science of Security (2014 Year in Review)

 

 
SoS Logo

Science of Security

(2014 Year in Review)

Many more articles and research studies are appearing with “Science of Security” as a keyword. In 2014, the number grew substantially: a scan of IEEE publications revealed almost 800 articles listing science of security as a keyword. The list is misleading, however, as a number of the citations use different definitions. The work cited here is a year-end compendium of 2014 articles deemed relevant to the Science of Security community by the editors.

 

Campbell, S., "Open Science, Open Security," High Performance Computing & Simulation (HPCS), 2014 International Conference on, pp.584,587, 21-25 July 2014. doi: 10.1109/HPCSim.2014.6903739 We propose that, to address the growing problems with complexity and data volumes in HPC security, we need to refactor how we look at data by creating tools that not only select data, but analyze and represent it in a manner well suited for intuitive analysis. We propose a set of rules describing what this means and provide a number of production-quality tools that represent our current best effort in implementing these ideas.

Keywords: data analysis; parallel processing; security of data; HPC security; data analysis; data representation; data selection; high performance computing; open science; open security; production quality tools; Buildings; Computer architecture; Filtering; Linux; Materials; Production; Security; High Performance Computing; Intrusion Detection; Security  (ID#:15-3419)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903739&isnumber=6903651

 

McDaniel, P.; Rivera, B.; Swami, A., "Toward a Science of Secure Environments," Security & Privacy, IEEE, vol. 12, no. 4, pp. 68, 70, July-Aug. 2014. doi: 10.1109/MSP.2014.81 The longstanding debate on a fundamental science of security has led to advances in systems, software, and network security. However, existing efforts have done little to inform how an environment should react to emerging and ongoing threats and compromises. The authors explore the goals and structures of a new science of cyber-decision-making in the Cyber-Security Collaborative Research Alliance, which seeks to develop a fundamental theory for reasoning, under uncertainty, about the best possible action in a given cyber environment. They also explore the needs and limitations of detection mechanisms; agile systems; and the users, adversaries, and defenders that use and exploit them, and conclude by considering how environmental security can be cast as a continuous optimization problem.

Keywords: decision making; optimisation; security of data; agile systems; continuous optimization problem; cyber environment; cyber security collaborative research alliance; cyber-decision-making; detection mechanisms; environmental security; fundamental science; network security; secure environments; software security; Approximation methods; Communities; Computational modeling; Computer security; Decision making; formal security; modeling; science of security; security; systems security (ID#:15-3420)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876248&isnumber=6876237

 

Srivastava, M., "In Sensors We Trust -- A Realistic Possibility?," Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, pp.1,1, 26-28 May 2014. doi: 10.1109/DCOSS.2014.65 Sensors of diverse capabilities and modalities, carried by us or deeply embedded in the physical world, have invaded our personal, social, work, and urban spaces. Our relationship with these sensors is a complicated one. On the one hand, these sensors collect rich data that are shared and disseminated, often initiated by us, with a broad array of service providers, interest groups, friends, and family. Embedded in this data is information that can be used to algorithmically construct a virtual biography of our activities, revealing intimate behaviors and lifestyle patterns. On the other hand, we and the services we use increasingly depend, directly and indirectly, on information originating from these sensors for making a variety of decisions, both routine and critical, in our lives. The quality of these decisions and our confidence in them depend directly on the quality of the sensory information and our trust in the sources. Sophisticated adversaries, benefiting from the same technology advances as the sensing systems, can manipulate sensory sources and analyze data in subtle ways to extract sensitive knowledge, cause erroneous inferences, and subvert decisions. The consequences of these compromises will only amplify as our society deploys increasingly complex human-cyber-physical systems with increased reliance on sensory information and real-time decision cycles. Drawing upon examples of this two-faceted relationship with sensors in applications such as mobile health and sustainable buildings, this talk will discuss the challenges inherent in designing a sensor information flow and processing architecture that is sensitive to the concerns of both producers and consumers. For the pervasive sensing infrastructure to be trusted by both, it must be robust to active adversaries who are deceptively extracting private information, manipulating beliefs, and subverting decisions. While completely solving these challenges would require a new science of resilient, secure, and trustworthy networked sensing and decision systems, combining the hitherto separate disciplines of distributed embedded systems, network science, control theory, security, behavioral science, and game theory, this talk will provide some initial ideas. These include an approach to enabling privacy-utility trade-offs that balance the tension between the risk of information sharing to the producer and the value of information sharing to the consumer, and a method to secure systems against physical manipulation of sensed information.

Keywords: information dissemination; sensors; information sharing; processing architecture; secure systems; sensing infrastructure; sensor information flow; Architecture; Buildings; Computer architecture; Data mining; Information management; Security; Sensors (ID#:15-3421)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846138&isnumber=6846129

 

Uddin, M.P.; Abu Marjan, M.; Binte Sadia, N.; Islam, M.R., "Developing a Cryptographic Algorithm Based On ASCII Conversions And A Cyclic Mathematical Function," Informatics, Electronics & Vision (ICIEV), 2014 International Conference on, pp.1,5, 23-24 May 2014. doi: 10.1109/ICIEV.2014.6850691 Encryption and decryption of data in an efficient manner is one of the challenging aspects of modern computer science. This paper introduces a new cryptographic algorithm intended to achieve a higher level of security by hiding the meaning of a message in unprintable characters. The main idea is to make the encrypted message unmistakably unprintable using several rounds of ASCII conversion and a cyclic mathematical function. Dividing the original message into packets, binary matrices are formed for each packet to produce the unprintable encrypted message by making the ASCII value of each character fall below 32. Similarly, several rounds of ASCII conversion and the inverse cyclic mathematical function are used to decrypt the unprintable encrypted message. The final encrypted message, obtained after three rounds of encryption, is unprintable text, giving the algorithm a higher level of security without increasing the size of the data or losing any data.

Keywords: cryptography; encoding; matrix algebra; ASCII conversions; ASCII value; binary matrices; computer science; cryptographic algorithm; cyclic mathematical function; data decryption; data encryption; unprintable encrypted message; unprintable text; Algorithm design and analysis; Computer science; Encryption; Informatics; Information security; ASCII Conversion; Cryptography; Encryption and Decryption; Higher Level of Security; Unprintable Encrypted Message  (ID#:15-3422)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850691&isnumber=6850678

 

Pal, S.K.; Sardana, P.; Sardana, A., "Efficient Search on Encrypted Data Using Bloom Filter," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp.412,416, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828170 Efficient and secure search on encrypted data is an important problem in computer science. Users with large amounts of data or information in multiple documents face problems with storage and security. Cloud services have also become popular due to the reduced cost of storage and flexibility of use, but there is risk of data loss, misuse, and theft. Reliability and security of data stored in the cloud is a matter of concern, specifically for critical applications and ones for which security and privacy of the data is important. Cryptographic techniques provide solutions for preserving the confidentiality of data but make the data unusable for many applications. In this paper we report a novel approach to securely store data at a remote location and perform search in constant time without the need to decrypt documents. We use Bloom filters to perform simple as well as advanced search operations like case-sensitive search, sentence search, and approximate search.

Keywords: cloud computing; cost reduction; cryptography; data structures; document handling; information retrieval; Bloom filter; approximate search; case sensitive search; cloud services; computer science; cryptographic techniques; data loss; data misuse; data theft; document decryption; efficient encrypted data search; search operations; sentence search; storage cost reduction; Cloud computing; Cryptography; Filtering algorithms; Indexes; Information filters; Servers; Approximate Search and Bloom Filter; Cloud Computing; Encrypted Search (ID#:15-3423)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828170&isnumber=6827395
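
A minimal Bloom filter of the kind the approach builds on fits in a dozen lines. The sizes and salted-hash scheme below are illustrative; a deployment would keep one filter per encrypted document, populate it with (possibly keyed) token digests, and tune the bit and hash counts to the acceptable false-positive rate.

import hashlib

class BloomFilter:
    def __init__(self, n_bits=1024, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, token):
        # Derive k bit positions from salted SHA-256 digests of the token.
        for i in range(self.n_hashes):
            h = hashlib.sha256(b"%d:%s" % (i, token.encode())).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, token):
        for p in self._positions(token):
            self.bits[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, token):
        # False positives are possible; false negatives are not.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(token))

index = BloomFilter()
for word in "efficient and secure search on encrypted data".split():
    index.add(word)
print(index.maybe_contains("encrypted"), index.maybe_contains("server"))

Because membership tests touch only the filter, the server can answer keyword queries in constant time without ever decrypting the documents, which is the property the paper exploits.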

 

Jiankun Hu; Pota, H.R.; Song Guo, "Taxonomy of Attacks for Agent-Based Smart Grids," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.7, pp.1886,1895, July 2014. doi: 10.1109/TPDS.2013.301 Being the most important critical infrastructure in Cyber-Physical Systems (CPSs), a smart grid exhibits the complicated nature of a large-scale, distributed, and dynamic environment. Taxonomy of attacks is an effective tool in systematically classifying attacks, and it has been placed as a top research topic in CPS by a National Science Foundation (NSF) Workshop. Most existing taxonomies of attacks in CPS are inadequate in addressing the tight coupling of the cyber-physical process and/or lack systematic construction. This paper attempts to introduce a taxonomy of attacks on agent-based smart grids as an effective tool to provide a structured framework. The proposed idea of introducing the structure of space-time and information flow direction, security features, and cyber-physical causality is innovative, and it can establish a taxonomy design mechanism that can systematically construct the taxonomy of cyber attacks which could potentially impact the normal operation of agent-based smart grids. Based on the cyber-physical relationship revealed in the taxonomy, a concrete physical-process-based cyber attack detection scheme is proposed. A numerical illustrative example is provided to validate the proposed scheme.

Keywords: grid computing; security of data; software agents; National Science Foundation Workshop; agent-based smart grids; attack classification; critical infrastructure; cyber attack detection scheme; cyber detection scheme; cyber-physical causality; cyber-physical process; cyber-physical systems; distributed environment; dynamic environment; information flow direction; large scale environment; security feature; taxonomy of attacks; Equations; Generators; Load modeling; Mathematical model; Security; Smart grids; Taxonomy; Cyber Physical Systems (CPS); agents; critical infrastructure; power systems; security; smart grid; taxonomy (ID#:15-3424)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6678518&isnumber=6828815

 

Fink, G.A.; Griswold, R.L.; Beech, Z.W., "Quantifying Cyber-Resilience Against Resource-Exhaustion Attacks," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp.1,8, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900093 Resilience in the information sciences is notoriously difficult to define, much less to measure. But in mechanical engineering, the resilience of a substance is mathematically well-defined as an area under the stress-strain curve. We combined inspiration from the mechanics of materials and axioms from queuing theory in an attempt to define resilience precisely for information systems. We first examine the meaning of resilience in linguistic and engineering terms and then translate these definitions to the information sciences. As a general assessment of our approach's fitness, we quantify how resilience may be measured in a simple queuing system. By using a very simple model we allow clear application of established theory while being flexible enough to apply to many other engineering contexts in information science and cyber security. We tested our definitions of resilience via simulation and analysis of networked queuing systems. We conclude with a discussion of the results and make recommendations for future work.

Keywords: queueing theory; security of data; cyber security; cyber-resilience quantification; engineering terms; information sciences; linguistic terms; mechanical engineering; networked queuing systems; queuing theory; resource-exhaustion attacks; simple queuing system; stress-strain curve; Information systems; Queueing analysis; Resilience; Servers; Strain; Stress; Resilience; cyber systems; information science; material science; strain; stress  (ID#:15-3425)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900093&isnumber=6900080
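
The stress-strain analogy suggests a direct measurement: integrate the gap between the target service level and the delivered one across the incident. The trace below is invented purely to show the shape of the computation.

def resilience_deficit(times, service, target=1.0):
    # Area between the target and observed service levels (trapezoid
    # rule): an information-system analogue of the area under a
    # stress-strain curve.
    area = 0.0
    for i in range(1, len(times)):
        gap0 = target - service[i - 1]
        gap1 = target - service[i]
        area += (gap0 + gap1) / 2.0 * (times[i] - times[i - 1])
    return area

t = [0, 2, 4, 6, 8, 10]              # resource-exhaustion attack at t = 2..6
s = [1.0, 1.0, 0.4, 0.5, 1.0, 1.0]   # normalized queue throughput
print(resilience_deficit(t, s))      # 2.2; a smaller deficit = more resilient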

 

Stanisavljevic, Z.; Stanisavljevic, J.; Vuletic, P.; Jovanovic, Z., "COALA - System for Visual Representation of Cryptography Algorithms," Learning Technologies, IEEE Transactions on , vol.7, no.2, pp.178,190, April-June 1 2014. doi: 10.1109/TLT.2014.2315992 Educational software systems have an increasingly significant presence in engineering sciences. They aim to improve students' attitudes and knowledge acquisition typically through visual representation and simulation of complex algorithms and mechanisms or hardware systems that are often not available to the educational institutions. This paper presents a novel software system for CryptOgraphic ALgorithm visuAl representation (COALA), which was developed to support a Data Security course at the School of Electrical Engineering, University of Belgrade. The system allows users to follow the execution of several complex algorithms (DES, AES, RSA, and Diffie-Hellman) on real world examples in a step by step detailed view with the possibility of forward and backward navigation. Benefits of the COALA system for students are observed through the increase of the percentage of students who passed the exam and the average grade on the exams during one school year.

Keywords: computer aided instruction; computer science education; cryptography; data visualisation; educational courses; educational institutions; further education; AES algorithm; COALA system; DES algorithm; Diffie-Hellman algorithm; RSA algorithm; School of Electrical Engineering; University of Belgrade; cryptographic algorithm visual representation; cryptography algorithms; data security course; educational software systems; engineering sciences; student attitudes; student knowledge acquisition; Algorithm design and analysis; Cryptography; Data visualization; Software algorithms; Visualization; AES; DES; Diffie-Hellman; RSA; algorithm visualization; cryptographic algorithms; data security; security education (ID#:15-3426)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6784486&isnumber=6847757

 

Kadhim, Hakem Adil; AbdulRashid, NurAini, "Maximum-Shift String Matching Algorithms," Computer and Information Sciences (ICCOINS), 2014 International Conference on, pp.1,6, 3-5 June 2014. doi: 10.1109/ICCOINS.2014.6868423 String matching algorithms have broad applications in many areas of computer science. These areas include operating systems, information retrieval, editors, Internet search engines, security applications, and biological applications. Two important factors used to evaluate the performance of sequential string matching algorithms are the number of attempts and the total number of character comparisons during the matching process. This research proposes to integrate the good properties of three string matching algorithms, Quick-Search, Zhu-Takaoka, and Horspool, to produce a hybrid string matching algorithm called the Maximum-Shift algorithm. Three datasets are used to test the proposed algorithm: DNA, protein sequences, and English text. The hybrid Maximum-Shift algorithm shows efficient results compared to four string matching algorithms, Quick-Search, Horspool, Smith, and Berry-Ravindran, in terms of the number of attempts and the total number of character comparisons.

Keywords: Arabic String Matching Systems; Horspool; Hybrid String Matching; Quick-Search; Zhu-Takaoka (ID#:15-3427)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868423&isnumber=6868339
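
The hybrid idea, taking the larger of two independently safe bad-character shifts, can be shown with the Horspool and Quick-Search tables alone (the published Maximum-Shift algorithm also folds in Zhu-Takaoka, omitted here for brevity):

def max_shift_search(pattern, text):
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Horspool: shift keyed on the text char under the window's last cell.
    hp = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    # Quick-Search: shift keyed on the text char just past the window.
    qs = {c: m - i for i, c in enumerate(pattern)}
    hits, pos = [], 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            hits.append(pos)
        s_hp = hp.get(text[pos + m - 1], m)
        s_qs = qs.get(text[pos + m], m + 1) if pos + m < n else 1
        pos += max(s_hp, s_qs)   # each shift is safe alone, so their max is too
    return hits

print(max_shift_search("GCAGAGAG", "GCATCGCAGAGAGTATACAGTACG"))  # [5]

Counting window attempts and character comparisons inside the loop reproduces the two metrics the paper uses to compare against Quick-Search, Horspool, Smith, and Berry-Ravindran.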

 

n.a., "Asymmetrical Quantum Encryption Protocol Based On Quantum Search Algorithm," Communications, China, vol. 11, no. 9, pp. 104, 111, Sept. 2014. Quantum cryptography and quantum search algorithms are considered two important research topics in quantum information science. An asymmetrical quantum encryption protocol based on the properties of quantum one-way functions and a quantum search algorithm is proposed. Owing to the no-cloning theorem and the trapdoor one-way functions of the public key, the eavesdropper cannot extract any private information from the public keys or the ciphertext. A key-generation randomized logarithm is introduced to improve the security of the proposed protocol, i.e., one private key corresponds to an exponential number of public keys. Using unitary operations and single-photon measurement, secret messages can be sent directly from the sender to the receiver. The proposed protocol is proved to be information-theoretically secure. Furthermore, compared with symmetrical quantum key distribution, the proposed protocol not only reduces additional communication but is also easier to carry out in practice, because no entangled photons or complex operations are required.

Keywords: asymmetrical encryption; information-theoretical security; quantum cryptography; quantum search algorithms  (ID#:15-3428)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6969775&isnumber=6969702

 

Shukla, S.; Sadashivappa, G., "Secure multi-party computation protocol using asymmetric encryption," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp. 780-785, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828069 Privacy preservation is essential in various real-life applications such as medical science and financial analysis. This paper focuses on the implementation of an asymmetric secure multi-party computation protocol using anonymization and public-key encryption, where all parties have access to a trusted third party (TTP) who (1) adds no contribution to the computation, (2) does not know who owns the input received, (3) has a large number of resources, and (4) holds the decryption key needed to recover the actual inputs for computation of the final result. In this environment, the concern is to design a protocol which deploys the TTP for computation. The protocol is shown to be more proficient (in terms of secure computation and individual privacy) for the parties than other available protocols. The solution incorporates an asymmetric encryption scheme where any party can encrypt a message with the public key, but decryption can be done only by the possessor of the decryption key (private key). As the protocol works on asymmetric encryption and packetization, it ensures the following: (1) confidentiality (anonymity), (2) security, and (3) data privacy.

Keywords: cryptographic protocols; data privacy; private key cryptography; public key cryptography; TTP; anonymity; anonymization; asymmetric encryption scheme; asymmetric secure multiparty computation protocol; confidentiality; decryption key; financial analysis; individual privacy; medical science; message encryption; packetization; privacy preservation; private key; protocol design; public-key encryption; security; trusted third party; Data privacy; Encryption; Joints; Protocols; Public key; Anonymization; Asymmetric Encryption; Privacy; Secure Multi-Party Computation (SMC); Security; trusted third party (TTP) (ID#:15-3429)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828069&isnumber=6827395
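
The TTP-centered flow described above is easy to prototype. The sketch below, a minimal illustration rather than the paper's protocol, uses RSA-OAEP from the Python `cryptography` package: each party encrypts its input under the TTP's public key, the ciphertexts are shuffled to anonymize their origin, and only the TTP can decrypt and compute the final result (here, a sum). The paper's packetization step is omitted.

```python
import random
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The TTP holds the only decryption key.
ttp_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ttp_public = ttp_private.public_key()

# Each party encrypts its private input with the TTP's public key.
inputs = [42, 17, 99]
ciphertexts = [ttp_public.encrypt(str(x).encode(), OAEP) for x in inputs]

# Anonymization: ciphertexts are shuffled so the TTP cannot
# associate an input with its owner.
random.shuffle(ciphertexts)

# The TTP decrypts and computes the agreed function (here, a sum),
# contributing nothing of its own to the computation.
result = sum(int(ttp_private.decrypt(c, OAEP).decode()) for c in ciphertexts)
print(result)  # 158
```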

 

Lesk, M., "Staffing for Security: Don't Optimize," Security & Privacy, IEEE, vol. 12, no. 4, pp. 71-73, July-Aug. 2014. doi: 10.1109/MSP.2014.78 Security threats are irregular, sometimes very sophisticated, and difficult to measure in an economic sense. Much published data about them comes from either anecdotes or surveys and is often either not quantified or not quantified in a way that's comparable across organizations. It's hard even to separate the increase in actual danger from year to year from the increase in the perception of danger from year to year. Staffing to meet these threats is still more a matter of judgment than science, and in particular, optimizing staff allocation will likely leave your organization vulnerable at the worst times.

Keywords: personnel; security of data; IT security employees; data security; staff allocation optimization; Computer security; Economics; Organizations; Privacy; Software development; botnets; economics; security; security threats; staffing  (ID#:15-3430)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876258&isnumber=6876237

 

Han, Lansheng; Qian, Mengxiao; Xu, Xingbo; Fu, Cai; Kwisaba, Hamza, "Malicious Code Detection Model Based on Behavior Association," Tsinghua Science and Technology, vol. 19, no. 5, pp. 508-515, Oct. 2014. doi: 10.1109/TST.2014.6919827 Malicious applications can be introduced to attack users and services so as to gain financial rewards, individuals' sensitive information, company and government intellectual property, and to gain remote control of systems. However, traditional methods of malicious code detection, such as signature detection, behavior detection, virtual machine detection, and heuristic detection, have various weaknesses which make them unreliable. This paper presents the existing technologies of malicious code detection and a malicious code detection model is proposed based on behavior association. The behavior points of malicious code are first extracted through API monitoring technology and integrated into the behavior; then a relation between behaviors is established according to data dependence. Next, a behavior association model is built up and a discrimination method is put forth using pushdown automation. Finally, the exact malicious code is taken as a sample to carry out an experiment on the behavior's capture, association, and discrimination, thus proving that the theoretical model is viable.

Keywords: Automation; Computers; Grammar; Monitoring; Trojan horses; Virtual machining; behavior association; behavior monitor; malicious code; pushdown automation  (ID#:15-3431)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6919827&isnumber=6919815
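
To make the discrimination step concrete, here is a deliberately small, hypothetical sketch of a pushdown-style discriminator over an API-call trace: a stack tracks acquired process handles, and a write to a held handle followed by remote-thread creation in the same target is flagged as an associated behavior chain. The API names and the injection pattern are illustrative; the paper's actual behavior model is richer.

```python
class BehaviorPDA:
    """Toy pushdown discriminator over an API-call trace.

    A stack tracks process handles acquired by the trace; writing to a
    held handle and then starting a remote thread in it is flagged as a
    code-injection behavior chain.
    """
    def __init__(self):
        self.stack = []
        self.wrote = set()

    def step(self, call, target):
        if call == "OpenProcess":
            self.stack.append(target)           # push the acquired handle
        elif call == "WriteProcessMemory":
            if self.stack and self.stack[-1] == target:
                self.wrote.add(target)          # data dependence recorded
        elif call == "CreateRemoteThread":
            if target in self.wrote:
                return True                     # associated behavior chain
        elif call == "CloseHandle":
            if self.stack and self.stack[-1] == target:
                self.stack.pop()
        return False

trace = [("OpenProcess", "h1"), ("WriteProcessMemory", "h1"),
         ("CreateRemoteThread", "h1")]
pda = BehaviorPDA()
print(any(pda.step(c, t) for c, t in trace))  # True -> flagged as malicious
```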

 

Huang, X.; Xiang, Y.; Bertino, E.; Zhou, J.; Xu, L., "Robust Multi-Factor Authentication for Fragile Communications," Dependable and Secure Computing, IEEE Transactions on, vol. 11, no. 6, pp. 568-581, Nov.-Dec. 2014. doi: 10.1109/TDSC.2013.2297110 In large-scale systems, user authentication usually needs the assistance from a remote central authentication server via networks. The authentication service however could be slow or unavailable due to natural disasters or various cyber attacks on communication channels. This has raised serious concerns in systems which need robust authentication in emergency situations. The contribution of this paper is two-fold. In a slow connection situation, we present a secure generic multi-factor authentication protocol to speed up the whole authentication process. Compared with another generic protocol in the literature, the new proposal provides the same function with significant improvements in computation and communication. Another authentication mechanism, which we name stand-alone authentication, can authenticate users when the connection to the central server is down. We investigate several issues in stand-alone authentication and show how to add it on multi-factor authentication protocols in an efficient and generic way.

Keywords: Authentication; Biometrics (access control); Digital signatures; Protocols; Servers; Telecommunication services; Authentication; efficiency; multi-factor; privacy; stand-alone  (ID#:15-3432)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6701152&isnumber=6949762
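
The stand-alone idea lends itself to a simple illustration. The sketch below is an assumption-laden toy, not the paper's protocol: during enrollment a salted PBKDF2 verifier is cached locally, and when the central server is unreachable the client falls back to verifying against the cache. `remote_authenticate` is a hypothetical stub standing in for the normal server round trip.

```python
import hashlib, hmac, os

def derive(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Enrollment (while the server is reachable): cache a salted verifier.
salt = os.urandom(16)
cached_verifier = derive("correct horse battery", salt)

def remote_authenticate(password: str) -> bool:
    # Hypothetical stub: the central authentication server is unreachable.
    raise TimeoutError("authentication server down")

def authenticate(password: str) -> bool:
    try:
        return remote_authenticate(password)     # normal, server-backed path
    except (OSError, TimeoutError):
        # Stand-alone path: verify against the locally cached verifier.
        return hmac.compare_digest(derive(password, salt), cached_verifier)

print(authenticate("correct horse battery"))     # True, via stand-alone path
```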

 

Jahanirad, Mehdi; Abdul Wahab, Ainuddin Wahid; Anuar, Nor Badrul; Idna Idris, Mohd Yamani; Ayub, Mohamad Nizam, "Blind Identification of Source Mobile Devices Using VoIP Calls," Region 10 Symposium, 2014 IEEE, pp. 486-491, 14-16 April 2014. doi: 10.1109/TENCONSpring.2014.6863082 Speakers and recording environments produce signal variations across different communication devices, compounding the interference those devices themselves generate. Despite these convolutions, the signal variations produced by different mobile devices leave intrinsic fingerprints on recorded calls, thus allowing the tracking of the models and brands of the engaged mobile devices. This study aims to investigate the use of recorded Voice over Internet Protocol (VoIP) calls in the blind identification of source mobile devices. The proposed scheme employs a combination of entropy and mel-frequency cepstrum coefficients to extract the intrinsic features of mobile devices and analyzes these features with a multi-class support vector machine classifier. The experimental results lead to an accurate identification of 10 source mobile devices with an average accuracy of 99.72%.

Keywords: Pattern recognition; device-based detection technique; entropy; mel-frequency cepstrum coefficients  (ID#:15-3433)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6863082&isnumber=6862973
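
A rough sketch of the feature pipeline the abstract describes — MFCC statistics plus an entropy feature fed to a multi-class SVM — is shown below. It assumes `librosa` and `scikit-learn` are available and uses synthetic signals with device-dependent noise as a stand-in for real call recordings; the exact features and parameters are assumptions, not the paper's.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def features(y, sr=8000):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, n_frames)
    hist, _ = np.histogram(y, bins=64, density=True)
    hist = hist[hist > 0]
    entropy = -np.sum(hist * np.log2(hist))              # crude entropy feature
    return np.hstack([mfcc.mean(axis=1), entropy])

# Synthetic stand-in: two "devices" leave different noise fingerprints.
rng = np.random.default_rng(0)
X, labels = [], []
for device, noise in [(0, 0.01), (1, 0.05)]:
    for _ in range(20):
        t = np.linspace(0, 1, 8000)
        y = np.sin(2 * np.pi * 440 * t) + noise * rng.standard_normal(8000)
        X.append(features(y.astype(np.float32)))
        labels.append(device)

X = np.array(X)
clf = SVC(kernel="rbf")          # multi-class handled one-vs-one internally
clf.fit(X[:-2], labels[:-2])
print(clf.predict(X[-2:]))       # two held-out recordings from device 1
```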

 

Ajish, S.; Rajasree, R., "Secure Mail using Visual Cryptography (SMVC)," Computing, Communication and Networking Technologies (ICCCNT), 2014 International Conference on, pp. 1-7, 11-13 July 2014.

doi: 10.1109/ICCCNT.2014.6963148 E-mail messaging is one of the most popular uses of the Internet, and multiple Internet users can exchange messages within a short span of time. Although the security of E-mail messages is an important issue, no such security is supported by the Internet standards. One well-known scheme, PGP (Pretty Good Privacy), is used for personal security of E-mail messages. There is an attack on CFB mode encryption as used by OpenPGP. To overcome the attacks and to improve security, a new model is proposed: "Secure Mail using Visual Cryptography". In secure mail using visual cryptography, the message to be transmitted is converted into a gray scale image. Then (2, 2) visual cryptographic shares are generated from the gray scale image. The shares are encrypted using a chaos-based image encryption algorithm using wavelet transform and authenticated using a public-key-based image authentication method. One of the shares is sent to a server and the second share is sent to the recipient's mailbox. The two shares are transmitted through two different transmission media, so a man-in-the-middle attack is not possible. If an adversary has only one of the two shares, then he has absolutely no information about the message. At the receiver side the two shares are fetched, decrypted, and stacked to generate the gray scale image. From the gray scale image the message is reconstructed.

Keywords: Electronic mail; Encryption; Heuristic algorithms; Receivers; Visualization; Wavelet transforms; chaos based image encryption algorithm; dynamic s-box algorithm; low frequency wavelet coefficient; pretty good privacy; visual cryptography; wavelet decomposition  (ID#:15-3434)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6963148&isnumber=6962988
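
The (2, 2) visual cryptography step at the heart of the scheme can be sketched in a few lines. In the toy below (a minimal construction of my own, with the chaos-based encryption and image-authentication stages omitted), each binary pixel expands to a 2x2 block per share; one share alone is indistinguishable from random, while stacking the two (logical OR) reveals the secret.

```python
import numpy as np

def make_shares(image):
    """(2,2) visual cryptography over a binary image (1 = black pixel).

    Each pixel expands to a 2x2 block in each share; white pixels get
    identical random blocks, black pixels complementary ones, so one
    share alone is random while stacking (logical OR) reveals the secret.
    """
    patterns = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]
    h, w = image.shape
    s1, s2 = np.zeros((2 * h, 2 * w), int), np.zeros((2 * h, 2 * w), int)
    rng = np.random.default_rng()
    for i in range(h):
        for j in range(w):
            p = patterns[rng.integers(2)]
            s1[2*i:2*i+2, 2*j:2*j+2] = p
            s2[2*i:2*i+2, 2*j:2*j+2] = 1 - p if image[i, j] else p
    return s1, s2

secret = np.array([[1, 0], [0, 1]])
a, b = make_shares(secret)
print(np.maximum(a, b))  # black pixels become fully dark 2x2 blocks
```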

 

Veugen, T.; de Haan, R.; Cramer, R.; Muller, F., "A Framework For Secure Computations With Two Non-Colluding Servers And Multiple Clients, Applied To Recommendations," Information Forensics and Security, IEEE Transactions on, vol. PP, no.99, pp.1, 1, 13 November 2014. doi: 10.1109/TIFS.2014.2370255 We provide a generic framework that, with the help of a preprocessing phase that is independent of the inputs of the users, allows an arbitrary number of users to securely outsource a computation to two non-colluding external servers. Our approach is shown to be provably secure in an adversarial model where one of the servers may arbitrarily deviate from the protocol specification, as well as employ an arbitrary number of dummy users. We use these techniques to implement a secure recommender system based on collaborative filtering that becomes more secure, and significantly more efficient than previously known implementations of such systems, when the preprocessing efforts are excluded. We suggest different alternatives for preprocessing, and discuss their merits and demerits.

Keywords: Authentication; Computational modeling; Cryptography; Protocols; Recommender systems; Servers  (ID#:15-3435)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6955802&isnumber=4358835
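
While the paper's framework is considerably more involved, the core trust assumption — two non-colluding servers each holding an unintelligible piece of every client input — can be illustrated with plain additive secret sharing. The sketch below is a minimal stand-in, not the authors' construction: neither server alone learns a rating, yet their partial sums combine to the true aggregate.

```python
import random

P = 2**61 - 1  # public prime modulus

def share(x):
    """Split x into two additive shares, one per non-colluding server."""
    r = random.randrange(P)
    return r, (x - r) % P

# Each client shares its rating between server A and server B.
ratings = [4, 5, 3, 1]
shares_a, shares_b = zip(*(share(r) for r in ratings))

# Each server sums the shares it holds; neither learns any single rating.
sum_a = sum(shares_a) % P
sum_b = sum(shares_b) % P

# Combining the two partial results reveals only the aggregate.
print((sum_a + sum_b) % P)  # 13
```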

 

Schneider, S.; Lansing, J.; Fangjian Gao; Sunyaev, A., "A Taxonomic Perspective on Certification Schemes: Development of a Taxonomy for Cloud Service Certification Criteria," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp.4998, 5007, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.614  Numerous cloud service certifications (CSCs) are emerging in practice. However, in their striving to establish the market standard, CSC initiatives proceed independently, resulting in a disparate collection of CSCs that are predominantly proprietary, based on various standards, and differ in terms of scope, audit process, and underlying certification schemes. Although literature suggests that a certification's design influences its effectiveness, research on CSC design is lacking and there are no commonly agreed structural characteristics of CSCs. Informed by data from 13 expert interviews and 7 cloud computing standards, this paper delineates and structures CSC knowledge by developing a taxonomy for criteria to be assessed in a CSC. The taxonomy consists of 6 dimensions with 28 subordinate characteristics and classifies 328 criteria, thereby building foundations for future research to systematically develop and investigate the efficacy of CSC designs as well as providing a knowledge base for certifiers, cloud providers, and users.

Keywords: certification; cloud computing; CSC design; CSC initiatives; audit process; certification schemes; certifiers; cloud computing standards; cloud providers; cloud service certification criteria; structural characteristics; taxonomic perspective; taxonomy; Business; Certification; Cloud computing; Interviews; Security; Standards; Taxonomy; Certification; Cloud Computing; Taxonomy  (ID#:15-3436)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759217&isnumber=6758592

 

Vijayakumar, R.; Selvakumar, K.; Kulothungan, K.; Kannan, A., "Prevention of Multiple Spoofing Attacks With Dynamic MAC Address Allocation For Wireless Networks," Communications and Signal Processing (ICCSP), 2014 International Conference on, pp. 1635-1639, 3-5 April 2014. doi: 10.1109/ICCSP.2014.6950125 In wireless networks, the spoofing attack is one of the most common and challenging attacks, and it degrades overall network performance. In this paper, a medoid-based clustering approach is proposed to detect multiple spoofing attacks in wireless networks. In addition, an Enhanced Partitioning Around Medoid (EPAM) with average silhouette has been integrated with the clustering mechanism to detect multiple spoofing attacks with a higher accuracy rate. Based on the proposed method, a received-signal-strength-based clustering approach has been adopted for medoid clustering for the detection of attacks. In order to prevent multiple spoofing attacks, a dynamic MAC address allocation scheme using the MD5 hashing technique is implemented. The experimental results show that the proposed method can detect spoofing attacks with a high accuracy rate and prevent the attacks, thereby improving overall network performance.

Keywords: Accuracy; Broadcasting; Cryptography; Electronic mail; Hardware; Monitoring; Wireless communication; Attacks Detection and Prevention; Dynamic MAC Address allocation; MAC Spoofing attacks; Wireless Network Security  (ID#:15-3437)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950125&isnumber=6949766
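
The prevention side of the paper, dynamic MAC address allocation via MD5 hashing, can be sketched as deriving a fresh address per epoch from a shared secret. The scheme below is a hypothetical reading of that idea: the derivation input, the epoch counter, and the locally-administered-bit handling are assumptions, not details from the paper.

```python
import hashlib

def next_mac(shared_secret: bytes, epoch: int) -> str:
    """Derive the MAC address for a given epoch from a shared secret.

    Hash the secret and epoch with MD5, keep 6 bytes, and set the
    locally-administered bit so the address cannot collide with
    vendor-assigned ones.
    """
    digest = hashlib.md5(shared_secret + epoch.to_bytes(8, "big")).digest()
    octets = bytearray(digest[:6])
    octets[0] = (octets[0] | 0x02) & 0xFE   # locally administered, unicast
    return ":".join(f"{b:02x}" for b in octets)

secret = b"station-42-provisioning-key"
for epoch in range(3):
    print(epoch, next_mac(secret, epoch))
```

A station that cannot produce the epoch's expected address is then distinguishable from the legitimate owner, which is what frustrates MAC spoofing.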

 

Sihan Qing, "Some Issues Regarding Operating System Security," Computer and Information Science (ICIS), 2014 IEEE/ACIS 13th International Conference on, pp. 1-1, 4-6 June 2014. doi: 10.1109/ICIS.2014.6912096 Summary form only given. In this presentation, several issues regarding operating system security are investigated. The general problems of OS security are addressed. We also discuss why we should consider the security aspects of the OS, and when a secure OS is needed. We delve into secure OS design as well, focusing on covert channel analysis. The specific operating systems under consideration include Windows and Android.

Keywords: Android (operating system); security of data; software engineering; Android; Windows; covert channel analysis; operating system security; secure OS design; Abstracts; Focusing; Information security; Laboratories; Operating systems; Standards development  (ID#:15-3438)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912096&isnumber=6912089

 

Manning, F.J.; Mitropoulos, F.J., "Utilizing Attack Graphs to Measure the Efficacy of Security Frameworks across Multiple Applications," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp.4915,4920, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.602 One of the primary challenges when developing or implementing a security framework for any particular environment is determining the efficacy of the implementation. Does the implementation address all of the potential vulnerabilities in the environment, or are there still unaddressed issues? Further, if there is a choice between two frameworks, what objective measure can be used to compare the frameworks? To address these questions, we propose utilizing a technique of attack graph analysis to map the attack surface of the environment and identify the most likely avenues of attack. We show that with this technique we can quantify the baseline state of an application and compare that to the attack surface after implementation of a security framework, while simultaneously allowing for comparison between frameworks in the same environment or a single framework across multiple applications.

Keywords: graph theory; security of data; attack graph analysis; attack surface; security frameworks; Authentication; Information security; Measurement; Servers; Software; Vectors; Attack graphs; information security; measurement  (ID#:15-3439)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759205&isnumber=6758592
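
A minimal version of the proposed measurement is easy to express with a graph library: enumerate attack paths from an entry point to a critical asset, then compare the count before and after a framework removes an avenue of attack. The sketch below (a toy topology of my own, using `networkx`) takes the raw path count as the metric; the paper's analysis is more nuanced, weighting the most likely avenues of attack.

```python
import networkx as nx

def exposure(graph, entry, asset):
    """Count distinct attack paths from an entry point to an asset."""
    return sum(1 for _ in nx.all_simple_paths(graph, entry, asset))

g = nx.DiGraph()
g.add_edges_from([
    ("internet", "web_server"), ("internet", "vpn"),
    ("web_server", "app_server"), ("vpn", "app_server"),
    ("app_server", "database"), ("web_server", "database"),
])

baseline = exposure(g, "internet", "database")

# Model a framework that closes the web server's direct database access.
g.remove_edge("web_server", "database")
hardened = exposure(g, "internet", "database")

print(baseline, hardened)  # 3 2 -> the framework removed one avenue of attack
```

The same before/after comparison can be repeated across applications, which is what allows frameworks to be compared on an objective footing.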

 

Ma, J.; Zhang, T.; Dong, M., "A Novel ECG Data Compression Method Using Adaptive Fourier Decomposition with Security Guarantee in e-Health Applications," Biomedical and Health Informatics, IEEE Journal of, vol. PP, no. 99, pp. 1-1, 12 September 2014. doi: 10.1109/JBHI.2014.2357841 This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: the first-stage AFD executes efficient lossy compression with high fidelity; the second-stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an averaged compression ratio (CR) of 17.6 to 44.5 and a percentage root mean square difference (PRD) of 0.8% to 2.0% with a highly linear and robust PRD-CR relationship, pushing compression performance into an unexploited region. As such, this work provides an attractive candidate ECG compression method for pervasive e-health applications.

Keywords: Benchmark testing; Electrocardiography; Encoding; Encryption; Informatics; Information security; Transforms  (ID#:15-3440)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6897915&isnumber=6363502

 

Song Li; Qian Zou; Wei Huang, "A New Type Of Intrusion Prevention System," Information Science, Electronics and Electrical Engineering (ISEEE), 2014 International Conference on, vol. 1, pp. 361-364, 26-28 April 2014. doi: 10.1109/InfoSEEE.2014.6948132 In order to strengthen network security and improve the network's active defense intrusion detection capabilities, this paper presents an active defense intrusion detection system based on a mixed interactive honeypot. The system helps reduce false information and enhances the stability and security of the network. Testing and simulation experiments show that the system improves the network's active defense, increases the honeypot's decoy capability, and strengthens attack prediction, giving it good application and promotion value.

Keywords: computer network security; active defense intrusion detection system; intrusion prevention system; mixed interactive honeypot; network security; Communication networks ;Computer hacking; Logic gates; Monitoring; Operating systems; Servers; Defense; Interaction honeypot; Intrusion detection; network security  (ID#:15-3441)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6948132&isnumber=6948054

 

Al Barghuthi, N.B.; Said, H., "Ethics Behind Cyber Warfare: A Study Of Arab Citizens Awareness," Ethics in Science, Technology and Engineering, 2014 IEEE International Symposium on, pp. 1-7, 23-24 May 2014. doi: 10.1109/ETHICS.2014.6893402 Continuing to ignore the consequences of cyber warfare will bring severe concerns to all people. Hackers and governments alike should understand the limits to which their methods can take them. Governments use cyber warfare to gain a tactical advantage over other countries, defend themselves from their enemies, or inflict damage upon their adversaries. Hackers use cyber warfare to gain personal information, commit crimes, or reveal sensitive and beneficial intelligence. Although both can put these methods to ethical uses, the same can be said of the other end of the spectrum. Knowing and comprehending these devices will not only strengthen the ability to detect and combat these attacks, but will also provide means to divulge despotic government plans, as the outcome of cyber warfare can be worse than the outcome of conventional warfare. The paper discusses the concept of ethics and the reasons that led to the use of information technology in military war, the effects of cyber war on civilians, the legality of cyber war, and ways of controlling the use of information technology that may be directed against civilians. This research uses a survey methodology to assess the awareness of Arab citizens of the idea of cyber war and to provide findings and evidence regarding the ethics behind offensive cyber warfare. Detailed strategies and approaches should be developed in this respect. The authors recommend urging scientific and technological research centers to improve security and develop defensive systems to prevent the use of technology in military war against civilians.

Keywords: computer crime; ethical aspects; government data processing; Arab citizens awareness; cyber war; cyber warfare; despotic government plans; information technology; military war; personal information; scientific research centers; security systems; technological research centers; Computer hacking; Computers; Ethics; Government; Information technology; Law; Military computing; cyber army; cyber attack; cyber security; cyber warfare; defense; ethics; offence  (ID#:15-3442)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6893402&isnumber=6893372

 

Oweis, N.E.; Owais, S.S.; Alrababa, M.A.; Alansari, M.; Oweis, W.G., "A Survey Of Internet Security Risk Over Social Networks," Computer Science and Information Technology (CSIT), 2014 6th International Conference on, pp. 1-4, 26-27 March 2014. doi: 10.1109/CSIT.2014.6805970 Communities vary from country to country: there are civil societies and rural communities, which differ in terms of geography, climate, and economy, and the use of social networks accordingly varies from region to region depending on the demographics of those communities. In this paper, we research the most important problems of social networks, as well as the risks that stem from their human elements. We raise the problems social networks create as societies are transformed by a global economy in which social networking integration strengthens the social ties from which these problems arise. We therefore focus on Internet security risks over social networks, study their risk management, and then look at resolving the various problems that arise from the use of social networks.

Keywords: Internet; risk management; security of data; social networking (online);Internet security risk; civil society; geography climate; global economy; risk management; rural community; social networking integration; social networks; Communities; Computers; Educational institutions; Internet; Organizations; Security; Social network services; Internet risks; crimes social networking; dangers to society; hackers; social network; social risks  (ID#:15-3443)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805970&isnumber=6805962

 

Kumar, S.; Rama Krishna, C.; Aggarwal, N.; Sehgal, R.; Chamotra, S., "Malicious Data Classification Using Structural Information And Behavioral Specifications In Executables," Engineering and Computational Sciences (RAECS), 2014 Recent Advances in, pp. 1-6, 6-8 March 2014. doi: 10.1109/RAECS.2014.6799525 With the rise of the underground Internet economy, automated malicious programs, popularly known as malware, have become a major threat to computers and information systems connected to the Internet. Properties such as self-healing, self-hiding, and the ability to deceive security devices make this software hard to detect and mitigate. Therefore, the detection and mitigation of such malicious software is a major challenge for researchers and security professionals. Conventional systems for detecting and mitigating such threats are mostly signature based. A major drawback of such systems is their inability to detect malware samples for which no signature is available in their signature database; such malware is known as zero-day malware. Moreover, more and more malware writers use obfuscation technologies, such as polymorphism, metamorphism, packing, and encryption, to avoid detection by antivirus software. The traditional signature-based detection system is therefore neither effective nor efficient for the detection of zero-day malware. Hence, to improve the effectiveness and efficiency of malware detection, we use a classification method based on structural information and behavioral specifications. In this paper we use both static and dynamic analysis approaches. In static analysis we extract the features of an executable file, followed by classification. In dynamic analysis we take traces of executable files using NtTrace within a controlled environment. Experimental results indicate that the proposed algorithm is effective in extracting the malicious behavior of executables, and it can also be used to detect malware variants.

Keywords: Internet; invasive software; pattern classification; program diagnostics; NtTrace; antivirus; automated malicious programs; behavioral specifications; dynamic analysis; executable file; information systems; malicious behavior extraction; malicious data classification; malicious software detection; malicious software mitigation; malware detection system effectiveness improvement; malware detection system efficiency improvement; malwares; obfuscation technology; security devices; signature database; signature-based detection system; static analysis; structural information; threat detection; threat mitigation; underground Internet economy; zero-day malware detection; Algorithm design and analysis; Classification algorithms; Feature extraction; Internet; Malware; Software; Syntactics; behavioral specifications; classification algorithms; dynamic analysis; malware detection; static analysis; system calls  (ID#:15-3444)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799525&isnumber=6799496
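
A toy version of the combined static/dynamic feature idea is sketched below: a static byte-distribution entropy feature (high for packed or encrypted code) is joined with counts of selected dynamic API calls, and a random forest is trained on synthetic samples. The feature set, API names, and data are illustrative assumptions; the paper's NtTrace-based pipeline is far richer.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

API_CALLS = ["CreateFile", "RegSetValue", "Connect", "WriteProcessMemory"]

def feature_vector(byte_histogram, api_trace):
    """Static byte-entropy feature plus dynamic API-call counts."""
    p = byte_histogram / byte_histogram.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))        # packed/encrypted code runs hot
    counts = [api_trace.count(a) for a in API_CALLS]
    return np.array([entropy, *counts])

# Synthetic samples: label 1 marks "malicious" traces with injection calls.
rng = np.random.default_rng(1)
X, y = [], []
for label, inject in [(0, 0), (1, 3)]:
    for _ in range(30):
        hist = rng.integers(1, 100, size=256).astype(float)
        trace = ["CreateFile"] * 5 + ["WriteProcessMemory"] * inject
        X.append(feature_vector(hist, trace))
        y.append(label)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([X[0], X[-1]]))  # [0 1]
```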


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Situational Awareness and Security - Part 1

 

 
SoS Newsletter Logo

Situational Awareness & Security
Part 1

 

Situational awareness is an important human factor for cyber security. The works cited here cover specific problems.  In April 2014, IEEE published a Special Issue on Signal Processing for Situational Awareness from Networked Sensors and Social Media.  That material is available at: http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=6757015&punumber=78    The publications cited here are from other sources.  

 

Voigt, S.; Schoepfer, E.; Fourie, C.; Mager, A., "Towards Semi-Automated Satellite Mapping For Humanitarian Situational Awareness," Global Humanitarian Technology Conference (GHTC), 2014 IEEE, pp.412,416, 10-13 Oct. 2014. doi: 10.1109/GHTC.2014.6970315 Very high resolution satellite imagery used to be a rare commodity, with infrequent satellite pass-over times over a specific area-of-interest obviating many useful applications. Today, more and more such satellite systems are available, with visual analysis and interpretation of imagery still important to derive relevant features and changes from satellite data. In order to allow efficient, robust and routine image analysis for humanitarian purposes, semi-automated feature extraction is of increasing importance for operational emergency mapping tasks. In the frame of the European Earth Observation program COPERNICUS and related research activities under the European Union's Seventh Framework Program, substantial scientific developments and mapping services are dedicated to satellite based humanitarian mapping and monitoring. In this paper, recent results in methodological research and development of routine services in satellite mapping for humanitarian situational awareness are reviewed and discussed. Ethical aspects of sensitivity and security of humanitarian mapping are deliberated. Furthermore methods for monitoring and analysis of refugee/internally displaced persons camps in humanitarian settings are assessed. Advantages and limitations of object-based image analysis, sample supervised segmentation and feature extraction are presented and discussed.

Keywords: feature extraction; geophysical techniques; image segmentation; satellite communication; COPERNICUS; European Earth observation program; European Union seventh framework program; displaced persons camps; humanitarian mapping; humanitarian settings; humanitarian situational awareness; mapping services; object-based image analysis; operational emergency mapping tasks; refugee camps; routine image analysis; satellite data; satellite imagery; satellite pass-over times; satellite systems; semiautomated feature extraction; semiautomated satellite mapping; supervised segmentation; visual analysis; Image analysis; Image segmentation; Monitoring; Optical imaging; Robustness; Satellites; Visualization; humanitarian situational awareness; monitoring; satellite mapping  (ID#: 15-3808)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970315&isnumber=6970242

 

Del Rosso, A.; Liang Min; Chaoyang Jing, "High Performance Computation Tools For Real-Time Security Assessment," PES General Meeting | Conference & Exposition, 2014 IEEE, pp. 1-1, 27-31 July 2014. doi: 10.1109/PESGM.2014.6939091 This paper presents an overview of the research project “High-Performance Hybrid Simulation/Measurement-Based Tools for Proactive Operator Decision-Support”, performed under the auspices of the U.S. Department of Energy grant DE-OE0000628. The objective of this project is to develop software tools to provide enhanced real-time situational awareness to support the decision making and system control actions of transmission operators. The integrated tool will combine high-performance dynamic simulation with synchrophasor measurement data to assess in real time system dynamic performance and operation security risk. The project includes: (i) the development of high-performance dynamic simulation software; (ii) the development of new computationally effective measurement-based tools to estimate operating margins of a power system in real time using measurement data from synchrophasors and SCADA; (iii) the development of a hybrid framework integrating measurement-based and simulation-based approaches; and (iv) the use of cutting-edge visualization technology to display various system quantities and to visually process the results of the hybrid measurement-based/simulation-based security-assessment tool. Parallelization and high performance computing are utilized to enable ultrafast transient stability analysis that can be used in a real-time environment to quickly perform “what-if” simulations involving system dynamics phenomena. EPRI's Extended Transient Midterm Simulation Program (ETMSP) is modified and enhanced for this work. The contingency analysis is scaled for large-scale contingency analysis using MPI-based parallelization. Simulations of thousands of contingencies on a high performance computing machine are performed, and results show that parallelization over contingencies with MPI provides good scalability and computational gains. Different ways to reduce the I/O bottleneck have also been explored. Thread-parallelization of the sparse linear solver is also explored through use of the SuperLU_MT library. Based on performance profiling results for the implicit method, the majority of CPU time is spent on the integration steps. Hence, in order to further improve ETMSP performance, a variable time step control scheme for the original trapezoidal integration method has been developed and implemented. The Adams-Bashforth-Moulton predictor-corrector method was introduced and designed for ETMSP. Test results show superior performance with this method.

Keywords: SCADA systems; computer software; data visualisation; decision making; integration; phasor measurement; power engineering computing; power system security; power system transient stability; power transmission control; predictor-corrector methods; Adams-Bashforth-Moulton predictor-corrector method; CPU time; DE-OE0000628;EPRI; ETMSP performance; MPI-based parallelization; SCADA; SuperLU_MT library; U.S. Department of Energy grant; computation tools; computing machine; contingency analysis; decision making; dynamic simulation software; extended transient midterm simulation program; hybrid measurement-based-simulation-based security-assessment tool; hybrid simulation-measurement-based tools; operation security risk; power system; proactive operator decision-support; security assessment; software tools; sparse linear solver; synchrophasor measurement data; system control actions; thread-parallelization transient stability analysis ;transmission operators; trapezoidal integration method; variable time step control scheme; visualization technology; Computational modeling; Hybrid power systems; Power measurement; Power system dynamics; Real-time systems; Software measurement; Time measurement  (ID#: 15-3809)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6939091&isnumber=6938773
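
The MPI parallelization-over-contingencies pattern the abstract reports is straightforward to sketch with `mpi4py`: rank 0 partitions the contingency list, each rank simulates its chunk, and results are gathered back. `simulate_contingency` is a placeholder for the real ETMSP run; everything else about the decomposition is a generic assumption, not the project's code.

```python
# Run with: mpiexec -n 4 python contingencies.py
from mpi4py import MPI

def simulate_contingency(c):
    # Placeholder for a transient stability run; returns a (name, margin) pair.
    return (c, hash(c) % 100 / 100.0)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    contingencies = [f"line-outage-{i}" for i in range(1000)]
    # Partition the contingency list into one chunk per rank.
    chunks = [contingencies[i::size] for i in range(size)]
else:
    chunks = None

my_chunk = comm.scatter(chunks, root=0)
my_results = [simulate_contingency(c) for c in my_chunk]

results = comm.gather(my_results, root=0)
if rank == 0:
    flat = [r for part in results for r in part]
    print(f"analyzed {len(flat)} contingencies across {size} ranks")
```

Because each contingency run is independent, this decomposition scales almost linearly until I/O becomes the bottleneck, consistent with the results the project reports.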

 

Bhandari, P.; Gujral, M.S., "Ontology Based Approach For Perception Of Network Security State," Engineering and Computational Sciences (RAECS), 2014 Recent Advances in, pp. 1-6, 6-8 March 2014. doi: 10.1109/RAECS.2014.6799584 This paper presents an ontological approach to perceiving the current security status of a network. A computer network is a dynamic entity whose state changes with the introduction of new services, the installation of new network operating systems, the addition of new hardware components, the creation of new user roles, and attacks from various actors instigated by aggressors. The various security mechanisms employed in the network do not give a complete picture of the security of the whole network. In this paper we propose a taxonomy and an ontology which may be used to infer the impact of various events happening in the network on its security status. Vulnerability, Network, and Attack are the main taxonomy classes in the ontology. The Vulnerability class describes various types of vulnerabilities in the network, which may lie in hardware components such as storage devices, computing devices, or network devices. The Attack class has many subclasses: the Actor class, the entity executing the attack; the Goal class, describing the goal of the attack; the Attack mechanism class, defining the attack methodology; the Scope class, describing the size and utility of the target; and the Automation level class, describing the automation level of the attack. The Network class has network operating systems, users, roles, hardware components, and services as its subclasses. Evaluation of the security status of the network is required for network security situational awareness, and based on this taxonomy an ontology has been developed to perceive the network security status. Finally, a framework which uses this ontology as a knowledge base is proposed.

Keywords: computer network security; network operating systems; ontologies (artificial intelligence);computer network security; network operating system; ontology; taxonomy classes; Automation; Computer networks; Hardware; Manuals; Ontologies; Security; Taxonomy; Network Security Status; Network Situational awareness; Ontology; Taxonomy  (ID#: 15-3810)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799584&isnumber=6799496

 

Refaei, M.T.; Bush, J., "Secure Reliable Group Communication for Tactical Networks," Military Communications Conference (MILCOM), 2014 IEEE, pp. 1195, 1200, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.200 Tactical communication networks lack infrastructure and are highly dynamic, resource-constrained, and commonly targeted by adversaries. Designing efficient and secure applications for this environment is extremely challenging. An increasing reliance on group-oriented, tactical applications such as chat, situational awareness, and real-time video has generated renewed interest in IP multicast delivery. However, a lack of developer tools, software libraries, and standard paradigms to achieve secure and reliable multicast impedes the potential of group-oriented communication and often leads to inefficient communication models. In this paper, we propose an architecture for secure and reliable group-oriented communication. The architecture utilizes NSA Suite B cryptography and may be appropriate for handling sensitive and DoD classified data up to SECRET. Our proposed architecture is unique in that it requires no infrastructure, follows NSA CSfC guidance for layered security, and leverages NORM for multicast data reliability. We introduce each component of the architecture and describe a Linux-based software prototype.

Keywords: computer network reliability; computer network security; cryptography; military communication; military computing; NSA Suite B cryptography; SECRET; group oriented communication; reliable group communication; secure group communication; tactical communication; tactical networks; Authentication; Computer architecture; Encryption; Protocols; Reliability; multicast; norm; suite-b  (ID#: 15-3811)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956920&isnumber=6956719
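
As a flavor of the Suite B data protection layer (with key distribution and the NORM reliability mechanics out of scope), the sketch below protects each multicast datagram with AES-256-GCM via the Python `cryptography` package, binding a sender identifier as associated data so tampered or mis-attributed datagrams fail authentication. This is a minimal sketch, not the prototype's architecture.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# AES-256-GCM provides Suite B confidentiality and integrity; how the
# group key is established and refreshed is a separate problem.
group_key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(group_key)

def protect(message: bytes, sender_id: bytes) -> bytes:
    nonce = os.urandom(12)                      # must be unique per datagram
    return nonce + aesgcm.encrypt(nonce, message, sender_id)

def unprotect(datagram: bytes, sender_id: bytes) -> bytes:
    nonce, ciphertext = datagram[:12], datagram[12:]
    return aesgcm.decrypt(nonce, ciphertext, sender_id)  # raises on tampering

dgram = protect(b"unit 3 moving to checkpoint bravo", b"node-17")
print(unprotect(dgram, b"node-17"))
```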

 

Amin, S.; Clark, T.; Offutt, R.; Serenko, K., "Design of a Cyber Security Framework for ADS-B Based Surveillance Systems," Systems and Information Engineering Design Symposium (SIEDS), 2014, pp. 304-309, 25 April 2014. doi: 10.1109/SIEDS.2014.6829910 The need for increased surveillance due to the increase in flight volume in remote or oceanic regions outside the range of traditional radar coverage has been fulfilled by the advent of space-based Automatic Dependent Surveillance - Broadcast (ADS-B) surveillance systems. ADS-B systems have the capability of providing air traffic controllers with highly accurate real-time flight data. ADS-B is dependent on digital communications between aircraft and ground stations of the air route traffic control center (ARTCC); however, these communications are not secured. Anyone with the appropriate capabilities and equipment can interrogate the signal and transmit their own false data; this is known as spoofing. The possibility of this type of attack decreases the situational awareness of United States airspace. The purpose of this project is to design a secure transmission framework that prevents ADS-B signals from being spoofed. Three alternative methods of securing ADS-B signals are evaluated: hashing, symmetric encryption, and asymmetric encryption. The security strength of the design alternatives is determined from research. Feasibility criteria are determined by comparative analysis of alternatives. Economic implications and possible collision risk are determined from simulations that model the United States airspace over the Gulf of Mexico and part of the airspace under attack, respectively. The ultimate goal of the project is to show that if ADS-B signals can be secured, situational awareness can improve and the ARTCC can use information from this surveillance system to decrease the separation between aircraft and ultimately maximize the use of United States airspace.

Keywords: aircraft; cryptography; digital communication; radar; security of data; surveillance; ADS-B based surveillance systems; ADS-B signals; ADS-B surveillance systems; ARTCC; United State airspace; United States airspace; air route traffic control center; air traffic controllers; aircraft; asymmetric encryption; collision risk; cyber security framework design; digital communications; economic implications; ground stations; hashing; radar coverage; real-time flight data; secure transmission framework design; space-based automatic dependent surveillance-broadcast; Air traffic control; Aircraft; Atmospheric modeling; Encryption; FAA; Radar;  Surveillance  (ID#: 15-3812)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6829910&isnumber=6829868
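
Of the three alternatives evaluated, the hashing option is the simplest to sketch. Below, a truncated HMAC-SHA256 tag is appended to each (illustrative) ADS-B report and checked on receipt, so a spoofed frame without the key fails verification. The keyed construction presumes some key-provisioning arrangement between aircraft and the ARTCC, which is an assumption here, not the paper's design.

```python
import hmac, hashlib

KEY = b"per-aircraft key provisioned with the ARTCC"  # illustrative only

def sign(adsb_message: bytes) -> bytes:
    tag = hmac.new(KEY, adsb_message, hashlib.sha256).digest()
    return adsb_message + tag[:8]   # truncated tag respects the message budget

def verify(frame: bytes) -> bool:
    message, tag = frame[:-8], frame[-8:]
    expected = hmac.new(KEY, message, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(tag, expected)

frame = sign(b"ICAO=A1B2C3 LAT=28.17 LON=-89.40 ALT=35000")
print(verify(frame))                           # True
print(verify(frame[:-8] + b"\x00" * 8))        # False: spoofed frame rejected
```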

 

Kotenko, I.; Novikova, E., "Visualization of Security Metrics for Cyber Situation Awareness," Availability, Reliability and Security (ARES), 2014 Ninth International Conference on, pp. 506-513, 8-12 Sept. 2014. doi: 10.1109/ARES.2014.75 One of the important directions of research in situational awareness is the implementation of visual analytics techniques, which can be efficiently applied when working with big security data in critical operational domains. The paper considers a visual analytics technique for displaying a set of security metrics used to assess overall network security status and to evaluate the efficiency of protection mechanisms. The technique can assist in solving security tasks that are important for security information and event management (SIEM) systems. The suggested approach is suitable for displaying security metrics of large networks and supports historical analysis of the data. To demonstrate and evaluate the usefulness of the proposed technique we implemented a use case corresponding to the Olympic Games scenario.

Keywords: Big Data; computer network security; data analysis; data visualisation; Olympic Games scenario; SIEM systems; big data security; cyber situation awareness; network security status; security information and event management systems; security metric visualization; visual analytics technique;Abstracts;Availability;Layout;Measurement;Security;Visualization; cyber situation awareness; high level metrics visualization; network security level assessment; security information visualization  (ID#: 15-3813)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980325&isnumber=6980232

 

Han Huang; Jun Zhang; Guanglong Xie, "Research on the Future Functions And Modality Of Smart Grid And Its Key Technologies," Electricity Distribution (CICED), 2014 China International Conference on, pp. 1241-1245, 23-26 Sept. 2014. doi: 10.1109/CICED.2014.6991905 The power network is an important part of the national comprehensive energy transmission system, underpinning energy security and the running of the economy and society. At the same time, because many industries are involved, grid development can advance national innovation capability. Advances in materials science, computing, and information and communication technology are now driving the evolution of the smart grid. This paper researches the functions and modality of the smart grid along the energy, geography, and technology dimensions. The analysis of the technology dimension addresses two aspects: network control and interaction with the customer. A mapping is given between the functions of the smart grid and eight key technologies: large-capacity flexible transmission technology, DC power distribution technology, distributed power generation technology, large-scale energy storage technology, real-time tracking simulation technology, intelligent electricity application technology, big data analysis and cloud computing technology, and wide-area situational awareness technology. The research emphasis of these key technologies is proposed.

Keywords: Big Data; cloud computing; distributed power generation ;energy security; energy storage; flexible AC transmission systems; power engineering computing; smart power grids; DC power distribution technology ;Large-scale energy storage technology; big data analysis; cloud computing technology; distributed power generation; energy resource transmission system; energy security; geography dimension; intelligent electricity application technology; large-capacity flexible transmission technology; power network control ;real-time tracking simulation technology; smart grid modality; wide-area situational awareness technology; Abstracts; Batteries; Electricity ;Integrated circuit interconnections; Natural gas; Reliability; Smart grids; development; function and state; key technology; smart grid  (ID#: 15-3814)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6991905&isnumber=6991649

 

Jandel, M.; Svenson, P.; Johansson, R., "Fusing Restricted Information," Information Fusion (FUSION), 2014 17th International Conference on, pp. 1-9, 7-10 July 2014 Information fusion deals with the integration and merging of data and information from multiple (heterogeneous) sources. In many cases, the information that needs to be fused has a security classification. The result of the fusion process is then by necessity restricted with the strictest information security classification of the inputs. This has severe drawbacks and limits the possible dissemination of the fusion results. It leads to decreased situational awareness: the organization holds information that would enable a better situation picture, but since parts of that information are restricted, it is not possible to distribute the most correct situational information. In this paper, we take steps towards defining fusion and data mining processes that can be used even when all of the underlying data cannot be disseminated. The method we propose here could be used to produce a classifier from which all the sensitive information has been removed and where it can be shown that an antagonist cannot, even in principle, obtain knowledge about the classified information by using the classifier or situation picture.

Keywords: data integration; data mining; merging; security of data; sensor fusion; data integration; data merging; data mining processes; information security classification; restricted information fusion; Databases; Fuses; Information filters; Security; Sensitivity; classification; data mining; privacy preserving data mining; secrecy preserving fusion  (ID#: 15-3815)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916020&isnumber=6915967

 

Craig, R.; Spyridopoulos, T.; Tryfonas, T.; May, J., "Soft Systems Methodology In Net-Centric Cyber Defence System Development," Systems, Man and Cybernetics (SMC), 2014 IEEE International Conference on, pp.672,677, 5-8 Oct. 2014. doi: 10.1109/SMC.2014.6973986 Complexity is ever increasing within our information environment and organisations, as interdependent dynamic relationships within sociotechnical systems result in high variety and uncertainty from a lack of information or control. A net-centric approach is a strategy to improve information value, to enable stakeholders to extend their reach to additional data sources, share Situational Awareness (SA), synchronise effort and optimise resource use to deliver maximum (or proportionate) effect in support of goals. This paper takes a systems perspective to understand the dynamics within a net-centric information system. This paper presents the first stages of the Soft Systems Methodology (SSM), to develop a conceptual model of the human activity system and develop a system dynamics model to represent system behaviour, that will inform future research into a net-centric approach with information security. Our model supports the net-centric hypothesis that participation within a information sharing community extends information reach, improves organisation SA allowing proactive action to mitigate vulnerabilities and reduce overall risk within the community. The system dynamics model provides organisations with tools to better understand the value of a net-centric approach, a framework to determine their own maturity and evaluate strategic relationships with collaborative communities.

Keywords: information systems; security of data; SA; SSM; collaborative communities; complexity; data sources; human activity system; information environment; information reach; information security; information sharing community; information value; interdependent dynamic relationships; net-centric approach; net-centric cyber defence system development; net-centric hypothesis; net-centric information system; situational awareness; sociotechnical systems; soft systems methodology; system behaviour; system dynamics model; Collaboration; Communities; Information security; Modeling; Command and Control; Distributed Information Systems; Net-Centric; Situational Awareness; System Dynamics  (ID#: 15-3816)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6973986&isnumber=6973862

 

Major, S.; Fekovic, E., "Securing Intelligent Substations: Real-Time Situational Awareness," Energy Conference (ENERGYCON), 2014 IEEE International, pp.711,715, 13-16 May 2014. doi: 10.1109/ENERGYCON.2014.6850504 A system implementing real-time situational awareness through discovery, prevention, detection, response, audit, and management capabilities is seen as central to facilitating the protection of critical infrastructure systems. The effectiveness of providing such awareness technologies for electrical distribution companies is being evaluated in a series of field trials: (i) Substation Intrusion Detection / Prevention System (IDPS) and (ii) Security Information and Event Management (SIEM) System. These trials will help create a realistic case study on the effectiveness of such technologies with the view of forming a framework for critical infrastructure cyber security defense systems of the future.

Keywords: power engineering computing; security of data; substation automation; IDPS; SIEM system; critical infrastructure cyber security defense system; critical infrastructure system; electrical distribution companies; intelligent substation; real-time situational awareness; security information and event management system; substation intrusion detection-prevention system; Computer security; Monitoring; Protocols; Real-time systems; Substations; Critical Infrastructure; Cyber Security;DNP3;IDPS;IDS;IEC61850;IPS; SIEM  (ID#: 15-3817)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850504&isnumber=6850389

 

Kaci, A.; Kamwa, I.; Dessaint, L.A.; Guillon, S., "Synchrophasor Data Baselining and Mining for Online Monitoring of Dynamic Security Limits," Power Systems, IEEE Transactions on, vol. 29, no. 6, pp. 2681-2695, Nov. 2014. doi: 10.1109/TPWRS.2014.2312418 When the system is in normal state, actual SCADA measurements of power transfers across critical interfaces are continuously compared with limits determined offline and stored in look-up tables or nomograms in order to assess whether the network is secure or insecure and inform the dispatcher to take preventive action in the latter case. However, synchrophasors could change this paradigm by enabling new features, the phase-angle differences, which are well-known measures of system stress, with the added potential to increase system visibility. The paper develops a systematic approach to baseline the phase-angles versus actual transfer limits across system interfaces and enable synchrophasor-based situational awareness (SBSA). Statistical methods are first used to determine seasonal exceedance levels of angle shifts that can allow real-time scoring and detection of atypical conditions. Next, key buses suitable for SBSA are identified using correlation and partitioning around medoid (PAM) clustering. It is shown that angle shifts of this subset of 15% of the network backbone buses can be effectively used as features in ensemble decision tree-based forecasting of seasonal security margins across critical interfaces.

Keywords: SCADA systems; data mining; pattern clustering; phasor measurement; power engineering computing; power system security; table lookup; PAM clustering; SBSA; SCADA measurements; angle shifts; critical interfaces; dynamic security limits; look-up tables; medoid clustering; network backbone buses; nomograms ;online monitoring; phase-angle differences; power transfer measurement; seasonal security margins; synchrophasor data baselining; synchrophasor-based situational awareness; system interfaces; system stress; system visibility; Data mining; Monitoring; Phasor measurement units; Power system reliability; Power system stability; Security; Stability criteria; Baselining; clustering; data mining; dynamic security assessment (DSA);partitioning around medoids (PAM); phasor measurement unit (PMU);random forest (RF);security monitoring; synchrophasor; system reliability  (ID#: 15-3818)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6782395&isnumber=6926883
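
The baselining step — determining seasonal exceedance levels of phase-angle differences and scoring live measurements against them — can be sketched with plain NumPy. The synthetic archive, the 99.5th-percentile threshold, and the seasonal grouping below are assumptions for illustration; the paper pairs this with PAM clustering and decision-tree forecasting.

```python
import numpy as np

# Synthetic archive of phase-angle differences (degrees) across one
# interface, one sample per 5 minutes, grouped by season.
rng = np.random.default_rng(7)
archive = {"winter": rng.normal(22, 4, 10_000),
           "summer": rng.normal(15, 3, 10_000)}

# Baseline: a seasonal exceedance level (here the 99.5th percentile).
baseline = {s: np.percentile(v, 99.5) for s, v in archive.items()}

def is_atypical(angle_diff, season):
    """Real-time scoring: flag atypical stress against the seasonal baseline."""
    return angle_diff > baseline[season]

print({s: round(b, 1) for s, b in baseline.items()})
print(is_atypical(30.0, "winter"), is_atypical(30.0, "summer"))  # False True
```

The same 30-degree reading is routine in one season and an alarm in another, which is exactly why the baselining is done per season.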

 

Hussain, A.; Faber, T.; Braden, R.; Benzel, T.; Yardley, T.; Jones, J.; Nicol, D.M.; Sanders, W.H.; Edgar, T.W.; Carroll, T.E.; Manz, D.O.; Tinnel, L., "Enabling Collaborative Research for Security and Resiliency of Energy Cyber Physical Systems," Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, pp.358,360, 26-28 May 2014. doi: 10.1109/DCOSS.2014.36 The University of Illinois at Urbana Champaign (Illinois), Pacific Northwest National Labs (PNNL), and the University of Southern California Information Sciences Institute (USC-ISI) consortium is working toward providing tools and expertise to enable collaborative research to improve security and resiliency of cyber physical systems. In this extended abstract we discuss the challenges and the solution space. We demonstrate the feasibility of some of the proposed components through a wide-area situational awareness experiment for the power grid across the three sites.

Keywords: fault tolerant computing; power engineering computing; power grids; security of data; collaborative research; cyber physical system resiliency; cyber physical system security; energy cyber physical systems; power grid; wide-area situational awareness experiment; Collaboration; Communities; Computer security; Data models; Phasor measurement units; Power systems; cyber physical systems; energy; experimentation  (ID#: 15-3819)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846190&isnumber=6846129

 

Prosser, B.; Dawes, N.; Fulp, E.W.; McKinnon, A.D.; Fink, G.A., "Using Set-Based Heading to Improve Mobile Agent Movement," Self-Adaptive and Self-Organizing Systems (SASO), 2014 IEEE Eighth International Conference on,  pp.120,128, 8-12 Sept. 2014. doi: 10.1109/SASO.2014.24 Cover time measures the time (or number of steps) required for a mobile agent to visit each node in a network (graph) at least once. A short cover time is important for search or foraging applications that require mobile agents to quickly inspect or monitor nodes in a network, such as providing situational awareness or security. Speed can be achieved if details about the graph are known or if the agent maintains a history of visited nodes, however, these requirements may not be feasible for agents with limited resources, they are difficult in dynamic graph topologies, and they do not easily scale to large networks. This paper introduces a set-based form of heading (directional bias) that allows an agent to more efficiently explore any connected graph, static or dynamic. When deciding the next node to visit, agents are discouraged from visiting nodes that neighbor both their previous and current locations. Modifying a traditional movement method, e.g., random walk, with this concept encourages an agent to move toward nodes that are less likely to have been previously visited, reducing cover time. Simulation results with grid, scale-free, and minimum distance graphs demonstrate heading can consistently reduce cover time as compared to non-heading movement techniques.

Keywords: mobile agents; network theory (graphs);random processes; security of data; cover time; dynamic graph topology; foraging application; minimum distance graph; mobile agent movement; movement method; network (graph); nonheading movement technique; random walk; scale-free graph ;set-based heading; situational awareness; situational security; Electronic mail; Geography; History; Mobile agents; Security; Time measurement; Topology; cover time; heading; mobile agents; random walk  (ID#: 15-3820)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7001007&isnumber=7000942
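
The heading rule is simple enough to prototype. The sketch below, assuming the networkx library and a toy grid graph (illustrative choices, not the paper's testbed), compares the cover time of a plain random walk against a set-based-heading walk:

```python
import random
import networkx as nx

def cover_time(G, start, heading=True, max_steps=1_000_000):
    """Steps until a single agent has visited every node of G at least once."""
    visited = {start}
    prev, curr = None, start
    steps = 0
    while len(visited) < G.number_of_nodes() and steps < max_steps:
        neighbors = list(G.neighbors(curr))
        if heading and prev is not None:
            # Set-based heading: discourage nodes that neighbor both the
            # previous and the current location (and going straight back).
            behind = set(G.neighbors(prev)) | {prev}
            forward = [n for n in neighbors if n not in behind]
            candidates = forward if forward else neighbors
        else:
            candidates = neighbors
        prev, curr = curr, random.choice(candidates)
        visited.add(curr)
        steps += 1
    return steps

G = nx.grid_2d_graph(10, 10)   # 100-node grid, a stand-in for the paper's graphs
for h in (False, True):
    avg = sum(cover_time(G, (0, 0), heading=h) for _ in range(20)) / 20
    print(f"heading={h}: average cover time {avg:.0f} steps")
```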


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Situational Awareness and Security - Part 2

 

 
SoS Newsletter Logo

Situational Awareness & Security
Part 2

 

Situational awareness is an important human factor for cybersecurity. The works cited here cover specific problems. In April 2014, IEEE published a Special Issue on Signal Processing for Situational Awareness from Networked Sensors and Social Media, available at: http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=6757015&punumber=78. The publications cited here are from other sources.

 

Toshiro Yano, E.; Bhatt, P.; Gustavsson, P.M.; Ahlfeldt, R.-M., "Towards a Methodology for Cybersecurity Risk Management Using Agents Paradigm," Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint, p. 325, 24-26 Sept. 2014. doi: 10.1109/JISIC.2014.70 In order to deal with shortcomings of security management systems, this work proposes a methodology based on the agents paradigm for cybersecurity risk management. In this approach a system is decomposed into agents that may be used to attain goals established by attackers. Threats to business are achieved by attackers' goals in service and deployment agents. To support a proactive behavior, sensors linked to security mechanisms are analyzed in accordance with a model for Situational Awareness (SA) [4].

Keywords: business continuity; risk management; security of data; SA; agents paradigm; business continuity; cybersecurity risk management; proactive behavior; security management systems; sensors; situational awareness; Analytical models; Computer security; Educational institutions; Informatics; Risk management; Agent Based Paradigm; Cybersecurity Risk Management; Situational Awareness  (ID#: 15-3821)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975608&isnumber=6975536

 

Dressler, J.; Bowen, C.L.; Moody, W.; Koepke, J., "Operational Data Classes for Establishing Situational Awareness in Cyberspace," Cyber Conflict (CyCon 2014), 2014 6th International Conference on, pp. 175-186, 3-6 June 2014. doi: 10.1109/CYCON.2014.6916402 The United States, including the Department of Defense, relies heavily on information systems and networking technologies to efficiently conduct a wide variety of missions across the globe. With the ever-increasing rate of cyber attacks, this dependency places the nation at risk of a loss of confidentiality, integrity, and availability of its critical information resources, degrading its ability to complete the mission. In this paper, we introduce the operational data classes for establishing situational awareness in cyberspace. A system effectively using our key information components will be able to provide the nation's leadership timely and accurate information to gain an understanding of the operational cyber environment to enable strategic, operational, and tactical decision-making. In doing so, we present, define and provide examples of our key classes of operational data for cyber situational awareness and present a hypothetical case study demonstrating how they must be consolidated to provide a clear and relevant picture to a commander. In addition, current organizational and technical challenges are discussed, and areas for future research are addressed.

Keywords: decision making; defence industry; information systems; military computing; security of data; Department of Defense; United States; cyber attacks; cyber situational awareness; cyberspace; information systems; networking technologies; operational cyber environment; operational data classes; operational decision-making; strategic decision-making; tactical decision-making; Cyberspace; Decision making; Educational institutions; Intrusion detection; Real-time systems; US Department of Defense; cyber situational awareness; cyberspace operations; operational needs  (ID#: 15-3822)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916402&isnumber=6916383

 

Zonouz, S.; Davis, C.M.; Davis, K.R.; Berthier, R.; Bobba, R.B.; Sanders, W.H., "SOCCA: A Security-Oriented Cyber-Physical Contingency Analysis in Power Infrastructures," Smart Grid, IEEE Transactions on, vol. 5, no. 1, pp. 3-13, Jan. 2014. doi: 10.1109/TSG.2013.2280399 Contingency analysis is a critical activity in the context of the power infrastructure because it provides a guide for resiliency and enables the grid to continue operating even in the case of failure. In this paper, we augment this concept by introducing SOCCA, a cyber-physical security evaluation technique to plan not only for accidental contingencies but also for malicious compromises. SOCCA presents a new unified formalism to model the cyber-physical system including interconnections among cyber and physical components. The cyber-physical contingency ranking technique employed by SOCCA assesses the potential impacts of events. Contingencies are ranked according to their impact as well as attack complexity. The results are valuable in both cyber and physical domains. From a physical perspective, SOCCA scores power system contingencies based on cyber network configuration, whereas from a cyber perspective, control network vulnerabilities are ranked according to the underlying power system topology.

Keywords: power grids; power system planning; power system security; SOCCA; accidental contingency; control network; cyber components; cyber network configuration; cyber perspective; cyber-physical security evaluation; grid operation; malicious compromises; physical components; power infrastructures; power system contingency; power system topology; security-oriented cyber-physical contingency analysis; Algorithm design and analysis; Indexes; Mathematical model; Network topology; Power grids; Security; Contingency analysis; cyber-physical systems; security; situational awareness; state estimation  (ID#: 15-3823)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6687271&isnumber=6693741

 

Boleng, J.; Novakouski, M.; Cahill, G.; Simanta, S.; Morris, E., "Fusing Open Source Intelligence and Handheld Situational Awareness: Benghazi Case Study," Military Communications Conference (MILCOM), 2014 IEEE, pp. 1421-1426, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.158 This paper reports the results and findings of a historical analysis of open source intelligence (OSINT) information (namely Twitter data) surrounding the events of the September 11, 2012 attack on the US Diplomatic mission in Benghazi, Libya. In addition to this historical analysis, two prototype capabilities were combined for a table top exercise to explore the effectiveness of using OSINT combined with a context aware handheld situational awareness framework and application to better inform potential responders as the events unfolded. Our experience shows that the ability to model sentiment, trends, and monitor keywords in streaming social media, coupled with the ability to share that information to edge operators, can increase their ability to effectively respond to contingency operations as they unfold.

Keywords: history; national security; social networking (online); ubiquitous computing; Benghazi case study; Libya; OSINT information; Twitter data; US Diplomatic mission; context aware handheld situational awareness framework; context computing; contingency operations; edge operators; events attack; historical analysis; information sharing; open source intelligence information; prototype capabilities; social media streaming; table top exercise; Command and control systems; Context; Media; Personnel; Prototypes; Real-time systems; Twitter; context computing; open source intelligence; real time processing; situational awareness; social media  (ID#: 15-3824)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956956&isnumber=6956719

 

Kuntz, K.; Smith, M.; Wedeward, K.; Collins, M., "Detecting, Locating, & Quantifying False Data Injections Utilizing Grid Topology Through Optimized D-FACTS Device Placement," North American Power Symposium (NAPS), 2014, pp. 1-6, 7-9 Sept. 2014. doi: 10.1109/NAPS.2014.6965352 Power grids are monitored by gathering data through remote sensors and estimating the state of the grid. Bad data detection schemes detect and remove poor data. False data is a special type of data injection designed to evade typical bad data detection schemes and compromise state estimates, possibly leading to improper control of the grid. Topology perturbation is a situational awareness method that implements the use of distributed flexible AC transmission system devices to alter impedance on optimally chosen lines, updating the grid topology and exposing the presence of false data. The success of the topology perturbation for improving grid control and exposing false data in AC state estimation is demonstrated. A technique is developed for identifying the false data injection attack vector and quantifying the compromised measurements. The proposed method provides successful false data detection and identification in IEEE 14, 24, and 39-bus test systems using AC state estimation.

Keywords: flexible AC transmission systems; power grids; power system state estimation; AC state estimation; bad data detection scheme; distributed flexible AC transmission system devices; false data injection attack vector; grid topology; optimized D-FACTS device placement; power grids; situational awareness method; topology perturbation; Jacobian matrices; Perturbation methods; Power grids; State estimation; Topology; Transmission line measurements; Vectors; Distributed Flexible AC Transmission Systems; Power Grids; Power System Security; Voltage Control  (ID#: 15-3825)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6965352&isnumber=6965351
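
The baseline vulnerability this work addresses is the classic stealthy injection against state estimation: any attack vector in the column space of the measurement Jacobian H leaves the bad-data residual unchanged, which is why perturbing line impedances with D-FACTS devices (and thereby changing H) can expose the attack. A minimal numpy sketch of that baseline, using a toy Jacobian rather than the IEEE 14-, 24-, or 39-bus systems in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 3))                   # toy measurement Jacobian
x_true = rng.normal(size=3)
z = H @ x_true + 0.01 * rng.normal(size=8)    # noisy measurements

def residual_norm(z, H):
    # Least-squares state estimate followed by the residual test statistic.
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

naive = np.zeros(8); naive[0] = 5.0           # crude injection: caught
stealthy = H @ np.array([1.0, -0.5, 0.3])     # a = H*c: evades the residual test

print("clean residual:  ", residual_norm(z, H))
print("naive attack:    ", residual_norm(z + naive, H))      # large residual
print("stealthy attack: ", residual_norm(z + stealthy, H))   # ~ clean residual
```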

 

Linda, O.; Wijayasekara, D.; Manic, M.; McQueen, M., "Optimal Placement of Phasor Measurement Units in Power Grids using Memetic Algorithms," Industrial Electronics (ISIE), 2014 IEEE 23rd International Symposium on, pp. 2035-2041, 1-4 June 2014. doi: 10.1109/ISIE.2014.6864930 Wide area monitoring, protection and control for power network systems are one of the fundamental components of the smart grid concept. Synchronized measurement technology such as the Phasor Measurement Units (PMUs) will play a major role in implementing these components and they have the potential to provide reliable and secure full system observability. The problem of Optimal Placement of PMUs (OPP) consists of locating a minimal set of power buses where the PMUs must be placed in order to provide full system observability. In this paper a novel solution to the OPP problem using a Memetic Algorithm (MA) is proposed. The implemented MA combines the global optimization power of genetic algorithms with local solution tuning using the hill-climbing method. The performance of the proposed approach was demonstrated on IEEE benchmark power networks as well as on a segment of the Idaho region power network. It was shown that the proposed solution using a MA features significantly faster convergence rate towards the optimum solution.

Keywords: distribution networks; genetic algorithms; phasor measurement; power system control; power system protection; power system reliability; power system security; smart power grids; IEEE benchmark power networks; Idaho region power network; OPP problem; genetic algorithms; hill-climbing method; memetic algorithms; phasor measurement units; power buses; power grids; power network systems control; power network systems protection; smart grid; synchronized measurement technology; wide area monitoring; Genetic algorithms; Memetics; Observability; Phasor measurement units; Power grids; Sociology; Statistics; Memetic Algorithm; Optimal PMU Placement; Phasor Measurement Units; Power Grid; Situational Awareness  (ID#: 15-3826)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6864930&isnumber=6864573
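
A memetic algorithm of this shape is easy to sketch. The toy version below assumes the common observability rule that a PMU covers its own bus and all adjacent buses, and combines a genetic search with hill-climbing local tuning on an invented 7-bus network (not the IEEE benchmarks or the Idaho network from the paper):

```python
import random

# Invented 7-bus adjacency; a PMU is assumed to observe its bus and neighbors.
ADJ = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4, 5],
       4: [2, 3, 6], 5: [3, 6], 6: [4, 5]}
N = len(ADJ)

def fitness(bits):
    placed = [i for i in range(N) if bits[i]]
    seen = set(placed)
    for b in placed:
        seen.update(ADJ[b])
    return (N - len(seen)) * 10 + len(placed)   # unobservability dominates count

def hill_climb(bits):
    # Memetic local tuning: flip any single bit that improves fitness.
    improved = True
    while improved:
        improved = False
        for i in range(N):
            cand = bits[:]
            cand[i] ^= 1
            if fitness(cand) < fitness(bits):
                bits, improved = cand, True
    return bits

def memetic(pop_size=20, gens=40):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]               # one-point crossover
            if random.random() < 0.3:
                child[random.randrange(N)] ^= 1     # mutation
            children.append(hill_climb(child))      # local solution tuning
        pop = parents + children
    best = min(pop, key=fitness)
    return [i for i in range(N) if best[i]]

print("PMU buses:", memetic())
```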

 

Falcon, Rafael; Abielmona, Rami; Billings, Sean; Plachkov, Alex; Abbass, Hussein, "Risk Management With Hard-Soft Data Fusion In Maritime Domain Awareness," Computational Intelligence for Security and Defense Applications (CISDA), 2014 Seventh IEEE Symposium on, pp. 1-8, 14-17 Dec. 2014. doi: 10.1109/CISDA.2014.7035641 Enhanced situational awareness is integral to risk management and response evaluation. Dynamic systems that incorporate both hard and soft data sources allow for comprehensive situational frameworks which can supplement physical models with conceptual notions of risk. The processing of widely available semi-structured textual data sources can produce soft information that is readily consumable by such a framework. In this paper, we augment the situational awareness capabilities of a recently proposed risk management framework (RMF) with the incorporation of soft data. We illustrate the beneficial role of the hard-soft data fusion in the characterization and evaluation of potential vessels in distress within Maritime Domain Awareness (MDA) scenarios. Risk features pertaining to maritime vessels are defined a priori and then quantified in real time using both hard (e.g., Automatic Identification System, Douglas Sea Scale) as well as soft (e.g., historical records of worldwide maritime incidents) data sources. A risk-aware metric to quantify the effectiveness of the hard-soft fusion process is also proposed. Though illustrated with MDA scenarios, the proposed hard-soft fusion methodology within the RMF can be readily applied to other domains.

Keywords: Data mining; Feature extraction; Feeds; Hidden Markov models; Marine vehicles; Measurement; Risk management  (ID#: 15-3827)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7035641&isnumber=7035614

 

Chenine, M.; Ullberg, J.; Nordstrom, L.; Wu, Y.; Ericsson, G.N., "A Framework for Wide-Area Monitoring and Control Systems Interoperability and Cybersecurity Analysis," Power Delivery, IEEE Transactions on, vol. 29, no. 2, pp. 633-641, April 2014. doi: 10.1109/TPWRD.2013.2279182 Wide-area monitoring and control (WAMC) systems are the next-generation operational-management systems for electric power systems. The main purpose of such systems is to provide high resolution real-time situational awareness in order to improve the operation of the power system by detecting and responding to fast-evolving phenomena in power systems. From an information and communication technology (ICT) perspective, the nonfunctional qualities of these systems are increasingly becoming important and there is a need to evaluate and analyze the factors that impact these nonfunctional qualities. Enterprise architecture methods, which capture properties of ICT systems in architecture models and use these models as a basis for analysis and decision making, are a promising approach to meet these challenges. This paper presents a quantitative architecture analysis method for the study of WAMC ICT architectures focusing primarily on the interoperability and cybersecurity aspects.

Keywords: SCADA systems; decision making; open systems; power system management; power system measurement; power system security; WAMC ICT architecture; cybersecurity analysis; decision making; electric power system; enterprise architecture method; information and communication technology; next generation operational management system; nonfunctional quality; real time situational awareness; wide area monitoring and control systems interoperability; Analytical models; Computer security; Interoperability; Network interfaces; Power systems; Protocols; Unified modeling language; Communication systems; cybersecurity; enterprise architecture analysis; interoperability; wide-area monitoring and control systems (WAMCS) (ID#: 15-3828)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6702498&isnumber=6776443

 

Kaci, A.; Kamwa, I.; Dessaint, L.-A.; Guillon, S., "Phase Angles as Predictors of Network Dynamic Security Limits and Further Implications," PES General Meeting | Conference & Exposition, 2014 IEEE, pp. 1-6, 27-31 July 2014. doi: 10.1109/PESGM.2014.6939281 In the United States, the number of Phasor Measurement Units (PMU) will increase from 166 networked devices in 2010 to 1043 in 2014. According to the Department of Energy, they are being installed in order to "evaluate and visualize reliability margin (which describes how close the system is to the edge of its stability boundary)." However, there is still a lot of debate in academia and industry around the usefulness of phase angles as unambiguous predictors of dynamic stability. In this paper, using four years of actual data from Hydro-Québec EMS, it is shown that phase angles enable satisfactory predictions of power transfer and dynamic security margins across critical interfaces using random forest models, with both explanation level and R-squared accuracy exceeding 99%. A generalized linear model (GLM) is next implemented to predict phase angles from day-ahead to hour-ahead time frames, using historical phase angle values and load forecasts. Combining GLM-based angle forecasts with random forest mapping of phase angles to power transfers results in a new data-driven approach for dynamic security monitoring.

Keywords: energy management systems; load forecasting; phasor measurement; random processes; GLM; PMU; R-squared accuracy; dynamic security margins; dynamic security monitoring; generalized linear model; historical phase angle values; load forecast; network dynamic security limits; phasor measurement units; power transfer; random forest mapping; random forest models; Monitoring; Phasor measurement units; Power system stability; Predictive models; Radio frequency; Security; Stability analysis; Data mining; Dynamic Security Assessment (DSA); Dynamic Security Monitoring; Phasor measurement unit (PMU); Random Forest (RF); Synchrophasor; System reliability; Wide-Area Situational Awareness (WASA)  (ID#: 15-3829)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6939281&isnumber=6938773
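
As a hedged sketch of the random-forest mapping step, the snippet below trains scikit-learn's RandomForestRegressor on synthetic (phase angle, power transfer) pairs; the toy sine relation stands in for the Hydro-Québec EMS data, which is not public:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
angles = rng.uniform(-30, 30, size=(5000, 4))             # degrees, 4 toy angle pairs
transfer = 120 * np.sin(np.radians(angles)).sum(axis=1)   # toy power-flow relation
transfer += rng.normal(0, 5, size=5000)                   # measurement noise

X_tr, X_te, y_tr, y_te = train_test_split(angles, transfer, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", r2_score(y_te, rf.predict(X_te)))
```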

 

Cam, H.; Mouallem, P.; Yilin Mo; Sinopoli, B.; Nkrumah, B., "Modeling Impact of Attacks, Recovery, and Attackability Conditions for Situational Awareness," Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2014 IEEE International Inter-Disciplinary Conference on, pp. 181-187, 3-6 March 2014. doi: 10.1109/CogSIMA.2014.6816560 A distributed cyber control system comprises various types of assets, including sensors, intrusion detection systems, scanners, controllers, and actuators. The modeling and analysis of these components usually require multi-disciplinary approaches. This paper presents a modeling and dynamic analysis of a distributed cyber control system for situational awareness by taking advantage of control theory and time Petri net. Linear time-invariant systems are used to model the target system, attacks, assets influences, and an anomaly-based intrusion detection system. Time Petri nets are used to model the impact and timing relationships of attacks, vulnerability, and recovery at every node. To characterize those distributed control systems that are perfectly attackable, algebraic and topological attackability conditions are derived. Numerical evaluation is performed to determine the impact of attacks on distributed control system.

Keywords: Petri nets; distributed processing; security of data; actuators; anomaly-based intrusion detection system; assets influence; control theory; controllers; distributed control system; distributed cyber control system; dynamic analysis; linear time-invariant system; modeling impact; numerical evaluation; scanners; situational awareness; time Petri nets; timing relationships; topological attackability condition; Analytical models; Decentralized control; Fires; Intrusion detection; Linear systems; Sensors  (ID#: 15-3830)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816560&isnumber=6816529

 

Kornmaier, A.; Jaouen, F., "Beyond Technical Data - A More Comprehensive Situational Awareness Fed by Available Intelligence Information," Cyber Conflict (CyCon 2014), 2014 6th International Conference on, pp. 139-154, 3-6 June 2014. doi: 10.1109/CYCON.2014.6916400 Information on cyber incidents and threats is currently collected and processed with a strong technical focus. Threat and vulnerability information alone is not a solid base for effective, affordable, or actionable security advice for decision makers. They need more than a small technical cut of a bigger situational picture to combat, and not only to mitigate, the cyber threat. We first give a short overview of the related work that can be found in the literature. We found that the approaches mostly analysed "what" has been done, instead of looking more generically beyond the technical aspects for the tactics, techniques, and procedures to identify "how" it was done, by whom, and why. We then examine what information categories and data already exist to answer questions about an adversary's capabilities and objectives. As traditional intelligence tries to serve a better understanding of adversaries' capabilities, actions, and intent, the same is feasible in cyberspace with cyber intelligence. Thus, we identify information sources in the military and civil environment, before we propose to link that traditional information with the technical data for a better situational picture. We give examples of information that can be collected from traditional intelligence for correlation with technical data. Thus, the same intelligence operational picture could be developed for the cyber sphere as the one that is traditionally fed by conventional intelligence disciplines. Finally, we propose a way of including intelligence processing in cyber analysis, and outline requirements that are key to a successful exchange of information and intelligence between military and civil information providers.

Keywords: decision making; information resources; security of data; adversary capabilities; civil environment; civil information providers; cyber analysis; cyber incidents; cyber intelligence; cyber space; cyber threats; decision makers; information categories; information sources; intelligence information; intelligence processing; military environment; military information providers; situational awareness; technical data; threat information; vulnerability information; Bibliographies; Charge coupled devices; Context; Decision making; Malware; Solids; cyber; cyber intelligence; information collection fusion; intelligence  (ID#: 15-3931)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916400&isnumber=6916383

 

Okathe, T.; Heydari, S.S.; Sood, V.; El-Khatib, K., "Unified Multi-Critical Infrastructure Communication Architecture," Communications (QBSC), 2014 27th Biennial Symposium on, pp. 178-183, 1-4 June 2014. doi: 10.1109/QBSC.2014.6841209 Recent events have brought to light the increasingly intertwined nature of modern infrastructures. As a result much effort is being put towards protecting these vital infrastructures, without which modern society suffers dire consequences. These infrastructures, due to their intricate nature, behave in complex ways. Improving their resilience and understanding their behavior requires a collaborative effort between the private sector that operates these infrastructures and the government sector that regulates them. This collaboration in the form of information sharing requires a new type of information network whose goal is twofold: to enable infrastructure operators to share status information among interdependent infrastructure nodes, and to allow the sharing of vital information concerning threats and other contingencies in the form of alerts. A communication model that meets these requirements while maintaining flexibility and scalability is presented in this paper.

Keywords: computer network reliability; critical infrastructures; communication model; government sector; information network; information sharing; interdependent infrastructure nodes; private sector; unified multicritical infrastructure communication architecture; Data models; Information management; Monitoring; Quality of service; Security; Subscriptions; Critical Infrastructure; Information Sharing; Interdependency; Publish/Subscribe; Situational awareness  (ID#: 15-3832)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841209&isnumber=6841165
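
The paper's keywords point to a publish/subscribe communication model. A minimal sketch of that pattern for interdependent infrastructures, with invented topic names and payloads:

```python
from collections import defaultdict

class InfrastructureBus:
    """Topic-based publish/subscribe hub for status updates and alerts."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers[topic]:
            cb(topic, message)

bus = InfrastructureBus()
bus.subscribe("power/alerts", lambda t, m: print(f"[water utility] {t}: {m}"))
bus.subscribe("power/alerts", lambda t, m: print(f"[regulator] {t}: {m}"))
bus.subscribe("power/status", lambda t, m: print(f"[regulator] {t}: {m}"))

bus.publish("power/status", {"substation": "S12", "load": "94%"})
bus.publish("power/alerts", {"substation": "S12", "event": "overload imminent"})
```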

 

Coatsworth, M.; Tran, J.; Ferworn, A., "A Hybrid Lossless and Lossy Compression Scheme for Streaming RGB-D Data in Real Time," Safety, Security, and Rescue Robotics (SSRR), 2014 IEEE International Symposium on, pp. 1-6, 27-30 Oct. 2014. doi: 10.1109/SSRR.2014.7017650 Mobile and aerial robots used in urban search and rescue (USAR) operations have shown the potential for allowing us to explore, survey and assess collapsed structures effectively at a safe distance. RGB-D cameras, such as the Microsoft Kinect, allow us to capture 3D depth data in addition to RGB images, providing a significantly richer user experience than flat video, which may provide improved situational awareness for first responders. However, the richer data comes at a higher cost in terms of data throughput and computing power requirements. In this paper we consider the problem of live streaming RGB-D data over wired and wireless communication channels, using low-power, embedded computing equipment. When assessing a disaster environment, a range camera is typically mounted on a ground or aerial robot along with the onboard computer system. Ground robots can use both wireless radio and tethers for communications, whereas aerial robots can only use wireless communication. We propose a hybrid lossless and lossy streaming compression format designed specifically for RGB-D data and investigate the feasibility and usefulness of live-streaming this data in disaster situations.

Keywords: aerospace robotics; cameras; data compression; image colour analysis; rescue robots; robot vision; video streaming; 3D depth data capture; Microsoft Kinect; RGB images; RGB-D cameras; RGB-D data streaming; USAR operations; aerial robots; computing power requirements; data throughput; disaster environment; live streaming; lossless compression scheme; lossy compression scheme; low-power embedded computing equipment; mobile robots; red-green-blue-depth data; tethers; urban search and rescue; wireless radio; Computers; Hardware; Image coding; Robots; Servers; Three-dimensional displays; Wireless communication; 3D; USAR; compression; point cloud; response robot; streaming; video  (ID#: 15-3833)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7017650&isnumber=7017643
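
The hybrid split is straightforward to illustrate: color frames tolerate lossy coding, while depth should survive bit-exactly. The sketch below, assuming Pillow for JPEG and zlib for lossless DEFLATE (illustrative codec choices, not the authors' wire format), shows the two paths:

```python
import io
import zlib
import numpy as np
from PIL import Image

rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)    # stand-in color frame
depth = np.random.randint(0, 4096, (480, 640), dtype=np.uint16)   # Kinect-style depth

# Lossy path: JPEG-encode the color image; artifacts are tolerable for video.
buf = io.BytesIO()
Image.fromarray(rgb).save(buf, format="JPEG", quality=75)
rgb_packet = buf.getvalue()

# Lossless path: DEFLATE the raw depth samples; geometry must survive intact.
depth_packet = zlib.compress(depth.tobytes(), level=6)

print(f"RGB   {rgb.nbytes} -> {len(rgb_packet)} bytes (lossy)")
print(f"Depth {depth.nbytes} -> {len(depth_packet)} bytes (lossless)")

# Receiver side: depth is reconstructed bit-exactly.
restored = np.frombuffer(zlib.decompress(depth_packet), dtype=np.uint16)
assert np.array_equal(restored.reshape(480, 640), depth)
```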

 

Sunny, S.; Pavithran, V.; Achuthan, K., "Synthesizing Perception Based on Analysis of Cyber Attack Environments," Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on, pp. 2027-2030, 24-27 Sept. 2014. doi: 10.1109/ICACCI.2014.6968639 Analysing cyber attack environments yields tremendous insight into adversary behavior, strategy, and capabilities. Designing cyber intensive games that promote offensive and defensive activities to capture or protect assets assists in the understanding of cyber situational awareness. Tangible metrics exist for characterizing games such as CTFs to resolve the intensity and aggression of a cyber attack. This paper synthesizes the characteristics of InCTF (India CTF) and provides an understanding of the types of vulnerabilities that have the potential to cause significant damage when exploited by trained hackers. Two metrics, toxicity and effectiveness, and their relation to the final performance of each team are detailed in this context.

Keywords: computer crime; computer games; social aspects of automation; InCTF characteristics; India CTF; adversary behavior; assets protection; cyber attack aggression; cyber attack environments; cyber attack intensity; cyber intensive games; cyber situational awareness; defensive activities; hackers; offensive activities; perception synthesis; toxicity metrics; vulnerability types; Computer crime; Computer hacking; Equations; Games; Measurement; Analytic Hierarchy Process; Cyber situational awareness; Framework; Hacking; Vulnerability  (ID#: 15-3834)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6968639&isnumber=6968191

 

Vellaithurai, C.; Srivastava, A.; Zonouz, S.; Berthier, R., "CPINDEX: Cyber-Physical Vulnerability Assessment for Power-Grid Infrastructures," Smart Grid, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 08 December 2014. doi: 10.1109/TSG.2014.2372315 To protect complex power-grid control networks, power operators need efficient security assessment techniques that take into account both the cyber side and the power side of cyber-physical critical infrastructures. In this paper, we present CPINDEX, a security-oriented stochastic risk management technique that calculates cyber-physical security indices to measure the security level of the underlying cyber-physical setting. CPINDEX installs appropriate cyber-side instrumentation probes on individual host systems to dynamically capture and profile low-level system activities such as interprocess communications among operating system assets. CPINDEX uses the generated logs, along with topological information about the power network configuration, to build stochastic Bayesian network models of the whole cyber-physical infrastructure and update them dynamically based on the current state of the underlying power system. Finally, CPINDEX implements belief propagation algorithms on the created stochastic models, combined with a novel graph-theoretic power system indexing algorithm, to calculate the cyber-physical index, i.e., to measure the security level of the system's current cyber-physical state. The results of our experiments with actual attacks against a real-world power control network show that CPINDEX can, within a few seconds, compute numerical indices that correctly indicate a progressing malicious attack.

Keywords: Generators; Indexes; Power measurement; Security; Smart grids; Cyber-physical security metrics; cyber-physical systems; intrusion detection systems; situational awareness  (ID#: 15-3835)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979242&isnumber=5446437

 

Pirinen, R., "Studies of Integration Readiness Levels: Case Shared Maritime Situational Awareness System," Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint, pp. 212-215, 24-26 Sept. 2014. doi: 10.1109/JISIC.2014.79 The research question of this study is: how can Integration Readiness Level (IRL) metrics be understood and realized in the domain of border control information systems? The study addresses the IRL metrics and their definition, criteria, references, and questionnaires for the validation of border control information systems, in the case of the shared maritime situational awareness system. The study targets improvements in acceptance, operational validation, risk assessment, and the development of sharing mechanisms, information system integration, and border control information interaction and collaboration concepts in the Finnish national and European border control domains.

Keywords: national security; risk analysis; surveillance; European border control domains; Finnish national border control domains; IRL metrics; border control information interactions; border control information systems; information system integration; integration readiness level metrics; operational validation; risk assessment; shared maritime situational awareness system; sharing mechanisms; Buildings; Context; Control systems; Information systems; Interviews; measurement; Systems engineering and theory; integration; integration readiness levels; maturity; pre-operational validation; situational awareness  (ID#: 15-3836)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975575&isnumber=6975536

 

Robertson, J., "Integrity of a Common Operating Picture in Military Situational Awareness," Information Security for South Africa (ISSA), 2014, pp. 1-7, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950514 The lack of qualification of a common operating picture (COP) directly impacts the situational awareness of military Command and Control (C2). Since a commander is reliant on situational awareness information in order to make decisions regarding military operations, the COP needs to be trustworthy and provide accurate information on which the commander can base decisions. If the COP's integrity is questioned, there is currently no definite way of establishing it. This paper looks into the integrity of the COP and how it can impact situational awareness. It discusses a potential solution to this problem on which future research can be based.

Keywords: command and control systems; decision making; military computing; C2; COP integrity; common operating picture integrity; decision making; military command and control; military operations; military situational awareness; situational awareness information; Cameras; Microwave integrated circuits; Weight measurement; Wireless communication; Command and Control; Common Operating Picture; Integrity; Situational Awareness  (ID#: 15-3837)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950514&isnumber=6950479

 

Fernandez Arguedas, V.; Pallotta, G.; Vespe, M., "Automatic Generation of Geographical Networks for Maritime Traffic Surveillance," Information Fusion (FUSION), 2014 17th International Conference on, pp. 1-8, 7-10 July 2014. In this paper, an algorithm is proposed to automatically produce hierarchical graph-based representations of maritime shipping lanes extrapolated from historical vessel positioning data. Each shipping lane is generated based on the detection of the vessel behavioural changes and represented in a compact synthetic route composed of the network nodes and route segments. The outcome of the knowledge discovery process is a geographical maritime network that can be used in Maritime Situational Awareness (MSA) applications such as track reconstruction from missing information, situation/destination prediction, and detection of anomalous behaviour. Experimental results are presented, testing the algorithm in a specific scenario of interest, the Dover Strait.

Keywords: geographic information systems; marine systems; surveillance; traffic; automatic generation; geographical maritime network; hierarchical graph based representations; historical vessel positioning data; knowledge discovery process; maritime shipping lanes; maritime traffic surveillance; network nodes; route segments; track reconstruction; Knowledge discovery; Ports (Computers); Security; Standards; Surveillance; Trajectory; Anomaly Detection; Maritime Knowledge Discovery; Maritime Surveillance; Maritime Traffic Networks; Trajectory Mining and Synthetic Trajectories  (ID#: 15-3838)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6915990&isnumber=6915967
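
One way to sketch the behavioural-change step is to treat sharp course alterations as candidate network nodes and the legs between them as route segments. The numpy toy below uses an invented track and a 30-degree turn threshold; the paper's algorithm operates on historical AIS data and is more elaborate:

```python
import numpy as np

track = np.array([[0, 0], [1, 0.1], [2, 0], [3, 0.1],    # steady easterly leg
                  [3.5, 1], [3.6, 2], [3.5, 3],           # sharp turn north
                  [4.5, 3.1], [5.5, 3], [6.5, 3.1]])      # easterly again

def waypoints(track, turn_deg=30):
    d = np.diff(track, axis=0)
    headings = np.degrees(np.arctan2(d[:, 1], d[:, 0]))   # leg-by-leg course
    change = np.abs(np.diff(headings))
    change = np.minimum(change, 360 - change)              # handle wrap-around
    turns = np.where(change > turn_deg)[0] + 1             # behavioural changes
    return np.vstack([track[0], track[turns], track[-1]])

nodes = waypoints(track)                                   # network nodes
segments = list(zip(nodes[:-1], nodes[1:]))                # compact route segments
print("network nodes:\n", nodes)
print("route segments:", len(segments))
```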


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Text Analytics Techniques (2014 Year in Review)

 

 
SoS Newsletter Logo

Text Analytics Techniques
(2014 Year in Review)

 

Text analytics refers to linguistic, statistical, and machine learning techniques that model and structure the information content of textual sources for intelligence, exploratory data analysis, research, or investigation. The research cited here focuses on mining large volumes of text to identify insider threats, intrusions, and malware. The works cited were published in 2014.

 

Dey, L.; Mahajan, D.; Gupta, H., "Obtaining Technology Insights from Large and Heterogeneous Document Collections," Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014 IEEE/WIC/ACM International Joint Conferences on, vol. 1, pp. 102-109, 11-14 Aug. 2014. doi: 10.1109/WI-IAT.2014.22 Keeping up with rapid advances in research in various fields of Engineering and Technology is a challenging task. Decision makers including academics, program managers, venture capital investors, industry leaders and funding agencies not only need to be abreast of latest developments but also be able to assess the effect of growth in certain areas on their core business. Though analyst agencies like Gartner and McKinsey provide such reports for some areas, thought leaders of all organisations still need to amass data from heterogeneous collections like research publications, analyst reports, patent applications, and competitor information to help them finalize their own strategies. Text mining and data analytics researchers have been looking at integrating statistics, text analytics and information visualization to aid the process of retrieval and analytics. In this paper, we present our work on automated topical analysis and insight generation from large heterogeneous text collections of publications and patents. While most of the earlier work in this area provides search-based platforms, ours is an integrated platform for search and analysis. We present several methods and techniques that help in the analysis and better comprehension of search results, along with methods for generating insights about emerging and popular trends in research and about contextual differences between academic research and patenting profiles. We also present novel techniques for presenting topic evolution that help users understand how a particular area has evolved over time.

Keywords: data analysis; information retrieval; patents; text analysis; academic research; automated topical analysis; heterogeneous document collections; insight generation; large heterogeneous text collections; patenting profiles; publications; topic evolution; Context; Data mining; Data visualization; Hidden Markov models; Indexing; Market research; Patents; analyzing research trends; mining patent databases; mining publications  (ID#: 15-3757)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6927531&isnumber=6927508

 

Heimerl, F.; Lohmann, S.; Lange, S.; Ertl, T., "Word Cloud Explorer: Text Analytics Based on Word Clouds," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 1833-1842, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.231 Word clouds have emerged as a straightforward and visually appealing visualization method for text. They are used in various contexts as a means to provide an overview by distilling text down to those words that appear with highest frequency. Typically, this is done in a static way as pure text summarization. We think, however, that there is a larger potential to this simple yet powerful visualization paradigm in text analytics. In this work, we explore the usefulness of word clouds for general text analysis tasks. We developed a prototypical system called the Word Cloud Explorer that relies entirely on word clouds as a visualization method. It equips them with advanced natural language processing, sophisticated interaction techniques, and context information. We show how this approach can be effectively used to solve text analysis tasks and evaluate it in a qualitative user study.

Keywords: data visualisation; natural language processing; text analysis; context information; natural language processing; sophisticated interaction techniques; text analysis tasks; text analytics; text summarization; visualization method; visualization paradigm; word cloud explorer; word clouds; Context; Layout; Pragmatics; Tag clouds; Text analysis; User interfaces; Visualization; interaction; natural language processing; tag clouds; text analytics; visualization; word cloud explorer; word clouds  (ID#: 15-3758)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758829&isnumber=6758592
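
The distillation word clouds rest on is just frequency counting. A minimal sketch, with an invented stop-word list and a naive font-size rule:

```python
import re
from collections import Counter

STOP = {"the", "a", "of", "and", "to", "in", "is", "for", "that", "with", "as", "by"}
text = ("Word clouds provide an overview by distilling text down to those words "
        "that appear with highest frequency, typically as pure text summarization.")
words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP]
for word, n in Counter(words).most_common(8):
    print(f"{word:15s} size={12 + 6 * n}px")   # naive frequency-to-font-size rule
```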

 

Mukkamala, R.R.; Hussain, A.; Vatrapu, R., "Towards a Set Theoretical Approach to Big Data Analytics," Big Data (BigData Congress), 2014 IEEE International Congress on, pp. 629-636, 27 June-2 July 2014. doi: 10.1109/BigData.Congress.2014.96 Formal methods, models and tools for social big data analytics are largely limited to graph theoretical approaches such as social network analysis (SNA) informed by relational sociology. There are no other unified modeling approaches to social big data that integrate the conceptual, formal and software realms. In this paper, we first present and discuss a theory and conceptual model of social data. Second, we outline a formal model based on set theory and discuss the semantics of the formal model with a real-world social data example from Facebook. Third, we briefly present and discuss the Social Data Analytics Tool (SODATO) that realizes the conceptual model in software and provisions social data analysis based on the conceptual and formal models. Fourth and last, based on the formal model and sentiment analysis of text, we present a method for profiling of artifacts and actors and apply this technique to the data analysis of big social data collected from the Facebook page of the fast fashion company H&M.

Keywords: Big Data; data analysis; set theory; social networking (online); text analysis; Facebook; Facebook page; H&M; SODATO; conceptual model; fast fashion company; formal model; graph theoretical approach; relational sociology; set theoretical approach; social big data analytics; social data analytic tool; social network analysis; text sentiment analysis; Analytical models; Data models; Facebook; Mathematical model; Media; Tagging; Big Social Data; Computational Social Science; Data Science; Formal Methods; Social Data Analytics  (ID#: 15-3759)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906838&isnumber=6906742

 

Koyanagi, T.; Shinjo, Y., "A Fast and Compact Hybrid Memory Resident Datastore for Text Analytics with Autonomic Memory Allocation," Information and Communication Systems (ICICS), 2014 5th International Conference on, pp. 1-7, 1-3 April 2014. doi: 10.1109/IACS.2014.6841955 This paper describes a high-performance and space-efficient memory-resident datastore for text analytics systems, based on a hash table for fast access, a dynamic trie for staging, and a list of Level-Order Unary Degree Sequence (LOUDS) tries for compactness. We achieve efficient memory allocation and data placement by placing frequently accessed keys in the hash table and infrequently accessed keys in the LOUDS tries, without using conventional cache algorithms. Our algorithm also dynamically changes memory allocation sizes for these data structures according to the remaining available memory size. This technique yields 38.6% to 52.9% better throughput than a double-array trie - a conventional fast and compact datastore.

Keywords: storage management; text analysis; tree data structures; LOUDS tries; autonomic memory allocation; data placement; data structures; double array trie; dynamic trie; hash table; high-performance memory-resident datastore; hybrid memory resident datastore; level-order unary degree sequence tries; space-efficient memory-resident datastore; text analytics; Buffer storage; Cows; SDRAM; Switches  (ID#: 15-3760)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841955&isnumber=6841931
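
The placement policy can be sketched with two tiers: a hash table for hot keys and a compact sorted store standing in for the LOUDS tries. The promotion threshold below is invented, and the real system also resizes both structures against remaining memory:

```python
import bisect

class HybridStore:
    """Hot keys in a hash table; cold keys in a compact sorted store."""
    def __init__(self, hot_limit=4, promote_after=3):
        self.hot = {}                              # fast path: frequent keys
        self.cold_keys, self.cold_vals = [], []    # compact path: sorted arrays
        self.hits = {}
        self.hot_limit, self.promote_after = hot_limit, promote_after

    def put(self, key, value):
        i = bisect.bisect_left(self.cold_keys, key)
        if i < len(self.cold_keys) and self.cold_keys[i] == key:
            self.cold_vals[i] = value
        else:
            self.cold_keys.insert(i, key)
            self.cold_vals.insert(i, value)

    def get(self, key):
        if key in self.hot:
            return self.hot[key]
        i = bisect.bisect_left(self.cold_keys, key)
        if i == len(self.cold_keys) or self.cold_keys[i] != key:
            raise KeyError(key)
        value = self.cold_vals[i]
        self.hits[key] = self.hits.get(key, 0) + 1
        if self.hits[key] >= self.promote_after and len(self.hot) < self.hot_limit:
            self.hot[key] = value                  # promote a frequently accessed key
            del self.cold_keys[i]
            del self.cold_vals[i]
        return value

store = HybridStore()
store.put("alpha", 1)
store.put("beta", 2)
for _ in range(3):
    store.get("alpha")                             # third access promotes it
print("hot:", store.hot, "cold:", store.cold_keys)
```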

 

Craig, P.; Roa Seiler, N.; Olvera Cervantes, A.D., "Animated Geo-temporal Clusters for Exploratory Search in Event Data Document Collections," Information Visualisation (IV), 2014 18th International Conference on, pp. 157-163, 16-18 July 2014. doi: 10.1109/IV.2014.69 This paper presents a novel visual analytics technique developed to support exploratory search tasks for event data document collections. The technique supports discovery and exploration by clustering results and overlaying cluster summaries onto coordinated timeline and map views. Users can also explore and interact with search results by selecting clusters to filter and re-cluster the data with animation used to smooth the transition between views. The technique demonstrates a number of advantages over alternative methods for displaying and exploring geo-referenced search results and spatio-temporal data. Firstly, cluster summaries can be presented in a manner that makes them easy to read and scan. Listing representative events from each cluster also helps the process of discovery by preserving the diversity of results. Also, clicking on visual representations of geo-temporal clusters provides a quick and intuitive way to navigate across space and time simultaneously. This removes the need to overload users with the display of too many event labels at any one time. The technique was evaluated with a group of nineteen users and compared with an equivalent text based exploratory search engine.

Keywords: computer animation; data visualisation; document handling; document image processing; information retrieval; pattern clustering; animated geo-temporal clusters; animation; coordinated timeline; equivalent text based exploratory search engine; event data document collections; geo-referenced search results; map views; spatio-temporal data; visual analytics technique; Data visualization; Electronic publishing; Encyclopedias; History; Internet; Navigation; human-computer information retrieval; information visualisation; visual analytics  (ID#: 15-3761)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6902897&isnumber=6902857

 

Eun Hee Ko; Klabjan, D., "Semantic Properties of Customer Sentiment in Tweets," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp. 657-663, 13-16 May 2014. doi: 10.1109/WAINA.2014.151 An increasing number of people are using online social networking services (SNSs), and a significant amount of information related to experiences in consumption is shared in this new media form. Text mining is an emerging technique for mining useful information from the web. We aim at discovering semantic patterns in consumers' discussions on social media, in particular in tweets. Specifically, the purposes of this study are twofold: 1) finding similarity and dissimilarity between two sets of textual documents that include consumers' sentiment polarities, two forms of positive vs. negative opinions, and 2) deriving actual content from the textual data that has a semantic trend. The considered tweets include consumers' opinions on US retail companies (e.g., Amazon, Walmart). Cosine similarity and K-means clustering methods are used to achieve the former goal, and Latent Dirichlet Allocation (LDA), a popular topic modeling algorithm, is used for the latter purpose. This is the first study that discovers semantic properties of textual data in a consumption context beyond sentiment analysis. In addition to major findings, we apply LDA to the same data and draw latent topics that represent consumers' positive and negative opinions on social media.

Keywords: consumer behaviour; data mining; pattern clustering; retail data processing; social networking (online); text analysis; K-means clustering methods; Twitter; US retail companies; consumer opinions; consumer sentiment polarities; cosine similarity; customer sentiment semantic properties; latent Dirichlet allocation; online social networking services; sentiment analysis; text mining; textual data semantic properties; textual documents; topic modeling algorithm; tweet semantic patterns; Business; Correlation; Data mining; Media; Semantics; Tagging; Vectors; text analytics; tweet analysis; document similarity; clustering; topic modeling; part-of-speech tagging  (ID#: 15-3762)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844713&isnumber=6844560
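
The toolchain named in the abstract maps directly onto scikit-learn. The sketch below runs cosine similarity, k-means, and LDA on a handful of invented retail tweets in place of the study's Twitter corpus:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pos = ["fast shipping great prices", "love the easy returns", "great store layout"]
neg = ["late delivery again", "checkout lines too long", "rude support late refund"]
docs = pos + neg

X = TfidfVectorizer().fit_transform(docs)
c_pos = np.asarray(X[:3].mean(axis=0))            # centroid of positive tweets
c_neg = np.asarray(X[3:].mean(axis=0))            # centroid of negative tweets
print("pos-vs-neg similarity:", cosine_similarity(c_pos, c_neg)[0, 0])

print("k-means labels:",
      KMeans(n_clusters=2, n_init=10, random_state=0).fit(X).labels_)

counts = CountVectorizer()
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts.fit_transform(docs))
terms = counts.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    print(f"topic {k}:", ", ".join(terms[i] for i in comp.argsort()[-4:][::-1]))
```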

 

Conglei Shi; Yingcai Wu; Shixia Liu; Hong Zhou; Huamin Qu, "LoyalTracker: Visualizing Loyalty Dynamics in Search Engines," Visualization and Computer Graphics, IEEE Transactions on, vol. 20, no. 12, pp. 1733-1742, Dec. 2014. doi: 10.1109/TVCG.2014.2346912 The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, this also poses a great challenge to analyze the behavior and glean insights into the complex, large data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the dynamics of user loyalty of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and the interview with domain experts are conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.

Keywords: data analysis; data visualisation; human factors; search engines; text analysis; LoyalTracker; defection behavior; density map; flow metaphor; flow view; interactive visualization technique; loyalty dynamics visualization; search engine providers; switching behavior; user log data; user loyalty tracking; visual analytics system; word cloud; Behavioral science; Data visualization; Information analysis; Search engines; Search methods; Visual analytics; Time-series visualization; log data visualization; stacked graphs; text visualization  (ID#: 15-3763)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876038&isnumber=6935054

 

Babour, A.; Khan, J.I., "Tweet Sentiment Analytics with Context Sensitive Tone-Word Lexicon," Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014 IEEE/WIC/ACM International Joint Conferences on, vol. 1, pp. 392-399, 11-14 Aug. 2014. doi: 10.1109/WI-IAT.2014.61 In this paper we propose a Twitter sentiment analytics technique that mines for opinion polarity about a given topic. Most current semantic sentiment analytics depends on polarity lexicons; however, many key tone words are frequently bipolar. In this paper we demonstrate a technique that accommodates the bipolarity of tone words through a context-sensitive tone lexicon learning mechanism, where the context is modeled by the semantic neighborhood of the main target. Performance analysis shows that the ability to contextualize tone word polarity significantly improves the accuracy.

Keywords: data mining; learning (artificial intelligence); natural language processing; social networking (online);text analysis; word processing; context sensitive tone lexicon learning mechanism; opinion polarity mining; tone word polarity; tweet sentiment analytics; twitter sentiment analytics; Accuracy; Cameras; Context; Dictionaries; Semantics; Sentiment analysis  (ID#: 15-3764)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6927570&isnumber=6927508
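
The bipolar-word idea can be shown with a micro-lexicon: a tone word such as "killer" takes its polarity from the semantic neighborhood of the target rather than from a fixed entry. The lexicon contents and window size below are invented for illustration:

```python
FIXED = {"great": 1, "terrible": -1}               # invented unipolar entries
BIPOLAR = {"killer": {"app": 1, "feature": 1,      # invented bipolar entry:
                      "bug": -1, "virus": -1}}     # polarity depends on context

def tweet_polarity(tokens, window=2):
    score = 0
    for i, tok in enumerate(tokens):
        if tok in FIXED:
            score += FIXED[tok]
        elif tok in BIPOLAR:
            # Resolve the bipolar word from its neighborhood window.
            context = tokens[max(0, i - window): i + window + 1]
            score += sum(BIPOLAR[tok].get(c, 0) for c in context)
    return score

print(tweet_polarity("this is a killer app".split()))         # +1: positive sense
print(tweet_polarity("update shipped a killer bug".split()))  # -1: negative sense
```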

 

Vantigodi, S.; Babu, R.V., "Entropy Constrained Exemplar-Based Image Inpainting," Signal Processing and Communications (SPCOM), 2014 International Conference on, pp. 1-5, 22-25 July 2014. doi: 10.1109/SPCOM.2014.6984013 Image inpainting is the process of filling the unwanted region in an image marked by the user. It is used for restoring old paintings and photographs, removal of red eyes from pictures, etc. In this paper, we propose an efficient inpainting algorithm which takes care of false edge propagation. We use the classical exemplar-based technique to find the priority term for each patch. To ensure that the edge content of the nearest-neighbor patch (found by minimizing the L2 distance between patches) matches the patch being filled, we impose an additional constraint that the entropies of the patches be similar. Entropy of the patch acts as a good measure of edge content. Additionally, we fill the image by considering overlapping patches to ensure smoothness in the output. We use the structural similarity index as the measure of similarity between the ground truth and the inpainted image. The results of the proposed approach on a number of real and synthetic images show the effectiveness of our algorithm in removing objects and thin scratches or text written on an image. It is also shown that the proposed approach is robust to the shape of the manually selected target. Our results compare favorably to those obtained by existing techniques.

Keywords: edge detection; entropy; image restoration; entropy constrained exemplar-based image inpainting; false edge propagation; old painting restoration; photograph restoration; structural similarity index; Entropy; Equations; Image color analysis; Image edge detection; Image reconstruction; Image restoration; PSNR  (ID#: 15-3765)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984013&isnumber=6983903

 

Baughman, A.K.; Chuang, W.; Dixon, K.R.; Benz, Z.; Basilico, J., "DeepQA Jeopardy! Gamification: A Machine-Learning Perspective," Computational Intelligence and AI in Games, IEEE Transactions on, vol. 6, no. 1, pp. 55-66, March 2014. doi: 10.1109/TCIAIG.2013.2285651 DeepQA is a large-scale natural language processing (NLP) question-and-answer system that responds across a breadth of structured and unstructured data, from hundreds of analytics that are combined with over 50 models, trained through machine learning. After the 2011 historic milestone of defeating the two best human players in the Jeopardy! game show, the technology behind IBM Watson, DeepQA, is undergoing gamification into real-world business problems. Gamifying a business domain for Watson is a composite of functional, content, and training adaptation for nongame play. During domain gamification for medical, financial, government, or any other business, each system change affects the machine-learning process. As opposed to the original Watson Jeopardy!, whose class distribution of positive-to-negative labels is 1:100, in adaptation the computed training instances, question-and-answer pairs transformed into true-false labels, result in a very low positive-to-negative ratio of 1:100 000. Such initial extreme class imbalance during domain gamification poses a big challenge for the Watson machine-learning pipelines. The combination of ingested corpus sets, question-and-answer pairs, configuration settings, and NLP algorithms contribute toward the challenging data state. We propose several data engineering techniques, such as answer key vetting and expansion, source ingestion, oversampling classes, and question set modifications to increase the computed true labels. In addition, algorithm engineering, such as an implementation of the Newton-Raphson logistic regression with a regularization term, relaxes the constraints of class imbalance during training adaptation. We conclude by empirically demonstrating that data and algorithm engineering are complementary and indispensable to overcome the challenges in this first Watson gamification for real-world business problems.

Keywords: business data processing; computer games; learning (artificial intelligence); natural language processing; question answering (information retrieval); text analysis; DeepQA Jeopardy! gamification; NLP algorithms; NLP question-and-answer system; Newton-Raphson logistic regression; Watson gamification; Watson machine-learning pipelines; algorithm engineering; business domain; configuration settings; data engineering techniques; domain gamification; extreme class imbalance; ingested corpus sets; large-scale natural language processing question-and-answer system; machine-learning process; nongame play; positive-to-negative ratio; question-and-answer pairs; real-world business problems; regularization term; structured data; training instances; true-false labels; unstructured data; Accuracy; Games; Logistics; Machine learning algorithms; Pipelines; Training; Gamification; machine learning; natural language processing (NLP); pattern recognition  (ID#: 15-3766)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6632881&isnumber=6766678
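
For readers who want to see one of the named techniques in code, the sketch below is a minimal Python implementation of Newton-Raphson logistic regression with an L2 regularization term, the method cited in the abstract. It is an illustrative sketch only, not IBM's Watson pipeline; the function name, parameters, and toy imbalanced data are our own.

import numpy as np

def newton_logistic(X, y, lam=1.0, iters=20):
    # L2-regularized logistic regression fit by Newton-Raphson.
    # X: (n, d) features; y: (n,) labels in {0, 1}; lam: regularization
    # strength, the term that relaxes the ill-conditioning that extreme
    # class imbalance can cause.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))             # predicted probabilities
        g = X.T @ (p - y) + lam * w                  # gradient of penalized loss
        H = X.T @ (X * (p * (1 - p))[:, None]) + lam * np.eye(d)  # Hessian
        w -= np.linalg.solve(H, g)                   # Newton step
    return w

# Toy data with a roughly 1:100 positive-to-negative ratio
rng = np.random.default_rng(0)
X = rng.normal(size=(1010, 3))
y = np.concatenate([np.ones(10), np.zeros(1000)])
X[:10] += 2.0                                        # shift the rare positives
print(newton_logistic(X, y, lam=0.5))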

 

Zadeh, B.Q.; Handschuh, S., "Random Manhattan Indexing," Database and Expert Systems Applications (DEXA), 2014 25th International Workshop on, pp.203,208, 1-5 Sept. 2014. doi: 10.1109/DEXA.2014.51 Vector space models (VSMs) are mathematically well-defined frameworks that have been widely used in text processing. In these models, high-dimensional, often sparse vectors represent text units. In an application, the similarity of vectors -- and hence the text units that they represent -- is computed by a distance formula. The high dimensionality of vectors, however, is a barrier to the performance of methods that employ VSMs. Consequently, a dimensionality reduction technique is employed to alleviate this problem. This paper introduces a new method, called Random Manhattan Indexing (RMI), for the construction of L1 normed VSMs at reduced dimensionality. RMI combines the construction of a VSM and dimension reduction into an incremental, and thus scalable, procedure. In order to attain its goal, RMI employs the sparse Cauchy random projections.

Keywords: data reduction; indexing; text analysis; L1 normed VSM; RMI; dimensionality reduction technique; natural language text; random Manhattan indexing; sparse Cauchy random projections; vector space model; Computational modeling; Context; Equations; Indexing; Mathematical model; Vectors; Manhattan distance; dimensionality reduction; random projection; retrieval models; vector space model  (ID#: 15-3767)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6974850&isnumber=6974758
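
To make the construction concrete, the following Python sketch illustrates the principle behind Cauchy random projections: entries of a sparse projection matrix are drawn from the standard Cauchy distribution, and the L1 distance between two original vectors is recovered from the projections with a median estimator. This is a simplified illustration of the idea, not the exact incremental RMI procedure; the density parameter and its correction are our own approximations.

import numpy as np

def sparse_cauchy_matrix(k, d, density=0.1, seed=0):
    # k x d projection matrix: each entry is standard Cauchy with
    # probability `density` and zero otherwise.
    rng = np.random.default_rng(seed)
    mask = rng.random((k, d)) < density
    return np.where(mask, rng.standard_cauchy((k, d)), 0.0)

def l1_estimate(u, v, density):
    # Each coordinate of u - v is Cauchy-distributed with scale
    # proportional to the original L1 distance; the median of absolute
    # values recovers that scale. Dividing by the density approximately
    # corrects for sparsification.
    return np.median(np.abs(u - v)) / density

d, k, density = 10000, 400, 0.1
R = sparse_cauchy_matrix(k, d, density)
rng = np.random.default_rng(1)
x, y = rng.random(d), rng.random(d)
print("true L1:     ", np.sum(np.abs(x - y)))
print("RMI estimate:", l1_estimate(R @ x, R @ y, density))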

 

Koch, S.; John, M.; Worner, M.; Muller, A.; Ertl, T., "VarifocalReader — In-Depth Visual Analysis of Large Text Documents," Visualization and Computer Graphics, IEEE Transactions on, vol. 20, no.12, pp. 1723, 1732, Dec. 31 2014. doi: 10.1109/TVCG.2014.2346677 Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem is typically solved by offering overview-detail techniques, which present different views with different levels of abstractions. This often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. This helps to extract entities, concepts and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.

Keywords: data visualisation; learning (artificial intelligence); text analysis; document analysis; focus-context techniques; in-depth visual analysis; intermediate text levels; literary analysis; machine learning techniques; natural language processing; text documents; text mining; VarifocalReader; visual abstraction; Data mining; Data visualization; Document handling; Interactive systems; Natural language processing; Navigation; Tag clouds; Text mining; distant reading; document analysis; literary analysis; machine learning; natural language processing; text mining; visual analytics  (ID#: 15-3768)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875959&isnumber=6935054
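
The intermediate text levels mentioned above rest on automatic topic segmentation. As a generic illustration of that step, and not VarifocalReader's actual implementation, the Python sketch below proposes cuts where the vocabulary of adjacent windows of sentences diverges, in the spirit of TextTiling; the window size and threshold are illustrative.

from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def segment(sentences, window=3, threshold=0.1):
    # Propose a topic boundary before sentence i whenever the lexical
    # similarity of the `window` sentences on either side drops below
    # the threshold.
    bows = [Counter(s.lower().split()) for s in sentences]
    cuts = []
    for i in range(window, len(bows) - window):
        left = sum(bows[i - window:i], Counter())
        right = sum(bows[i:i + window], Counter())
        if cosine(left, right) < threshold:
            cuts.append(i)
    return cuts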

 

Lomotey, R.K.; Deters, R., "Terms Mining in Document-Based NoSQL: Response to Unstructured Data," Big Data (BigData Congress), 2014 IEEE International Congress on, pp. 661, 668, June 27 2014-July 2 2014. doi: 10.1109/BigData.Congress.2014.99  Unstructured data mining has become topical recently due to the availability of high-dimensional and voluminous digital content (known as "Big Data") across the enterprise spectrum. The Relational Database Management Systems (RDBMS) have been employed over the past decades for content storage and management, but, the ever-growing heterogeneity in today's data calls for a new storage approach. Thus, the NoSQL database has emerged as the preferred storage facility nowadays since the facility supports unstructured data storage. This creates the need to explore efficient data mining techniques from such NoSQL systems since the available tools and frameworks which are designed for RDBMS are often not directly applicable. In this paper, we focused on topics and terms mining, based on clustering, in document-based NoSQL. This is achieved by adapting the architectural design of an analytics-as-a-service framework and the proposal of the Viterbi algorithm to enhance the accuracy of the terms classification in the system. The results from the pilot testing of our work show higher accuracy in comparison to some previously proposed techniques such as the parallel search.

Keywords: Big Data; data mining; database management systems; document handling; pattern classification; pattern clustering; text analysis; Big Data; NoSQL database; Viterbi algorithm; analytics-as-a-service framework; clustering; data mining techniques; document-based NoSQL; term classification; terms mining; topics mining; unstructured data storage; Big data; Classification algorithms; Data mining; Databases; Dictionaries; Semantics; Viterbi algorithm; Association Rules; Big Data; NoSQL; Terms; Unstructured Data Mining; Viterbi algorithm; classification; clustering  (ID#: 15-3769)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906842&isnumber=6906742
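
Since the accuracy gain reported above rests on the Viterbi algorithm, a standard dynamic program for recovering the most likely hidden-state sequence, a generic Python sketch may help readers unfamiliar with it. This is a textbook formulation, not the authors' analytics-as-a-service implementation; the two-state model at the bottom is illustrative only.

import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    # obs: observation indices; start_p: (S,) initial probabilities;
    # trans_p: (S, S) state transitions; emit_p: (S, V) emissions.
    S, T = len(start_p), len(obs)
    logv = np.empty((T, S))              # best log-probability ending in each state
    back = np.zeros((T, S), dtype=int)   # backpointers
    logv[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(S):
            scores = logv[t - 1] + np.log(trans_p[:, s])
            back[t, s] = np.argmax(scores)
            logv[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
    path = [int(np.argmax(logv[-1]))]    # trace the best path backwards
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two hidden classes ("topic term" vs. "other"), three observation types
start = np.array([0.5, 0.5])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit  = np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 0, 2, 2, 1], start, trans, emit))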


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

SURE Meeting Presentations 2015 March 17-18

 
SoS Logo

SecUrity and REsilience

for

Cyber-Physical Systems (SURE) Presentations

SURE Meeting Presentations 2015 March 17-18

The research projects presented at the Six Month Review meeting between NSA and the four System Science of SecUrity and REsilience for Cyber-Physical Systems (SURE) project universities--Vanderbilt, Hawaii, California-Berkeley, and MIT-- covered behavioral and technical subjects related to resiliency and introduced the Resilient Cyber Physical Systems Testbed.  A summary of each presentation and a link to the original document are provided.

 

Project Overview; Xenofon Koutsoukos (Vanderbilt) URL: http://cps-vo.org/node/18484 Project thrusts are: hierarchical coordination and control; risk analysis and incentive design, which aim at developing regulations and strategies at the management level; resilient monitoring and control of the networked control system infrastructure; science of decentralized security, which aims to develop a framework that will enable reasoning about the security of all the integrated constituent CPS components; reliable and practical reasoning about secure computation and communication in networks, which aims to contribute a formal framework for reasoning about security in CPS; evaluation and experimentation using modeling and simulation integration of cyber and physical platforms that directly interface with human decision makers; and education and outreach.

 

Evaluation Testbed; Peter Volgyesi and Himanshu Neema (Vanderbilt University) URL: http://cps-vo.org/node/18483  The objectives of the RCPS Testbed are to develop and maintain well-defined domains, languages, rules, tools, and metrics; integrate existing robust domain tools and technologies, simulators, analysis tools, and middleware; maintain model libraries and repositories; support Red Team vs. Blue Team scenarios and challenges; simulate real adversary behavior; and provide integration technology, meta-programmable tools, strong versioning, web-based interfaces, and cloud-based, scalable services. Current research being conducted on the RCPS Testbed includes complex attack strategies, an attack description language, using and orchestrating existing atomic actions, adversarial risk analysis, repeated automated simulation runs (probabilistic interdependency graphs, optimization), resilient monitoring and control, and the science of decentralized security.

 

Demo: Resilient and Secure Component-Based Software for CPS Architectures; William Emfinger and Pranav Kumar (Vanderbilt University) URL: http://cps-vo.org/node/18517   The RCPS Testbed consists of embedded system hardware with hosts running actual code; a physical system simulator, where code running on the hosts communicates with the physics simulator to read current sensor state and to control the actuators; a smart network switch that allows emulation of network resources to accurately emulate the system's network; integrated analysis and measurement tools; and modeling tools, code generators, and deployment/monitoring utilities. The demonstration showed many of these features using a simulated GPS satellite constellation.

 

Science of Adversarial Risk in CPS; Yevgeniy Vorobeychik (Vanderbilt University) URL: http://cps-vo.org/node/18479 CPS security relies on many individual decision makers making good choices. Risk stems from choices which are optimal for individuals but not for the system as a whole; in most real CPS security settings, the system involves multiple defenders, each "charged" with security for a subset of assets. When security decisions are decentralized and decision makers have different interests, system-level security can be sub-optimal. Next steps will be to use simulation as a "multi-defender" platform to form a bridge into the evaluation testbed and to develop automated methods for CPS model-based risk analysis in GME using the attack description language.

 

Incentive Mechanisms for CPS Security; Saurabh Amin (MIT) URL: http://cps-vo.org/node/18489  Incentive mechanisms are needed to encourage the building of secure systems. Under certain regulatory regimes, electricity distributors make sub-optimal investments in monitoring; users steal less when fines are higher or the detection probability is higher, and the distributor invests more in monitoring when monitoring costs are lower or user stealing is higher. Due to information deficiencies, reliability (R) and security (S) are interdependent, and the equilibrium depends on the relative frequencies of failures and the reliability failure distribution. Defenders should co-design defenses against faults and attacks. Contributions of the work are a network game with interdependent reliability and security, a full characterization of equilibria, and a polynomial-time algorithm for enumerating all equilibria. Future work will study defender interactions with multiple strategic attackers, game parameters not known to all players, link capacities, and edge reinforcement.
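
The equilibrium-enumeration claim can be made concrete with a toy example. The Python sketch below enumerates the pure-strategy Nash equilibria of a small bimatrix monitoring game by best-response checks; it is a generic textbook illustration with made-up payoffs, not the paper's polynomial-time algorithm for its network game.

import numpy as np

def pure_nash(A, B):
    # (i, j) is a pure Nash equilibrium iff i is a best response to j for
    # the row player and j is a best response to i for the column player.
    return [(i, j)
            for i in range(A.shape[0])
            for j in range(A.shape[1])
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()]

# Toy inspection game: distributor rows (monitor / don't monitor),
# user columns (steal / don't steal); payoffs are illustrative only.
A = np.array([[ 2, -1],      # distributor's payoffs
              [-3,  0]])
B = np.array([[-2,  0],      # user's payoffs
              [ 3,  0]])
print(pure_nash(A, B))       # prints []: this game has only a mixed
                             # equilibrium, as is typical of
                             # monitoring-versus-stealing interactions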

 

Putting Humans in the Loop: Active Learning at Scale for Malware Detection; Anthony Joseph (UC Berkeley)  URL: http://cps-vo.org/node/18489  This study looks at the use of machine learning to separate positive (malicious) from negative (benign) instances. Topics include security analytics, using robust ML for adversary-resistant security, metrics, and analytics; pattern mining and prediction, at scale, on big data, with adversaries; detecting and classifying malicious actions within cyber-physical systems, malware, and spam; and situational awareness, that is, helping the humans in the loop with real-time, machine-learning-based analytics for human domain experts. The work interacts with multiple thrusts: hierarchical coordination and control via an ML pipeline addressing CPS security needs for resilient monitoring and control, and evaluation and experimentation using humans and real-world data (malware).
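
As a hedged sketch of the humans-in-the-loop idea, and not the Berkeley system itself, the Python code below runs a standard uncertainty-sampling loop with scikit-learn: the classifier repeatedly asks an oracle, standing in for the human analyst, to label the instances it is least sure about. The data and parameters are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X_pool, y_pool, n_seed=10, n_rounds=5, batch=10):
    # y_pool plays the oracle: in a real deployment these labels arrive
    # only when an analyst inspects the queried sample.
    rng = np.random.default_rng(0)
    pos, neg = np.where(y_pool == 1)[0], np.where(y_pool == 0)[0]
    labeled = list(rng.choice(pos, n_seed, replace=False)) + \
              list(rng.choice(neg, n_seed, replace=False))
    for _ in range(n_rounds):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_pool[labeled], y_pool[labeled])
        margin = np.abs(clf.predict_proba(X_pool)[:, 1] - 0.5)  # 0 = least sure
        queries = [i for i in np.argsort(margin)
                   if i not in set(labeled)][:batch]
        labeled.extend(queries)          # "human" labels arrive here
    return clf, labeled

# Toy malware-vs-benign feature data
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (500, 5)), rng.normal(1.5, 1.0, (100, 5))])
y = np.concatenate([np.zeros(500), np.ones(100)])
clf, labeled = uncertainty_sampling(X, y)
print("labeled", len(labeled), "of", len(y), "samples")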

 

Modeling Privacy in Human CPS; Roy Dong (UC Berkeley) URL:  http://cps-vo.org/node/18486  From an engineering perspective, there are two dominant paradigms: control over information and secrecy.  The author proposes privacy contracts since privacy is a good: higher privacy settings could cost more.  There is asymmetric information in this problem, and adverse selection becomes an issue.

 

Secure Computation in Actor Networks; Dusko Pavlovic (U of Hawai’i) URL: http://cps-vo.org/node/18512  Security is a suitable subject for science, and the process of securing systems resembles the process of science itself, since both depend on the methods of inductive inference. A scientific theory can never be definitively proved, only disproved by new evidence and improved into a better theory. Because of the same dependency, every security claim and method has a lifetime, and always eventually needs to be improved.

 

Resilient Sensor Network Design for Flow Networks; Waseem Abbas (Vanderbilt University) URL: http://cps-vo.org/node/18515 Leakages and faults in flow networks cause commercial and physical losses. Using water supply information, the authors systematically examine efficient early detection and localization mechanisms for reported and unreported breaks. Resilience issues include uncertainty in system response to burst pipes, inherent model uncertainty, transient system analysis, additional uncertainty in infrastructure topology and characteristics, underground infrastructure that is not visible and is hard to access, and the spatial distribution of the networks with complex looped topology due to constant expansion and rehabilitation. This approach considers pipe burst events, whereas the majority of previous work considers water quality; there is also very limited work on localization as compared to detection, which is an issue for resiliency.

 

Attack-Resilient Observation Selection; Aron Laszka (Vanderbilt University)  URL: http://cps-vo.org/node/18478 To dynamically control any system, accurate information about its evolving state is needed, yet the systems to be monitored can extend over a vast area, resulting in many possible points of observation. Focused on traffic patterns, this study posits that resilience of monitoring to denial-of-service type attacks can be achieved by placing sensors in a resilient way. Resilient sensor placement is formulated as a constrained optimization problem based on a formal prediction model that is applicable to multiple domains. Previous work focused on observation selection, while current work addresses resilient observation selection. Future work will address unit costs of uncertainty for both the "no-attack" and "attacked" cases, and selections minimizing the sum cost of both uncertainties.
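
To illustrate the flavor of such a formulation, though not the paper's actual prediction model, the Python sketch below greedily picks sensor sites to maximize worst-case coverage when an adversary may disable a bounded number of the chosen sensors; the coverage map and budgets are invented.

import itertools

def coverage(sensors, cover):
    # Number of points observed by at least one surviving sensor.
    return len(set().union(*[cover[s] for s in sensors]))

def worst_case(trial, cover, attacks):
    # Coverage after the adversary removes the most damaging subset of
    # up to `attacks` sensors from the trial placement.
    return min(coverage([t for t in trial if t not in atk], cover)
               for r in range(attacks + 1)
               for atk in itertools.combinations(trial, r))

def resilient_greedy(cover, k, attacks=1):
    # Greedy resilient placement: add the site that maximizes the
    # worst-case coverage of the placement chosen so far.
    chosen = []
    for _ in range(k):
        site = max((s for s in cover if s not in chosen),
                   key=lambda s: worst_case(chosen + [s], cover, attacks))
        chosen.append(site)
    return chosen

# Toy water-network example: sites -> junctions each site observes
cover = {"A": {1, 2, 3}, "B": {3, 4}, "C": {5, 6}, "D": {1, 5}}
print(resilient_greedy(cover, k=3, attacks=1))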

 

Using Machine Learning to Improve the Resilience of Control; Claire Tomlin (UC Berkeley) URL: http://cps-vo.org/node/18482 Using data from air traffic control, the authors use machine learning as a tool to visualize a model of resiliency. They note that research in the security of control systems has assumed a fixed control algorithm and considered attacks on the sensors or the algorithm, whereas machine learning adapts the control based on the data collected. In theory, the learning could be used to detect anomalies and intrusions; however, if an attacker knew the learning algorithm, it would be easier to spoof the system without detection.

 

Resilient and Secure Component-Based Software for CPS Architectures; Gabor Karsai (Vanderbilt University) URL: http://cps-vo.org/node/18511 The 'CPS Cloud' is envisioned as an open sensing/computing/actuation platform where various customer applications can run side-by-side. The physical world can be simulated in real time with the desired degree of fidelity, including faults; the network can be emulated in real time with a desired degree of fidelity, including cyber effects; and embedded computing platforms are very affordable. Some examples of potential CPS Cloud subjects include fractionated satellite observation platforms, a coordinated swarm of UAVs executing a mission, a fleet of UUVs collecting data while in motion, and monitoring and control nodes on the Smart Grid. Challenges in building this CPS Cloud include networked, distributed control systems; fault- and security-resilience; and applications with different trust and security levels that must run side-by-side.

 

System-Level Co-design for CPS Security; Janos Sztipanovits (Vanderbilt University) URL: http://cps-vo.org/node/18525  The traditional system-level synthesis problem for the "cyber" side of CPS is to derive a specification for the behavior of the system components that will be implemented using networked computing, derive a functional model for the information architecture and componentize the system, select a computing/networking platform, derive a deployment model assigning components of the information architecture to processing and communication platforms, generate code for software components, and perform timing analysis; the goal is to make security part of these system-level co-design processes. Mitigation of security vulnerabilities costs performance, timing, and functionality; integration into design processes will reduce performance degradation.

 

Science of Security Virtual Organization; Katie Dey (Vanderbilt University) URL: http://cps-vo.org/node/18528   The Cyber Physical Systems Virtual Organization is a tool to develop community, collaborate, and support technology transfer and translational research. The CPS-VO web page is the focal point for information sharing and community outreach and development. Nodes provide information about SURE activities, meetings, and research, as well as general announcements about upcoming events, funding opportunities, discussion forums and chat, and a newsletter containing current research bibliographies about topics of interest to the Science of Security community.

(ID#:15-4087)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


SecUrity and Resilience (SURE) Review

 
SoS Logo

SecUrity and REsilience for Cyber-Physical Systems (SURE) Review

Nashville, TN

20 March 2015

On March 17 and 18, 2015, researchers from the four System Science of SecUrity and REsilience for Cyber-Physical Systems (SURE) project universities (Vanderbilt, Hawai’i, California-Berkeley, and MIT) met with members of NSA’s R2 Directorate to review their first six months of work.  SURE is the NSA-funded project aimed at improving scientific understanding of resiliency, that is, robustness to reliability failures or faults, and survivability against security failures and attacks, in cyber-physical systems (CPS).  The project addresses the question of how to design systems that are resilient despite significant decentralization of resources and decision-making.

SURE Conference

Waseem Abbas and Xenofon Koutsoukos, Vanderbilt, listen to comments about resilient sensor designs from David Corman, National Science Foundation.

The project initially looked at water distribution and surface traffic control architectures; air traffic control and satellite systems have been added as examples of the types of cyber-physical systems being examined.  Xenofon Koutsoukos, Professor of Electrical Engineering and Computer Science in the Institute for Software Integrated Systems (ISIS) at Vanderbilt University and the Principal Investigator (PI) for SURE, indicated that these additional cyber-physical systems are used to demonstrate how the SURE methodologies can apply to multiple systems.  Main research thrusts include hierarchical coordination and control, science of decentralized security, reliable and practical reasoning about secure computation and communication, evaluation and experimentation, and education and outreach. The centerpiece is their testbed for evaluation of CPS security and resilience.

The development of the Resilient Cyber Physical Systems (RCPS) Testbed supports evaluation and experimentation across the complete SURE research portfolio.  This platform is being used to capture the physical, computational, and communication infrastructure; describe the deployment and configuration of security measures and algorithms; and provide entry points for injecting various attack or failure events.  "Red Team" vs. "Blue Team" simulation scenarios are being developed. After the active design phase, when both teams are working in parallel and in isolation, the simulation is executed with no external user interaction, potentially several times. The winner is decided based on scoring weights and rules which are captured by the infrastructure model.

SURE Conference

The Resilient Cyber Physical System Testbed hardware component.

In addition to the testbed, ten research projects on resiliency were presented.  These presentations covered both behavioral and technical subjects, including adversarial risk, active learning for malware detection, privacy modeling, actor networks, flow networks, control systems, software and software architecture, and information flow policies.  The scope and format of the CPS-VO web site were also briefed.  Details of these research presentations are provided in a companion newsletter article.

In addition to Professor Koutsoukos, participants included his Vanderbilt colleagues Gabor Karsai, Janos Sztipanovits, Peter Volgyesi, Yevgeniy Vorobeychik and Katie Dey.  Other participants were Saurabh Amin, MIT; Dusko Pavlovic, U. of Hawaii; and Larry Rohrbough, Claire Tomlin, and Roy Dong from UC Berkeley.  Government representatives from the National Science Foundation, Nuclear Regulatory Commission, and Air Force Research Labs also attended, as well as the sponsoring agency, NSA.

SURE Conference

Vanderbilt graduate students Pranav Srinivas Kumar (L) and William Emfinger demonstrated the Resilient Cyber Physical Systems testbed.

(ID#:15-4086)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Upcoming Events of Interest

 
SoS Logo

Upcoming Events

Mark your calendars!

This section features a wide variety of upcoming security-related conferences, workshops, symposiums, competitions, and events happening in the United States and the world. This list also includes several past events with links to proceedings or summaries of the actual activities.

Note: The events may also be found on the SoS Calendar, located by clicking the 'Calendar' tab on the left-hand navigation bar.


Intel International Science and Engineering Fair 2015
The Intel International Science and Engineering Fair (Intel ISEF), a program of Society for Science & the Public (SSP), is the world’s largest international pre-college science competition. More than 1,700 high school students from over 70 countries, regions, and territories are awarded the opportunity to showcase their independent research and compete for more than $5 million in prizes.
Date: May 10 – 15
Location: Pittsburgh, Pa
URL: https://student.societyforscience.org/intel-international-science-and-engineering-fair-2015

AFCEA Spring Intelligence Symposium
The Symposium will be a one-of-a-kind event designed to set the tone and agenda for billions of dollars in IC investment.  Leaders from all major IC agencies, including the ODNI, IARPA, and the National Intelligence Council, will explore where that investment is being directed and how industry, Federally Funded R&D Centers, and academia can best contribute to the IC's R&D effort.
Date: May 20 – 21
Location: Springfield, Va
URL: http://www.afcea.org/events/springintel/15/index.asp

eCrime 2015 10th Symposium on Electronic Crime Research
eCrime 2015 consists of 4 days of keynote presentations, technical and practical sessions, and interactive panels, which will allow academic researchers, security practitioners, and law enforcement to discuss and exchange ideas, experiences, and lessons learned in all aspects of electronic crime and ways to combat it.
Date: May 26 – 29
Location: Barcelona, Spain
URL: https://apwg.org/apwg-events/ecrime2015/cfp

ACNS 2015 13th International Conference on Applied Cryptography and Network Security
The annual ACNS conference focuses on innovative results in applied cryptography and network and computer security. Both academic research works as well as developments in industrial and technical frontiers fall within the scope of the conference.
Date: June 2 – 5
Location: New York, NY
URL: http://acns2015.cs.columbia.edu/

DAC-Security Track 2015 Design Automation Conference
The Security Track at DAC seeks to highlight and celebrate the emergence of security and trust as an important dimension of Hardware and Embedded Systems Design (side-by-side with power, performance, and reliability).
Date: June 7 – 11
Location: San Francisco, Ca
URL: https://dac.com/submission-categories/hardware-and-software-security

International Conference on Mobile, Secure and Programmable Networking (MSPN'2015)
The International Conference on Mobile, Secure and Programmable Networking aims at providing a top forum for researchers and practitioners to present and discuss new trends in networking infrastructures, security, services and applications while focusing on virtualization and Cloud computing for networks, network programming, Software Defined Networks (SDN) and their security.
Date: June 15 – 17
Location: Paris, France
URL: http://cedric.cnam.fr/workshops/mspn2015/index.html

Defensive Cyber Operations Symposium
The goal is to improve security, but a successful strategy depends on a matrix of participating organizations adapting technical solutions and adopting enterprise management to improve efficiency, security and reliability.
Date: June 16 – 18
Location: Baltimore, Md
URL: http://events.jspargo.com/AFCEAcyberops15/public/enter.aspx

WiSec 2015 8th ACM Conference on Security and Privacy in Wireless and Mobile Networks
The focus of the ACM Conference on Security and Privacy in Wireless and Mobile Networks (ACM WiSec) is on the security and privacy aspects of wireless communications, mobile networks, and their applications.
Date: June 22 – 26
Location: New York, NY
URL: http://www.sigsac.org/wisec/WiSec2015/

RFIDSec 2015 11th Workshop on RFID Security
RFIDsec is the earliest workshop devoted to security and privacy in Radio Frequency Identification (RFID). Starting in 2005, RFIDsec is today the reference workshop in the RFID field with participants from all over the world.
Date: June 23 – 24
Location: New York, NY
URL: http://rfidsec2015.iaik.tugraz.at/

NSA Information Assurance Directorate (IAD)'s Information Assurance Symposium (IAS)
The NSA Information Assurance Directorate (IAD)'s Information Assurance Symposium (IAS) is a biannual forum hosted by the National Security Agency (NSA). Past IAS events have proven to be the preferred Information Assurance events of the year.
Date: June 29 – July 1
Location: Washington D.C.
URL: https://www.fbcinc.com/e/ias/

HAISA 2015 International Symposium on Human Aspects of Information Security & Assurance
This symposium, the ninth in our series, will bring together leading figures from academia and industry to present and discuss the latest advances in information security from research and commercial perspectives.
Date: July 1 – 3
Location: Lesvos, Greece
URL: http://haisa.org/

DIMVA 2015 International Conference on Detection of Intrusions and Malware & Vulnerability Assessment
The annual DIMVA conference serves as a premier forum for advancing the state of the art in intrusion detection, malware detection, and vulnerability assessment.
Date: July 9 – 10
Location: Milano, Italy
URL: http://www.dimva2015.it/

24th USENIX Security Symposium
The USENIX Security Symposium brings together researchers, practitioners, system administrators, system programmers, and others interested in the latest advances in the security and privacy of computer systems and networks.
Date: August 12 – 14
Location: Washington D.C.
URL: https://www.usenix.org/conference/usenixsecurity15

Global Identity Summit
The Global Identity Summit focuses on identity management solutions for the corporate, defense and homeland security communities.
Date: September 21 – 24
Location: Tampa, Fl
URL: http://events.jspargo.com/id15/Public/Enter.aspx


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.